Python Developer (W2 only) - Data Engineer & ETL

This role is for a Python Developer (Data Engineer & ETL) on a W2 contract of unspecified length, offering competitive pay. Key skills include Python, Linux, and ETL processes. A BS degree or relevant certification is required, along with 3-7 years of experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
March 13, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Plano, TX
🧠 - Skills detailed
#Perl #MariaDB #Documentation #Java #Scala #RDBMS (Relational Database Management System) #Python #Cloud #MySQL #Spark (Apache Spark) #Scrum #SQL Server #Deployment #XML (eXtensible Markup Language) #ETL (Extract, Transform, Load) #Agile #Databases #PostgreSQL #Bash #Security #Data Backup #Normalization #Programming #Clustering #API (Application Programming Interface) #Batch #Linux #Data Pipeline #SQL (Structured Query Language) #DBA (Database Administrator) #S3 (Amazon Simple Storage Service) #JavaScript #Oracle #Database Design #Data Framework #PySpark #Data Integration #Databricks #Data Engineering #BI (Business Intelligence) #Debugging #HDFS (Hadoop Distributed File System)
Role description

Job Title: Software Engineer/Developer

We are looking for an experienced software engineer with a focus on data engineering and ETL processes, preferably with exposure to both batch and streaming data. The candidate should be familiar with databases and data-lake infrastructure, and with the associated tools for ingestion, transformation, and efficient querying across distributed data frameworks, including an understanding of performance and scalability issues and query optimization.
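
For illustration only (this sketch is not part of the original posting), here is a minimal PySpark example of the kind of data-lake querying the paragraph above describes; the dataset path, partition column, and field names are all invented:

```python
# Illustrative only: querying a partitioned Parquet dataset in a data lake.
# The bucket path and all column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("datalake-query").getOrCreate()

# Filtering on the partition column (event_date) lets Spark prune
# partitions instead of scanning the whole dataset.
events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path
daily = (
    events
    .where(F.col("event_date") == "2025-03-13")
    .groupBy("customer_id")
    .agg(F.count("*").alias("event_count"))
)

# explain() prints the physical plan, which is one way to reason about
# the performance, scalability, and query-optimization concerns above.
daily.explain()
```

Checking the plan for pushed-down partition filters is exactly the kind of performance awareness the role calls out.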

Responsibilities:

  • Design, build, and maintain workflows/pipelines to process continuous streams of data, covering end-to-end design and build of near-real-time and batch data pipelines (a brief sketch follows this list).

  • Work closely with other data engineers and business intelligence engineers across teams to create data integrations and ETL pipelines, driving projects from initial concept to production deployment

  • Maintain and support incoming data feeds into the pipeline from multiple sources, ranging from external customer feeds in CSV or XML format to automatic publisher/subscriber feeds.

  • Actively develop ETL processes using Python, PySpark, Spark, or other highly parallel technologies, and implement ETL/data pipelines

  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

  • Provide quick ingestion tools and corresponding access APIs for continuously changing data schemas, working closely with data engineers on specific transformation and access needs

  • Participate in L1 team rotation during business hours, in 1-week blocks.
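
Purely as an illustrative sketch (not from the original posting), the snippet below shows one way the batch and near-real-time pipelines above might look in PySpark; the paths, schema, broker address, and topic name are invented, and the streaming half assumes the spark-sql-kafka connector package is available:

```python
# Illustrative only: a batch CSV ingestion plus a near-real-time
# publisher/subscriber (Kafka) ingestion. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("etl-pipelines").getOrCreate()

feed_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Batch: ingest an external customer feed delivered as CSV, clean it,
# and load it into a curated Parquet store.
batch = (
    spark.read.csv("/data/incoming/customer_feed.csv",
                   header=True, schema=feed_schema)
    .dropna(subset=["order_id"])
)
batch.write.mode("append").parquet("/data/curated/orders/")

# Near-real-time: consume a publisher/subscriber feed as a stream and
# continuously append to a curated store (requires spark-sql-kafka).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw")
)
query = (
    stream.writeStream.format("parquet")
    .option("path", "/data/curated/orders_stream/")
    .option("checkpointLocation", "/data/checkpoints/orders_stream/")
    .start()
)
# In a real job, query.awaitTermination() would keep the stream running.
```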

Preferred:

  • 1-2 years' experience developing applications with relational databases, preferably with experience in SQL Server and/or MySQL.

  • Some exposure to database optimization techniques for speed, complexity, normalization, etc.

Skills and Attributes:

  1. Ability to build effective working relationships with all functional units of the organization

  2. Excellent written, verbal, and presentation skills

  3. Excellent interpersonal skills

  4. Ability to work as part of a cross-cultural team

  5. Self-starter and self-motivated

  6. Ability to work with minimal supervision

  7. Works well under pressure and manages competing priorities.

Technical qualifications and experience level:

  1. 3-7 years of development experience using Java, Python, PySpark, Spark, Scala, and object-oriented approaches in designing, coding, testing, and debugging programs

  2. Ability to create simple scripts and tools on Linux using Perl and Bash

  3. Development of cloud-based, distributed applications

  4. Understanding of clustering and cloud orchestration tools

  5. Working knowledge of database standards and end-user applications

  6. Working knowledge of data backup, recovery, security, integrity, and SQL

  7. Familiarity with database design, documentation, and coding

  8. Previous experience with DBA CASE tools (frontend/backend) and third-party tools

  9. Understanding of distributed file systems and their optimal use in the commercial cloud (HDFS, S3, Google File System, Databricks)

  10. Familiarity with programming-language APIs

  11. Problem-solving skills and the ability to think algorithmically

  12. Working knowledge of RDBMS/ORDBMS such as MariaDB, Oracle, and PostgreSQL (a brief sketch follows this list)

  13. Knowledge of SDLC methodologies (Waterfall, Agile, and Scrum)

  14. BS degree in a computer discipline or relevant certification
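
As a minimal sketch of the RDBMS work in item 12 (not from the original posting), the following Python snippet runs a parameterized query against PostgreSQL via the psycopg2 driver; the connection details and the orders table are invented:

```python
# Illustrative only: querying PostgreSQL from Python with psycopg2.
# All connection parameters and the orders table are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="db.example.internal",  # hypothetical host
    dbname="analytics",
    user="etl_user",
    password="change-me",
)
try:
    # `with conn` commits on success and rolls back on error;
    # the cursor context manager closes the cursor for us.
    with conn, conn.cursor() as cur:
        # Parameterized queries avoid SQL injection and keep
        # query plans reusable on the server side.
        cur.execute(
            "SELECT customer_id, COUNT(*) FROM orders "
            "WHERE order_date >= %s GROUP BY customer_id",
            ("2025-01-01",),
        )
        for customer_id, order_count in cur.fetchall():
            print(customer_id, order_count)
finally:
    conn.close()
```

The same pattern carries over to MariaDB or Oracle with their respective DB-API drivers.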

Key Requirements:

  1. Strong in Python

  2. Strong in the Linux environment (developing in it, navigating it, and executing within it)

  3. Some experience with web UI development using JavaScript frameworks, not just plain JavaScript.

  4. Understanding of web-based architecture

  5. Ability to do some database-related development, both using code to read and update databases and creating and managing DB tables (a brief sketch follows this list).
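
To make the last requirement concrete, here is a short, self-contained Python sketch (again, not part of the posting) that creates a table, then reads and updates it, using the standard-library sqlite3 module; the feeds table and its rows are invented:

```python
# Illustrative only: creating, reading, and updating a database table
# from Python. sqlite3 is used because it ships with Python; the same
# pattern applies to server RDBMS drivers.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# Create and manage a table (DDL).
cur.execute(
    "CREATE TABLE feeds (id INTEGER PRIMARY KEY, name TEXT, status TEXT)"
)

# Insert and update rows (DML) with parameterized statements.
cur.executemany(
    "INSERT INTO feeds (name, status) VALUES (?, ?)",
    [("customer_csv", "pending"), ("orders_xml", "pending")],
)
cur.execute(
    "UPDATE feeds SET status = ? WHERE name = ?",
    ("loaded", "customer_csv"),
)
conn.commit()

# Read the results back.
for row in cur.execute("SELECT id, name, status FROM feeds ORDER BY id"):
    print(row)

conn.close()
```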