

United Software Group Inc
Sr Python Developer & Lead
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Python Developer & Lead for a 6-month contract in Detroit, MI. Requires 7+ years IT experience, 5+ years in data engineering with Python and PySpark, expertise in Airflow, CI/CD pipelines, and strong SQL skills.
Country: United States
Currency: $ USD
Day rate: 480
Date: November 18, 2025
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Detroit, MI
Skills detailed: #PySpark #Agile #Programming #Airflow #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Data Ingestion #GIT #Python #GitLab #Linux #Cloud #Scala #Docker #Deployment #Unix #Data Architecture #Version Control #Spark (Apache Spark) #Data Engineering #Computer Science #Batch #Documentation #Cloudera #Code Reviews #Data Pipeline #Tableau #Containers #Automated Testing #GitHub #Jira
Role description
Sr Python Developer & Lead
Duration: 6 months
Onsite working: Detroit, MI (an in-person interview is required)
Job Requirements
"The Senior Data Engineer & Technical Lead (SDET Lead) will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projectsKey Responsibilities
1. Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
1. Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
1. CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
1. Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
1. Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
1. Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
1. Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
1. Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
1. Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage. Includes all above skills, plus the following;
Β· Minimum of 7+ years overall IT experienceΒ· Experienced in waterfall, iterative, and agile methodologies"
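To make the first responsibility concrete, here is a minimal sketch of the kind of PySpark batch pipeline this role would own. It is illustrative only: the job name, input/output paths, and the orders/amount/order_date schema are hypothetical, not taken from the posting.

```python
# Illustrative PySpark batch pipeline (hypothetical paths and schema).
from pyspark.sql import SparkSession, functions as F

def run_pipeline(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

    # Ingest raw data and register it for SQL-based transformation.
    orders = spark.read.parquet(input_path)
    orders.createOrReplaceTempView("orders")

    # Aggregate with Spark SQL, then enrich with the DataFrame API.
    daily = spark.sql("""
        SELECT order_date, customer_id, SUM(amount) AS total_amount
        FROM orders
        GROUP BY order_date, customer_id
    """).withColumn("load_ts", F.current_timestamp())

    # Write partitioned output for downstream consumers.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(output_path)

if __name__ == "__main__":
    run_pipeline("/data/raw/orders", "/data/curated/daily_orders")
```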
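And a minimal Airflow DAG of the kind the orchestration responsibility describes, assuming Airflow 2.4+ (for the `schedule` argument) and the built-in BashOperator. The dag_id, cron schedule, retry policy, and spark-submit paths are all hypothetical.

```python
# Illustrative Airflow DAG (hypothetical dag_id, schedule, and job paths).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_orders_batch",
    default_args=default_args,
    start_date=datetime(2025, 1, 1),
    schedule="0 6 * * *",  # run daily at 06:00
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit /opt/jobs/ingest_orders.py",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/jobs/transform_orders.py",
    )
    ingest >> transform  # transform runs only after ingest succeeds
```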
Technical Experience:
"1. Hands-on Data Engineering : Minimum 5+ yearsof practical experience building production-grade data pipelines using Python and PySpark.
1. Airflow Expertise: Proven track record of designing, deploying, and managing Airflow DAGs in enterprise environments.
1. CI/CD for Data Projects : Ability to build and maintain CI/CD pipelinesfor data engineering workflows, including automated testing and deployment
β’
β’ .
1. Cloud & Containers: Experience with containerization (Docker and cloud platforms (GCP) for data engineering workloads. Appreciation for twelve-factor design principles
1. Python Fluency : Ability to write object-oriented Python code manage dependencies, and follow industry best practices
1. Version Control: Proficiency with
β’
β’ Git
β’
β’ for source code management and collaboration (commits, branching, merging, GitHub/GitLab workflows).
1. Unix/Linux: Strong command-line skills
β’
β’ in Unix-like environments.
1. SQL : Solid understanding of SQL for data ingestion and analysis.
1. Collaborative Development : Comfortable with code reviews, pair programming and using remote collaboration tools effectively.
1. Engineering Mindset: Writes code with an eye for maintainability and testability; excited to build production-grade software
1. Education: Bachelorβs or graduate degree in Computer Science, Data Analytics or related field, or equivalent work experience."
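As a flavor of the automated-testing and TDD expectations above, here is a minimal pytest-style unit test for a pipeline transformation. The transform under test, normalize_amounts, and its cents-to-dollars contract are hypothetical examples, not part of the posting.

```python
# Minimal TDD-style unit test for a pipeline transform (hypothetical example).
import pytest

def normalize_amounts(rows):
    """Convert amounts from cents to dollars, rejecting negative values."""
    for row in rows:
        if row["amount_cents"] < 0:
            raise ValueError(f"negative amount for order {row['order_id']}")
    return [{**row, "amount_usd": row["amount_cents"] / 100} for row in rows]

def test_converts_cents_to_dollars():
    rows = [{"order_id": 1, "amount_cents": 1250}]
    assert normalize_amounts(rows)[0]["amount_usd"] == 12.50

def test_rejects_negative_amounts():
    with pytest.raises(ValueError):
        normalize_amounts([{"order_id": 2, "amount_cents": -5}])
```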
Unique Skills
"
β’ Graduate degree in a related field, such as Computer Science or Data Analytics
β’ Familiarity with Test-Driven Development (TDD)
β’ A high tolerance for OpenShift, Cloudera, Tableau, Confluence, Jira, and other enterprise tools"





