Jobs via Dice

Sr. Python Developer (W2 / Lead Experience) : Auburn Hills, MI, 48321 : Contract on W2

This role is for a Sr. Python Developer/Lead in Auburn Hills, MI, on a W2 contract. Requires 7+ years IT experience, 5+ years with Python and PySpark, and a graduate degree in a related field. Key skills include CI/CD, Airflow, and TDD.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Auburn Hills, MI
-
🧠 - Skills detailed
#Data Pipeline #Cloud #Docker #Spark (Apache Spark) #Python #Cloudera #Batch #Data Architecture #Jira #PySpark #Computer Science #Documentation #Deployment #Scala #GitHub #Data Engineering #Airflow #Tableau
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Marvel Technologies Inc, is seeking the following. Apply via Dice today!

Sr. Python Developer/Lead
Auburn Hills, MI, 48321

The Senior Data Engineer & Technical Lead (SDET Lead) will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects.

Key Responsibilities
• Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
• Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
• CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
• Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
• Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
• Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
• Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
• Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
• Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
• Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage.
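For candidates gauging fit, the TDD practice named above can be sketched in miniature: write the unit tests for a pipeline transformation first, then implement the function until they pass. This is an illustrative example only, not code from the role; the function and test names are hypothetical, and real work would use PySpark DataFrames and a test runner such as pytest rather than plain functions.

```python
# Hypothetical pipeline transformation developed test-first (TDD).
# Pure Python stand-in for what would be PySpark logic in practice.

def clean_records(records):
    """Drop records missing an 'id' and normalize 'name' to lowercase."""
    cleaned = []
    for rec in records:
        if rec.get("id") is None:
            continue  # invalid record: no primary key
        cleaned.append({**rec, "name": rec.get("name", "").strip().lower()})
    return cleaned

# Under TDD these tests are written before the implementation above.
def test_drops_records_missing_id():
    assert clean_records([{"name": "A"}]) == []

def test_normalizes_names():
    assert clean_records([{"id": 1, "name": "  Ada "}]) == [{"id": 1, "name": "ada"}]

test_drops_records_missing_id()
test_normalizes_names()
```

The same red-green cycle scales up: each Airflow task or PySpark transformation gets a failing test first, which is what makes automated unit/integration testing of pipelines tractable in CI/CD.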
Experience Required:
• 7+ years of overall IT experience, with hands-on data engineering
• 5+ years of practical experience building production-grade data pipelines using Python and PySpark
• Graduate degree in a related field, such as Computer Science or Data Analytics
• Familiarity with Test-Driven Development (TDD)
• Experience with OpenShift, Cloudera, Tableau, Confluence, Jira, and other enterprise tools