

Lead PySpark Developer – AWS (6-Month Contract – W2)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead PySpark Developer – AWS: a 6-month, full-time, on-site W2 contract based in Owings Mills, MD. It requires 10+ years in big data, 7+ years in AWS, and expertise in PySpark, Apache Spark, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
June 27, 2025
🕒 - Project duration
6 months
-
🏝️ - Location type
On-site
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Owings Mills, MD
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Security #Computer Science #Code Reviews #AWS (Amazon Web Services) #Data Engineering #Kubernetes #Data Pipeline #Docker #Scala #ETL (Extract, Transform, Load) #Data Governance #PySpark #dbt (data build tool) #NoSQL #Airflow #Leadership #Deployment #PostgreSQL #Compliance #Spark (Apache Spark) #Big Data #Snowflake #Cloud #Python #DevOps #Apache Spark
Role description
Note
• This is a W2 position.
• It is a 100% on-site role based in Owings Mills, MD.
• Candidates must have strong hands-on experience in PySpark, Apache Spark, Python, and AWS along with data engineering, cloud architecture, and ETL pipeline development.
• Visa sponsorship and relocation assistance are not available.
• Work Type: Full-time | On-site
• Contract Duration: 6 Months
• Notice Period: Immediate Joiner / Up to 30 Days Preferred
Job Summary
• We are hiring on behalf of a global IT services provider for a Lead PySpark Developer to build, optimize, and scale big data solutions using Apache Spark and AWS. The role will lead the design and deployment of ETL pipelines, collaborate with cross-functional teams, and drive cloud data platform initiatives.
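For context on the ETL pipeline work described above, here is a minimal pure-Python sketch of the extract-transform-load pattern (the function names and sample records are illustrative assumptions, not taken from the role description):

```python
# Minimal ETL sketch: extract raw records, transform them, load into a sink.
# All names and data here are illustrative, not from the job posting.

def extract():
    # Stand-in for reading from S3, a database, or a stream.
    return [
        {"id": 1, "amount": "10.50", "region": "east"},
        {"id": 2, "amount": "3.25", "region": "west"},
        {"id": 3, "amount": None, "region": "east"},  # dirty row
    ]

def transform(rows):
    # Clean and type the raw rows; drop records missing an amount.
    return [
        {"id": r["id"], "amount": float(r["amount"]), "region": r["region"]}
        for r in rows
        if r["amount"] is not None
    ]

def load(rows, sink):
    # Stand-in for writing to a warehouse table; returns rows loaded.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

In a Spark-based pipeline the same three stages map onto DataFrame reads, transformations, and writes, but the separation of concerns is identical.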
Key Responsibilities
• Design and lead PySpark-based big data applications and data pipelines
• Architect scalable ETL workflows for structured/unstructured data
• Optimize Spark jobs for performance, caching, and partitioning
• Collaborate with data engineers, scientists, and business users
• Work with AWS services and cloud-native data platforms
• Apply DevOps and CI/CD practices to data engineering workflows
• Mentor junior developers, conduct code reviews, and enforce standards
• Ensure data governance, security, and compliance
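The tuning item above mentions partitioning; as a rough illustration, this pure-Python sketch shows hash partitioning, the basic idea behind Spark's `repartition(n)` and shuffle distribution (a simplified stand-in, not Spark's actual implementation — `hash_partition` and its stable key hash are assumptions for the example):

```python
# Hash-partition rows by a key column: rows with the same key always land
# in the same partition, so a downstream per-key aggregation needs no
# further shuffle. Simplified illustration, not Spark's real hashing.

def hash_partition(rows, key, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        # Python's built-in hash() is salted per process for strings,
        # so use a stable scheme to keep the example deterministic.
        bucket = sum(ord(c) for c in str(row[key])) % num_partitions
        partitions[bucket].append(row)
    return partitions

rows = ([{"region": "east", "id": i} for i in range(3)]
        + [{"region": "west", "id": i} for i in range(3)])
parts = hash_partition(rows, "region", 4)
```

With PySpark itself, the equivalent would be `df.repartition(4, "region")`; choosing the partition count and key well is a large part of the Spark job tuning this role calls for.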
Key Requirements
• Bachelor’s degree in Computer Science or equivalent
• 10+ years of experience in big data and distributed systems
• 7+ years of AWS cloud computing experience
• Expert in PySpark, Apache Spark, and Python
• Proficient in SQL and NoSQL databases; experience with DB2, PostgreSQL, and Snowflake
• Experience with workflow orchestration tools like Airflow and transformation tools like dbt
• Exposure to CI/CD, Docker, Kubernetes, and DevOps pipelines
• Excellent problem-solving and leadership skills
• Strong communication and team collaboration abilities