VMC Soft Technologies, Inc

Lead PySpark Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead PySpark Engineer in Owings Mills, MD, offering $50/hr on a W2 contract. Requires 10+ years in big data, strong PySpark and SQL skills, AWS experience, and team leadership abilities. Local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
December 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Owings Mills, MD
-
🧠 - Skills detailed
#NoSQL #Version Control #AWS (Amazon Web Services) #SQL (Structured Query Language) #Airflow #Scala #Databases #Apache Spark #PostgreSQL #Data Security #Kubernetes #Docker #Distributed Computing #PySpark #Snowflake #ETL (Extract, Transform, Load) #Data Engineering #Spark (Apache Spark) #Security #Data Science #Deployment #Python #Compliance #Data Modeling #Cloud #Unit Testing #Big Data #DevOps
Role description
• • • • • CONTRACT W2 ROLE • • • • •

Job Title: Lead PySpark Engineer
Location: Owings Mills, MD (local candidates only)
Rate: $50/hr on W2 (benefits)
Independent visa candidates only.

Requirements:
• 10+ years of experience in big data and distributed computing.
• Strong hands-on experience with PySpark, Apache Spark, and Python.
• Strong hands-on experience with SQL and NoSQL databases (DB2, PostgreSQL, Snowflake, etc.).
• Proficiency in data modeling and ETL workflows.
• Proficiency with workflow schedulers such as Airflow.
• Hands-on experience with AWS cloud-based data platforms.
• Experience with DevOps, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
• Strong problem-solving skills and the ability to lead a team.

Responsibilities:
• Lead the design, development, and deployment of PySpark-based big data solutions.
• Architect and optimize ETL pipelines for structured and unstructured data.
• Collaborate with the client, data engineers, data scientists, and business teams to understand requirements and deliver scalable solutions.
• Optimize Spark performance through partitioning, caching, and tuning.
• Implement data engineering best practices (CI/CD, version control, unit testing).
• Work with cloud platforms such as AWS.
• Ensure data security, governance, and compliance.
• Mentor junior developers and review code for best practices and efficiency.

Note: If interested, please send your resume to srinivas@vmcsofttech.com or call (480) 407-6930.