Mindlance

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer for a 6+ month contract, based in McLean, VA, Richmond, VA, or Plano, TX (Hybrid). Requires strong skills in Python, PySpark, SQL, and cloud technologies (AWS, Azure, GCP) for data pipeline optimization.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
πŸ—“οΈ - Date
January 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Azure #Kafka (Apache Kafka) #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Cloud #Snowflake #Python #Big Data #AWS (Amazon Web Services) #REST (Representational State Transfer) #Delta Lake #Data Pipeline #Scala #Spark (Apache Spark) #Lambda (AWS Lambda) #Spark SQL #PySpark #Data Quality #"ETL (Extract, Transform, Load)" #Data Engineering
Role description
Title: Senior Data Engineer
Location: McLean, VA / Richmond, VA / Plano, TX (Hybrid - 3 days a week) / Remote
Duration: 6+ month contract
We are looking for a Senior Data Engineer. This role is crucial for building and optimizing the data pipelines that manage, optimize, and automate marketing applications.
Key Responsibilities:
Design, build, and maintain scalable, high-performance data pipelines and architecture.
Develop data models and ETL/ELT processes to ensure data quality and accessibility.
Collaborate with product and engineering teams to deliver a holistic ecosystem of data and capabilities.
Relevant Technical Skills:
Candidates should possess very strong data engineering skills. Experience with the following technologies is essential:
Big Data Technologies: Python, PySpark, SQL
Cloud & ETL Tools: Lambda, Glue, Delta Lake, Snowflake, AWS, Azure, or GCP
Real-time Data: Kafka
Integration: APIs, REST
"Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans."
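For context on the stack named above, here is a minimal illustrative sketch of the kind of batch ETL step this role describes (Python, PySpark, SQL-style aggregation, Delta Lake target). It assumes the delta-spark package is installed and configured; the paths, column names, and schema are hypothetical and not taken from the posting.

```python
# Illustrative PySpark ETL sketch only; all paths, columns, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("marketing-events-etl")
    # Assumes the delta-spark package is on the classpath.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Read raw marketing events (hypothetical source path and schema).
raw = spark.read.json("/data/raw/marketing_events/")

# Basic data-quality filtering and date derivation.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Simple daily aggregation per campaign.
daily = clean.groupBy("event_date", "campaign_id").agg(
    F.countDistinct("user_id").alias("unique_users"),
    F.count("*").alias("events"),
)

# Write the curated table in Delta format (hypothetical target path).
daily.write.format("delta").mode("overwrite").save("/data/curated/daily_campaign_metrics/")
```

In a production pipeline of this kind, the same transformation could be scheduled via AWS Glue or Lambda and the curated output loaded into Snowflake; this sketch only shows the core PySpark-to-Delta step.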