

Mindlance
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Richmond, VA, and McLean, VA, lasting 12 months at a day rate of $560. It requires 5+ years of data engineering experience and expertise in AWS, Python, Snowflake, and Databricks; banking/financial experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
November 20, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
McLean, VA
-
🧠 - Skills detailed
#Databricks #Snowflake #Scala #ETL (Extract, Transform, Load) #Data Ingestion #Data Analysis #S3 (Amazon Simple Storage Service) #Python #Apache Spark #Data Migration #Data Processing #Compliance #Security #Data Engineering #Data Modeling #Data Governance #Data Quality #SQL (Structured Query Language) #DevOps #Spark (Apache Spark) #Migration #Agile #Cloud #Data Pipeline #AWS (Amazon Web Services) #Java #Lambda (AWS Lambda)
Role description
Job Role: Data Engineer
Location: Richmond, VA and McLean, VA (3 days hybrid)
Duration: 12 Months
Top Skills:
• AWS
• Python
• Snowflake
• Databricks
Banking and financial industry experience is a plus.
Job Description
• We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable ETL pipelines for enterprise-level data processing. The role involves working closely with cross-functional teams to manage data ingestion, transformation, and integration within a secure cloud environment.
• Key Responsibilities:
• Design, develop, and maintain ETL pipelines for large-scale data ingestion, transformation, and loading (a minimal sketch follows this list).
• Handle file extraction and loading processes—receiving client files and securely transferring them into the organization’s ecosystem.
• Develop robust and efficient data processing solutions using Java and Apache Spark.
• Utilize AWS services such as Glue, Lambda, and Step Functions to orchestrate and automate data workflows.
• Ensure data quality, integrity, and compliance with internal security and governance standards.
• Collaborate with data analysts, architects, and DevOps teams to deliver high-performing data solutions.
• Troubleshoot, optimize, and enhance existing data pipelines for performance and scalability.
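The responsibilities above center on Spark-based ETL over files landing in the cloud. As a rough illustration only: the posting calls for Java and Apache Spark, but since Python is also a listed skill, here is a minimal PySpark sketch of the extract-transform-load pattern described. The bucket names, paths, and column names are hypothetical placeholders, not details from the role.

```python
# Minimal PySpark ETL sketch: read client files from S3, apply simple
# transformations, and write curated Parquet back out.
# Bucket names, paths, and columns are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-file-etl").getOrCreate()

# Extract: raw client files landed in an S3 prefix (hypothetical location)
raw = spark.read.option("header", True).csv("s3://example-landing-bucket/client-files/")

# Transform: normalize names, cast types, stamp the load date, drop bad rows
cleaned = (
    raw.withColumnRenamed("acct_id", "account_id")
       .withColumn("txn_amount", F.col("txn_amount").cast("decimal(18,2)"))
       .withColumn("load_date", F.current_date())
       .filter(F.col("account_id").isNotNull())
)

# Load: write partitioned Parquet to a curated zone for downstream consumers
cleaned.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/transactions/"
)
```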
• Required Skills & Experience:
• 5+ years of experience in data engineering or related roles.
• Strong expertise in Java and Apache Spark for data transformation and processing.
• Hands-on experience with AWS cloud services (Glue, Lambda, Step Functions, S3, EMR, etc.); an orchestration sketch follows this list.
• Proven experience in ETL development, data migration, and file-based data ingestion.
• Strong understanding of data modeling, data governance, and best practices in data pipeline design.
• Excellent problem-solving and communication skills.
• Ability to work onsite in McLean, VA (preferred) or Wilmington, DE (hybrid setup).
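For the AWS orchestration pieces (Glue, Lambda, Step Functions), a common pattern is an event-driven hand-off: a Lambda fires when a client file lands in S3 and starts the downstream job. The sketch below is a hedged illustration using standard boto3 calls; the job name, state machine ARN, and anything beyond a standard S3 put event are assumptions, not details from the posting.

```python
# Hypothetical Lambda handler: when a client file lands in S3, start a Glue
# job (or a Step Functions workflow) to validate, transform, and load it.
import json
import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    # Standard S3 put event: pull out the bucket and key of the new file
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start a Glue ETL job, passing the file location as job arguments
    glue.start_job_run(
        JobName="client-file-etl",  # placeholder Glue job name
        Arguments={"--source_bucket": bucket, "--source_key": key},
    )

    # Alternatively, start a Step Functions workflow that chains the steps
    # sfn.start_execution(
    #     stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:client-file-pipeline",
    #     input=json.dumps({"bucket": bucket, "key": key}),
    # )

    return {"status": "started", "bucket": bucket, "key": key}
```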
Nice to Have:
• Experience with CI/CD pipelines for data workflows.
• Knowledge of Python or SQL for additional data transformation tasks (a Snowflake load sketch follows this list).
• Familiarity with Agile delivery methodology.
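Since Snowflake is listed among the top skills and Python/SQL appear under the nice-to-haves, a typical final step is loading curated files into Snowflake. The following is only a sketch: the connection parameters, stage, and table names are placeholders, and real credentials would come from a secrets manager rather than code.

```python
# Hypothetical Python/SQL step: COPY curated Parquet files into a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",    # placeholder
    user="example_user",          # placeholder
    password="example_password",  # placeholder; use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CURATED",
)

try:
    cur = conn.cursor()
    # Assumes an external stage pointing at the curated S3 prefix
    cur.execute("""
        COPY INTO CURATED.TRANSACTIONS
        FROM @CURATED_STAGE/transactions/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```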
EEO: “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of – Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”