

Jobs via Dice
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Manassas Park, VA (Hybrid), offering a full-time contract. Requires 10+ years of data engineering experience, expertise in Python or Scala, cloud platforms, and data warehousing concepts. Expected duration is over 6 months.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
December 4, 2025
Duration
More than 6 months
Location
Hybrid
Contract
W2 Contractor
Security
Unknown
Location detailed
Manassas Park, VA
Skills detailed
#Data Storage #Storage #Python #Data Quality #Kafka (Apache Kafka) #Distributed Computing #Data Warehouse #Agile #Data Science #Spark (Apache Spark) #SQL (Structured Query Language) #BigQuery #Data Catalog #Kubernetes #Scala #Synapse #Data Pipeline #Apache Spark #Data Lake #GCP (Google Cloud Platform) #ADF (Azure Data Factory) #Compliance #Airflow #Azure #Redshift #Azure Data Factory #Data Security #DevOps #Hadoop #ETL (Extract, Transform, Load) #Jenkins #MLflow #Git #Snowflake #Datasets #Business Analysis #Batch #Security #Docker #Scrum #Cloud #BI (Business Intelligence) #AWS (Amazon Web Services) #Data Engineering #Databricks #ML (Machine Learning)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Raas Infotek LLC, is seeking the following. Apply via Dice today!
Job Title: Sr. Data Engineer
Contract: Full Time W2
Location: Manassas Park, VA (Hybrid)
Job Summary
We are seeking a highly skilled Senior Data Engineer to join our team in Manassas, VA. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, cloud-based data solutions, and analytical infrastructure to support enterprise data initiatives. This role requires hands-on engineering expertise, strong problem-solving skills, and the ability to collaborate with cross-functional teams.
Responsibilities
• Design and implement scalable and secure data pipelines for batch and real-time processing.
• Architect, build, and maintain data lake and data warehouse solutions across Azure/AWS/Google Cloud Platform cloud environments.
• Develop ETL/ELT processes using tools such as Apache Spark, Databricks, Airflow, Hadoop, Kafka, and cloud-native services.
• Optimize data storage and retrieval performance to support advanced analytics, BI dashboards, and machine learning workloads.
• Work closely with Data Scientists, Architects, and Business Analysts to understand data requirements and translate them into technical solutions.
• Implement CI/CD pipelines for data engineering solutions using Git, Jenkins, or similar tools.
• Ensure data quality, governance, security, and compliance best practices.
• Troubleshoot data pipeline issues, perform root cause analysis, and continuously improve system reliability.
Required Skills
• 10+ years of hands-on data engineering experience.
• Strong expertise in Python or Scala for data development.
• Proficiency in SQL and performance tuning for large datasets.
• Deep understanding of distributed computing frameworks: Apache Spark, Hadoop, Hive, Kafka.
• Experience with cloud platforms (Azure preferred; AWS or Google Cloud Platform acceptable).
• Strong knowledge of data warehousing concepts (Snowflake, Redshift, Synapse, BigQuery).
• Experience with orchestration tools such as Airflow, Azure Data Factory, or Glue.
• Hands-on experience with CI/CD, DevOps practices, and containerization (Docker/Kubernetes).
• Experience with Databricks or MLflow.
• Knowledge of streaming technologies (Kafka, Kinesis).
• Exposure to data security frameworks and data catalog tools.
• Experience working in Agile/Scrum environments.
• Familiarity with machine learning pipelines and MLOps concepts.