

Ingress IT Services
Senior Data Engineer | W2
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 12-month W2 contract, remote in the USA. Requires 8–10 years of experience, strong skills in Python, SQL, Spark, and cloud platforms like AWS. Familiarity with data modeling and ETL/ELT is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
376
-
🗓️ - Date
February 3, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Herndon, VA
-
🧠 - Skills detailed
#Synapse #Security #Data Security #Data Warehouse #Docker #Data Architecture #Snowflake #ETL (Extract, Transform, Load) #Spark (Apache Spark) #BigQuery #GCP (Google Cloud Platform) #SQL Queries #Data Pipeline #Data Processing #Scala #Azure #Databricks #Data Science #Data Engineering #Batch #Data Modeling #Datasets #GIT #SQL (Structured Query Language) #Data Quality #Redshift #Terraform #Logging #Python #Airflow #Monitoring #ML (Machine Learning) #AWS (Amazon Web Services) #Cloud #Kafka (Apache Kafka) #DevOps
Role description
Job Title: Senior Data Engineer (Remote – USA)
Contract: W2 only.
Experience Level: 8–10 Years
Employment Type: 12-month contract
Location: Remote (Must reside in the United States)
About the Role
We are seeking an experienced Senior Data Engineer to design, build, and optimize large-scale data pipelines and analytics solutions. The ideal candidate has deep expertise in modern data engineering technologies, cloud platforms, and scalable data architecture. This role works closely with data scientists, analysts, and product teams to enable high-quality, reliable data delivery across the organization.
Key Responsibilities
• Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured datasets (a minimal orchestration sketch follows this list).
• Develop and optimize data models, warehouse schemas, and workflow orchestration.
• Architect and implement solutions on cloud platforms such as AWS, Azure, or GCP.
• Build real-time and batch data processing systems using modern tools (Spark, Kafka, Airflow, etc.).
• Ensure data quality, integrity, and governance across all pipelines.
• Implement best practices for data security, monitoring, logging, and reliability.
• Collaborate with cross-functional teams to understand data needs and deliver self-service data solutions.
• Troubleshoot performance issues, optimize SQL queries, and enhance system scalability.
• Document architecture, data flows, and process improvements.
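For context on the pipeline and orchestration work described above, here is a minimal sketch of a daily ETL job, assuming Airflow 2.4+; the DAG name, task names, and placeholder extract/transform/load callables are all hypothetical and illustrate the pattern only, not this role's actual codebase.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    """Pull raw records from the source system (placeholder)."""
    ...


def transform(**_):
    """Clean and conform the raw records to the warehouse schema (placeholder)."""
    ...


def load(**_):
    """Write the conformed records into the warehouse (placeholder)."""
    ...


# Hypothetical daily batch pipeline: extract -> transform -> load.
with DAG(
    dag_id="daily_orders_etl",   # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",           # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In practice the placeholder callables would typically hand off to Spark jobs, warehouse load statements, or similar, depending on the stack.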
Required Qualifications
• 8–10 years of experience in Data Engineering or related fields.
• Strong proficiency in:
  • Python or Scala
  • SQL (expert level)
  • Spark, Databricks, or similar distributed processing frameworks
  • Airflow, Prefect, or other orchestration tools
  • Kafka, Kinesis, or streaming technologies (a streaming sketch follows this list)
• Hands-on experience with modern data warehouses such as:
  • Snowflake, BigQuery, Redshift, or Synapse
• Deep understanding of data modeling, ETL/ELT, and data architecture best practices.
• Proven experience working with cloud platforms (AWS preferred).
• Familiarity with CI/CD, Git, Docker, Terraform, or other DevOps tools.
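As a rough illustration of the streaming tooling named above (Kafka plus Spark), the sketch below reads JSON events from a Kafka topic with Spark Structured Streaming and appends them to object storage. The broker address, topic name, event schema, and S3 paths are all hypothetical, and it assumes the spark-sql-kafka connector package is available to the Spark session.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema, for illustration only.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw events from a Kafka topic (broker and topic names are made up).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka values arrive as bytes; cast to string and parse the JSON payload.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("event"))
       .select("event.*")
)

# Append parsed events to object storage; checkpointing makes the stream restartable.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/orders/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```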
Preferred Qualifications
• Experience building Lakehouse or Data Mesh architectures.
• Background working in highly regulated or enterprise-scale environments.
• Knowledge of ML data pipelines and feature engineering tools.
• Strong problem-solving skills and ability to mentor junior engineers.