

Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer, onsite in Virginia or Frisco, TX, for 6 months on a W2 contract. It requires 7–10 years of experience and strong hands-on skills in Databricks, Microsoft Fabric, Kafka, and Azure Event Hub, with a focus on data pipeline development and streaming engineering.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
416
-
🗓️ - Date
May 7, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Virginia, United States
-
🧠 - Skills detailed
#Automated Testing #Delta Lake #Scripting #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Data Ingestion #API (Application Programming Interface) #Python #Data Quality #SQL (Structured Query Language) #Leadership #Batch #Azure Data Factory #GIT #Migration #Data Engineering #ADF (Azure Data Factory) #Documentation #Apache Airflow #Airflow #Data Pipeline #Spark (Apache Spark) #Azure #Spark SQL #DevOps #Databricks #PySpark #Monitoring
Role description
Strictly W2 Only - No C2C
Job Title: Data Engineer
Location: Virginia / Frisco, TX (Onsite) - Local candidates only
Duration: 6 Months
Work Type: Onsite
⚠️ MS Teams video prescreen mandatory
Key Responsibilities
Data Pipeline Development
• Design and build production-grade data pipelines using: Databricks, PySpark, Spark SQL
• Develop Bronze/Silver/Gold data products
• Build enterprise-scale data ingestion pipelines
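The Bronze/Silver/Gold layering above is the medallion pattern. A minimal sketch of the layer responsibilities, in plain Python: in production these would be PySpark jobs writing Delta tables, and all names here (`orders_api`, the record fields) are illustrative, not from the posting.

```python
# Minimal sketch of medallion (Bronze/Silver/Gold) layering.
# In production these would be PySpark jobs writing Delta tables;
# plain dicts and lists stand in for DataFrames. All names illustrative.

def to_bronze(raw_records):
    """Bronze: land raw data as-is, tagging each record with ingest metadata."""
    return [{"payload": r, "_source": "orders_api"} for r in raw_records]

def to_silver(bronze):
    """Silver: clean and conform - drop malformed records, normalize types."""
    silver = []
    for rec in bronze:
        p = rec["payload"]
        if "order_id" in p and p.get("amount") is not None:
            silver.append({"order_id": str(p["order_id"]),
                           "amount": float(p["amount"])})
    return silver

def to_gold(silver):
    """Gold: business-level aggregate - total amount per order."""
    totals = {}
    for row in silver:
        totals[row["order_id"]] = totals.get(row["order_id"], 0.0) + row["amount"]
    return totals

raw = [{"order_id": 1, "amount": "10.5"},
       {"order_id": 1, "amount": 4.5},
       {"amount": 3.0},                  # malformed: no order_id
       {"order_id": 2, "amount": None}]  # malformed: null amount
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'1': 15.0}
```

The point of the pattern is that each layer has one job: Bronze preserves the raw feed, Silver enforces schema and quality, Gold serves business consumers.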
Streaming & Event-Driven Engineering
• Implement ingestion using: Kafka, Azure Event Hub, API integrations, Batch ingestion frameworks
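Kafka and Event Hub consumers typically deliver at-least-once, so ingestion code must tolerate redelivered events. A hedged sketch of the idempotency logic, in plain Python: a real pipeline would use a Kafka/Event Hub client or Spark Structured Streaming with a checkpoint store, and the in-memory `seen` set below is only a stand-in for that.

```python
# Sketch of idempotent event ingestion under at-least-once delivery,
# as needed for Kafka or Azure Event Hub consumers. The in-memory "seen"
# set stands in for a durable checkpoint/offset store; names illustrative.

def ingest(events, sink, seen):
    """Append each event to the sink exactly once, keyed by event_id."""
    for ev in events:
        if ev["event_id"] in seen:  # duplicate redelivery - skip
            continue
        seen.add(ev["event_id"])
        sink.append(ev)

sink, seen = [], set()
batch_1 = [{"event_id": "a", "value": 1}, {"event_id": "b", "value": 2}]
batch_2 = [{"event_id": "b", "value": 2}, {"event_id": "c", "value": 3}]  # "b" redelivered
ingest(batch_1, sink, seen)
ingest(batch_2, sink, seen)
print([e["event_id"] for e in sink])  # ['a', 'b', 'c']
```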
Microsoft Fabric & Lakehouse
• Work with: Microsoft Fabric, OneLake, Fabric IQ, Delta Lake, Unity Catalog
• Apply medallion architecture standards
Data Reliability & KTLO
• Own: Monitoring, Alerting, Incident response, Pipeline reliability improvements
Data Quality & Governance
• Implement: Data quality checks, SLA adherence, Transformation validations, Documentation & lineage tracking
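The quality checks listed above usually reduce to a few concrete assertions per table: null checks on required columns, range/type validation, and a row-count reconciliation against the source. A plain-Python sketch (in production this might be Delta Live Tables expectations or PySpark assertions; the column names and threshold are illustrative):

```python
# Sketch of common data-quality checks: required-column null checks,
# a range validation, and source-vs-target row-count reconciliation.
# Column names and the amount threshold are illustrative assumptions.

def check_rows(rows, required, source_count):
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                failures.append(f"row {i}: null {col}")
        amt = row.get("amount")
        if amt is not None and not (0 <= amt <= 1_000_000):
            failures.append(f"row {i}: amount out of range")
    if len(rows) != source_count:
        failures.append(f"row count mismatch: {len(rows)} vs {source_count}")
    return failures

rows = [{"order_id": "1", "amount": 15.0},
        {"order_id": None, "amount": 3.0}]
print(check_rows(rows, required=["order_id", "amount"], source_count=3))
# ['row 1: null order_id', 'row count mismatch: 2 vs 3']
```

Returning a list of failures (rather than raising on the first) lets a pipeline log every SLA violation in one pass before deciding whether to quarantine the batch.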
DevOps & CI/CD
• Support: Git workflows, CI/CD pipelines, Automated testing, Incident runbooks
Leadership & Collaboration
• Mentor junior/mid-level data engineers
• Participate in architecture and pipeline design reviews
Must Haves
• 7–10 years of Data Engineering experience
• Strong hands-on expertise with: Databricks, Microsoft Fabric, PySpark / Spark SQL
• Experience building: Production-grade data pipelines, Data products, Streaming/event-driven ingestion systems
• Strong experience with: Kafka, Azure Event Hub
• Hands-on knowledge of: Delta Lake, Unity Catalog, Medallion Architecture
• Experience with: Azure Data Factory / Airflow, CI/CD & DevOps practices
• Strong SQL skills for: Validation, Transformations, Source system analysis
⚠️ Candidates missing any of Microsoft Fabric, Databricks, Kafka, or Azure Event Hub experience will be rejected.
Nice to Have
• Apache Airflow
• Azure Data Factory
• Delta Live Tables
• Spark Structured Streaming
• ERP/TMS/WMS integrations
• Supply chain or logistics domain experience
• Python utility scripting
• Legacy platform migration experience
