Azure Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer in Frisco, TX / Overland Park, KS, for 6 months, offering W2 pay. It requires 7+ years of Data Engineering experience and strong skills in Databricks, PySpark, SQL, Snowflake, and Azure.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
May 7, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Texas, United States
-
🧠 - Skills detailed
#Azure #Snowflake #Scala #SQL (Structured Query Language) #Datasets #PySpark #Spark (Apache Spark) #Spark SQL #Data Engineering #ADF (Azure Data Factory) #Storage #ETL (Extract, Transform, Load) #Databricks #Azure Data Factory #Monitoring #Compliance #Data Governance
Role description
Strictly W2 Only - No C2C
Job Title: Data Engineer (Azure | Databricks | Snowflake)
Location: Frisco, TX / Overland Park, KS (Hybrid – 3 Days Onsite) - Only Locals
Duration: 6 Months
Work Type: Hybrid
⚠️ Video prescreen via MS Teams

Key Responsibilities

Data Engineering & Pipeline Development
• Build scalable ingestion pipelines using Databricks, PySpark, and SQL
• Process third-party enrichment data, identity/attribute datasets, CAPI integrations, and rewards/promotions data

Databricks & Spark Engineering
• Work extensively with Databricks notebooks/jobs, Spark troubleshooting, and the medallion architecture
• Optimize partitioning, shuffle/skew issues, and overall Spark performance

Snowflake & Data Warehousing
• Build governed datasets in Snowflake
• Support analytics and warehouse workloads
• Design scalable data models

Reliability & Operations
• Implement data validation, idempotency, replay/backfill strategies, and deduplication logic
• Support monitoring, alerting, and SLA adherence

Governance & Compliance
• Apply Unity Catalog, lineage tracking, access controls, and PII governance

Cost Optimization
• Optimize cluster sizing, compute/storage usage, and cost vs. SLA trade-offs

Must Haves
• 7+ years of Data Engineering experience
• Strong hands-on experience with Databricks, PySpark, SQL, Snowflake, and Azure
• Deep understanding of Spark fundamentals, partitioning, shuffle optimization, and Spark log troubleshooting
• Experience with Azure Data Factory (ADF), ETL/ELT pipelines, and data governance
• Hands-on knowledge of Unity Catalog, access controls, and PII handling
• Strong ownership mindset with the ability to work independently
⚠️ Missing Databricks + Spark + Snowflake + Azure → Rejection

Nice to Have
• Delta Live Tables (DLT)
• Streaming/event-driven ingestion
• Identity resolution concepts
• CAPI integrations
• SLA monitoring dashboards
• Config-driven export frameworks
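To make the "deduplication logic" and "idempotency" requirements above concrete, here is a minimal plain-Python sketch of keep-latest-per-key deduplication, the kind of logic a candidate would typically express in PySpark with a window function or dropDuplicates. All field names (`record_id`, `event_ts`) are illustrative assumptions, not taken from the listing.

```python
def dedup_latest(records, key_field, ts_field):
    """Keep only the most recent record per key.

    Re-running this over a replayed/backfilled batch yields the same
    result, which is the idempotency property the listing asks for.
    Field names are hypothetical; real pipelines would do the same
    thing in PySpark over a Delta table.
    """
    latest = {}
    for rec in records:
        key = rec[key_field]
        # Overwrite only when this record is strictly newer.
        if key not in latest or rec[ts_field] > latest[key][ts_field]:
            latest[key] = rec
    return list(latest.values())


events = [
    {"record_id": "a", "event_ts": 1, "value": "stale"},
    {"record_id": "a", "event_ts": 2, "value": "fresh"},
    {"record_id": "b", "event_ts": 1, "value": "only"},
]
deduped = dedup_latest(events, "record_id", "event_ts")
```

In Spark the equivalent is a `row_number()` over a window partitioned by the key and ordered by the timestamp descending, keeping row 1; the in-memory version above just shows the contract.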