ACR Technology

Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer in California, USA (onsite) at $55.00 - $65.00 per hour. Requires 6+ years of hands-on experience with advanced SQL, Azure, Databricks, Snowflake, Python, and healthcare data systems. W2 only; local Bay Area candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
πŸ—“οΈ - Date
March 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
San Francisco, CA 94114
-
🧠 - Skills detailed
#Azure Databricks #BI (Business Intelligence) #Kafka (Apache Kafka) #Data Warehouse #DevOps #Data Governance #ADF (Azure Data Factory) #Airflow #Snowflake #BigQuery #Scala #PySpark #ADLS (Azure Data Lake Storage) #Microsoft Power BI #Spark (Apache Spark) #Batch #Cloud #dbt (data build tool) #Data Quality #ML (Machine Learning) #SQL (Structured Query Language) #Synapse #FHIR (Fast Healthcare Interoperability Resources) #SSIS (SQL Server Integration Services) #Migration #ETL (Extract, Transform, Load) #Azure #Data Mart #Data Engineering #Data Integration #REST (Representational State Transfer) #REST API #Python #SQL Server #Data Architecture #Databricks #Monitoring #Azure SQL
Role description
We’re Hiring: Lead Data Engineer
Location: California, USA (Bay Area – Local Candidates Only) - Onsite
Employment Type: W2 Only - NO H1
Experience: 6+ Years of Genuine Hands-on Experience Required

We are seeking an experienced, hands-on Lead Data Engineer to design, build, and scale modern data platforms supporting enterprise analytics, business intelligence, and healthcare data integrations. This role involves working with SQL Server, Azure, Databricks, Snowflake, and Python in real-world production environments, with a strong focus on cloud modernization and scalable data architecture.

Key Responsibilities:
• Design and maintain scalable ETL/ELT pipelines across cloud platforms
• Lead migration of on-prem SQL Server workloads to Azure (ADF, Synapse, ADLS, Fabric)
• Architect and optimize data warehouses, data marts, and lakehouse architectures
• Work with healthcare data systems (EHR, HL7, FHIR, HIPAA-compliant environments)
• Develop integrations using REST APIs, CDC, and event-driven architectures
• Implement dimensional modeling and performance tuning strategies
• Build batch and streaming pipelines using Spark and Kafka
• Collaborate with BI, ML, and business stakeholders
• Establish monitoring, data quality checks, and CI/CD processes

Core Skills:
• Advanced SQL and performance tuning
• Azure (ADF, Synapse, Azure SQL, ADLS, Fabric)
• Databricks, Snowflake, BigQuery
• Python (PySpark), Scala
• Spark (batch & streaming), Kafka
• ETL tools (ADF, SSIS, Airflow, dbt)
• BI tools (Power BI preferred)
• DevOps and data governance best practices

Nice to have: Healthcare domain experience, Databricks administration, real-time architecture experience, ML pipeline exposure.

Interested candidates (local to the Bay Area only, with 6+ years of genuine experience) may share their resume at: devang@acrtechnology.com

Pay: $55.00 - $65.00 per hour
Work Location: In person