

ACR Technology
Sr. Data Engineer (Only W2)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer (W2) in California, requiring 6+ years of hands-on experience. Pay ranges from $56.76 to $68.35 per hour. Key skills include SQL, Azure, Databricks, healthcare data systems, and ETL/ELT pipeline design.
Country
United States
Currency
$ USD
-
Day rate
544
-
Date
February 27, 2026
Duration
More than 6 months
-
Location
On-site
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
San Francisco, CA 94114
-
Skills detailed
#DevOps #ADF (Azure Data Factory) #Migration #ADLS (Azure Data Lake Storage) #Python #ETL (Extract, Transform, Load) #SQL Server #Airflow #Data Architecture #Monitoring #Synapse #Batch #Azure Databricks #Databricks #Data Quality #SSIS (SQL Server Integration Services) #REST (Representational State Transfer) #BigQuery #Azure #Snowflake #dbt (data build tool) #Scala #Azure SQL #FHIR (Fast Healthcare Interoperability Resources) #ML (Machine Learning) #Data Governance #Data Engineering #PySpark #SQL (Structured Query Language) #Data Mart #BI (Business Intelligence) #Microsoft Power BI #Spark (Apache Spark) #REST API #Cloud #Data Integration #Kafka (Apache Kafka) #Data Warehouse
Role description
We're Hiring: Senior Data Engineer
Location: California, USA (Bay Area – Local Candidates Only)
Employment Type: W2 Only (All Visas Accepted Except H1)
Experience: 6+ Years of Genuine Hands-on Experience Required
We are seeking an experienced, hands-on Senior Data Engineer to design, build, and scale modern data platforms supporting enterprise analytics, business intelligence, and healthcare data integrations.
This role involves working with SQL Server, Azure, Databricks, Snowflake, and Python in real-world production environments, with a strong focus on cloud modernization and scalable data architecture.
Key Responsibilities
Design and maintain scalable ETL/ELT pipelines across cloud platforms
Lead migration of on-prem SQL Server workloads to Azure (ADF, Synapse, ADLS, Fabric)
Architect and optimize data warehouses, data marts, and lakehouse architectures
Work with healthcare data systems (EHR, HL7, FHIR, HIPAA-compliant environments)
Develop integrations using REST APIs, CDC, and event-driven architectures
Implement dimensional modeling and performance tuning strategies
Build batch and streaming pipelines using Spark and Kafka
Collaborate with BI, ML, and business stakeholders
Establish monitoring, data quality checks, and CI/CD processes
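As a rough illustration of the batch ETL-with-quality-checks responsibility above, here is a minimal sketch in plain Python. All record fields, function names, and validation rules are illustrative assumptions, not part of the role description; a production pipeline would run on the platforms named in this posting (ADF, Databricks, Spark) rather than hand-rolled code.

```python
# Minimal batch ETL sketch: extract -> transform -> data-quality gate -> load.
# All field names and rules below are hypothetical, for illustration only.

def extract(source_rows):
    """Extract: yield raw records from a source (stubbed as an in-memory list)."""
    yield from source_rows

def transform(record):
    """Transform: normalize names and types of a raw record."""
    return {
        "patient_id": str(record["id"]).strip(),
        "visit_date": record["date"],
        "charge_usd": round(float(record["amount"]), 2),
    }

def passes_quality_check(record):
    """Data-quality gate: require a non-empty ID and a non-negative charge."""
    return bool(record["patient_id"]) and record["charge_usd"] >= 0

def run_pipeline(source_rows):
    """Run the full step, splitting output into loadable and rejected rows."""
    loaded, rejected = [], []
    for raw in extract(source_rows):
        rec = transform(raw)
        (loaded if passes_quality_check(rec) else rejected).append(rec)
    return loaded, rejected

raw_rows = [
    {"id": " 101 ", "date": "2026-02-27", "amount": "56.76"},
    {"id": "", "date": "2026-02-27", "amount": "-1"},  # fails both checks
]
loaded, rejected = run_pipeline(raw_rows)
```

The same shape (transform plus an explicit reject path for bad records) maps directly onto Spark DataFrame filters or dbt tests in the stacks this role lists.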
Core Skills
Advanced SQL and performance tuning
Azure (ADF, Synapse, Azure SQL, ADLS, Fabric)
Databricks, Snowflake, BigQuery
Python (PySpark), Scala
Spark (batch & streaming), Kafka
ETL tools (ADF, SSIS, Airflow, dbt)
BI tools (Power BI preferred)
DevOps and data governance best practices
Nice to have: Healthcare domain experience, Databricks administration, real-time architecture experience, ML pipeline exposure.
Interested candidates (local to Bay Area only, with 6+ years of genuine experience) may share their resume at: devang@acrtechnology.com
Job Types: Full-time, Contract
Pay: $56.76 - $68.35 per hour
Work Location: In person





