

Danta Technologies
Data Engineer Lead
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer Lead, offering a remote contract position with compensation based on experience. Requires 10+ years in IT, 4+ years in data engineering, and expertise in Azure Data Factory, Databricks, and Python.
Country: United States
Currency: $ USD
Day rate: 440
Date: April 11, 2026
Duration: Unknown
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: Fremont, CA
Skills detailed: #Deployment #Visualization #Data Engineering #Apache Spark #ETL (Extract, Transform, Load) #Monitoring #Python #Data Warehouse #Data Architecture #Batch #Tableau #Azure Databricks #Data Ingestion #Databricks #PySpark #Data Modeling #Data Quality #OBIEE (Oracle Business Intelligence Enterprise Edition) #ADF (Azure Data Factory) #Migration #Data Governance #Azure DevOps #Amazon Redshift #BI (Business Intelligence) #Talend #Data Management #Metadata #DevOps #Snowflake #Compliance #Microsoft Power BI #Spark (Apache Spark) #Computer Science #Scala #Informatica #Kafka (Apache Kafka) #Azure Data Factory #Data Orchestration #Data Pipeline #Data Lake #Data Processing #Spark SQL #Azure #GitHub #Jenkins #Data Science #Redshift #Synapse #Azure Event Hubs #Security #SQL (Structured Query Language) #Big Data #Data Lifecycle #Documentation #Azure SQL
Role description
Employment Eligibility Statement
Due to specific project and client requirements, this position is open to U.S. Citizens and U.S. Lawful Permanent Residents (Green Card holders). Sponsorship is not available at this time.
Danta Technologies evaluates all candidates in compliance with the Immigration and Nationality Act (INA) and EEOC guidelines.
All hiring decisions are made without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, age, disability, veteran status, or any other protected characteristic.
Note: "Open to candidates residing anywhere in the United States"
Join a leading client in the Semiconductor & Manufacturing industry and drive enterprise-scale data transformation initiatives!
Core Focus
Designing scalable data architectures, building ETL/ELT pipelines, working with big data tools, and ensuring data quality, governance, and performance.
The client is looking for a senior-level Data Lead who can own the entire data platform on Azure: not just build pipelines, but design, lead, and optimize end-to-end data solutions.
Mandatory Skills: Azure Data Factory (ADF), Azure Databricks & PySpark, Azure Synapse, Azure SQL, Python, and Spark SQL
Experience Requirements
• 10+ years of IT experience in development or architecture roles
• 4+ years of proven experience in data engineering and architecture roles (with Azure Data Factory, Azure Databricks & PySpark, Azure Synapse, and Azure SQL)
Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Skill Requirements
• Strong experience with ETL/ELT tools such as ADF, Informatica, and Talend, and data warehousing technologies such as Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, and Google BigQuery.
• Strong hands-on experience with Azure Data Factory (ADF) for data orchestration (building and managing pipelines), Azure Databricks for big data processing and analytics, and Apache Spark for distributed data processing.
• Adept with big data tools (Databricks, Spark, etc.).
• Experience with Power BI, Tableau, OBIEE, or similar BI tools.
• Proficiency in PySpark, Python, and Spark SQL.
• Experience with CI/CD pipelines for data platforms.
• Solid understanding of data warehouse best practices, development standards, and methodologies.
• Strong expertise in SQL and relational data modeling.
• Experience designing data lakes, data warehouses, or Lakehouse architectures.
• Good understanding of data governance, metadata management, and security best practices.
• Solid understanding of distributed data processing and performance tuning.
• Experience with ETL/ELT patterns and best practices.
• Strong analytical and problem-solving skills, and the ability to work in a fast-paced, dynamic environment.
• Excellent communication and documentation skills.
Key Responsibilities
• Design and implement end-to-end data architecture for batch and real-time processing, delivering enterprise-grade data solutions using Azure and Microsoft Fabric.
• Design, build, and optimize ETL/ELT pipelines for ingestion, transformation, and publishing using ADF, Spark, and Databricks.
• Architect and optimize data pipelines using Azure Data Factory (ADF).
• Develop and manage scalable data processing frameworks using Databricks and Apache Spark.
• Develop efficient data transformation logic using PySpark and Spark SQL.
• Build reusable, high-performance data models for analytics and reporting.
• Develop and maintain data ingestion, transformation, and orchestration workflows.
• Ensure data quality, consistency, security, and governance across platforms.
• Define and enforce data engineering best practices, including CI/CD, versioning, testing, and monitoring.
• Mentor and guide data engineers, fostering a culture of innovation and excellence.
• Collaborate with data engineers, analysts, data scientists, and business stakeholders.
• Evaluate and integrate new tools and technologies to enhance data capabilities.
• Own the data lifecycle, from ingestion and transformation to consumption and visualization.
• Support production deployments, monitoring, and troubleshooting of data pipelines.
• Document data architecture, processes, and best practices.
Nice To Have
• DevOps & CI/CD: Azure DevOps, GitHub Actions, Jenkins
• Real-time streaming (Azure Event Hubs, Kafka), Microsoft Fabric
Note: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Benefits: Danta offers all W2 employees a compensation package that is competitive in the industry. It consists of competitive pay, the option to elect healthcare insurance (dental, medical, vision), major holidays, and paid sick leave as required by state law.
The rate/salary range depends on numerous factors, including qualifications, experience, and location.





