

E-Solutions
Azure Data Tech Lead
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Tech Lead in Alpharetta, Georgia, with a contract length of "unknown" and a pay rate of "unknown." Key skills include Azure, Databricks, Spark (Python), SQL, and data engineering experience in enterprise environments.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
November 12, 2025
Duration
Unknown
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
Alpharetta, GA
Skills detailed
#Azure Data Factory #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Batch #GIT #Data Ingestion #Data Engineering #BI (Business Intelligence) #PySpark #Apache Spark #Data Lakehouse #Delta Lake #Terraform #Azure Databricks #Data Processing #Kafka (Apache Kafka) #Spark (Apache Spark) #ACID (Atomicity, Consistency, Isolation, Durability) #Data Lake #Data Science #Security #Azure #PostgreSQL #Data Quality #ADLS (Azure Data Lake Storage) #Data Pipeline #Cloud #Data Architecture #DevOps #ML (Machine Learning) #Scala #Databricks #Data Governance #Python #Azure DevOps #ADF (Azure Data Factory) #Infrastructure as Code (IaC)
Role description
Job Title: Azure Data Tech Lead
Location: Alpharetta, Georgia
Core Skills: Azure, Databricks, ADLS, Spark (Python), SQL, ETL, Delta Lake, PostgreSQL, Data Architecture, Batch & Real-time Processing, Data Modelling
Overview
We are looking for an experienced Senior/Lead Data Engineer with expertise in designing and delivering scalable, high-performing data solutions on the Azure ecosystem. The ideal candidate will have deep hands-on experience with Databricks, Spark, modern data lakehouse architectures, data modelling, and both batch and real-time data processing. You will be responsible for driving end-to-end data engineering initiatives, influencing architectural decisions, and ensuring robust, high-quality data pipelines.
Key Responsibilities
• Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
• Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams.
• Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering.
• Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
• Design and implement data models, including conceptual, logical, and physical models for analytics and operational workloads.
• Work with relational and lakehouse systems including PostgreSQL and Delta Lake.
• Define and enforce best practices in data governance, data quality, security, and architecture.
• Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
• Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
• Mentor junior engineers and contribute to building engineering standards and reusable components.
Required Skills & Experience
• Hands-on data engineering experience in enterprise environments.
• Strong expertise in Azure services, especially Azure Databricks, Azure Functions, and Azure Data Factory (preferred).
• Advanced proficiency in Apache Spark with Python (PySpark).
• Strong command of SQL, query optimization, and performance tuning.
• Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
• Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
• Strong experience in data modelling (normalized, dimensional, and lakehouse modelling).
• Experience with both batch processing and real-time/streaming data (Kafka, Event Hubs, or similar).
• Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
• Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
• Strong analytical, problem-solving, and communication skills.
• Ability to collaborate with cross-functional teams and lead technical discussions.
Preferred Skills
• Experience with CI/CD tools such as Azure DevOps and Git.
• Familiarity with IaC tools (Terraform, ARM).
• Exposure to data governance and cataloging tools (Azure Purview).
• Experience supporting machine learning or BI workloads on Databricks.
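For readers less familiar with the posting's vocabulary, the ETL (Extract, Transform, Load) pattern that the role centers on can be sketched in a few lines. This is an illustrative sketch only, in plain Python with invented record and field names; a pipeline at the scale this role describes would run as PySpark on Databricks, reading from ADLS and writing to Delta Lake.

```python
# Minimal sketch of the Extract -> Transform -> Load pattern.
# Record and field names here are invented for illustration; a real
# pipeline would source from ADLS and execute on Spark.

def extract(raw_rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(raw_rows)

def transform(rows):
    """Transform: apply data-quality rules and normalize types."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # data-quality rule: drop records missing an amount
        cleaned.append({
            "region": row["region"].strip().upper(),  # normalize the key
            "amount": float(row["amount"]),           # cast string to float
        })
    return cleaned

def load(rows):
    """Load: aggregate into a target structure (here, totals per region)."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"region": " east ", "amount": "10.5"},
    {"region": "WEST", "amount": None},   # dropped by the quality rule
    {"region": "East", "amount": "4.5"},
]
print(load(transform(extract(raw))))  # {'EAST': 15.0}
```

The same three stages map directly onto the Databricks stack named above: extract via ingestion from ADLS, transform as Spark DataFrame operations, and load as a write to a Delta Lake table.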





