

E-Solutions
Azure Data Tech Lead
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Tech Lead in Alpharetta, Georgia, with a contract length of "unknown" and a pay rate of "unknown." Key skills include Azure, Databricks, Spark (Python), SQL, and data engineering experience in enterprise environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 12, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#Azure Data Factory #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Batch #GIT #Data Ingestion #Data Engineering #BI (Business Intelligence) #PySpark #Apache Spark #Data Lakehouse #Delta Lake #Terraform #Azure Databricks #Data Processing #Kafka (Apache Kafka) #Spark (Apache Spark) #ACID (Atomicity, Consistency, Isolation, Durability) #Data Lake #Data Science #Security #Azure #PostgreSQL #Data Quality #ADLS (Azure Data Lake Storage) #Data Pipeline #Cloud #Data Architecture #DevOps #ML (Machine Learning) #Scala #Databricks #Data Governance #Python #Azure DevOps #ADF (Azure Data Factory) #Infrastructure as Code (IaC)
Role description
Job Title: Azure Data Tech Lead
Location: Alpharetta, Georgia
Core Skills: Azure, Databricks, ADLS, Spark (Python), SQL, ETL, Delta Lake, PostgreSQL, Data Architecture, Batch & Real-time Processing, Data Modelling
Overview
We are looking for an experienced Senior/Lead Data Engineer with expertise in designing and delivering scalable, high-performing data solutions on the Azure ecosystem. The ideal candidate will have deep hands-on experience with Databricks, Spark, modern data lakehouse architectures, data modelling, and both batch and real-time data processing. You will be responsible for driving end-to-end data engineering initiatives, influencing architectural decisions, and ensuring robust, high-quality data pipelines.
Key Responsibilities
• Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
• Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams.
• Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering.
• Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
• Design and implement data models, including conceptual, logical, and physical models for analytics and operational workloads.
• Work with relational and lakehouse systems including PostgreSQL and Delta Lake.
• Define and enforce best practices in data governance, data quality, security, and architecture.
• Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
• Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
• Mentor junior engineers and contribute to building engineering standards and reusable components.
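The batch pipeline work outlined in the responsibilities above follows the common bronze → silver → gold lakehouse layering. As a rough, hypothetical sketch of that flow (plain Python lists of dicts stand in for Spark DataFrames, and all record fields and layer names are invented for illustration):

```python
# Illustrative bronze -> silver -> gold layering of a lakehouse pipeline.
# Plain Python stands in for Spark/Delta Lake; fields are hypothetical.

def bronze_ingest(raw_rows):
    """Land raw records as-is, tagging each with an ingestion marker."""
    return [dict(row, _ingested=True) for row in raw_rows]

def silver_clean(bronze_rows):
    """Apply data-quality rules: drop rows missing a key, normalize types."""
    cleaned = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # a real pipeline would quarantine these rows
        cleaned.append({**row, "amount": float(row["amount"])})
    return cleaned

def gold_aggregate(silver_rows):
    """Produce an analytics-ready aggregate: total amount per customer."""
    totals = {}
    for row in silver_rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = [
    {"order_id": 1, "customer": "a", "amount": "10.5"},
    {"order_id": None, "customer": "b", "amount": "3.0"},  # fails quality check
    {"order_id": 2, "customer": "a", "amount": "4.5"},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'a': 15.0}
```

In a Databricks setting each layer would typically be a Delta table rather than an in-memory list, with the same separation of ingestion, cleansing, and aggregation concerns.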
Required Skills & Experience
• Hands-on data engineering experience in enterprise environments.
• Strong expertise in Azure services, especially Azure Databricks, Azure Functions, and Azure Data Factory (preferred).
• Advanced proficiency in Apache Spark with Python (PySpark).
• Strong command of SQL, including query optimization and performance tuning.
• Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
• Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
• Strong experience in data modelling (normalized, dimensional, lakehouse modelling).
• Experience in both batch processing and real-time/streaming data (Kafka, Event Hub, or similar).
• Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
• Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
• Strong analytical, problem-solving, and communication skills.
• Ability to collaborate with cross-functional teams and lead technical discussions.
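The dimensional modelling and SQL tuning skills listed above can be pictured with a tiny star schema: one fact table joined to one dimension, with an index on the join key as a typical first tuning step. This is only a sketch; SQLite stands in for the warehouse, and the table and column names are invented:

```python
import sqlite3

# Minimal star-schema sketch: fact_sales joins dim_product on product_key.
# SQLite is a stand-in; all names here are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        qty INTEGER
    );
    -- Indexing the join key is a common first performance-tuning step.
    CREATE INDEX ix_fact_product ON fact_sales(product_key);
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widget"), (2, "gadget")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 3), (11, 1, 2), (12, 2, 5)])

# A typical analytics query: aggregate the fact, label via the dimension.
rows = con.execute("""
    SELECT p.name, SUM(f.qty) AS total_qty
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 5), ('widget', 5)]
```

On a lakehouse platform the same shape applies, with Delta tables in place of SQLite tables and techniques such as partitioning or Z-ordering playing the role of the index.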
Preferred Skills
• Experience with CI/CD tools such as Azure DevOps and Git.
• Familiarity with IaC tools (Terraform, ARM).
• Exposure to data governance and cataloging tools (Azure Purview).
• Experience supporting machine learning or BI workloads on Databricks.






