

Envision Technology Solutions
Azure Data Tech Lead | Alpharetta, Georgia or Berkeley Heights, NJ (onsite)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Tech Lead, a long-term contract position based in Alpharetta, Georgia or Berkeley Heights, NJ. It requires 10+ years of data engineering experience and expertise in Azure, Databricks, Spark (Python), SQL, and data architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Berkeley Heights, NJ
-
🧠 - Skills detailed
#Azure DevOps #Data Lake #Cloud #Data Quality #Infrastructure as Code (IaC) #Spark (Apache Spark) #SQL (Structured Query Language) #Data Lakehouse #ADLS (Azure Data Lake Storage) #Data Processing #Kafka (Apache Kafka) #Scala #Data Pipeline #PostgreSQL #Delta Lake #Terraform #Data Governance #DevOps #ML (Machine Learning) #Databricks #Data Engineering #Data Science #Azure Data Factory #GIT #PySpark #ADF (Azure Data Factory) #Azure #Azure Databricks #ACID (Atomicity, Consistency, Isolation, Durability) #Data Ingestion #Batch #Data Architecture #BI (Business Intelligence) #Python #Security #Apache Spark #ETL (Extract, Transform, Load)
Role description
Dear Applicant,
Please let me know if you are interested.
Job Title: Azure Data Tech Lead
Location: Alpharetta, Georgia or Berkeley Heights, NJ (onsite)
Hire Type: Long-term Contract
Core Skills: Azure, Databricks, ADLS, Spark (Python), SQL, ETL, Delta Lake, PostgreSQL, Data Architecture, Batch & Real-time Processing, Data Modelling
Overview
We are looking for an experienced Senior/Lead Data Engineer with 10+ years of expertise in designing and delivering scalable, high-performing data solutions on the Azure ecosystem. The ideal candidate will have deep hands-on experience with Databricks, Spark, modern data lakehouse architectures, data modelling, and both batch and real-time data processing. You will be responsible for driving end-to-end data engineering initiatives, influencing architectural decisions, and ensuring robust, high-quality data pipelines.
Key Responsibilities
• Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
• Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams.
• Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering.
• Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
• Design and implement data models, including conceptual, logical, and physical models for analytics and operational workloads.
• Work with relational and lakehouse systems including PostgreSQL and Delta Lake.
• Define and enforce best practices in data governance, data quality, security, and architecture.
• Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
• Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
• Mentor junior engineers and contribute to building engineering standards and reusable components.
Required Skills & Experience
• 10+ years of hands-on data engineering experience in enterprise environments.
• Strong expertise in Azure services, especially Azure Databricks and Azure Functions; Azure Data Factory preferred.
• Advanced proficiency in Apache Spark with Python (PySpark).
• Strong command of SQL, query optimization, and performance tuning.
• Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
• Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
• Strong experience in data modelling (normalized, dimensional, lakehouse modelling).
• Experience in both batch processing and real-time/streaming data (Kafka, Event Hub, or similar).
• Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
• Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
• Strong analytical, problem-solving, and communication skills.
• Ability to collaborate with cross-functional teams and lead technical discussions.
Preferred Skills
• Experience with CI/CD tools such as Azure DevOps and Git.
• Familiarity with IaC tools (Terraform, ARM).
• Exposure to data governance and cataloging tools (Azure Purview).
• Experience supporting machine learning or BI workloads on Databricks.






