

Lead Data Engineer – Azure
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer – Azure on a contract of unspecified length, with the pay rate to be determined. Candidates must have 12+ years of experience, with a focus on Azure Data Services, DBT, and Snowflake, plus strong Python and SQL skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 29, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Snowpark #Data Modeling #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Data Governance #Snowflake #Data Quality #Azure Data Factory #Automation #AI (Artificial Intelligence) #Security #Anomaly Detection #Classification #Compliance #Databricks #Datasets #Python #Sentiment Analysis #ML (Machine Learning) #Scala #Forecasting #Data Pipeline #Cloud #Data Architecture #Data Orchestration #Azure #SQL (Structured Query Language) #Data Engineering #dbt (data build tool)
Role description
Role: Lead Data Engineer – Azure
Location: Remote (EST hours)
Experience Level: 12+ Years
Open Positions: 2
Type: Contract (C2C/W2)
We are looking for a highly experienced Lead Data Engineer with a strong focus on Azure Data Services and DBT, complemented by expertise in Snowflake and Snowflake Cortex. The ideal candidate will lead the design and development of scalable data pipelines, ensure data quality and governance, and support AI-driven analytics using Snowflake Cortex and GenAI capabilities. This role is ideal for someone who thrives in a cloud-native, data-first environment and enjoys working across engineering and analytics teams.
Key Responsibilities
Azure & DBT-Focused Data Engineering
Design, build, and optimize scalable data pipelines using Azure Data Factory, DBT, and Databricks.
Develop modular, testable DBT models for data transformation and analytics.
Implement data orchestration and workflow automation in Azure environments.
Ensure high performance, reliability, and scalability of data workflows.
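The posting does not specify a target schema, but as a rough illustration of the "modular, testable DBT models" called for above, a minimal incremental dbt model might look like the following (the model name, source name, and columns are hypothetical):

```sql
-- models/staging/stg_orders.sql (hypothetical model and column names)
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    cast(order_ts as timestamp_ntz) as order_ts,
    amount
from {{ ref('raw_orders') }}

{% if is_incremental() %}
-- On incremental runs, only process rows newer than what is already loaded.
where order_ts > (select max(order_ts) from {{ this }})
{% endif %}
```

Testability in dbt typically comes from pairing models like this with `not_null` and `unique` tests declared in a `schema.yml`, so each transformation step can be validated independently.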
Snowflake & Cortex Integration
Build and maintain Snowflake-based data architectures and pipelines.
Leverage Snowflake Cortex functions for:
Anomaly Detection, Time-Series Forecasting, Classification
Text Completion, Embedding, Sentiment Analysis, Summarization
Integrate Snowflake Cortex with UI tools like Copilot, Universal Search, and Document AI.
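As a sketch of how the Cortex LLM functions listed above are invoked, they are callable directly from Snowflake SQL; the table and columns below are hypothetical, and model availability varies by account and region:

```sql
-- Hypothetical table product_reviews(review_id, review_text).
select
    review_id,
    snowflake.cortex.sentiment(review_text)  as sentiment_score,  -- score in [-1, 1]
    snowflake.cortex.summarize(review_text)  as summary,
    snowflake.cortex.complete(
        'mistral-large',
        concat('Classify this review as POSITIVE, NEGATIVE, or NEUTRAL: ',
               review_text))                 as label
from product_reviews;
```

The ML-style capabilities (anomaly detection, time-series forecasting, classification) follow a different pattern: they are created as model objects (e.g. via `CREATE SNOWFLAKE.ML.ANOMALY_DETECTION`) that are trained on a query and then called to score new data.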
Data Quality & Governance
Implement robust data validation frameworks using:
Snowflake Cortex anomaly detection
Custom logic via Snowpark UDFs
Collaborate with analytics teams to deliver clean, trusted datasets.
Ensure compliance with data governance, privacy, and security standards.
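For the "custom logic via Snowpark UDFs" mentioned above, one common pattern is registering a Python-handler UDF in SQL and using it as a row-level validation rule. This is a minimal sketch; the function name, regex, and table are assumptions, not part of the posting:

```sql
-- Hypothetical validation UDF with a Python (Snowpark) handler.
create or replace function is_valid_email(addr string)
returns boolean
language python
runtime_version = '3.10'
handler = 'check'
as
$$
import re

PATTERN = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def check(addr):
    # Null-safe: treat missing values as invalid rather than raising.
    if addr is None:
        return False
    return PATTERN.match(addr) is not None
$$;

-- Surface failing rows for downstream review instead of silently dropping them.
select * from customers where not is_valid_email(email);
```

Checks like this can be combined with Cortex anomaly detection so that rule-based and statistical validation feed the same data-quality reporting.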
Required Qualifications
Experience in data engineering, with a strong focus on Azure Data Factory, DBT, and Databricks.
Deep expertise in Snowflake and Snowpark APIs.
Experience with Snowflake Cortex ML and LLM functions.
Strong proficiency in Python and SQL.
Familiarity with open-source LLMs and GenAI integration within Snowflake.
Excellent understanding of data modeling, transformation, and pipeline optimization.
Strong collaboration skills across engineering, analytics, and business teams.