

Clevanoo LLC
Senior Lead Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Lead Data Engineer in Greenfield, IN, on a long-term contract, open to Green Card holders and citizens. Requires 13-15 years of experience and strong Azure Data Factory, Databricks, PySpark, Python, and Azure SQL skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Greenfield, IN
-
🧠 - Skills detailed
#Jira #NoSQL #Cloud #Azure DevOps #Migration #Python #GitHub #Vault #ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Libraries #Big Data #Business Analysis #Azure Databricks #Data Management #Automation #Data Layers #Security #Storage #Spark SQL #Terraform #Data Transformations #DevOps #Azure Data Factory #Monitoring #SonarQube #Triggers #SQL Queries #Synapse #AI (Artificial Intelligence) #Deployment #Data Engineering #GIT #Databricks #ADF (Azure Data Factory) #Kafka (Apache Kafka) #Azure #Data Migration #Programming #Airflow #Azure SQL #Pytest #PySpark #Google Cloud Storage #Code Reviews #Azure cloud #Version Control #Metadata #Informatica BDM (Big Data Management) #Azure ADLS (Azure Data Lake Storage) #Batch #Data Lakehouse #Agile #Data Processing #Data Access #Spark (Apache Spark) #SQL (Structured Query Language) #Data Lake #API (Application Programming Interface) #Scrum
Role description
Position: Senior Lead Data Engineer
Location: Onsite in Greenfield, IN
Duration: Long Term Contract
Only Green Card holders and U.S. citizens
Key skills: Azure Data Factory, Databricks (strong), PySpark, Python, Azure SQL
13-15 years
4+ years of experience in Azure Databricks with PySpark.
2+ years of experience in Databricks workflow & Unity catalog.
3+ years of experience in ADF (Azure Data Factory).
3+ years of experience in ADLS Gen 2.
3+ years of experience in Azure SQL.
5+ years of experience in Azure Cloud platform.
2+ years of experience in Python programming & package builds.
Key technical skills:
Data management experience handling analytics workloads: design, development, and maintenance of Lakehouse solutions that source data from ERP systems, APIs, relational stores, NoSQL, and on-prem sources, using Databricks/PySpark as the distributed big-data processing service and supporting batch and near-real-time ingestion, transformation, and processing.
Ability to optimize Spark jobs and manage large-scale data processing using the RDD/DataFrame APIs. Demonstrated expertise in partitioning strategies, file-format optimization (Parquet/Delta), and Spark SQL tuning. Familiarity with Databricks runtime versions, cluster policies, libraries, and workspace management.
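One concrete piece of the file-size tuning described above is choosing a repartition count so written Parquet/Delta files land near a target size. A minimal sketch, assuming the common ~128 MiB file-size target (the helper name and default are illustrative, not from this posting):

```python
# Hypothetical helper: pick a repartition count so files written as
# Parquet/Delta land near a target size (a common Spark tuning step).
def estimate_partitions(total_bytes: int,
                        target_file_bytes: int = 128 * 1024 * 1024) -> int:
    """Number of output partitions for roughly target_file_bytes per file."""
    if total_bytes <= 0:
        return 1
    # Ceiling division: round up so no partition overshoots the target.
    return max(1, -(-total_bytes // target_file_bytes))

# In a Spark job this count would feed df.repartition(n) before the write.
print(estimate_partitions(10 * 1024**3))  # 10 GiB of input -> prints 80
```

The same order of magnitude appears on the read side in Spark's `spark.sql.files.maxPartitionBytes` setting, which defaults to 128 MB.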
Skilled in governing and managing data access for an Azure Data Lakehouse with Unity Catalog. Experience configuring data permissions, object lineage, and access policies in Unity Catalog. Understanding of integrating Unity Catalog with Azure AD, external metastores, and audit trails.
Experience building efficient orchestration solutions using Azure Data Factory and Databricks Workflows. Ability to design modular, reusable workflows using tasks, triggers, and dependencies. Skilled in using dynamic expressions, parameterized pipelines, custom activities, and triggers.
Familiarity with integration runtime configurations, pipeline performance tuning, and error handling strategies.
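The task-and-dependency modularity described above can be illustrated with the shape of a Databricks Workflows (Jobs API 2.1) job definition. The job name, task keys, and notebook paths below are invented examples, not part of this role:

```python
# Sketch of a Databricks Workflows job spec with task dependencies,
# expressed as the JSON body one might send to the Jobs API.
# All names and notebook paths here are hypothetical.
job_spec = {
    "name": "daily_lakehouse_load",
    "tasks": [
        {"task_key": "ingest",
         "notebook_task": {"notebook_path": "/pipelines/ingest"}},
        {"task_key": "transform",
         "depends_on": [{"task_key": "ingest"}],
         "notebook_task": {"notebook_path": "/pipelines/transform"}},
    ],
}

# The depends_on edges are what make workflows modular and reusable:
for task in job_spec["tasks"]:
    deps = [d["task_key"] for d in task.get("depends_on", [])]
    print(task["task_key"], "depends on", deps)
```

Because tasks reference each other only by `task_key`, individual notebooks can be swapped or reused across jobs without rewiring the whole workflow.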
Strong experience in implementing secure, hierarchical namespace-based data lake storage for structured/semi-structured data, aligned to bronze-silver-gold layers with ADLS Gen2. Hands-on experience with lifecycle policies, access control (RBAC/ACLs), and folder-level security. Understanding of best practices in file partitioning, retention management, and storage performance optimization.
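A common way to realize the bronze/silver/gold layout on ADLS Gen2 is a small path-building convention. This sketch assumes `abfss` URIs, one container per layer, and date-based partition folders; `<account>` is a placeholder and the folder scheme is an assumption, not a fixed standard:

```python
from datetime import date

LAYERS = {"bronze", "silver", "gold"}

def lake_path(layer: str, source: str, dataset: str, load_date: date) -> str:
    """Build a hierarchical-namespace path with year/month/day folders.

    Container-per-layer and this folder convention are assumptions of the
    sketch; <account> stands in for the real storage account name.
    """
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return (f"abfss://{layer}@<account>.dfs.core.windows.net/"
            f"{source}/{dataset}/year={load_date.year}"
            f"/month={load_date.month:02d}/day={load_date.day:02d}")

print(lake_path("bronze", "erp", "orders", date(2025, 10, 30)))
```

Centralizing the layer names this way also makes it easier to attach per-layer RBAC/ACL rules and lifecycle policies consistently.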
Capable of developing T-SQL queries, stored procedures, and managing metadata layers on Azure SQL.
Comprehensive experience working across the Azure ecosystem, including networking, security, monitoring, and cost management relevant to data engineering workloads. Understanding of VNets, Private Endpoints, Key Vaults, Managed Identities, and Azure Monitor. Exposure to DevOps tools for deployment automation (e.g., Azure DevOps, ARM/Bicep/Terraform).
Experience writing modular, testable Python code for data transformations, utility functions, and reusable packaged components. Familiarity with Python environments, dependency management (pip/Poetry/Conda), and packaging libraries. Ability to write unit tests with PyTest/unittest and integrate them into CI/CD pipelines.
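The modular-and-testable style above usually means small pure functions with PyTest cases alongside. A minimal sketch; the function name and record shape are invented for illustration:

```python
def normalize_record(record: dict) -> dict:
    """Trim string values and lower-case keys so downstream joins are stable."""
    return {
        key.lower(): value.strip() if isinstance(value, str) else value
        for key, value in record.items()
    }

# In a real package this test would live in tests/test_transforms.py and be
# collected by PyTest; here it is run inline for the sketch.
def test_normalize_record():
    assert normalize_record({"Name": "  Ada ", "Age": 36}) == {"name": "Ada", "age": 36}

test_normalize_record()
print("ok")
```

Because the transformation takes and returns plain dicts, it can be unit-tested without a Spark session and only wrapped in a UDF or DataFrame operation at the edges.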
Lead solution design discussions, mentor junior engineers, and ensure adherence to coding guidelines, design patterns, and peer-review processes. Able to prepare design documents for development and guide the team technically. Experience preparing technical design documents, HLDs/LLDs, and architecture diagrams. Familiarity with code-quality tools (e.g., SonarQube, pylint) and version-control workflows (Git).
Demonstrates strong verbal and written communication, proactive stakeholder engagement, and a collaborative attitude in cross-functional teams. Ability to articulate technical concepts clearly to both technical and business audiences. Experience in working with product owners, QA, and business analysts to translate requirements into deliverables.
Communication Skills:
Communicate effectively with internal and customer stakeholders
Communication approach: verbal, email, and instant messages
Interpersonal Skills:
Strong interpersonal skills to build and maintain productive relationships with team members
Provide constructive feedback during code reviews and be open to receiving feedback on your own code.
Problem-Solving and Analytical Thinking:
Capability to troubleshoot and resolve issues efficiently.
Analytical mindset.
Task/Work Updates
Prior experience in working on Agile/Scrum projects with exposure to tools like Jira/Azure DevOps.
Provides regular updates and is proactive and diligent in carrying out responsibilities.
We are seeking a highly skilled Data Engineering specialist with the above-mentioned primary skills to join our dynamic team, which is at the forefront of enabling enterprises in the healthcare sector.
The ideal candidate should be passionate about Data Engineering on the Azure cloud, with a strong focus on DevOps practices in building products for our customers.
Effectively communicate and collaborate with internal teams and the customer to build code, leveraging or authoring low-level design documents and aligning to standard coding principles and guidelines.
Good to have Azure Entra/AD skills and GitHub Actions.
Good to have orchestration experience using Airflow, Dagster, or Logic Apps.
Good to have experience working on event-driven architectures using Kafka or Azure Event Hubs.
Good to have exposure to Google Cloud Pub/Sub.
Good to have experience developing and maintaining Change Data Capture (CDC) solutions, preferably using Debezium.
Good to have hands-on experience on data migration projects, specifically involving Azure Synapse and Databricks Lakehouse.
Good to have experience managing cloud storage solutions on Azure Data Lake Storage; experience with Google Cloud Storage will be an advantage.
_______________________
Best Regards
Mirza Azmat | Manager Talent Acquisition
Direct Number: 802-255-7781
Email: mirza.a@clevanoo.com
Clevanoo LLC
www.clevanoo.com
12864 Leopold Trail, Frisco, Texas 75035
https://www.linkedin.com/in/mirza-azmat-65b35417a/
Clevanoo - Find Your Dream Job
Clevanoo connects talented professionals with innovative companies. Find your dream job or hire top talent with our AI-powered matching platform.






