

Symantec
Technical Lead Azure Data Engineering Specialist
Featured Role | Apply direct with Data Freelance Hub
This role is for a Technical Lead Azure Data Engineering Specialist on a contract lasting more than 6 months, paying $98,090.02 - $118,129.91 per year. Key skills include 5+ years on the Azure Cloud platform, 4+ years with Azure Databricks, and experience with ADF, Python, and DevOps practices.
Country: United States
Currency: $ USD
Day rate: 536
Date: October 17, 2025
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Indianapolis, IN 46221
Skills detailed: #GitHub #Azure Data Factory #Storage #Agile #Databricks #Programming #PySpark #SQL Queries #Security #Code Reviews #Metadata #Monitoring #Data Engineering #Automation #Azure #Airflow #Migration #Synapse #ADLS (Azure Data Lake Storage) #Terraform #Pytest #Jira #Libraries #DevOps #SQL (Structured Query Language) #Google Cloud Storage #Data Layers #Version Control #Kafka (Apache Kafka) #Scrum #Azure DevOps #Data Transformations #Azure ADLS (Azure Data Lake Storage) #Azure Databricks #Data Migration #SonarQube #Python #Azure SQL #ADF (Azure Data Factory) #GIT #Cloud #Deployment #Azure cloud #Data Lake #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Business Analysis #Vault
Role description
We are seeking highly skilled Data Engineering Specialists to join our dynamic team, which is at the forefront of enabling enterprises in the healthcare sector. The ideal candidate is passionate about data engineering on Azure Cloud, with a strong focus on DevOps practices in building products for our customers. The role involves communicating and collaborating effectively with internal teams and the customer, and building code from low-level design documents (or authoring them) in line with standard coding principles and guidelines.
Skills/Experience
5+ years of experience on the Azure Cloud platform and 4+ years of experience in Azure Databricks with PySpark
3+ years of experience in ADF (Azure Data Factory), ADLS Gen2, and Azure SQL
2+ years of experience in Databricks Workflows & Unity Catalog
2+ years of experience in Python programming & package builds
Strong experience implementing secure, hierarchical namespace-based data lake storage for structured/semi-structured data, aligned to bronze-silver-gold layers on ADLS Gen2 (see the PySpark sketch after this list)
Hands-on experience with lifecycle policies, access control (RBAC/ACLs), and folder-level security; understanding of best practices in file partitioning, retention management, and storage performance optimization; able to develop T-SQL queries and stored procedures and to manage metadata layers on Azure SQL (see the pyodbc sketch after this list)
Comprehensive experience across the Azure ecosystem, including networking, security, monitoring, and cost management relevant to data engineering workloads; understanding of VNets, Private Endpoints, Key Vault, Managed Identities, and Azure Monitor (see the Key Vault sketch after this list); exposure to DevOps tools for deployment automation (e.g., Azure DevOps, ARM/Bicep/Terraform)
Experience writing modular, testable Python code for data transformations, utility functions, and reusable packaged components; familiarity with Python environments, dependency management (pip/Poetry/Conda), and packaging libraries; ability to write unit tests with Pytest/unittest and integrate them into CI/CD pipelines (see the Pytest sketch after this list)
Lead solution design discussions, mentor junior engineers, and ensure adherence to coding guidelines, design patterns, and peer review processes; able to prepare design documents for development and guide the team technically; experience preparing technical design documents, HLDs/LLDs, and architecture diagrams; familiarity with code quality tools (e.g., SonarQube, pylint) and version control workflows (Git)
Experience working with product owners, QA, and business analysts to translate requirements into deliverables; ability to articulate technical concepts clearly to both technical and business audiences; strong verbal and written communication, proactive stakeholder engagement, and a collaborative attitude in cross-functional teams
Prior experience working on Agile/Scrum projects with exposure to tools like Jira/Azure DevOps; provides constructive feedback during code reviews and is open to receiving feedback on their own code
Problem-solving and analytical thinking, with the ability to troubleshoot and resolve issues efficiently; provides regular updates and carries out responsibilities proactively and with due diligence
Communicates effectively with internal and customer stakeholders, verbally and via email and instant messaging
Strong interpersonal skills to build and maintain productive relationships with team members
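To make the medallion requirement above concrete, here is a minimal PySpark sketch of a bronze-to-silver promotion on ADLS Gen2. The storage account, container, dataset, and column names are illustrative assumptions, and the Spark-to-ADLS authentication configuration is omitted.

```python
# Minimal PySpark sketch of a bronze -> silver promotion on ADLS Gen2.
# Storage account, container, and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Hypothetical hierarchical-namespace paths, one container per layer.
bronze = "abfss://bronze@examplestorage.dfs.core.windows.net/claims/2025/10/"
silver = "abfss://silver@examplestorage.dfs.core.windows.net/claims/"

# Bronze holds raw, schema-on-read data; silver holds cleansed, typed data.
raw = spark.read.json(bronze)
cleansed = (
    raw.dropDuplicates(["claim_id"])                 # basic de-duplication
       .withColumn("ingest_date", F.current_date())  # lineage column
       .filter(F.col("claim_id").isNotNull())        # minimal quality gate
)
# Partitioning by ingest_date keeps file sizes manageable and enables pruning.
cleansed.write.mode("append").partitionBy("ingest_date").parquet(silver)
```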
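For the T-SQL and metadata-layer expectation, below is a hedged sketch of calling a stored procedure on Azure SQL from Python with pyodbc. The server, database, and procedure names are hypothetical, and the managed-identity authentication keyword assumes ODBC Driver 17.3+ or 18.

```python
# Sketch: invoking a metadata stored procedure on Azure SQL via pyodbc.
# Server, database, and procedure names below are hypothetical.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=metadata_db;"
    "Authentication=ActiveDirectoryMsi;"  # managed identity, no secret in code
)

conn = pyodbc.connect(conn_str)
cur = conn.cursor()
# Hypothetical procedure that registers a newly landed file in the metadata layer.
cur.execute(
    "EXEC dbo.usp_register_ingestion ?, ?",
    ("claims", "bronze/claims/2025/10/part-0001.json"),
)
conn.commit()
conn.close()
```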
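For the Key Vault and Managed Identity expectation, the sketch below retrieves a secret with the azure-identity and azure-keyvault-secrets client libraries; the vault URL and secret name are placeholders.

```python
# Sketch: fetching a secret with a managed identity via azure-identity.
# Vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves to a managed identity on Azure compute,
# and falls back to developer credentials (e.g., Azure CLI) locally.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://example-kv.vault.azure.net/", credential=credential
)
secret = client.get_secret("adls-connection-string")  # hypothetical name
print(secret.name)  # never log secret.value
```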
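For the testable-Python expectation, the sketch below pairs a small, pure transformation function with Pytest cases of the kind a CI/CD pipeline would run on every push; the function and its validation rules are invented for illustration.

```python
# transform.py: a small, pure transformation that is easy to unit test.
def normalize_ssn(raw: str) -> str:
    """Strip separators from a US SSN-like string; raise on bad input."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 9:
        raise ValueError(f"expected 9 digits, got {len(digits)}")
    return digits


# test_transform.py: Pytest cases runnable in a CI/CD pipeline.
import pytest

def test_normalize_ssn_strips_separators():
    assert normalize_ssn("123-45-6789") == "123456789"

def test_normalize_ssn_rejects_short_input():
    with pytest.raises(ValueError):
        normalize_ssn("123-45")
```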
Secondary Skills:
Good to have Microsoft Entra ID (Azure AD) skills and GitHub Actions experience
Good to have orchestration experience using Airflow, Dagster, or Azure Logic Apps (see the Airflow sketch after this list)
Good to have experience working on event-driven architectures using Kafka and Azure Event Hubs
Good to have experience managing cloud storage solutions on Azure Data Lake Storage (ADLS)
Good to have exposure to Google Cloud Pub/Sub; experience with Google Cloud Storage is an advantage
Good to have experience developing and maintaining Change Data Capture (CDC) solutions, preferably using Debezium (see the Kafka consumer sketch after this list)
Good to have hands-on experience with data migration projects, specifically those involving Azure Synapse and the Databricks Lakehouse
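As a sketch of the orchestration skill above, here is a minimal Airflow 2.x DAG chaining an ingest step and a transform step. The DAG id and task callables are stand-ins; in practice these might trigger an ADF pipeline and a Databricks job.

```python
# Sketch: a minimal Airflow 2.x DAG with two dependent tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():      # placeholder for triggering an ADF pipeline
    print("triggering ingestion")

def transform():   # placeholder for running a Databricks job
    print("running transformation")

with DAG(
    dag_id="adls_medallion_pipeline",   # hypothetical DAG id
    start_date=datetime(2025, 10, 1),
    schedule="@daily",                  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after ingest succeeds
```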
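As a sketch of the CDC skill above, the snippet below reads Debezium change envelopes from Kafka with the kafka-python client. The topic name and broker address are placeholders, and the envelope layout assumes Debezium's default JSON converter with schemas enabled.

```python
# Sketch: consuming Debezium CDC envelopes from Kafka with kafka-python.
# Topic name and bootstrap server are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example.dbo.claims",                  # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b) if b else None,
    auto_offset_reset="earliest",
)

for msg in consumer:
    envelope = msg.value
    if envelope is None:          # tombstone record after a delete
        continue
    payload = envelope["payload"]
    op = payload["op"]            # c=create, u=update, d=delete, r=snapshot
    after = payload.get("after")  # row image after the change (None on delete)
    print(op, after)
```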
Job Types: Full-time, Contract
Pay: $98,090.02 - $118,129.91 per year
Work Location: In person