Programmers.io

Data Engineer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience, focusing on Azure resources, data pipelines, and automation using Terraform and Azure DevOps. Azure and Databricks certifications are required. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
๐Ÿ—“๏ธ - Date
February 13, 2026
🕒 - Duration
Unknown
-
๐Ÿ๏ธ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Data Processing #Jenkins #Terraform #Python #Data Engineering #Documentation #Spark (Apache Spark) #Monitoring #Security #DevOps #PySpark #ML Ops (Machine Learning Operations) #Databricks #ADLS (Azure Data Lake Storage) #SQL (Structured Query Language) #Azure DevOps #Data Pipeline #Azure Machine Learning #AI (Artificial Intelligence) #Cloud #Jira #ADF (Azure Data Factory) #GitHub #ETL (Extract, Transform, Load) #Azure SQL #Synapse #Storage #Delta Lake #Deployment #ML (Machine Learning) #Azure
Role description
· Cloud & Data Engineer with hands-on experience deploying and administering Azure resources, including compute, networking, storage, security, and monitoring components.
· Proven ability to automate cloud provisioning using Terraform, ARM templates, Azure DevOps, GitHub Actions, and Jenkins to improve deployment speed and reliability.
· Experienced in designing and building distributed data pipelines using ADF, ADLS Gen2, Azure SQL, Synapse, Event Hubs, and Stream Analytics.
· Strong background in Databricks administration and PySpark development, delivering high-performance ELT pipelines, Delta Lake solutions, and workspace governance.
· Familiar with ML Ops frameworks, Azure Machine Learning (AML), MCP servers, and enterprise AI Search integration for production-grade AI workloads.
· Strong command of Python, PySpark, and SQL performance tuning for large-scale data processing and analytics workloads.
Responsibilities
• Identify, clarify, analyze, and confirm requests for data sourcing or data changes to our ETL pipelines.
• Ensure that requirements are clear, edge cases are considered, and the impact of changes is understood.
• Keep our backlog effectively prioritized so that it clearly reflects what the team is currently working on.
• Assist and support developers from intake through development, testing, and release of new features.
• Liaise between stakeholders, Product Owners, and upstream source systems.
• Ensure our documentation and runbooks are up to date.
• Stay current with the latest industry trends and technologies related to data engineering.
Preferred Qualifications
· 8+ years of relevant experience in a related field or job function.
· Azure certified, Databricks certified.
· Experience with Jira and Confluence.
· Experience with APIs and SQL.