Talent Groups

Azure Data Engineer (ADF)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer (ADF) in Raleigh, NC, with a contract length of unspecified duration. Key skills include SQL, data modeling, ETL design, Azure services, and Big Data technologies. Experience with Azure Cosmos DB and DevOps is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Raleigh, NC
-
🧠 - Skills detailed
#Azure #Spark (Apache Spark) #Data Ingestion #Azure SQL #ETL (Extract, Transform, Load) #Data Processing #Azure SQL Data Warehouse #DevOps #Data Warehouse #Storage #Azure Databricks #API (Application Programming Interface) #Data Lake #Python #SQL (Structured Query Language) #Data Engineering #Kafka (Apache Kafka) #Big Data #Data Pipeline #Data Security #Databricks #Azure Synapse Analytics #ADLS (Azure Data Lake Storage) #Azure Cosmos DB #Security #Azure ADLS (Azure Data Lake Storage) #Synapse #Infrastructure as Code (IaC) #Databases #Scala #Datasets #ADF (Azure Data Factory) #Azure Data Factory #Data Storage
Role description
Job Title: Azure Data Engineer (ADF)
Location: Raleigh, NC - Onsite
Contract
Job Description:
• Expert-level skills writing and optimizing complex SQL
• Experience with complex data modeling, ETL design, and using large databases in a business environment
• Experience building data pipelines and applications to stream and process datasets at low latency
• Fluency with Big Data technologies such as Spark, Kafka, and Hive
• Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
• Designing and building data pipelines using API ingestion and streaming ingestion methods
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
• Experience developing NoSQL solutions using Azure Cosmos DB is essential
• Working knowledge of Python is desirable
• Designing and implementing scalable, secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
• Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
• Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
• Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
• Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
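The "stream and process datasets at low latencies" requirement typically comes down to a size-or-time micro-batching loop: flush a batch when it is full or when a wait deadline passes. A minimal, self-contained Python sketch (names like `micro_batch` and `STOP` are illustrative, not from the role; a real pipeline would read from Kafka or Event Hubs rather than an in-process queue):

```python
import time
from queue import Queue, Empty

STOP = object()  # sentinel marking the end of the stream (illustrative)

def micro_batch(source: Queue, batch_size: int = 100, max_wait_s: float = 0.5):
    """Yield lists of events from `source`, flushing when the batch is full
    or the wait deadline passes -- the usual latency/throughput trade-off."""
    batch, deadline = [], time.monotonic() + max_wait_s
    while True:
        timeout = max(0.0, deadline - time.monotonic())
        try:
            item = source.get(timeout=timeout)
        except Empty:
            item = None
        if item is STOP:
            if batch:
                yield batch  # flush whatever remains before shutting down
            return
        if item is not None:
            batch.append(item)
        if len(batch) >= batch_size or time.monotonic() >= deadline:
            if batch:
                yield batch
            batch, deadline = [], time.monotonic() + max_wait_s

# Usage: enqueue 250 events, then consume them as micro-batches.
events = Queue()
for i in range(250):
    events.put({"id": i})
events.put(STOP)
batches = list(micro_batch(events, batch_size=100, max_wait_s=5.0))
```

Tuning `batch_size` down (or `max_wait_s` down) lowers end-to-end latency at the cost of more, smaller writes downstream; the same trade-off applies to trigger intervals in Spark Structured Streaming or ADF event-based pipelines.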