Azure Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer requiring 10-12 years of IT experience, expert SQL skills, and proficiency in Azure services. Key skills include data pipeline design, Big Data technologies, DevOps, and NoSQL solutions. The engagement is expected to last more than 6 months.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 21, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Weehawken, NJ
🧠 - Skills detailed
#Azure Cosmos DB #Datasets #Azure Synapse Analytics #DevOps #ADF (Azure Data Factory) #Data Engineering #ETL (Extract, Transform, Load) #Storage #Data Warehouse #Monitoring #Synapse #Data Ingestion #SQL (Structured Query Language) #AWS (Amazon Web Services) #Python #Data Processing #Azure SQL #API (Application Programming Interface) #Big Data #Databases #Data Lake #Databricks #Cloud #Azure Data Factory #Data Storage #Security #Data Security #Spark (Apache Spark) #Kafka (Apache Kafka) #ADLS (Azure Data Lake Storage) #Azure #Data Lineage #Compliance #Data Catalog #Azure ADLS (Azure Data Lake Storage) #Scala #Data Management #Azure SQL Data Warehouse #Data Pipeline #Data Governance #Infrastructure as Code (IaC) #Azure Databricks #Metadata
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Talent Groups, is seeking the following. Apply via Dice today! This is a Full Time position (not Contract) and requires more than 10 years of experience in IT.
Job Description: Minimum 10-12 years of experience
• Expert-level skills writing and optimizing complex SQL
• Experience with complex data modeling, ETL design, and using large databases in a business environment
• Experience building data pipelines and applications to stream and process datasets at low latency
• Fluency with Big Data technologies such as Spark, Kafka, and Hive
• Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
• Designing and building data pipelines using API ingestion and streaming ingestion methods
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
• Experience developing NoSQL solutions using Azure Cosmos DB is essential
• Thorough understanding of Azure and AWS cloud infrastructure offerings
• Working knowledge of Python is desirable
• Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
• Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
• Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
• Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
• Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
• Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
• Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
• Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
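For context on the streaming ingestion and low-latency processing requirements above, here is a minimal, illustrative PySpark Structured Streaming sketch that reads events from Kafka and lands them in Azure Data Lake Storage as a Delta table. The broker address, topic name, storage paths, and event schema are hypothetical placeholders, not details from the posting.

```python
# Illustrative sketch only: placeholder broker, topic, schema, and ADLS paths.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-to-adls-bronze").getOrCreate()

# Hypothetical schema for incoming JSON events
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a low-latency stream from Kafka
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "transactions")                 # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the Kafka message value into typed columns
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), event_schema).alias("e"))
    .select("e.*")
)

# Land the stream in ADLS Gen2 as an append-only Delta table (a "bronze" lakehouse layer)
query = (
    events.writeStream
    .format("delta")
    .outputMode("append")
    .option(
        "checkpointLocation",
        "abfss://lake@exampleaccount.dfs.core.windows.net/_checkpoints/transactions",  # placeholder
    )
    .start("abfss://lake@exampleaccount.dfs.core.windows.net/bronze/transactions")      # placeholder
)

query.awaitTermination()
```

In an Azure Databricks environment, downstream refinement and orchestration (for example, triggering the job from Azure Data Factory) would typically sit on top of a bronze layer like this; the exact design depends on the client's environment.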