

Immediate Opening - Azure Data Engineer - Remote
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer on a long-term remote contract. It requires 10+ years of overall experience, 3–7+ years in Azure, and expertise in Azure Databricks, Azure Synapse, Snowflake, SQL, PySpark, and data warehousing concepts.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 8, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Denver, CO
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Synapse #GIT #Security #Data Security #Data Modeling #Big Data #Cloud #Delta Lake #SQL (Structured Query Language) #Python #Azure Databricks #Compliance #Agile #PySpark #DevOps #Azure #Databricks #Computer Science #Data Engineering #Spark SQL #Version Control #Snowflake #Spark (Apache Spark) #Data Processing #Azure DevOps #Azure cloud #Azure Synapse Analytics
Role description
Our client is looking for an Azure Data Engineer for a long-term remote project. The detailed requirements are below.
Job Title : Azure Data Engineer
Location : Remote
Duration : Long term
Must Have : Azure, Azure Databricks, Azure Synapse, Snowflake
Job description:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field, with 10+ years of experience.
• Must have 3–7+ years of experience as a Data Engineer with a strong focus on Azure cloud services.
• Hands-on experience with Azure Databricks (Spark, Delta Lake) and Azure Synapse Analytics (Dedicated and Serverless SQL Pools).
• Proficiency in PySpark, SQL, and Python for ETL and data transformation tasks (see the illustrative sketch after this list).
• Strong understanding of data warehousing concepts, data modeling, and big data processing.
• Experience with version control (Git) and CI/CD tools (e.g., Azure DevOps).
• Knowledge of data security, role-based access control (RBAC), and compliance standards.
• Ability to work in agile, collaborative environments.
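To illustrate the kind of ETL work this role calls for, here is a minimal PySpark sketch, assuming a Databricks-style environment where Delta Lake is available. The dataset, paths, and column names (`/mnt/raw/sales.csv`, `quantity`, `unit_price`) are hypothetical examples, not details from the posting.

```python
# Minimal, illustrative PySpark ETL sketch. Assumes Delta Lake is
# available (as on Azure Databricks); all paths and column names below
# are hypothetical examples, not details from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("sales-etl-example")
    .getOrCreate()
)

# Extract: read raw CSV data (hypothetical path).
raw = spark.read.option("header", True).csv("/mnt/raw/sales.csv")

# Transform: cast types, drop incomplete rows, derive a revenue column.
clean = (
    raw
    .withColumn("quantity", F.col("quantity").cast("int"))
    .withColumn("unit_price", F.col("unit_price").cast("double"))
    .dropna(subset=["order_id", "quantity", "unit_price"])
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Load: write the curated result as a Delta table (hypothetical path).
clean.write.format("delta").mode("overwrite").save("/mnt/curated/sales_clean")
```

In a real Databricks job the paths would be parameterized and the output written to a governed catalog table, but the extract-transform-load shape is the same.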