

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a long-term contract, remote. Required skills include strong experience in Azure and Snowflake, proficiency in Azure Databricks, PySpark, SQL, and Python, along with knowledge of data security and compliance standards.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 9, 2025
Project duration
Unknown
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
Denver, CO
Skills detailed
#"ETL (Extract, Transform, Load)" #Data Modeling #Azure DevOps #SQL (Structured Query Language) #Compliance #Azure Synapse Analytics #Security #Synapse #Delta Lake #Python #Data Security #PySpark #Azure cloud #Cloud #DevOps #Spark SQL #Version Control #Snowflake #GIT #Big Data #Data Engineering #Spark (Apache Spark) #Azure Databricks #Databricks #Azure #Agile #Data Processing
Role description
Hi,
Our client is looking for a Data Engineer for a long-term remote project; the detailed requirements are below.
Job Role: Data Engineer
Location: Remote
Mode of Hiring: Long Term Contract
Mandatory skills: The candidate should have strong experience in Azure and Snowflake.
Job description:
• Experience as a Data Engineer with a strong focus on Azure cloud services.
• Hands-on experience with Azure Databricks (Spark, Delta Lake) and Azure Synapse Analytics (Dedicated and Serverless SQL Pools).
• Proficiency in PySpark, SQL, and Python for ETL and data transformation tasks (see the sketch after this list).
• Strong understanding of data warehousing concepts, data modeling, and big data processing.
• Experience with version control (Git) and CI/CD tools (e.g., Azure DevOps).
• Knowledge of data security, RBAC, and compliance standards.
• Ability to work in agile, collaborative environments.
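To illustrate the kind of PySpark and Delta Lake ETL work referenced above, here is a minimal sketch, assuming a Databricks-style environment where a Spark session is available; the storage path, table name, and column names are hypothetical and are not taken from the posting.

```python
# Minimal ETL sketch (assumptions: Databricks-style environment, hypothetical
# ADLS landing path, table name, and columns — none of these come from the posting).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw order events from a hypothetical landing zone in ADLS Gen2.
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Transform: basic cleansing and typing with PySpark column expressions.
orders = (
    raw
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Load: append to a Delta Lake table, partitioned by date for downstream queries.
(
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```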