

The Ash Group
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a direct-hire Data Engineer role in Denver, CO, offering $100,000 annually. It requires 4+ years of software engineering experience, expertise in SQL, Python, and Azure Data Factory, and AI/ML knowledge. Hybrid schedule with one office day per week.
Country: United States
Currency: $ USD
Day rate: $454
Date: December 9, 2025
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Denver, CO
Skills detailed: #Data Pipeline #Base #Deployment #Azure Data Factory #Spark (Apache Spark) #Data Architecture #PySpark #Python #Jupyter #SQL Server #TensorFlow #Azure cloud #Data Warehouse #Data Lake #Microsoft Azure #DataOps #Databases #SQL (Structured Query Language) #Cloud #ETL (Extract, Transform, Load) #ML (Machine Learning) #Azure #Programming #Synapse #ADF (Azure Data Factory) #Data Engineering #AI (Artificial Intelligence) #Scrum #Automation #Data Science #Agile
Role description
• W2 Contract Only – No C2C – No 3rd Parties
Summary
The Ash Group is hiring a Data Engineer for our client (a specialized financial services subsidiary providing dedicated new home construction financing). This is a Direct Hire role with compensation of $100,000 annually, based in Denver, CO (Hybrid setting).
This role is crucial for transforming the organization into a data-driven environment by designing and optimizing data infrastructures, migrating large-scale data to the Microsoft Azure cloud (specifically Microsoft Fabric), and leveraging expertise in AI/ML to drive decision-making.
Role Details
• Compensation: Annual base salary of $100,000, plus eligibility for an annual bonus based on performance objectives.
• Benefits: Comprehensive package including medical, dental, and vision coverage; 401(k) plan eligibility; company-paid disability and basic life insurance; parental leave; tuition reimbursement; and generous PTO (up to 17 days/year for less than 10 years of service).
• Duration: Direct hire.
• Location: Hybrid in Denver, CO (one day per week in office).
What You'll Be Doing
• Design new large-scale data stores and migrate existing ones (from on-premises SQL Server) to the modern Microsoft Fabric-based infrastructure, including the Lakehouse and data warehouses.
• Develop, code, and optimize ETL/ELT solutions and data pipelines using SQL, Python, and PySpark, with a focus on data acquisition and quality.
• Collaborate with data scientists to productionize ML models and integrate them into data pipelines to deliver business impact.
• Use and optimize modern data engineering tools such as Azure Data Factory, Synapse, and Jupyter notebooks for processing and analysis.
• Provide technical expertise throughout the full development lifecycle, ensuring adherence to data architecture and enterprise quality standards.
What We're Looking For
• 4+ years' software engineering experience with Python, PySpark, Spark, or equivalent notebook-based programming.
• 3+ years' experience with SQL, relational databases, and large data repositories, including advanced SQL writing and query-plan optimization.
• Hands-on experience with Azure Data Factory, Azure Synapse, Data Lake, and the Microsoft Fabric environment (or a strong willingness to adopt new cloud-native data platforms).
• Knowledge of AI/ML, agents, and other automation tools; experience with ML frameworks (e.g., scikit-learn, TensorFlow) is highly preferred.
• Experience with CI/CD concepts, DataOps/MLOps, and general software deployment lifecycles.
• Experience working in Agile (Scrum) methodologies, with strong verbal and written communication skills for collaborating with technical and non-technical stakeholders.
Apply today to join a dynamic team supporting critical infrastructure projects.
#DataEngineer #AzureCloud #DataOps #AIML #DirectHire #DenverJobs #PySpark






