

A-Line Staffing Solutions
Senior Data Engineer - NO C2C
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a W2 contract (not open to C2C), contract length unspecified, paying $60/hr, remote (based in Centennial, CO). Requires 10+ years of experience and expertise in Databricks, Azure SQL, SQL, and data pipeline architecture. Bachelor’s degree required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
April 25, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contract (not open to C2C)
-
🔒 - Security
Unknown
-
📍 - Location detailed
Colorado, United States
-
🧠 - Skills detailed
#Data Architecture #ADLS (Azure Data Lake Storage) #Scala #Data Quality #Snowflake #Scripting #Azure cloud #Data Lake #Azure ADLS (Azure Data Lake Storage) #Data Ingestion #Metadata #Storage #Data Engineering #Spark SQL #Delta Lake #Compliance #Python #Computer Science #PySpark #Azure Data Factory #Documentation #Data Lineage #SSIS (SQL Server Integration Services) #Microsoft Power BI #Azure #Data Science #SQL (Structured Query Language) #Statistics #ADF (Azure Data Factory) #Data Lakehouse #Logic Apps #Cloud #Data Pipeline #Azure SQL #Databricks #Spark (Apache Spark) #BI (Business Intelligence) #Apache Spark #Agile #ETL (Extract, Transform, Load)
Role description
Title: Senior Data Engineer
Location: Centennial, CO - Remote
Rate: $60/Hr
Note: This position is a contract on W2, and is NOT open to C2C.
General Summary
The Senior Data Engineer will support data warehousing in a Databricks and Azure SQL environment by designing, building, and maintaining scalable, reliable data pipelines and data models that enable analytics and reporting across the organization. This role is responsible for ingesting data from diverse source systems, applying data quality and transformation logic, and modeling data to support BI reporting and downstream data needs. Working closely with BI developers, product owners, and business stakeholders, the Senior Data Engineer ensures data is reliable, well-governed, and aligned to enterprise standards.
Essential Job Responsibilities
• Create and maintain optimal data pipeline patterns and architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
• Build data pipelines that extract, transform, and load data from a wide variety of data sources using Databricks and Azure technologies.
• Design and implement data models.
• Create automated tests to continuously monitor the quality of the data models.
• Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
• Keep data separated and secure across Azure regions, ensuring HIPAA & HITECH and applicable regulation compliance.
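The "automated tests to continuously monitor the quality of the data models" responsibility above can be sketched as a small set of assertion-style checks that fail a load fast when a rule is violated. The table rows, column names, and rules below are hypothetical illustrations; in a Databricks environment the same pattern is typically expressed against Spark DataFrames or as Delta Lake table constraints.

```python
# Minimal data-quality check sketch (hypothetical columns and rules).
# Plain Python is used here so the example is self-contained; the same
# pattern maps to DataFrame-level checks or Delta constraints in Databricks.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

def run_quality_checks(rows):
    """Run all checks; a non-empty result means the load should fail fast."""
    failures = {}
    nulls = check_not_null(rows, "customer_id")
    if nulls:
        failures["customer_id_null"] = nulls
    dupes = check_unique(rows, "order_id")
    if dupes:
        failures["order_id_duplicate"] = dupes
    return failures

# Example batch with one null key and one duplicate order id.
rows = [
    {"order_id": 1, "customer_id": "A"},
    {"order_id": 2, "customer_id": None},
    {"order_id": 2, "customer_id": "B"},
]
print(run_quality_checks(rows))
# → {'customer_id_null': [1], 'order_id_duplicate': [2]}
```

Wiring checks like these into each pipeline stage, rather than only at the end, keeps bad records from propagating into downstream models.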
Education & Experience
• Bachelor’s degree in computer science, statistics, informatics, information systems, or another quantitative field.
• 10+ years of experience in a Data Engineer/BI Developer role.
• Extensive SQL experience, including complex query development, SSIS, performance tuning, and optimization in Azure SQL and distributed query environments.
• Experience with data lake and data lakehouse architecture is preferred.
• Experience with design and engineering for Databricks Delta tables, Spark Declarative Pipelines, Jobs/Workflows is preferred.
• Experience with Azure cloud services: Databricks, Azure Data Factory, Azure SQL Database, Azure Data Lake Storage Gen2, PySpark, Logic Apps.
• Experience with object-oriented/functional scripting languages: Python, Scala.
• Experience with structured, semi-structured, and unstructured data.
• Experience with data pipeline architecture is preferred.
• Experience with development/maintenance of APIs and WebServices is preferred.
Knowledge, Skills & Abilities
• Advanced proficiency in SQL, including complex query development, performance tuning, and optimization in Azure SQL and distributed query environments.
• Strong hands-on experience designing and building ETL/ELT pipelines using Databricks, leveraging Apache Spark (PySpark/Spark SQL) for large-scale data ingestion, transformation, and processing.
• Deep understanding of data lake and lakehouse architectures, including structured, semi-structured, and incremental data ingestion patterns using Delta Lake, partitioning, and schema enforcement.
• Proven ability to design and implement analytics-ready data models (star, snowflake, and dimensional models) to support Power BI and other BI/analytics consumption patterns.
• Experience managing metadata, data lineage, dependencies, and workload orchestration, ensuring reliable and repeatable data pipelines across development and production environments.
• Strong analytical and troubleshooting skills, with the ability to perform root cause analysis across source systems, data pipelines, and downstream reporting to resolve data quality and performance issues.
• Ability to collaborate closely with data architects, DBAs, BI developers, and data scientists to align engineering solutions with enterprise architecture and analytics standards.
• Proactive mindset with the ability to identify data quality risks, pipeline failures, scalability constraints, and performance bottlenecks, and escalate or remediate them early.
• Detail-oriented with strong documentation, versioning, and governance discipline, supporting maintainable, auditable, and compliant data solutions.
• Comfortable working in an Agile, cross-functional environment, managing multiple pipelines and priorities while maintaining production stability.
• Customer-focused approach to delivering reliable, scalable, and business-ready data assets that support operational and strategic decision making.
• Strategic thinker who understands how data engineering choices impact cost, performance, scalability, and business outcomes.
• Ability and willingness to mentor junior data engineers, promote Spark and Databricks best practices, and contribute to team technical standards.
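The "analytics-ready data models (star, snowflake, and dimensional models)" skill above can be illustrated with a minimal star schema: one fact table of measures joined to a dimension table of descriptive attributes. The table and column names are hypothetical, and SQLite is used only so the sketch is self-contained; on Databricks the same DDL and query would run as Spark SQL against Delta tables.

```python
# Star-schema sketch (hypothetical fact/dimension tables) using SQLite
# for portability; in Databricks this would be Spark SQL over Delta tables.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes keyed by a surrogate key.
cur.execute("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
)""")

# Fact table: additive measures plus foreign keys into dimensions.
cur.execute("""
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Widget", "Hardware"),
    (2, "Gadget", "Hardware"),
])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", [
    (100, 1, 2, 19.98),
    (101, 1, 1, 9.99),
    (102, 2, 3, 44.97),
])

# Typical BI query: aggregate fact measures grouped by a dimension attribute.
cur.execute("""
SELECT d.product_name,
       SUM(f.quantity)          AS units,
       ROUND(SUM(f.amount), 2)  AS revenue
FROM fact_sales f
JOIN dim_product d ON d.product_key = f.product_key
GROUP BY d.product_name
ORDER BY d.product_name
""")
result = cur.fetchall()
print(result)
# → [('Gadget', 3, 44.97), ('Widget', 3, 29.97)]
```

Keeping measures in narrow fact tables and attributes in dimensions is what lets Power BI and similar tools slice the same facts by any dimension attribute without reshaping the data.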






