

Qualis1 Inc.
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Denver, CO (Hybrid), offered as a W2 contract with an unspecified duration and a pay rate of "TBD." It requires 4+ years developing cloud solutions and proficiency in AWS, Databricks, Snowflake, SQL, and Python.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 28, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Denver, CO
-
Skills detailed
#Data Processing #Data Wrangling #Azure #AWS (Amazon Web Services) #Data Engineering #Data Pipeline #Automation #Cloud #Data Exploration #Code Reviews #Data Quality #Databricks #Datasets #S3 (Amazon Simple Storage Service) #Redshift #Documentation #Git #GCP (Google Cloud Platform) #Delta Lake #Observability #Snowflake #Python #Automated Testing #Computer Science #SQL (Structured Query Language) #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Apache Iceberg #Monitoring #Version Control #Spark (Apache Spark) #Terraform #Scala
Role description
Role: Data Engineer
Location: Denver, CO (Hybrid)
Preferred Employment: Open for W2
Job Overview
The Data Engineer builds and optimizes scalable data pipelines using AWS, Databricks, and Snowflake. This role focuses on transforming complex data into trusted, analytics-ready datasets while ensuring performance, reliability, and cost efficiency. This role will collaborate with business teams to deliver actionable insights; strong written and verbal communication is required.
Job Responsibilities:
Design, develop, and maintain data pipelines using AWS Redshift, Databricks, and Snowflake
Ingest, transform, and curate data from multiple sources
Optimize solutions for performance, reliability, scalability, and cost
Implement data quality checks, monitoring, and observability for production pipelines
Perform root cause analysis on data incidents and pipeline failures, and implement preventive fixes to avoid recurrence
Wrangle heterogeneous data, performing exploration and discovery in pursuit of new business insights
Collaborate with analytics and business teams to deliver trusted datasets
Mentor other team members in their efforts to build data engineering skillsets
Assist team management in defining projects, including helping to estimate, plan, and scope work
Prepare and contribute to presentations required by management
Required Qualifications:
4+ years developing cloud solutions as a Data Engineer
Bachelor's Degree in Computer Science, Computer Engineering, or a related field
Clear and concise verbal and written communication skills
Proven experience developing solutions with:
AWS (Redshift, S3, Step Functions, EventBridge, CloudWatch)
Databricks (Spark, Delta Lake, Apache Iceberg, Unity Catalog)
Snowflake
Strong proficiency in SQL to write and optimize performance of large-scale analytics queries
Strong proficiency in Python for custom data processing
Familiarity with CI/CD, version control (Git), and automated testing for data pipelines
Familiarity with Infrastructure as Code (Terraform, or similar)
Solid understanding of data warehousing and dimensional modeling
Ability to write detailed and comprehensive testing documentation. Strong focus on code quality with the ability to design and execute thorough tests.
Ability to manage work across multiple projects, good organizational skills
Ability to quickly learn new technologies and understand data flows and business concepts.
Excellent problem-solving skills, attention to detail, and the ability to work independently.
Proven track record of proactively identifying, performing root cause analysis on, and mitigating production data issues.
Ability to conduct effective code reviews.
Ability to interview without the use of AI tools
Preferred Qualifications:
Hands-on experience integrating AI tools/solutions into data workflows to improve efficiency, automation, or developer productivity
#DataEngineer #DenverCO #HybridRole #DataEngineeringJobs #ETL #BigData #DataPipelines #CloudData #AWS #Azure #GCP #SQL #Python #Spark #Databricks #Snowflake #DataWarehouse #DataIntegration #HiringNow






