Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Denver, Colorado, with a contract length of more than 6 months and a pay range of $115,000 - $130,000. Key skills include SQL, Python, ETL processes, and experience with Apache Airflow and big data technologies.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
590.91
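(This figure appears to be derived from the upper salary bound over an assumed 220 working days per year: $130,000 / 220 ≈ $590.91. The 220-day conversion is an assumption, not stated in the posting.)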
-
πŸ—“οΈ - Date discovered
August 21, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Denver, CO
-
🧠 - Skills detailed
#REST API #Apache Airflow #Linux #API (Application Programming Interface) #Tableau #Virtualization #Spark (Apache Spark) #JavaScript #Informatica #NoSQL #ML (Machine Learning) #Python #REST (Representational State Transfer) #Data Engineering #Computer Science #Talend #Hadoop #Big Data #SQL (Structured Query Language) #Data Science #Visualization #RDBMS (Relational Database Management System) #Automation #Unix #BI (Business Intelligence) #Data Storage #Airflow #ETL (Extract, Transform, Load) #JBoss #Scripting #Data Pipeline #Storage #R #SQL Queries #MS SQL (Microsoft SQL Server) #Web Scraping #Scala
Role description
Job Summary: Our client is seeking a Data Engineer to join their team! This position is located in Denver, Colorado.
Duties:
• Create and maintain scalable, reliable, consistent, and repeatable systems that support data operations for reporting, analytics, applications, and data science
• Gather and process raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries
• Use ETL processes to maintain, improve, clean, and manipulate data
• Profile data to measure quality, integrity, accuracy, and completeness
• Develop and implement tools, scripts, queries, and applications for ETL/ELT and data operations (see the illustrative sketch after this description)
• Design, build, and automate machine learning data pipelines
• Deliver solutions by developing, testing, and implementing code and scripts
• Produce reports and uphold data delivery schedules
• Manage the life cycle of multiple data sources
• Work closely with stakeholders on the data demand side (analysts and data scientists)
• Increase speed to delivery by implementing workload/workflow automation solutions
Desired Skills/Experience:
• Bachelor's degree in an Engineering discipline or Computer Science
• 3+ years of Linux/Unix/CentOS system administration experience
• 5+ years of hands-on experience with RDBMS, SQL, scripting, and coding
• Experience with Apache Airflow
• Coding/scripting experience using Python, R, and shell scripts
• Experience with SQL, Tableau, ML pipeline techniques, and ETL techniques
• Extensive background in Linux/Unix/CentOS installation and administration
• Windows experience preferred
• Knowledge of data storage that demonstrates understanding of when to use a file system, relational database, or NoSQL variant
• Experience with Spark and Hadoop/Hive
• Familiarity with JavaScript APIs, REST APIs, or data extract APIs
• Experience receiving, converting, and cleansing big data
• Experience with visualization or BI tools, such as Tableau
• Extensive experience with data virtualization concepts and software (Denodo, Teiid, JBoss)
• Experience with data workflow/data prep platforms, such as Informatica, Pentaho, or Talend
• Ability to identify and resolve end-to-end performance, network, server, and platform issues
Benefits:
• Medical, Dental, & Vision Insurance Plans
• Employee-Owned Profit Sharing (ESOP)
• 401K offered
The approximate pay range for this position is $115,000 - $130,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
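For illustration only, below is a minimal sketch of the kind of Airflow-orchestrated ETL work described in the duties above. It is not part of the posting: the DAG name, API endpoint, task names, and load step are hypothetical placeholders, not details of the client's systems.

from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a placeholder REST API endpoint
    resp = requests.get("https://example.com/api/records", timeout=30)
    resp.raise_for_status()
    return resp.json()  # returned value is pushed to XCom automatically


def transform(ti):
    # Basic cleansing: keep only records with an id and normalize fields
    raw = ti.xcom_pull(task_ids="extract") or []
    return [
        {"id": r["id"], "value": r.get("value")}
        for r in raw
        if r.get("id") is not None
    ]


def load(ti):
    # Placeholder load step; a real pipeline would write to an RDBMS or warehouse
    rows = ti.xcom_pull(task_ids="transform") or []
    print(f"Would load {len(rows)} cleaned rows")


with DAG(
    dag_id="example_etl",            # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+ scheduling keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task

In practice, the load step would use an Airflow connection and database hook rather than a print statement, and heavier transformations might run in Spark or Hadoop/Hive, in line with the skills listed above.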