Insight

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer, offering a 6-month contract at $65/hr, based in Englewood, CO. Requires 5–7+ years of experience, strong Python and Scala skills, AWS expertise, and proficiency in Apache Airflow and Kafka.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
April 9, 2026
🕒 - Duration
6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Englewood, CO
-
🧠 - Skills detailed
#Data Quality #Batch #Monitoring #ML (Machine Learning) #AWS (Amazon Web Services) #Cloud #Datasets #Kubernetes #Airflow #Scala #ETL (Extract, Transform, Load) #Python #Data Science #S3 (Amazon Simple Storage Service) #Data Engineering #Kafka (Apache Kafka) #Data Ingestion #Apache Airflow #Computer Science #Data Pipeline #Deployment
Role description
Overview
We are seeking a Senior Data Engineer to support a high-impact Connectivity project, focused on building and maintaining scalable data pipelines that ingest, clean, transform, and integrate both structured and real-time streaming data. This role partners closely with Data Scientists to deliver production-ready datasets that power advanced analytics and machine learning models.
Key Responsibilities
• Design, build, and maintain robust data pipelines for both batch and real-time data ingestion, including structured sources and raw streaming data from Kafka
• Clean, transform, and standardize incoming data to ensure high-quality, analytics-ready datasets
• Integrate curated data pipelines with machine learning models developed by Data Science teams
• Develop and optimize workflows using Apache Airflow for scheduling and orchestration
• Build cloud-native data solutions leveraging AWS, including S3, Kubernetes, and related services
• Write high-quality, scalable code in Python and Scala
• Support deployment, monitoring, and performance tuning of data pipelines in production environments
• Collaborate cross-functionally with Data Scientists, Software Engineers, and Product teams to align data solutions with business needs
Required Qualifications
• 5–7+ years of experience as a Data Engineer or in a similar role
• Strong hands-on experience with Python and Scala
• Experience working with AWS cloud services, specifically S3
• Expertise in Kubernetes for containerized workloads
• Experience with Apache Airflow for workflow orchestration
• Hands-on experience processing real-time streaming data (Kafka or similar)
• Strong knowledge of data transformation, data quality, and pipeline optimization best practices
• Ability to work onsite in Englewood, CO three days per week
Preferred Qualifications
• Master’s degree in Computer Science, Data Engineering, or a related field
• Prior experience supporting data pipelines for machine learning or advanced analytics use cases
• Experience working on connectivity, network, or platform-oriented data projects
• Background in productionizing data solutions in complex enterprise environments
This position pays $65/hr.