Insight Global

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5–7+ years of experience, focusing on AWS, Python, Scala, and Kubernetes. It is an onsite position (3 days per week) in Greenwood Village, CO, offering $55–$60 per hour and requiring expertise in data pipelines and real-time data processing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
April 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Englewood, CO
-
🧠 - Skills detailed
#Batch #Python #Data Pipeline #Datasets #Data Science #Computer Science #ML (Machine Learning) #Data Ingestion #Monitoring #Kubernetes #Apache Airflow #AWS (Amazon Web Services) #Data Engineering #Airflow #ETL (Extract, Transform, Load) #Scala #S3 (Amazon Simple Storage Service) #Deployment #Cloud #Kafka (Apache Kafka)
Role description
Required Skills & Experience
• 5–7+ years of experience as a Data Engineer or in a similar role
• Strong hands-on experience with Python and Scala
• Experience working with AWS cloud services, specifically S3
• Expertise in Kubernetes for containerized workloads
• Experience with Apache Airflow for workflow orchestration
• Hands-on experience processing real-time streaming data (Kafka or similar)

Nice to Have Skills & Experience
• Master’s degree in Computer Science, Data Engineering, or a related field
• Prior experience supporting data pipelines for machine learning or advanced analytics use cases
• Experience working on connectivity, network, or platform-oriented data projects
• Background in productionizing data solutions in complex enterprise environments

Job Description
A client of Insight Global is seeking a Senior Data Engineer to support a high-impact Connectivity project focused on building and maintaining scalable data pipelines that ingest, clean, transform, and integrate both structured and real-time streaming data. This role partners closely with Data Scientists to deliver production-ready datasets that power advanced analytics and machine learning models.
Key Responsibilities
• Design, build, and maintain robust data pipelines for both batch and real-time data ingestion, including structured sources and raw streaming data from Kafka
• Clean, transform, and standardize incoming data to ensure high-quality, analytics-ready datasets
• Integrate curated data pipelines with machine learning models developed by Data Science teams
• Develop and optimize workflows using Apache Airflow for scheduling and orchestration
• Build cloud-native data solutions leveraging AWS, including S3, Kubernetes, and related services
• Write high-quality, scalable code in Python and Scala
• Support deployment, monitoring, and performance tuning of data pipelines in production environments
• Collaborate cross-functionally with Data Scientists, Software Engineers, and Product teams to align data solutions with business needs

This role is onsite 3 days a week in Greenwood Village, CO and pays between $55 and $60 per hour.