Brooksource

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Greenwood Village, CO (Hybrid) on a long-term W2 contract at $70-$77/hour. Requires 3+ years in Data Engineering, strong Python and Spark skills, AWS experience, and familiarity with data modeling and graph databases.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
616
-
🗓️ - Date
December 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Greenwood Village, CO
-
🧠 - Skills detailed
#GIT #Data Quality #Metadata #Data Transformations #ML (Machine Learning) #AWS (Amazon Web Services) #SQL (Structured Query Language) #Data Pipeline #Scala #Databases #Graph Databases #Model Deployment #Programming #PySpark #ETL (Extract, Transform, Load) #Data Engineering #Spark (Apache Spark) #Data Science #Deployment #Python #Storage #Data Modeling #GitLab #Anomaly Detection #Schema Design #Cloud
Role description
Data Engineer – Infrastructure Intelligence & Analytics
Location: Greenwood Village, CO (Hybrid)
Pay: $70-$77/hour on a long-term W2 contract with benefits

About the Role
We're seeking a Data Engineer to help productionize ML and graph-driven capabilities for the IIA initiative. You'll build scalable pipelines, transform raw network data into graph-ready structures, and partner closely with Data Scientists to move models from prototype to production.

Responsibilities
• Build and maintain scalable data pipelines using Spark (PySpark preferred).
• Implement feature engineering and data transformations to support anomaly detection and graph models.
• Develop and manage cloud-based infrastructure for model deployment (AWS preferred).
• Collaborate on graph data construction, storage patterns, and query optimization.
• Implement CI/CD workflows with Git/GitLab.
• Write unit tests and data quality checks, and ensure the reliability of production systems.
• Contribute to schema design, metadata strategies, and pipeline orchestration.

Qualifications
• 3+ years in Data Engineering or backend engineering.
• Strong hands-on experience with Python, Spark/PySpark, and SQL.
• Knowledge of data modeling and distributed data systems.
• Cloud experience, preferably AWS.
• Bonus: exposure to graph databases or willingness to learn quickly.
• Strong programming fundamentals, ideally fluency in at least two languages.
• Excellent communication and collaboration skills.

Preferred
• Experience with ML pipeline support or production ML systems.
• AWS certification.
• Ability to flex between engineering and light DS work when needed.
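To give candidates a feel for the anomaly-detection feature work the role describes, here is a minimal, hypothetical sketch in plain Python (no Spark; all names and the latency data are illustrative, not from the posting). In production this kind of logic would typically run as a PySpark transformation over partitioned network data.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Return the values whose z-score exceeds the threshold.

    A toy stand-in for anomaly-detection feature logic: compute
    the sample mean and standard deviation, then flag outliers.
    """
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # constant series has no outliers
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical network latency samples (ms) with one obvious spike.
latencies = [12.1, 11.8, 12.3, 11.9, 12.0, 95.0, 12.2]
print(zscore_anomalies(latencies))  # → [95.0]
```

The low threshold (2.0 rather than the conventional 3.0) is deliberate: with a sample standard deviation over only seven points, no single value can reach a z-score of 3.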