Ampstek

Need USC/GC Only: Lead Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer based in Englewood, CO, on a contract basis. Key skills include ETL, ML Ops, AI-ML, Python, and AWS. Candidates must have experience with data warehousing, BigData tools, and relational databases.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#API (Application Programming Interface) #Python #SQL (Structured Query Language) #NoSQL #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Spark (Apache Spark) #Deployment #Data Mart #Data Warehouse #Data Architecture #C++ #Data Analysis #Data Quality #Monitoring #Scala #Data Engineering #ETL (Extract, Transform, Load) #ML (Machine Learning) #Java #Data Pipeline #Model Validation #Data Exploration #Databases #Data Integration #ML Ops (Machine Learning Operations)
Role description
Position: Lead Data Engineer
Location: Englewood, CO 80111 / Denver, CO
Duration: Contract
Job Description:
Mandatory Skills: ETL, ML Ops, AI-ML, Data Warehousing, Python, AWS
Roles & Responsibilities:
• Design, develop, and maintain scalable ETL pipelines to ensure data quality and availability (see the sketch below)
• Implement monitoring and alerting solutions to ensure data pipeline reliability and performance
• Develop and manage deployment pipelines to facilitate continuous integration and delivery of data engineering solutions
• Implement data integration solutions to support analytics and reporting needs
• Execute the complete analytics lifecycle for problem solving, including:
   • Algorithm traditionalization
   • Model validation
   • Model prototyping
   • Data exploration
   • Data grooming
• Survey varied data sources for analytic relevance, including:
   • External sources accessed via API
   • Flat files
   • Relational databases
   • Distributed file systems
Required Skills & Qualifications:
• Expertise in data engineering languages such as Scala (preferred) or Java, with proficiency in Python
• Experience with BigData tools, particularly Spark
• Proficiency in building and managing ETL pipelines
• Expert-level quantitative analysis skills, including interpretation of model results, consideration of causality, and treatment of multicollinearity
• Ability to work in compiled, high-performance languages (e.g., Scala, Java, C++)
• Strong understanding of relational databases and SQL, and familiarity with NoSQL databases
• Broad experience and a solid theoretical foundation in the modeling process, using a variety of algorithmic techniques including Machine Learning and Graph/Network Analytics
• Data pre-processing and exploratory data analysis using a variety of techniques
• Basic understanding of data architecture, data warehouses, and data marts
• Demonstrated ability and desire to continually expand one's skill set, and to learn from and teach others
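For candidates gauging fit, here is a minimal sketch of the kind of Spark ETL pipeline this role describes, written in Scala (the posting's preferred language). It is illustrative only: the dataset, S3 paths, column names, and the 5% quality threshold are hypothetical assumptions, not part of the posting.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: raw flat-file input (hypothetical S3 path).
    val raw: DataFrame = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/orders/")

    // Transform: enforce types and derive a partition column
    // (all column names here are hypothetical).
    val typed = raw
      .withColumn("order_ts", to_timestamp(col("order_ts")))
      .withColumn("order_date", to_date(col("order_ts")))
      .withColumn("amount", col("amount").cast("double"))

    // Basic data-quality gate: drop invalid rows, abort if too many fail.
    val cleaned = typed.filter(col("order_id").isNotNull && col("amount") >= 0)
    val total = raw.count()
    val dropped = total - cleaned.count()
    require(total == 0 || dropped.toDouble / total < 0.05,
      s"Data-quality gate failed: dropped $dropped of $total rows")

    // Load: partitioned Parquet for downstream analytics and reporting.
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/")

    spark.stop()
  }
}
```

The quality gate reflects the posting's emphasis on data quality and pipeline reliability: rows failing basic checks are dropped, and the job aborts when the drop rate is anomalous rather than silently loading bad data.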