

R Systems
Data Engineer – AI & Analytics
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer – AI & Analytics contract position in Denver, Colorado, requiring 8+ years of data engineering experience, advanced data architecture skills, and expertise in Databricks, AWS, and SQL. A Bachelor's degree in a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#SQL (Structured Query Language) #ML (Machine Learning) #Observability #Datasets #Data Engineering #Lambda (AWS Lambda) #Data Architecture #Python #CRM (Customer Relationship Management) #Storage #Data Pipeline #Computer Science #IAM (Identity and Access Management) #Looker #Data Processing #Metadata #ETL (Extract, Transform, Load) #BI (Business Intelligence) #Monitoring #DataOps #Data Quality #Airflow #Data Reconciliation #R #Data Design #Scala #Spark (Apache Spark) #AI (Artificial Intelligence) #Data Governance #Tableau #Redshift #AWS (Amazon Web Services) #Data Science #S3 (Amazon Simple Storage Service) #Cloud #Documentation #Data Modeling #AWS S3 (Amazon Simple Storage Service) #Amazon Redshift #Databricks #Microsoft Power BI
Role description
Data Engineer – AI & Analytics
Department: Data Engineering / AI Platform
Employment Type: Contract
Location: Denver, Colorado
Experience Required: 8+ years
ROLE OVERVIEW
R Systems International is seeking a Data Engineer to design and build the data foundations that power our AI, data science, and analytics solutions at Fortune 100 scale. This role is central to making AI operationally "real" in the business — you will be the expert at finding, stitching, modeling, and serving data so that AI Engineers and Data Scientists can deliver high-impact models and insights. You will architect and implement advanced data solutions on Databricks, AWS, and Redshift, automate complex data flows with modern orchestrators, and continuously optimize for performance and cost.
KEY RESPONSIBILITIES
• Design, build, and maintain scalable data pipelines and services that feed ML models, LLM/RAG solutions, and advanced analytics.
• Act as the "data backbone" for AI products — from raw ingestion through curated, analytics- and model-ready datasets.
• Find, join, and reconcile data from disparate systems (CRM, billing, interaction/call data, clickstream, third-party sources).
• Resolve data quality issues, gaps, and inconsistencies; establish reliable, reusable data assets for AI and analytics teams.
• Design and implement advanced data architectures (lakehouse, dimensional models, domain data products) at enterprise scale.
• Build strategic data models for analytics, ML, and AI use cases: feature stores, RAG retrieval layers, training/inference datasets.
• Define and maintain data contracts, schemas, and standards ensuring consistency, performance, and ease of use.
• Use orchestration tools (Airflow, Dagster, cloud-native orchestrators) to automate repetitive tasks and complex data flows.
• Implement robust monitoring, alerting, and observability for pipelines with clear SLAs.
• Build secure, performant, and cost-efficient solutions on Databricks, AWS, and Redshift.
• Partner with AI Engineers and Data Scientists to translate model requirements into data designs and pipelines.
• Leverage AI-assisted coding tools to improve development speed, code quality, documentation, and testing.
REQUIRED QUALIFICATIONS
Experience
• 8+ years of data engineering experience in large-scale, production environments.
• 8+ years of data modeling and building strategic data solutions for analytics and ML; 3+ years specifically for AI/LLM/RAG.
• 8+ years finding, joining, and reconciling data from disparate enterprise systems.
• 8+ years with advanced data architectures supporting AI and analytics at scale.
• 5+ years with workflow/orchestration tools (Airflow, Dagster) and robust monitoring/observability.
Technical Skills
• Expert-level SQL for complex transformations, data reconciliation, and performance tuning.
• Strong Python skills for ETL/ELT, data pipelines, and ML/AI workflow integration.
• Spark (preferably on Databricks) for large-scale data processing.
• Deep hands-on experience with Databricks, AWS (S3, Glue, EMR, Lambda, IAM), and Amazon Redshift.
• Working knowledge of AI/ML and LLM/RAG data requirements: feature stores, vector stores, retrieval indexes.
• Strong understanding of data governance, data quality, lineage, and metadata practices.
• Demonstrated ability to size, estimate, and optimize cloud compute, storage, and processing costs.
Education
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a closely related technical field. Advanced degree is a plus.
PREFERRED QUALIFICATIONS
• Experience with CI/CD, infrastructure-as-code, and DataOps/MLOps practices.
• Familiarity with analytics and BI tools (Tableau, Power BI, Looker) and how they consume data.