Elios, Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a remote contract focused on CPG and media/entertainment; the rate is not disclosed. It requires 4+ years of data engineering experience, strong SQL and Python skills, and familiarity with cloud services (AWS/GCP) and AI/ML data infrastructure.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Documentation #Databricks #Metadata #Compliance #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Batch #dbt (data build tool) #ETL (Extract, Transform, Load) #Data Modeling #BigQuery #Data Layers #Data Quality #Kafka (Apache Kafka) #Libraries #Snowflake #Python #Data Access #Data Architecture #Dataflow #Airflow #Docker #Consulting #Cloud #Data Pipeline #Redshift #Databases #Data Engineering #Storage #SQL (Structured Query Language) #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Monitoring #Data Governance #Lambda (AWS Lambda) #Data Integration #Data Warehouse
Role description
Data Engineer Contract | Remote (US) | CPG & Media/Entertainment Focus

About the Role
You will be the data backbone of a pod delivering AI solutions for enterprise clients in CPG, media, and sports. The AI models are only as good as the data feeding them, and you are the person who makes sure that data is clean, accessible, and flowing reliably.

Day to day, you will build and maintain the data pipelines, integrations, and infrastructure that power AI-driven solutions for major brands. You will work alongside a Senior AI Engineer and an AI Strategist/PM, ensuring the data layer is solid so the AI layer can perform. Think ingesting product catalogs, content metadata, consumer engagement data, and media assets at scale, then making all of it queryable and usable for AI systems.

This is not a role where you sit on a data warehouse team running batch jobs. You are embedded in a delivery pod, building purpose-specific data infrastructure for client engagements. Every pipeline you build has a direct line to an AI system that a major brand depends on.

Responsibilities
• Design, build, and maintain data pipelines that feed AI/ML systems for enterprise CPG and media clients.
• Build ETL/ELT workflows to ingest, transform, and serve data from diverse sources: product information systems, content management platforms, consumer data platforms, and media asset libraries.
• Architect data models and storage solutions optimized for AI workloads: vector stores, feature stores, knowledge bases, and retrieval systems.
• Collaborate with the Senior AI Engineer to ensure data pipelines deliver clean, well-structured inputs for LLM-powered systems, RAG pipelines, and model training.
• Implement data quality monitoring, validation, and alerting to catch issues before they impact downstream AI outputs.
• Optimize query performance and data access patterns for real-time and near-real-time AI applications.
• Work with the AI Strategist/PM to understand client data landscapes and translate business data requirements into technical architecture.
• Manage data governance, access controls, and documentation for client engagements.

Qualifications
• 4+ years of data engineering experience building production data pipelines and infrastructure.
• Strong SQL skills and experience with modern data warehouses (Snowflake, BigQuery, Redshift, or Databricks).
• Experience with Python for data pipeline development and orchestration (Airflow, Prefect, Dagster, or similar).
• Hands-on experience with cloud data services on AWS or GCP (S3, Glue, Lambda, BigQuery, Dataflow, etc.).
• Understanding of data modeling patterns for both analytics and AI/ML workloads.
• Experience working in consulting, agency, or services environments delivering solutions for external clients.
• Familiarity with CPG, media, or entertainment data ecosystems is strongly valued.

Preferred Skills
• Experience building data infrastructure specifically for AI/ML systems: vector databases, embedding pipelines, feature stores, or RAG data layers.
• Familiarity with streaming data architectures (Kafka, Kinesis, Pub/Sub) for real-time AI applications.
• Experience with data integration from CPG-specific platforms (product information management systems, digital asset management, consumer data platforms).
• Knowledge of data governance frameworks and compliance requirements in enterprise environments.
• Exposure to dbt for transformation-layer management.

Tech Stack
Python, SQL, Snowflake/BigQuery/Databricks, Airflow/Prefect, AWS/GCP, Docker, dbt, Kafka. The specific tools vary by client and engagement.

This is a pod-based engagement where you work alongside a Senior AI Engineer and an AI Strategist/PM. If you want to build the data infrastructure that powers AI solutions for brands you recognize, this is your role.

Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.