Akkodis

Sr. AI Data Engineer - Omaha, NE - Contract to Hire (C2H)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. AI Data Engineer in Omaha, NE, on a Contract-to-Hire basis. Pay rate ranges from $80-$90 per hour. Requires 5-8 years of data engineering experience, proficiency in Python and SQL, and familiarity with Snowflake/Databricks.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
720
-
πŸ—“οΈ - Date
April 9, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
Hybrid
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Omaha, NE
-
🧠 - Skills detailed
#Classification #Metadata #Security #Databricks #GIT #Indexing #Data Enrichment #Data Accuracy #Monitoring #Normalization #ML (Machine Learning) #OpenSearch #SQL (Structured Query Language) #Data Processing #Cloud #Leadership #Automated Testing #Infrastructure as Code (IaC) #AI (Artificial Intelligence) #Airflow #Scala #Code Reviews #Snowflake #ETL (Extract, Transform, Load) #Data Design #Observability #Python #Spark (Apache Spark) #Azure #Data Engineering #DevOps #Version Control #Terraform #PySpark #Computer Science #Logging #Data Pipeline #Deployment
Role description
Akkodis is seeking a Senior AI Data Engineer for a Contract-to-Hire (C2H) position; this is a hybrid role in Omaha, NE. Ideally, the client is looking for applicants with a solid background in data engineering, LLM/GenAI, Python, and Snowflake/Databricks.

Rate Range: $80-$90 per hour. The rate may be negotiable based on experience, education, geographic location, and other factors.

Job Description:
As a Senior AI Data Engineer, you will lead the development, maintenance, and optimization of data pipelines and workflows within our Enterprise Data Platform, with a primary focus on AI readiness. You’ll apply strong data engineering fundamentals alongside software engineering and DevOps practices, ensuring pipelines are built, tested, deployed, and monitored as code. A key part of this role is preparing structured and unstructured enterprise data to be agent-ready and accessible to LLMs, including curation, metadata/lineage, embeddings and vector indexes, and secure, permission-aware retrieval so AI products can deliver grounded, reliable outcomes. Your work will ensure data accuracy, reliability, accessibility, and responsible use across analytics and AI solutions.

Key Responsibilities
• Lead the design, development, and maintenance of scalable pipelines that ingest, process, and serve data for LLM/agent use cases.
• Build LLM/agent-ready data products by preparing structured and unstructured data for retrieval, grounding, and enterprise search at scale.
• Create robust ingestion for unstructured content (documents, PDFs, knowledge bases, tickets, emails, transcripts): parsing/normalization, de-duplication, content quality checks, chunking strategies, and metadata enrichment.
• Implement and operate embeddings pipelines and vector indexing (incremental updates, backfills, re-embedding strategies, index lifecycle management, and cost controls).
β€’ Enable hybrid retrieval patterns (keyword + vector + metadata filtering) and improve relevance through tuning, deduping, freshness strategies, and structured metadata design. β€’ Partner with AI/ML and product teams to define retrieval contracts for agents (schemas, tool/function inputs, grounding requirements, citations, and permission-aware access). β€’ Build curated semantic layers for structured data (consistent entity keys, conformed dimensions, metric definitions), so LLM tools can query reliably and interpret results correctly. β€’ Establish evaluation and monitoring for AI data products: retrieval quality signals, grounding/citation coverage, freshness SLAs, drift/anomaly checks, and feedback loops from user interactions. β€’ Implement governance and security for AI access: PII detection/redaction, data classification, and permission-aware retrieval with auditable access patterns. β€’ Build pipelines/workflows as code using modern engineering practices (version control, code reviews, automated testing, reusable components). β€’ Define and implement CI/CD for data and retrieval pipelines (automated builds, tests, deployments, environment promotion). β€’ Implement observability (logging/metrics/alerts), contribute to operational runbooks, and support incident response for production pipelines. β€’ Provide technical leadership through mentoring, design reviews, and guidance on best practices and standards. Position Qualifications Required β€’ Bachelor’s degree in Computer Science, Information Systems, or related field (or equivalent experience). β€’ 5–8 years of experience in data engineering, including production-grade pipelines. β€’ Advanced SQL and strong Python/PySpark for data processing and pipeline development. β€’ Hands-on experience with Databricks and/or Snowflake (Palantir experience a plus). β€’ Strong software engineering discipline: Git-based workflows, code reviews, testing, CI/CD, and operational ownership. 
β€’ Demonstrated experience preparing data for LLM/GenAI applications (e.g., enterprise search, RAG/grounding, knowledge assistants, AI agents). β€’ Experience with unstructured data processing: extraction, normalization, chunking, metadata enrichment, deduplication, and content lifecycle/versioning. β€’ Experience building and operating embeddings + vector indexing pipelines and supporting scalable retrieval. β€’ Experience implementing permission-aware retrieval and governance controls (ACL propagation, entitlements, PII handling, auditability). β€’ Proven ability to collaborate with AI/ML and product teams to define requirements and deliver dependable AI-ready data products. Preferred β€’ Familiarity with vector/search tools (e.g., Azure AI Search, OpenSearch/Elastic, Databricks Vector Search). β€’ Familiarity with orchestration tools (Databricks Workflows, Airflow). β€’ Infrastructure as Code (Terraform/CloudFormation) and/or containerization. β€’ Familiarity with evaluation approaches for retrieval quality and grounding (offline test sets, relevance metrics, feedback loops). If you are interested in this position, then please click APPLY NOW. For other opportunities available at Akkodis go to www.akkodis.com. If you have questions about the position, please contact Narendra Pratap at (213) 410-5211 or narendra.pratap@akkodis.com Equal Opportunity Employer/Veterans/Disabled Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and a 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs that are direct hires to a client. 
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit https://www.akkodis.com/en/privacy-policy

The Company will consider qualified applicants with arrest and conviction records.