

APOLLO TECHNOLOGY SOLUTIONS LLC
AI Technical Capability Owner
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI Technical Capability Owner, working remotely for 12 months; the pay rate is unspecified. It requires 12+ years in data/ML platform engineering, AWS and Databricks expertise, MLOps experience, and strong communication skills.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 17, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#IAM (Identity and Access Management) #Classification #Observability #AI (Artificial Intelligence) #Compliance #S3 (Amazon Simple Storage Service) #Batch #Databricks #Security #AWS S3 (Amazon Simple Storage Service) #MLflow #Monitoring #AWS (Amazon Web Services) #Data Science #Documentation #Data Security #ML (Machine Learning)
Role description
Title: AI Technical Capability Owner
Location: Remote
Duration: 12 Months
Job Description:
Own the technical capability roadmap for the AI/ML CoE and align with the Business Capability Owner on outcomes, funding, chargeback model, governance, and adoption plans
Translate company goals into technical guardrails, accelerators, and "opinionated defaults" for AI/ML delivery
Design and maintain end-to-end reference architectures on AWS and Databricks, including batch/streaming, feature stores, training/serving, and GenAI patterns
Publish reusable blueprints such as modules, templates, starter repositories, and CI/CD pipelines tailored for various personas like Data Scientists, ML Engineers, and Citizen AI/ML Developers
Curate a suite of best-fit tools for data, ML, GenAI, and MLOps (e.g., Databricks Lakehouse, AWS S3, Bedrock for GenAI)
Conduct evaluations, POCs, and vendor assessments to set selection criteria, SLAs, and TCO models
Define technical guardrails for data security, lineage, access control, PII handling, and model risk management in accordance with the AI Policy
Establish standards for experiment tracking, model registry, approvals, monitoring, and incident response (a minimal MLflow sketch follows this list)
Lead workshops, organize engineering guilds, and deliver "train-the-trainer" programs
Develop hands-on labs, documentation, and internal courses to upskill teams on AI/ML frameworks and tools
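As a minimal illustration of the experiment-tracking and model-registry standards above, the sketch below logs a run and registers a model with MLflow. The tracking URI, experiment name, and registered model name are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: log an experiment run and register the model with MLflow.
# The tracking URI, experiment name, and model name are placeholders.
import mlflow
from mlflow.models import infer_signature
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("sqlite:///mlflow.db")  # local backend that supports the registry
mlflow.set_experiment("demo-classifier")        # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model creates a version that approval workflows can gate.
    mlflow.sklearn.log_model(
        model,
        "model",
        signature=infer_signature(X, model.predict(X)),
        registered_model_name="demo-classifier",  # hypothetical registry name
    )
```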
Required:
12+ years of experience in data/ML platform engineering or ML architecture, with 3+ years designing solutions on AWS and Databricks at enterprise scale
Proven expertise in defining reference architectures, golden paths, and reusable accelerators
MLOps experience including experiment tracking (MLflow), CI/CD pipelines, feature stores, model serving, observability, drift/quality monitoring, and A/B or shadow testing
Proficiency in GenAI patterns such as retrieval-augmented generation (RAG), vector search, prompt orchestration, and safety guardrails (see the retrieval sketch after this list)
Security-by-design mindset with experience in IAM/KMS, network segmentation, data classification, and compliance frameworks (a brief S3/KMS encryption sketch appears at the end of this description)
Strong skills in organizing large groups (guilds, communities of practice, workshops) and influencing without authority
Exceptional presentation and communication skills for both technical and executive audiences
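To make the RAG requirement concrete, here is a minimal retrieval sketch: embed a query, rank a small in-memory corpus by cosine similarity, and assemble a grounded prompt. The embed function is a deterministic stand-in for whichever embedding service the platform standardizes on (e.g., Bedrock); the generation step is left as a comment.

```python
# Minimal RAG sketch: embed the query, rank documents by cosine similarity,
# and assemble a grounded prompt. embed() is a hypothetical stand-in for a
# real embedding service; it is deterministic only within a single run.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in pseudo-embedding for illustration only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(8)
    return v / np.linalg.norm(v)

corpus = [
    "Feature stores keep training and serving data consistent.",
    "Model registries track approved model versions.",
    "Drift monitoring compares live inputs to training data.",
]
doc_vectors = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q          # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

query = "How do we detect model drift?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the generation model
```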
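Finally, one concrete instance of the security-by-design guardrails named above: enforcing server-side encryption with a customer-managed KMS key on S3 writes. The bucket name, object key, and key alias below are hypothetical placeholders.

```python
# Minimal sketch of a security guardrail: every S3 write uses server-side
# encryption with a customer-managed KMS key. All names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="ml-feature-store-prod",          # hypothetical bucket
    Key="features/churn/2025-10-17.parquet", # hypothetical object key
    Body=b"...",                             # payload elided
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/ml-platform-key",     # hypothetical CMK alias
)
```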