

Intellibus
Lead AI Engineer / Cloud & Data Architect
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead AI Engineer / Cloud & Data Architect, a 6-month contract in Phoenix, AZ, with a pay rate of "XX". Requires 8-15 years in software/data/AI engineering, proficiency in Python, SQL, and cloud architectures, plus experience with data pipelines and ML model deployment.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Cloud #Leadership #Python #GCP (Google Cloud Platform) #GitHub #Observability #Scala #ETL (Extract, Transform, Load) #Automation #Security #MLflow #Monitoring #Data Pipeline #SQL Queries #AI (Artificial Intelligence) #Deployment #Data Strategy #Data Engineering #Kafka (Apache Kafka) #Azure #Documentation #Hugging Face #Kubernetes #Airflow #Data Ingestion #Compliance #Strategy #Data Integration #dbt (data build tool) #AWS (Amazon Web Services) #Model Deployment #Data Architecture #ML (Machine Learning) #Docker #Spark (Apache Spark) #SQL (Structured Query Language) #API (Application Programming Interface) #SageMaker #Jenkins
Role description
The AI Engineering Architect & Technical Coach is a hands-on engineering leader responsible for designing, building, and guiding the technical foundation of an enterprise-scale AI transformation program.
This role bridges architecture, execution, and mentorship, ensuring that AI experiments, data pipelines, and production systems are technically sound, scalable, and reusable across squads.
You’ll work side by side with engineers in the AI Skunk Works, Data Foundations, and Engineering Excellence squads, setting engineering standards, unblocking delivery, and embedding best practices in cloud, data, and AI systems.
Your north star: make sure every AI experiment can scale cleanly, securely, and reliably.
Key Responsibilities
1. Architecture Design & Implementation
• Design the technical architecture for AI and data initiatives — including ingestion, transformation, and model deployment pipelines.
• Define and document reference architectures, API standards, and reusability frameworks.
• Collaborate with data engineers to build scalable ETL/ELT pipelines and feature stores that feed AI models (see the pipeline sketch below).
• Ensure solutions adhere to security, compliance, and governance requirements.
• Evaluate and optimize cloud infrastructure (AWS, Azure, or GCP) for cost, performance, and resilience.
Deliverables: Architecture blueprints, reference implementations, technical documentation.
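For illustration only, a minimal sketch of the kind of ingestion-to-feature-store pipeline referenced above, written against Apache Airflow's TaskFlow API; the DAG name, task bodies, and data shapes are hypothetical placeholders rather than this program's actual design.

# Minimal Airflow DAG sketch: ingest -> transform -> publish features.
# All task bodies are stubs; real connectors, warehouse tables, and the
# feature store would be substituted per the target architecture.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def ai_feature_pipeline():
    @task
    def ingest() -> list[dict]:
        # Pull raw records from a source system (placeholder data).
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Apply a simple transformation; real logic might live in dbt or Spark.
        return [{**row, "amount_usd": round(row["amount"], 2)} for row in rows]

    @task
    def publish(features: list[dict]) -> None:
        # Write to a feature store or warehouse table (stubbed here).
        print(f"publishing {len(features)} feature rows")

    publish(transform(ingest()))

ai_feature_pipeline()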
2. Hands-On Development & Coaching
• Act as a player-coach able to prototype, debug, and code alongside engineers.
• Build or review Python scripts, SQL queries, APIs, and pipeline automation.
• Coach engineers on coding standards, CI/CD automation, observability, and testing practices (a small testing example follows below).
• Conduct stability reviews and code walk-throughs to raise engineering quality.
• Lead “Engineering Excellence” workshops on reliability, scalability, and AI deployment hygiene.
Deliverables: Working prototypes, CI/CD templates, best-practice repositories, coaching sessions.
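As one concrete, entirely hypothetical example of the testing habits coached here, the snippet below pairs a small data-cleaning helper with pytest checks; the function and test names are illustrative only.

# Hypothetical illustration of the unit-testing practices coached here:
# a small, pure transformation function plus pytest checks for it.
import pytest

def normalize_amount(raw: str) -> float:
    """Strip currency symbols and commas, then return a rounded float."""
    cleaned = raw.replace("$", "").replace(",", "").strip()
    return round(float(cleaned), 2)

def test_normalize_amount_handles_symbols_and_commas():
    assert normalize_amount(" $1,234.50 ") == 1234.50

def test_normalize_amount_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")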
3. AI Experiment Enablement
• Partner with Skunk Works leads to make AI experiments technically viable.
• Set up data pipelines, connectors, and lightweight back-end APIs for pilot experiments (see the API sketch below).
• Optimize workflows for 2-week sprint cycles — enabling rapid iteration and testing.
• Ensure each experiment’s architecture supports clean handoff to production once validated.
• Evaluate and integrate AI tools, APIs, or SDKs (e.g., OpenAI, Hugging Face, Vertex AI, Azure AI Studio).
Deliverables: Reusable experiment scaffolding, model integration templates, and experiment runtime environments.
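To make the lightweight back-end API idea above concrete, here is a hedged sketch using FastAPI around a Hugging Face pipeline; the model task, endpoint shape, and module name are assumptions for illustration, not a prescribed design.

# Sketch of a lightweight pilot-experiment API: FastAPI wrapping a
# Hugging Face sentiment pipeline. Model choice and endpoint contract
# are illustrative assumptions only.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="pilot-sentiment-api")
classifier = pipeline("sentiment-analysis")  # loads a default public model

class ScoreRequest(BaseModel):
    text: str

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Return the top label and confidence for the submitted text.
    result = classifier(req.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally (assuming this file is saved as experiment_api.py):
#   uvicorn experiment_api:app --reload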
4. Engineering Quality & Platform Improvement
• Define and enforce engineering excellence standards: stability, scalability, and security.
• Implement automation in build, deploy, and monitoring pipelines (see the monitoring sketch below).
• Lead incident reviews and root-cause analyses to improve reliability metrics.
• Collaborate with the Engineering Excellence squad to uplift delivery velocity and reduce incidents.
Deliverables: Automated deployment pipelines, quality dashboards, remediation plans.
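As a small, standard-library-only illustration of the monitoring automation in scope, the sketch below polls a health endpoint with retries and logging before a release is promoted; the URL, attempt count, and backoff are placeholder values.

# Illustrative deployment gate: poll a service health endpoint with
# retries and logging before promoting a release (placeholder values).
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("deploy-gate")

def wait_until_healthy(url: str, attempts: int = 5, backoff_s: float = 2.0) -> bool:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    log.info("healthy after %d attempt(s)", attempt)
                    return True
        except OSError as exc:  # covers URLError, timeouts, refused connections
            log.warning("attempt %d failed: %s", attempt, exc)
        time.sleep(backoff_s * attempt)
    log.error("service never became healthy; blocking promotion")
    return False

if __name__ == "__main__":
    wait_until_healthy("http://localhost:8080/healthz")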
5. Collaboration & Leadership
• Work closely with the Director of AI Practice & Transformation on cross-squad technical strategy.
• Collaborate with the AI & Data Strategy Lead to ensure architecture aligns with data availability and governance rules.
• Serve as the technical north star for all squads — guiding decisions on design, tooling, and trade-offs.
• Build deep trust with both the client’s technical teams and Intellibus engineers.
Deliverables: Technical reviews, architecture alignment sessions, mentoring reports.
Key Qualifications
• Experience: 8–15 years in software, data, or AI engineering; 3–5 years in lead or architect-level roles.
• Technical Skills:
• Proficiency in Python, SQL, and cloud-native architectures (AWS, Azure, or GCP).
• Hands-on experience with data-pipeline frameworks (Airflow, dbt, Kafka, Spark).
• Familiarity with ML model deployment tools such as MLflow, SageMaker, Vertex AI, or custom API deployment (see the MLflow sketch below).
• Knowledge of container orchestration (Docker, Kubernetes) and CI/CD tools (GitHub Actions, Jenkins).
• Mindset: Pragmatic builder, detail-oriented problem solver, and teacher.
• Soft Skills: Strong communicator who can explain complex technical concepts simply to non-technical stakeholders.
• Bonus: Experience in retail systems (POS, inventory, merchandising, supply chain) or large-scale data integrations.
• Location: Based in or near Phoenix, AZ (preferred).
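For the model-deployment tooling listed above, a minimal, hedged MLflow tracking sketch is included here to show the kind of experiment logging involved; the experiment name, parameters, and metric values are made-up placeholders.

# Minimal MLflow tracking sketch: log parameters and metrics for one run.
# Experiment name, params, and metric values are illustrative placeholders.
import mlflow

mlflow.set_experiment("pilot-demand-forecast")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("train_rows", 120_000)
    mlflow.log_metric("rmse", 4.37)
    mlflow.log_metric("mape", 0.082)
    # A real run would also log the trained model artifact, for example with
    # mlflow.sklearn.log_model(model, artifact_path="model").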
Key Deliverables (First 90 Days)
• Deliver reusable AI Experimentation Framework for Skunk Works pilots (scripts, templates, pipelines).
• Establish CI/CD and data ingestion pipelines supporting initial experiments.
• Conduct first Engineering Excellence Bootcamp for existing engineers.
• Lead stability and code-quality review across squads.
• Present a technical readiness report to the Program Director and ELT sponsor.
Success Profile
This person is:
• 50% Architect → designs reusable systems and processes for AI Experimentation.
• 30% Engineer → writes, reviews, and deploys framework-level code and trains Skunk Works engineers to use the framework correctly for experiments.
• 20% Coach → teaches and unblocks other coaches in the Engineering Excellence Squad
• Always outcome-oriented — making sure every technical effort works in the real world and delivers measurable business value.
Our Process
• Schedule a 15-minute Video Call with someone from our Team
• Interview with the Advisory/Leadership team
• 1 Proctored GQ (Q&A) & Slideware (Google Slide Presentation) Assessment
• 30-45 min Final/Tech Video Interview
• Receive Job Offer
If you are interested, please apply, and our team will reach out to you within the hour.






