

Yochana
Contract Role: Senior AI Engineer in Frisco, TX (Onsite/Hybrid Model) - Local Candidates Only
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a long-term contract for a Senior AI Engineer in Frisco, TX (Hybrid). Required skills include expertise in AI/ML technologies, Generative AI frameworks, and backend development with Python, FastAPI, and Azure services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 1, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Frisco, TX
-
🧠 - Skills detailed
#Snowflake #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Databases #Deployment #GitHub #Azure Blob Storage #React #MLflow #Data Warehouse #Documentation #Cloud #Storage #Flask #Microsoft Azure #ML (Machine Learning) #FastAPI #Python #Data Extraction #Docker #Data Quality #SageMaker #AWS SageMaker #MongoDB #Data Pipeline #ETL (Extract, Transform, Load) #Angular #Langchain #Redis #API (Application Programming Interface) #SQL (Structured Query Language) #Monitoring #Azure #Logging
Role description
Senior AI Engineer
Frisco, TX (Onsite/Hybrid Model)
Long-Term Contract
Mandatory Skills: Strong hands-on experience with, and understanding of, modern AI/ML technologies and Generative AI frameworks, including LangChain, LangGraph, and Retrieval-Augmented Generation (RAG), along with extensive experience designing and implementing agentic AI workflows and multi-agent systems.
Key Responsibilities
• Architect and deploy production-grade AI solutions using Azure OpenAI (GPT-4o), Azure Document Intelligence, and serverless computing paradigms on Microsoft Azure
• Design and develop solutions using Python, FastAPI, LangChain, LangGraph, Azure OpenAI (GPT-4o), Azure Document Intelligence, Azure Functions, Azure Blob Storage, Snowflake, MongoDB (Vector Search), SQL, Docker, MLflow, GitHub Actions (CI/CD), Socket.IO, Redis, and AWS SageMaker
Backend Development
• Build and maintain robust, production-grade backend APIs using FastAPI or Flask, ensuring secure authentication, input validation, and structured error handling.
• Implement secure, event-driven data pipelines (e.g., Azure Functions) to automate extraction, transformation, and loading of structured and unstructured data across cloud storage and data warehouses (Azure Blob Storage, Snowflake).
• Manage database integrations including SQL databases, Snowflake, and MongoDB (Vector Search) to support both transactional and AI-driven retrieval workflows.
• Optimize backend systems for real-time processing of AI queries and responses, implementing asynchronous Python patterns and Redis caching to minimize latency under concurrent load.
• Integrate real-time communication frameworks such as Socket.IO for seamless, low-latency user interactions with frontend applications (e.g., Angular, React).
Generative AI Model Integration
• Utilize Azure OpenAI (GPT-4o) and related services to build LLM-powered applications, including Retrieval-Augmented Generation (RAG) systems with hybrid search (keyword + semantic).
• Architect and orchestrate multi-agent systems using LangChain and LangGraph, designing specialized agents for tasks such as content generation, intelligent data extraction, and automated decision-making.
• Deploy, fine-tune, and integrate AI models into business applications, working closely with product and business stakeholders to align model outputs with business objectives.
• Optimize AI-driven prompt engineering and embedding models for efficient performance, iterating on system prompts, chunking strategies, and retrieval pipelines to maximize accuracy and reduce API costs.
• Leverage Azure Document Intelligence for parsing unstructured documents (PDFs, earnings reports) and extracting structured financial or operational KPIs at scale.
• Build and maintain Model Context Protocol (MCP) servers to expose internal databases and documentation to LLM clients for secure, standardized data retrieval.
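The hybrid search (keyword + semantic) mentioned above can be illustrated with a minimal, stdlib-only ranking sketch; the toy two-dimensional vectors stand in for real embedding-model output, and the blend weight `alpha` is an illustrative assumption.

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, query_vec, corpus, alpha=0.5):
    """Rank (doc, vec) pairs by a weighted blend of keyword and semantic scores."""
    scored = [
        (alpha * keyword_score(query, doc) + (1 - alpha) * cosine(query_vec, vec), doc)
        for doc, vec in corpus
    ]
    return [doc for score, doc in sorted(scored, reverse=True)]

corpus = [
    ("quarterly revenue grew", [1.0, 0.0]),
    ("office picnic schedule", [0.0, 1.0]),
]
ranked = hybrid_rank("revenue growth", [0.9, 0.1], corpus)
print(ranked[0])  # quarterly revenue grew
```

In a real RAG pipeline the keyword side would be a BM25 or full-text index (e.g., MongoDB Atlas Search) and the semantic side a vector index, with scores fused by the retriever rather than computed in application code.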
Containerization & Deployment
• Use Docker to containerize AI applications and their dependencies, ensuring consistent behavior across development, staging, and production environments.
• Manage end-to-end application deployments in Azure environments (Azure Functions, Azure Workspace, Azure Blob Storage), including infrastructure setup and configuration.
• Engineer CI/CD pipelines using GitHub Actions to automate testing, building, and deployment processes for seamless, zero-downtime releases.
• Monitor, troubleshoot, and resolve application performance issues post-deployment using MLflow, custom dashboards, automated alerts, and logging systems.
• Implement model monitoring practices to detect data drift, performance degradation, and data quality issues in production ML/AI systems.
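Drift detection, named in the last bullet above, is often implemented as a Population Stability Index between a baseline sample and a live sample; a minimal stdlib sketch follows (the bin count and alert thresholds are illustrative assumptions, not a universal standard).

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline and a live feature sample.

    Common rule of thumb (assumed here): PSI < 0.1 stable, 0.1-0.25 moderate
    shift, > 0.25 significant drift worth alerting on.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)  # clamp values above hi
            counts[max(i, 0)] += 1                    # clamp values below lo
        total = len(values)
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform over [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]     # mass moved to upper half
print(psi(baseline, baseline) == 0.0)  # True: identical distributions
print(psi(baseline, shifted) > 0.25)   # True: significant drift detected
```

In production this check would run on a schedule against logged model inputs and outputs, with the result pushed to MLflow or a dashboard and wired to the automated alerts the posting mentions.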






