GenAI Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GenAI Engineer, remote, with a contract lasting until 09/30/2025, offering $62.01-$78.98/hour. Key skills include expert Python development, LLM APIs, prompt engineering, AWS, and API integration. Experience in AI governance and automation is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
624
-
🗓️ - Date discovered
August 19, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Minnesota, United States
-
🧠 - Skills detailed
#Security #AI (Artificial Intelligence) #Azure #AWS (Amazon Web Services) #API (Application Programming Interface) #AWS Lambda #REST (Representational State Transfer) #Compliance #Stories #REST API #Agile #Scala #Observability #IAM (Identity and Access Management) #Automation #Cloud #Python #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Data Privacy
Role description
Role: GenAI Engineer
Start Date: ASAP
End Date: 09/30/2025
Location: Remote
Pay Rate: $62.01-$78.98/hour W2, dependent on skills and qualifications
Project Overview: The client is seeking experienced Generative AI Engineers to support enterprise-wide AI enablement initiatives. This high-impact role will contribute to the development of scalable GenAI and agentic AI workflows, automation solutions, and reusable AI components across platforms such as AWS, Microsoft 365, MS Copilot, and other third-party GenAI tools. You will be a key contributor to their AI Center of Excellence (CoE), operating across both architecture and hands-on implementation.
Key Responsibilities:
• Design and develop GenAI solutions, including prompt engineering, RAG pipelines, and custom AI workflows (see the illustrative sketch below).
• Build AI agents and agentic workflows across AWS, Microsoft 365, MS Copilot, and other GenAI platforms.
• Automate extraction of information from unstructured data sources (emails, documents, web pages).
• Develop reusable enterprise-wide AI components and services.
• Design and implement MCP hosts, clients, and servers using interoperable protocols (e.g., Google A2A).
• Establish frameworks for automated LLM testing and drift detection.
• Integrate with internal and external APIs using secure authentication and authorization.
• Implement safeguards against prompt injection and jailbreaks, ensuring compliance with enterprise security and governance.
• Work in Agile environments, documenting user stories and technical specifications.
• Collaborate with the AI CoE to ensure scalability, reusability, and governance compliance.
Required Skills & Qualifications:
• Expert-level Python development.
• Hands-on experience with LLM APIs (OpenAI, Azure OpenAI, Gemini, Anthropic, etc.).
• Strong background in prompt engineering, context construction, and grounding strategies.
• Experience with RAG pipelines: chunking, embedding, and retrieval from unstructured content (e.g., O365, PDFs, web pages).
• Proficiency in building MCP clients, servers, and hosts.
• Strong REST API design and integration skills.
• Experience with Intelligent Document Processing (IDP) and/or OCR for complex documents.
• Deep expertise in AWS (Lambda, Bedrock, Step Functions, API Gateway, IAM).
• Familiarity with observability tools such as LangSmith and CloudWatch.
• Solid understanding of enterprise data privacy, AI governance, and observability.
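For illustration only, here is a minimal sketch of the kind of RAG-style flow the responsibilities above describe: chunking unstructured text, retrieving relevant chunks, grounding a prompt, and calling an LLM through AWS Bedrock. The helper names (`chunk`, `retrieve`, `answer_question`) and the model ID are hypothetical placeholders rather than part of the client's stack, and the boto3 Converse call reflects common usage that should be verified against the actual environment.

```python
import boto3


def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character-based chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Naive lexical retrieval: rank chunks by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]


def answer_question(question: str, document: str) -> str:
    """Ground the prompt in retrieved context, then call an LLM on Bedrock."""
    context = "\n---\n".join(retrieve(question, chunk(document)))
    prompt = (
        "Answer using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    client = boto3.client("bedrock-runtime")  # assumes AWS credentials and region are configured
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

In a production setting, the lexical `retrieve` step would typically be replaced by embedding-based retrieval, and the prompt would carry the injection and jailbreak safeguards the responsibilities call for.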