Aptonet Inc

GenAI Data Automation

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a GenAI Data Automation Engineer on a 6-month contract with a pay rate of "$XX/hour". It requires expertise in AWS and Azure, Generative AI frameworks, and 2+ years of data engineering experience. U.S. citizenship is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
๐Ÿ—“๏ธ - Date
April 1, 2026
🕒 - Duration
Unknown
-
๐Ÿ๏ธ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
Washington, DC
-
🧠 - Skills detailed
#Indexing #S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #REST (Representational State Transfer) #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Hugging Face #SQL Server #Deployment #CRM (Customer Relationship Management) #DevOps #GitHub #RDS (Amazon Relational Database Service) #Bash #Computer Science #Azure DevOps #Azure SQL #IAM (Identity and Access Management) #BI (Business Intelligence) #SSIS (SQL Server Integration Services) #Spark (Apache Spark) #Metadata #Data Engineering #Jenkins #Cloud #Compliance #Batch #ML (Machine Learning) #DynamoDB #Python #Data Quality #Apache Spark #Data Pipeline #ETL (Extract, Transform, Load) #Anomaly Detection #Security #VPC (Virtual Private Cloud) #Model Deployment #Lambda (AWS Lambda) #Data Automation #CLI (Command-Line Interface) #Scripting #Scala #LangChain #Programming #API (Application Programming Interface) #Agile #Monitoring #Automation #REST API #SQL (Structured Query Language) #Firewalls #OpenSearch #Azure
Role description
The GenAI Data Automation Engineer is responsible for designing, developing, and maintaining AI-driven data pipelines and automation solutions across hybrid AWS and Azure environments. This role focuses on integrating Generative AI capabilities into scalable data systems to support analytics, reporting, and enterprise platforms. The position requires hands-on development, problem-solving, and collaboration to deliver mission-critical solutions in a federal environment.
Key Responsibilities
• Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions
• Develop ETL/ELT processes to move data across systems, including DynamoDB to SQL Server and AWS to Azure SQL integrations
• Integrate AWS Connect and NICE inContact CRM data into enterprise data pipelines for analytics and reporting
• Build and enhance ingestion pipelines using Apache Spark, Flume, and Kafka for real-time and batch processing into Solr and AWS OpenSearch
• Leverage Generative AI frameworks (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to:
  • Generate embeddings and vector data from unstructured sources
  • Automate data quality, metadata tagging, and lineage tracking
  • Enhance ETL processes with LLM-based transformations and anomaly detection
  • Develop conversational BI interfaces for natural language querying
  • Build AI-powered copilots for monitoring and troubleshooting pipelines
• Implement SQL Server optimization, including stored procedures, indexing, query tuning, and execution plan analysis
• Apply CI/CD practices using GitHub, Jenkins, or Azure DevOps
• Ensure security and compliance using IAM, KMS encryption, VPC isolation, RBAC, and firewalls
• Support Agile DevOps processes with sprint-based delivery
• Collaborate with cross-functional teams and communicate technical solutions clearly
Required Technical Skills
• AWS services: S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, Step Functions
• Azure services, including Azure SQL and Azure OpenAI
• Generative AI frameworks: AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain
• Programming and scripting: Python, SQL, SSIS, Spark, Bash, PowerShell
• Data engineering tools: Apache Spark, Flume, Kafka, Solr, AWS OpenSearch
• REST API integration within data pipelines
• CI/CD tools: GitHub, Jenkins, Azure DevOps
• Cloud CLI tools (AWS/Azure)
• SQL performance tuning and optimization
• GenAI Ops, including model deployment, monitoring, retraining, and lifecycle management
Qualifications & Experience
• Bachelor's degree in Computer Science or a related field
• 2+ years of experience in data engineering and automation
• Hands-on experience with Generative AI and LLM frameworks
• Experience working in AWS and/or Azure environments
• Strong troubleshooting and performance optimization skills
• Familiarity with Agile/DevOps methodologies
• Strong communication and presentation skills
• U.S. citizenship required
• Ability to obtain a Public Trust clearance
About the Team / Company
Leidos is a Fortune 500® technology, engineering, and science solutions provider supporting the defense, intelligence, civil, and health markets. The Civil Group focuses on modernizing government operations through AI/ML-driven data solutions, partnering with agencies such as the FAA, DOE, DOJ, NASA, and TSA to deliver mission-critical systems.