

Data Engineer - AI/ML
Featured Role | Apply directly with Data Freelance Hub
This is a Data Engineer - AI/ML contract role; the listing gives a day rate of $736 USD but no project duration. Key skills include Python, Snowflake, AWS, GCP, and experience building data pipelines for AI/ML. W2 only; no third-party pass-throughs.
Country: United States
Currency: USD ($)
Day rate: $736
Date discovered: July 2, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #AWS (Amazon Web Services) #Data Pipeline #Data Accuracy #Graph Databases #Data Engineering #ML (Machine Learning) #Python #ETL (Extract, Transform, Load) #Databases #Storage #Amazon Neptune #Data Quality #Neo4J #AI (Artificial Intelligence) #Snowflake #Monitoring #Data Lake #Data Processing #Deployment #GCP (Google Cloud Platform) #Data Warehouse #Data Ingestion
Role description
W2 ONLY - NO CORP-TO-CORP - NO VISA TRANSFER - NO 3RD-PARTY PASS-THROUGHS
Primary Job Responsibilities
• Design, develop, and implement complex data pipelines for AI/ML, including those supporting RAG architectures, using technologies such as Python, Snowflake, AWS, GCP, and Vertex AI (see the ingestion sketch after this list).
• Build and maintain data pipelines that ingest, transform, and load data from various sources (structured, unstructured, and semi-structured) into data warehouses, data lakes, vector databases (e.g., Pinecone, Weaviate, Faiss), and graph databases (e.g., Neo4j, Amazon Neptune).
• Develop and implement data quality checks, validation processes, and monitoring solutions to ensure data accuracy, consistency, and reliability (see the validation sketch after this list).
• Implement end-to-end generative AI data pipelines, from data ingestion to pipeline deployment and monitoring.
• Develop complex AI systems, adhering to best practices in software engineering and AI development.
• Work with cross-functional teams to integrate AI solutions into existing products and services.
• Keep up to date with AI advancements and apply new technologies and methodologies to our systems.
• Assist in mentoring junior AI/data engineers in AI development best practices.
• Implement and optimize RAG architectures and pipelines.
• Develop solutions for handling unstructured data in AI pipelines.
• Implement agentic workflows for autonomous AI systems.
• Develop graph database solutions for complex data relationships in AI systems (see the Neo4j sketch after this list).
• Integrate AI pipelines with the Snowflake data warehouse for efficient data processing and storage (see the Snowflake sketch after this list).
• Apply GenAI solutions to insurance-specific use cases and challenges.
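
As a rough illustration of the RAG ingestion work above, here is a minimal Python sketch of the chunk, embed, upsert flow. The embedding function and the store client (anything with an `upsert` method) are hypothetical stand-ins for whichever stack the team actually uses, e.g., Pinecone, Weaviate, or Faiss:

```python
"""Minimal sketch of a RAG ingestion pipeline: chunk documents, embed
the chunks, upsert them into a vector store. `embed_texts` and `store`
are hypothetical placeholders, not APIs from the posting."""

from dataclasses import dataclass
from typing import Iterable


@dataclass
class Chunk:
    doc_id: str
    text: str


def chunk_document(doc_id: str, text: str, size: int = 800, overlap: int = 100) -> list[Chunk]:
    """Split a document into overlapping character windows for embedding."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(Chunk(doc_id=doc_id, text=piece))
    return chunks


def ingest(docs: Iterable[tuple[str, str]], store, embed_texts) -> int:
    """Chunk, embed, and upsert documents; returns the number of vectors written."""
    written = 0
    for doc_id, text in docs:
        chunks = chunk_document(doc_id, text)
        vectors = embed_texts([c.text for c in chunks])  # hypothetical embedding call
        store.upsert(
            [
                {"id": f"{c.doc_id}#{i}", "values": v, "metadata": {"text": c.text}}
                for i, (c, v) in enumerate(zip(chunks, vectors))
            ]
        )
        written += len(chunks)
    return written
```

Deterministic IDs like `doc_id#chunk_index` keep re-ingestion idempotent: re-running the pipeline overwrites stale vectors instead of duplicating them.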
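
For the data-quality responsibility, a lightweight validation pass over records might look like the following. The field names and rules are illustrative assumptions (loosely insurance-flavored to match the GenAI use cases above), not requirements from the posting:

```python
"""Sketch of lightweight data-quality checks for a pipeline stage,
assuming records arrive as dicts. Schema and rules are illustrative."""

import logging

logger = logging.getLogger("dq")

REQUIRED_FIELDS = {"policy_id", "premium", "effective_date"}  # hypothetical schema


def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    premium = record.get("premium")
    if premium is not None and premium < 0:
        errors.append(f"negative premium: {premium}")
    return errors


def run_checks(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into (valid, rejected) and log a pass-rate metric."""
    valid, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        (rejected if errs else valid).append(rec)
        for e in errs:
            logger.warning("rejected record %s: %s", rec.get("policy_id"), e)
    if records:
        logger.info("pass rate: %.1f%%", 100 * len(valid) / len(records))
    return valid, rejected
```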
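
For the graph-database work, a sketch using the official `neo4j` Python driver. The `Entity` label, `RELATED_TO` relationship, and connection details are illustrative assumptions:

```python
"""Sketch of loading entity relationships into Neo4j. Labels,
relationship type, and connection details are placeholders."""

from neo4j import GraphDatabase

URI = "bolt://localhost:7687"  # placeholder connection details
AUTH = ("neo4j", "password")


def load_relationships(pairs: list[tuple[str, str]]) -> None:
    """MERGE each entity pair and a RELATED_TO edge, so reloads stay idempotent."""
    query = (
        "MERGE (a:Entity {name: $src}) "
        "MERGE (b:Entity {name: $dst}) "
        "MERGE (a)-[:RELATED_TO]->(b)"
    )
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            for src, dst in pairs:
                session.run(query, src=src, dst=dst)
    finally:
        driver.close()


load_relationships([("Policy-123", "Claim-456"), ("Claim-456", "Adjuster-7")])
```

MERGE rather than CREATE is the usual choice for pipeline loads: it upserts nodes and edges, so replaying a batch does not duplicate the graph.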
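
For the Snowflake integration, a sketch using `snowflake-connector-python`. The account, credentials, and `AI_FEATURES` table are placeholders for whatever the actual environment defines:

```python
"""Sketch of landing pipeline output in Snowflake. Account,
credentials, and table names are placeholders."""

import snowflake.connector


def write_features(rows: list[tuple[str, str]]) -> None:
    """Insert (id, feature_json) rows into an assumed AI_FEATURES table."""
    conn = snowflake.connector.connect(
        account="my_account",  # placeholder credentials
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="ML",
    )
    try:
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO AI_FEATURES (id, feature_json) VALUES (%s, %s)",
            rows,
        )
        conn.commit()  # explicit commit in case autocommit is disabled
    finally:
        conn.close()
```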