

GCP AI Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP AI Engineer (Junior/Senior) on a 6-12 month contract-to-hire, fully remote, paying $50-75/hour. It requires 3-5+ years in data engineering, 2+ years supporting Gen AI initiatives, and strong Python skills.
Country
United States
Currency
$ USD
Day rate
600
Date discovered
August 1, 2025
Project duration
More than 6 months
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#ML (Machine Learning) #Data Pipeline #AI (Artificial Intelligence) #Snowflake #SQL (Structured Query Language) #Deployment #DevOps #PyTorch #TensorFlow #Compliance #Spark (Apache Spark) #Data Science #Python #Data Engineering #Deep Learning #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Programming #NoSQL
Role description
Positions: Junior and Senior GCP AI Engineer
Location: Fully Remote (working core EST hours)
Duration: 6-12 month contract-to-hire
Project: New Generative AI use cases for enterprise systems
Compensation: $50-75/hour
Desired Experience
- 3-5+ years of data engineering experience across data solutions: SQL and NoSQL, Snowflake, ELT/ETL tools, CI/CD, big data, GCP, Python/Spark, and data mesh, data lake, and data fabric architectures
- 2+ years of hands-on experience supporting Gen AI initiatives in a data environment
- Experience implementing RAG (Retrieval-Augmented Generation) pipelines
- Vector/graph database experience to structure data for large models using AI tools (steps include extraction, chunking, embedding, and grounding strategies; a minimal sketch of these steps follows this list)
- Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow
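As a rough illustration of the RAG steps named above, here is a minimal chunk/embed/retrieve sketch in Python. It assumes the sentence-transformers library; the model name, chunk sizes, and placeholder corpus are illustrative choices, not details from this posting.

```python
# Minimal RAG sketch: chunk -> embed -> retrieve -> (ground into a prompt).
# Assumes the sentence-transformers package; model name, chunk sizes, and
# the placeholder corpus are illustrative, not taken from this posting.
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = ["...extracted policy text...", "...extracted claims manual..."]
chunks = [c for doc in documents for c in chunk(doc)]

# Embed all chunks once; normalized vectors make dot product = cosine similarity.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query (the 'R' in RAG)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# Grounding: the retrieved chunks are injected into the LLM prompt as context.
print(retrieve("What does the policy cover?"))
```

In production the in-memory similarity search would be replaced by the vector or graph database the role calls for.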
Day-to-Day Responsibilities
- Implement AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions, including pre-processing with extraction, chunking, embedding, and grounding strategies to get the data ready (a sketch of this stage follows below)
- Develop AI-driven systems to improve data capabilities, ensuring compliance with industry best practices for insurance-specific data use cases and challenges
- Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics
- Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems
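For concreteness, here is a minimal sketch of that pre-processing stage, assuming local files and standard-library readers; the file names, fields, and record schema are hypothetical, and a production pipeline on GCP would more likely read from BigQuery or Cloud Storage.

```python
# Sketch of the pipeline's first stage: pull structured, semi-structured,
# and unstructured sources into one uniform record shape, ready for the
# chunk/embed/ground steps shown earlier. File names, fields, and the
# Record schema are hypothetical.
import csv, json
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Record:
    source: str  # where the text came from, kept for grounding/citation
    text: str    # normalized text passed on to extraction/chunking

def load_structured(path: str) -> list[Record]:
    """Structured: flatten each CSV row into 'key: value' text."""
    with open(path, newline="") as f:
        return [Record(path, "; ".join(f"{k}: {v}" for k, v in row.items()))
                for row in csv.DictReader(f)]

def load_semi_structured(path: str) -> list[Record]:
    """Semi-structured: serialize each object in a JSON array."""
    items = json.loads(Path(path).read_text())
    return [Record(path, json.dumps(item, sort_keys=True)) for item in items]

def load_unstructured(path: str) -> list[Record]:
    """Unstructured: raw text, one record per file."""
    return [Record(path, Path(path).read_text())]

records = (load_structured("claims.csv")            # hypothetical inputs
           + load_semi_structured("policies.json")
           + load_unstructured("adjuster_notes.txt"))
# `records` now feeds the chunking/embedding steps sketched above.
```

Keeping the source path on every record is one simple way to support the grounding step later, since retrieved chunks can then cite where they came from.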