GCP Data Engineer with Gen AI

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP Data Engineer with Gen AI, offering a remote contract until year-end at $45/hr. Requires 3+ years in SQL, Python, and cloud data pipelines, plus 1+ years with Generative AI. Experience with GCP services and data orchestration is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
360
-
πŸ—“οΈ - Date discovered
August 16, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Cloud #Airflow #DevOps #GIT #Data Warehouse #Data Science #Azure #Databases #Data Lake #Datasets #AWS (Amazon Web Services) #Data Orchestration #Scala #Spark (Apache Spark) #GCP (Google Cloud Platform) #Python #Automation #Deployment #Data Quality #BigQuery #Database Systems #Data Architecture #Scripting #ETL (Extract, Transform, Load) #Storage #Data Engineering #Data Pipeline #Teradata #AI (Artificial Intelligence) #Model Deployment #SQL (Structured Query Language) #BI (Business Intelligence)
Role description
Position: GCP Data Engineer with Gen AI/LLM/Copilot and Teradata
Location: Remote
Duration: Contract to end of year, with likely extensions
Rate: $45/hr W2
Interview: Phone and video, Tuesday and Wednesday

Interview Process:
• 1 round with team leads: a 30-minute coding exercise plus 30 minutes of technical questions
• 1 round with Abhinav: technical and behavioral questions, no coding

Must Haves:
• 3+ years of experience writing SQL within database systems such as BigQuery and Teradata
• 1+ years of experience with Generative AI: familiarity with GenAI concepts, prompt engineering, LLMs (preferably Gemini and Copilot), and related frameworks
• 3+ years of hands-on experience building modern data pipelines on a major cloud platform (GCP, AWS, Azure), preferably GCP
• 3+ years of experience with Python or a comparable scripting language in a data engineering setting
• 3+ years of experience with data orchestration and pipeline engineering services such as Airflow or Spark, preferably Composer/Dataproc
• 3+ years of experience deploying to, and managing CI/CD within, cloud database platforms
• A keen awareness of the member experience, paired with a drive to automate and optimize business operations

Pluses:
• Experience designing and building data engineering solutions in cloud environments; knowledge of GCP services such as Gemini, Cloud Storage, Composer, and BigQuery strongly preferred
• Experience with Git, CI/CD pipelines, and other DevOps principles and best practices
• Experience with agentic frameworks and RAG approaches
• Knowledge of vector databases
• Familiarity with MLOps practices and AI model deployment
• Strong collaboration and communication skills within and across teams

Responsibilities:
Designs, builds, and maintains the data infrastructure that supports the organization's data initiatives, with Gen AI and automation at its core. Collaborates with cross-functional teams, including BI developers, data scientists, analysts, and software engineers, to ensure efficient and reliable processing, storage, and retrieval of data. Develops scalable data pipelines, automates and optimizes data workflows, and ensures the quality and integrity of the data.
• Designs scalable and efficient data pipelines to extract, transform, and load data from various sources into data warehouses or data lakes (see the sketch after this list)
• Integrates GenAI models into existing data systems and workflows, ensuring seamless data flow through automation
• Designs and optimizes data architectures and processes to integrate AI workflows, ensuring models receive the high-quality data needed for optimal performance, including handling large, diverse datasets
• Identifies use cases for Gen AI, along with opportunities to streamline data engineering processes, improve efficiency, and enhance the quality of deliverables
• Documents data engineering processes, workflows, and systems for reference and knowledge sharing
• Implements data quality checks and validation processes to ensure the accuracy, completeness, and consistency of the data
• Provides guidance and mentorship to junior data engineers to help them develop their technical skills and grow in their roles
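To give candidates a concrete sense of the pipeline and data-quality work described above, here is a minimal sketch of a Cloud Composer (Airflow) DAG that loads a daily extract from Cloud Storage into BigQuery and gates it with a basic row-count check. It assumes the apache-airflow-providers-google package is installed; the project, bucket, and table names (example-project, example-data-lake, staging.member_events) are hypothetical placeholders, not part of this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_member_events_load",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load the day's CSV extract from Cloud Storage into a BigQuery staging table.
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="example-data-lake",  # hypothetical data lake bucket
        source_objects=["events/{{ ds }}/*.csv"],  # one folder per execution date
        destination_project_dataset_table="example-project.staging.member_events",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Basic data quality gate: fail the run if the load produced zero rows.
    check_row_count = BigQueryCheckOperator(
        task_id="check_row_count",
        sql="SELECT COUNT(*) > 0 FROM `example-project.staging.member_events`",
        use_legacy_sql=False,
    )

    load_events >> check_row_count
```

In Composer, deploying a DAG like this typically means syncing the file into the environment's DAGs bucket, which is the kind of step the CI/CD experience in the must-haves would be expected to automate.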