SPECTRAFORCE

Senior GenAI Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior GenAI Data Engineer in San Francisco, CA (Hybrid). It offers an 8-month contract with a high possibility of extension. Key skills include GenAI proficiency, SQL mastery, and Python expertise. Experience in data integration and cloud platforms is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 1, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco Bay Area
-
🧠 - Skills detailed
#Observability #Snowflake #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Databases #CRM (Customer Relationship Management) #GitHub #Pandas #BI (Business Intelligence) #Data Warehouse #Data Engineering #Data Integration #Cloud #dbt (data build tool) #Compliance #NumPy #Libraries #ML (Machine Learning) #GCP (Google Cloud Platform) #Python #ETL (Extract, Transform, Load) #Security #Data Science #Apache Airflow #Airflow #SQL (Structured Query Language) #GDPR (General Data Protection Regulation) #Azure
Role description
Title: Senior GenAI Data Engineer
Location: San Francisco, CA (Hybrid, 1-2 days per week)
Duration: 8-month assignment with a high possibility of extension

What You’ll Do:
• GenAI Engineering & Production
• Data Engineering & Architecture
• Business Intelligence & Stakeholder Management

Required Qualifications:
Technical Expertise:
• GenAI Proficiency: Deep hands-on experience with LLM applications, including observability tools, evaluation frameworks, and safety guardrails
• Agentic AI: Demonstrated experience building multi-agent or agentic workflows using LangGraph or similar frameworks
• LLM Fundamentals: Strong understanding of how LLMs work, including their capabilities and limitations, context windows, tokenization, embeddings, and fine-tuning
• AI-Assisted Development: Active user of GenAI coding tools (Cursor, GitHub Copilot, Codex, Gemini Code Assist, etc.) with a proven ability to accelerate development
• SQL Mastery: Expert-level SQL skills, including complex joins, window functions, CTEs, query optimization, and performance tuning
• Data Engineering: Expert knowledge of dimensional modeling (star schemas, SCD Type 2), data warehouse concepts, and ETL/ELT patterns
• Python Stack: Advanced proficiency in Python, pandas, NumPy, and related data science libraries
• Workflow Orchestration: Production experience with Apache Airflow or a similar orchestration platform
• Enterprise Data Integration: Experience working with structured data from ERP, CRM, and financial systems

Nice to Have:
• Experience with vector databases
• Knowledge of cloud platforms (AWS, GCP, Azure) and their AI/ML services
• Experience with dbt (data build tool) for analytics engineering
• Experience with streaming data and real-time processing
• Background in conversation intelligence or speech-to-text applications
• Understanding of privacy, security, and compliance requirements for AI systems (SOC 2, GDPR, etc.)
• Previous experience in a startup or fast-paced environment
• Familiarity with modern data warehouse solutions (Snowflake, Hive)