eTeam

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position based in Cupertino, CA, or Austin, TX, for 6 months, offering a competitive pay rate. It requires 6 years of data engineering experience, strong SQL, hands-on experience with Spark, Kafka, and Airflow, and experience in the FinTech, Wallet, or Payments domain.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
March 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cupertino, CA
-
🧠 - Skills detailed
#Trino #Azure #Batch #AWS (Amazon Web Services) #Airflow #Dimensional Data Models #Data Integrity #SQL (Structured Query Language) #Scala #Data Modeling #Python #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Snowflake #Tableau #Data Governance #Spark (Apache Spark) #Kafka (Apache Kafka) #Data Engineering #Data Pipeline #Data Architecture #ML (Machine Learning) #Databricks #Observability #Java #Compliance
Role description
Title: Data Engineer
Location: Cupertino, CA / Austin, TX
Duration: 6 Months

The Data Foundations Engineer designs and scales modern data architectures powering Wallet, Payments, and Commerce products. This role focuses on building high-performance data pipelines and enabling analytics and ML use cases, with strong fundamentals in data modeling and scalable systems.

KEY RESPONSIBILITIES:

Data Engineering & Architecture
• Design and implement scalable batch and near-real-time data pipelines (a brief sketch follows this listing).
• Develop ETL/ELT workflows optimized for performance and cost.
• Implement dimensional data models and standardize business metrics.
• Instrument APIs and user journeys to capture behavioral and transactional data.

Data Governance & Quality
• Ensure data integrity, governance, privacy, and compliance.
• Maintain reliability and availability of mission-critical systems.

REQUIRED QUALIFICATIONS
• 6 years of experience in data engineering for analytics or ML systems.
• Strong SQL proficiency.
• Experience in Python, Scala, or Java.
• Hands-on experience with Spark, Kafka, and Airflow (or similar).
• Strong understanding of data modeling and lakehouse architectures (e.g., Iceberg).
• Experience with AWS, Azure, or GCP.
• Comfortable participating in a rotating on-call.
• Experience with Snowflake, Databricks, Trino, OLAP/NRT systems, and Superset or Tableau.
• Familiarity with CI/CD, data observability, and infrastructure-as-code.
• Exposure to MLOps and GenAI/RAG pipelines.
• Hands-on experience with LLMs (prompt engineering, fine-tuning, RAG).
• Experience in the FinTech, Wallet, or Payments domain.
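For candidates gauging fit, here is a minimal sketch of the kind of batch ETL job the responsibilities describe: reading raw transaction events with Spark, standardizing a business metric, and writing a date-partitioned aggregate. All paths, column names, and the job name are hypothetical placeholders for illustration, not details from the posting.

```python
# Minimal batch ETL sketch in PySpark. Every path and column name here
# (payment_events, event_ts, merchant_id, amount_usd) is a hypothetical
# placeholder, not something specified by the role description.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-payments-rollup")  # hypothetical job name
    .getOrCreate()
)

# Hypothetical raw event source; a production pipeline would more likely
# read from a lakehouse table (e.g., Iceberg) than bare Parquet files.
events = spark.read.parquet("s3://example-bucket/raw/payment_events/")

# Standardize one business metric: daily gross volume per merchant.
daily_rollup = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "merchant_id")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount_usd").alias("gross_volume_usd"),
    )
)

# Partition output by date so downstream consumers can prune efficiently.
(
    daily_rollup.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/marts/daily_payments_rollup/")
)

spark.stop()
```

In practice a job like this would be scheduled by an orchestrator such as Airflow, which the qualifications list calls out explicitly.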