

Optomi
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 6+ years of experience in data engineering for analytics or ML systems, offering a hybrid contract in Cupertino, CA or Austin, TX. Pay rate is competitive. Key skills include SQL, Python, Spark, and experience in FinTech.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
536
🗓️ - Date
February 25, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Austin, TX
🧠 - Skills detailed
#Kafka (Apache Kafka) #Deployment #Tableau #BigQuery #Java #ML (Machine Learning) #Batch #SQL (Structured Query Language) #Azure #AWS (Amazon Web Services) #Cloud #Databricks #Snowflake #Security #Scala #Debugging #Observability #ML Ops (Machine Learning Operations) #Data Modeling #Redshift #Automation #Monitoring #Spark (Apache Spark) #Airflow #Data Engineering #Python #Compliance #GCP (Google Cloud Platform) #Trino #Big Data
Role description
Open to both Cupertino, CA and Austin, TX locations!
Core Responsibilities
• Operations & Reliability
• Lead day-to-day operational management of analytics infrastructure.
• Ensure high availability, performance, and scalability of batch and real-time systems.
• Drive zero-downtime deployments through CI/CD and release best practices.
• Own incident management, production debugging, and post-incident reviews.
• Infrastructure & Platform Enablement
• Provision, enable, scale, and maintain data, analytics, and ML infrastructure in hybrid cloud.
• Build tools for observability, monitoring, alerting, and self-healing.
• Implement infrastructure-as-code and orchestration frameworks.
• Ensure governance, compliance, and security best practices.
• Automation & Efficiency
• Develop self-service tooling to improve engineering productivity.
• Drive cost optimization and infrastructure efficiency initiatives.
Required Qualifications
• 6+ years of experience in data engineering for analytics or ML systems.
• Strong SQL proficiency.
• Experience in Python, Scala, or Java.
• Hands-on experience with Spark, Kafka, and Airflow (or similar).
• Strong understanding of data modeling and lakehouse architectures (e.g., Iceberg).
• Experience with AWS, Azure, or GCP.
• Comfortable participating in rotating on-call.
• Experience with Snowflake, Databricks, Trino, OLAP/NRT (near-real-time) systems, and Superset or Tableau.
• Familiarity with CI/CD, data observability, and infrastructure-as-code.
• Exposure to MLOps and GenAI/RAG pipelines.
• Hands-on experience with LLMs (prompt engineering, fine-tuning, RAG).
• Experience in FinTech, Wallet, or Payments domain.
Skill Prioritization & Ideal Background
• Snowflake, Databricks, and Tableau are priorities.
• Candidates from large, structured environments are more likely to have deep experience with big data, streaming, Spark, and MLOps.
• Snowflake is a soft requirement; other cloud data warehouse (DW) experience (e.g., Redshift, BigQuery) is sufficient.