

Optomi
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a pay rate of $672 per day. Required skills include Databricks, Snowflake, Apache Spark, AWS, and strong programming in Scala. Experience in large-scale AdTech and high-volume data processing is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
672
-
🗓️ - Date
March 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco Bay Area
-
🧠 - Skills detailed
#Programming #Python #Snowflake #Data Lake #Data Engineering #Databricks #Data Pipeline #ETL (Extract, Transform, Load) #Data Processing #Datasets #Java #Batch #Apache Spark #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Spark (Apache Spark) #Scala
Role description
Domain Experience
• Large-scale AdTech / Advertising platforms
• Processing high-volume ad logs (a minimal batch sketch follows this list)
• Transforming raw ad data for downstream business stakeholders
• Building datasets and systems that measure and report ad performance
• Supporting multiple business domains through shared data platforms
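The posting names the domain but includes no sample code; the following is a minimal sketch of the kind of batch job described above, written in Scala with Apache Spark. The ad-log schema (campaign_id, event_type), the S3 paths, and the metric definitions are illustrative assumptions, not details from the role.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: roll raw ad-event logs up into per-campaign performance
// metrics for downstream stakeholders. Schema and paths are hypothetical.
object AdPerformanceBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ad-performance-batch")
      .getOrCreate()

    // Assumed layout: one Parquet row per ad event, partitioned by date.
    val events = spark.read.parquet("s3://example-bucket/ad-logs/dt=2026-03-04/")

    // Aggregate raw events into impressions, clicks, and click-through rate.
    val daily = events
      .groupBy(col("campaign_id"))
      .agg(
        sum(when(col("event_type") === "impression", 1).otherwise(0)).as("impressions"),
        sum(when(col("event_type") === "click", 1).otherwise(0)).as("clicks")
      )
      .withColumn("ctr", col("clicks") / col("impressions"))

    daily.write.mode("overwrite").parquet("s3://example-bucket/ad-metrics/dt=2026-03-04/")
    spark.stop()
  }
}

At the terabytes-per-day scale the role describes, the same shape of job would typically run on Databricks over a partitioned data lake rather than the single-path literals used here.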
Required Technical Skills (Must-Have)
• Databricks (mandatory)
• Snowflake
• Apache Spark
• AWS ecosystem
• Strong programming skills:
• Scala (required)
• Python or Java
• Experience working with:
• Terabytes of daily data processing
• Petabyte-scale data lakes
• Building and maintaining:
• Batch data pipelines
• Real-time/streaming pipelines (a minimal streaming sketch follows this list)
• Exposure to AI-assisted development tools:
• Cursor
• Claude Code
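For the real-time/streaming requirement, here is a minimal Spark Structured Streaming sketch in the same Scala stack. The Kafka broker, topic name, JSON payload shape, and output paths are assumptions for illustration; the posting names the tools, not the topology.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: windowed counts of streaming ad events per campaign.
// Broker, topic, payload shape, and sink paths are hypothetical.
object AdEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ad-event-stream")
      .getOrCreate()

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "ad-events")                 // hypothetical topic
      .load()

    // Assume each Kafka value is JSON like {"campaign_id": "...", "event_type": "..."}.
    val events = raw.select(
      get_json_object(col("value").cast("string"), "$.campaign_id").as("campaign_id"),
      get_json_object(col("value").cast("string"), "$.event_type").as("event_type"),
      col("timestamp").as("ts") // Kafka record timestamp as event time
    )

    // One-minute tumbling counts per campaign and event type, with a
    // watermark so late events are eventually dropped and state stays bounded.
    val counts = events
      .withWatermark("ts", "2 minutes")
      .groupBy(window(col("ts"), "1 minute"), col("campaign_id"), col("event_type"))
      .count()

    counts.writeStream
      .format("delta") // Delta sink, as would be typical on Databricks (assumed)
      .outputMode("append")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/ad-event-stream/")
      .start("s3://example-bucket/ad-event-counts/")
      .awaitTermination()
  }
}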