Senior Engineer (Spark/AWS)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Engineer (Spark/AWS) on a hybrid W2 contract basis, offering a pay rate of $600 per day. Candidates should have a Bachelor's degree, 5+ years of experience, and proficiency in Scala or Python, with strong skills in Apache Spark and data modeling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date discovered
June 4, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Hadoop #Spark (Apache Spark) #Data Quality #Data Engineering #Programming #Java #Data Pipeline #TypeScript #Scala #Apache Spark #Data Processing #Snowflake #Computer Science #Python #Data Modeling #Agile #A/B Testing #Airflow #Documentation #Data Governance #Data Warehouse #ML (Machine Learning) #Databricks
Role description
Senior Engineer (Spark/AWS) | Hybrid | Contract/W2 Only!
• There are no Corp-to-Corp options or visa sponsorship available for this position.
• Optomi, in partnership with a market leader in the entertainment industry, is seeking a Senior Engineer for a hybrid position out of one of their hub locations. This candidate will join the Discovery, Merchandising, and Experimentation Data team, which owns and operates mission-critical data products and services that enable personalized experiences, content campaigns, and A/B testing across a suite of streaming platforms. The ideal candidate will possess a strong technical background in large-scale data engineering and a passion for designing scalable, high-impact data systems. Additionally, this candidate will collaborate with cross-functional partners to solve complex data problems, build new greenfield solutions, and help drive innovation in experimentation and personalization.

What the right candidate will enjoy:
• Flexible work schedule!
• Long-term career opportunity!
• Contributing to building data products and services from the ground up!
• Gaining experience with cutting-edge technologies on enterprise-level platforms!

Experience of the right candidate:
• Bachelor's degree in Computer Science, Engineering, or a related field, plus 5+ years of relevant industry experience.
• Proficiency in Scala or Python, with solid programming fundamentals and experience building production-grade systems.
• Strong experience with Apache Spark, Databricks, and Airflow.
• Solid understanding of data modeling techniques and best practices.
• Familiarity with orchestration frameworks and distributed data processing systems such as Spark, Hadoop, and Databricks.
• Working knowledge of data warehouse solutions, including Databricks and Snowflake, and the ability to evaluate trade-offs among technologies.
• Excellent analytical, problem-solving, and communication skills with strong attention to detail.

Preferred Qualifications:
• Experience preparing data for machine learning pipelines and personalized recommendation systems.
• Familiarity with A/B testing frameworks, multi-armed bandits, or other experimentation methodologies.
• Hands-on experience with AWS, Java, Kotlin, TypeScript, and/or Snowflake in production environments.

Responsibilities of the right candidate:
• Design and develop scalable data products and services supporting personalization, content discovery, and experimentation at petabyte scale.
• Collaborate with stakeholders to understand business needs and translate them into technical solutions in the data and experimentation space.
• Build and maintain robust data pipelines and orchestration frameworks using technologies such as Spark, Airflow, and Databricks.
• Ensure system reliability and performance to meet SLAs and support critical business operations.
• Create and maintain comprehensive documentation to support data quality, data governance, and reproducibility.
• Contribute to agile development practices to continuously improve team processes and delivery.

Tech Stack:
• Languages: Scala, Python, Java, Kotlin, TypeScript
• Technologies: Databricks, Spark, Airflow, Snowflake, AWS