

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5+ years of experience in Python and Apache Spark, focusing on data pipeline development. It's a hybrid W2 contract position based in Los Angeles, Seattle, or New York, with a listed day rate of $664.
Country: United States
Currency: $ USD
Day rate: $664
Date discovered: August 22, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Los Angeles, CA
Skills detailed: #Batch #Spark (Apache Spark) #AWS (Amazon Web Services) #Data Pipeline #Pytest #SQL (Structured Query Language) #Computer Science #Code Reviews #Data Engineering #Apache Spark #Docker #Scala #ETL (Extract, Transform, Load) #JUnit #Airflow #Data Integration #Automation #PySpark #Snowflake #Databricks #Java #Data Ingestion #Cloud #Python
Role description
Senior Data Engineer (Python/PySpark/Airflow) | Hybrid in Los Angeles, Seattle, or New York | Contract/W2 Only!
No Corp-to-Corp options or visa sponsorship are available for this position.
Optomi, in partnership with a global leader in the entertainment industry, is seeking a Senior Data Engineer for a hybrid role based out of their Orlando, FL office. This individual will join the Activation team, playing a key role in the design, development, and support of a cutting-edge Data Integration Platform that powers data ingestion, transformation, and delivery across the enterprise.
This is a high-impact opportunity to contribute to a platform that acts as the connective tissue between internal systems, driving seamless data activation and integration at scale.
What the right candidate will enjoy:
• Long-term, high-visibility engagement!
• Working with large-scale, real-time and batch data systems!
• A collaborative and forward-thinking engineering culture!
• Building tools that enable data-driven decision-making across the business!
Experience of the right candidate:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 5+ years of professional experience in software and/or data engineering.
• Strong development experience with Python (PySpark preferred), with additional experience in Scala or Java (see the pipeline sketch after this list).
• 2+ years working with Apache Spark, especially PySpark Streaming.
• Proficiency in SQL for data transformation and ETL.
• Experience developing data pipelines and APIs for data sharing and delivery.
• Familiarity with Docker and containerization.
• Hands-on experience with Airflow or similar workflow orchestration tools.
• Solid foundation in distributed systems and software engineering principles.
• Strong testing practices using frameworks such as pytest or JUnit.
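To make the Python/PySpark and pytest expectations above concrete, here is a minimal sketch of the kind of batch transformation and unit test this list describes. The function, table, and column names are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark batch transformation plus a pytest-style test.
# All names (daily_event_counts, event_ts, event_type) are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def daily_event_counts(events: DataFrame) -> DataFrame:
    """Aggregate raw events into per-day, per-type counts (a typical ETL step)."""
    return (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )


def test_daily_event_counts():
    # Build a tiny in-memory DataFrame on a local Spark session and
    # assert on the aggregated output, as pytest would run it.
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    events = spark.createDataFrame(
        [("2025-08-22 10:00:00", "click"), ("2025-08-22 11:00:00", "click")],
        ["event_ts", "event_type"],
    )
    result = daily_event_counts(events).collect()
    assert result[0]["event_count"] == 2
    spark.stop()
```

Running `pytest` against a file containing this code exercises the transformation locally; a production pipeline would read from and write to real storage rather than in-memory DataFrames.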
Preferred Qualifications:
• Experience with cloud platforms such as AWS, Databricks, and Snowflake.
• Exposure to QA/QE practices and test automation.
• Experience mentoring junior engineers and leading engineering best practices.
• Familiarity with self-service enablement platforms and reusable component design.
Responsibilities of the right candidate:
• Design and build scalable, fault-tolerant data pipelines and services, both real-time and batch (see the orchestration sketch after this list).
• Develop and maintain Activation Data Products for data sharing and delivery.
• Create reusable components to support self-service integration and activation.
• Collaborate cross-functionally with product managers, program managers, QA, and other engineers.
• Lead code reviews and champion engineering excellence within the team.
• Mentor junior engineers and support professional development within the team.
• Represent the team in organization-wide technical initiatives and platform improvements.
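As referenced in the first responsibility above, here is a minimal sketch of how Airflow might orchestrate such a batch pipeline. The DAG id, schedule, and spark-submit commands are assumptions for illustration, not from the posting; the syntax targets Airflow 2.4+.

```python
# A minimal daily batch DAG: ingest, then transform.
# dag_id and the job scripts are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="activation_daily_batch",   # hypothetical pipeline name
    start_date=datetime(2025, 8, 22),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit jobs/ingest.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit jobs/transform.py --date {{ ds }}",
    )
    # Run the transformation only after ingestion succeeds.
    ingest >> transform
```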
What we're looking for:
• Proven ability to design and scale data-intensive systems in a modern cloud environment.
• Demonstrated success building data infrastructure to support high-volume integration needs.
• Experience leading projects and driving technical decision-making.
• Strong communication skills and the ability to collaborate with technical and non-technical stakeholders.
• Passion for clean, maintainable code and continuous improvement of engineering practices.