YO IT Consulting

Senior Data Engineer - Entertainment / Media Domain

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in the Entertainment/Media domain, offering a 12-month contract at $[pay rate] in Glendale, CA. Requires 5+ years of experience with data pipelines, Spark, Airflow, SQL, Python, and API development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 9, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glendale, CA
-
🧠 - Skills detailed
#Data Quality #Spark (Apache Spark) #SQL (Structured Query Language) #GraphQL #Java #Airflow #Snowflake #Programming #Data Engineering #Datasets #Data Pipeline #Python #Data Science #Scala #Data Processing #Cloud #Delta Lake #Databricks #Data Modeling
Role description
Job Title: Sr. Data Engineer (12-Month Contract)
Domain: Entertainment/Media/Publishing
Location: Glendale, CA - Onsite 4 days a week
Experience: 5-20 years

Must-Haves
• 5+ years of data engineering experience, specifically developing large-scale data pipelines.
• Spark, Airflow, Databricks or Snowflake, SQL, Python.

Role and Responsibilities
• Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines (see the orchestration sketch after this description).
• Build and maintain APIs to expose data to downstream applications (see the GraphQL sketch below).
• Develop real-time streaming data pipelines (see the streaming sketch below).
• Tech stack includes Airflow, Spark, Databricks, Delta Lake, and Snowflake.
• Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform.
• Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more.
• Maintain high operational efficiency and data quality across the Core Data platform datasets, ensuring solutions consistently meet SLAs while delivering reliability and accuracy to all stakeholders, including Engineering, Data Science, Operations, and Analytics teams.

Required Qualifications
• 5+ years of data engineering experience developing large-scale data pipelines.
• Proficiency in at least one major programming language (e.g., Python, Java, Scala).
• Hands-on production experience with distributed processing systems such as Spark.
• Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines.
• Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery).
• Experience developing APIs with GraphQL.
• Advanced understanding of OLTP vs. OLAP environments.
• Strong background in at least one of the following: distributed data processing, software engineering of data services, or data modeling.
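For a sense of the day-to-day pipeline work, here is a minimal orchestration sketch in Python: a daily Airflow DAG that submits a Spark job. It is illustrative only, not the employer's actual setup; it assumes Airflow 2.4+ with the Apache Spark provider installed, and the DAG id, script path, and connection are hypothetical.

```python
# A minimal sketch, assuming Airflow 2.4+ with the Apache Spark provider
# installed; the DAG id and script path are hypothetical, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="core_data_daily_load",   # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit a PySpark transform job via the "spark_default" connection.
    SparkSubmitOperator(
        task_id="transform_events",
        application="/opt/jobs/transform_events.py",  # hypothetical script
        conn_id="spark_default",
    )
```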
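The posting also asks for GraphQL API experience. Below is a minimal sketch using the strawberry library, one possible choice since the posting names GraphQL but no specific framework; the Dataset type, field names, and stubbed data are hypothetical.

```python
# A minimal sketch using the strawberry library; the Dataset type and the
# stubbed resolver data are hypothetical illustrations only.
import strawberry


@strawberry.type
class Dataset:
    name: str
    row_count: int


@strawberry.type
class Query:
    @strawberry.field
    def datasets(self) -> list[Dataset]:
        # A real resolver would query Snowflake/Databricks; stubbed here.
        return [Dataset(name="events_raw", row_count=42)]


schema = strawberry.Schema(query=Query)
# Serve with any ASGI integration, e.g. strawberry's FastAPI router.
```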
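For the real-time streaming responsibility, a minimal Spark Structured Streaming sketch: read from Kafka and append to a Delta table, matching the Spark/Delta Lake stack the posting names. The broker address, topic, and paths are hypothetical placeholders, and it assumes a Delta-enabled Spark runtime such as Databricks.

```python
# A minimal sketch assuming a Delta-enabled Spark runtime; brokers, topic,
# and storage paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Read raw events from a Kafka topic as an unbounded stream.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical brokers
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Decode the Kafka value bytes and append continuously to a Delta table.
query = (
    events.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("delta")
    .option("checkpointLocation", "/chk/events")       # hypothetical path
    .start("/tables/events_raw")                       # hypothetical table
)
query.awaitTermination()
```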