CTP

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer based in California, offering $70-$85/hr for a contract position. Requires 5+ years in data engineering, proficiency in Python or Java, and experience with Spark, Airflow, Snowflake, and API development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
November 27, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California, United States
-
🧠 - Skills detailed
#Spark (Apache Spark) #Cloud #Delta Lake #Java #Programming #Python #Datasets #Airflow #Snowflake #Data Science #GraphQL #Scala #Stories #Databricks #AWS (Amazon Web Services) #Data Engineering #Data Pipeline
Role description
Job Title: Data Engineer
Location: Glendale, CA – Open to remote, but candidates MUST be in California

The Company
Headquartered in Los Angeles, this leader in the Entertainment & Media space is focused on delivering world-class stories and experiences to its global audience. To offer the best entertainment experiences, its technology teams focus on continuous innovation and the use of cutting-edge technology.

Platform / Stack
You will work with technologies that include Python, AWS, Snowflake, Databricks, and Airflow.

Compensation Expectation: $70 – $85/hr

What You'll Do As a Data Engineer:
• Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
• Build and maintain APIs to expose data to downstream applications
• Develop real-time streaming data pipelines
• Tech stack includes Airflow, Spark, Databricks, Delta Lake, and Snowflake
• Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
• Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
• Ensure high operational efficiency and quality of the Core Data platform datasets so that our solutions meet SLAs and deliver reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)

Qualifications
You could be a great fit if you have:
• 5+ years of data engineering experience developing large data pipelines
• Proficiency in at least one major programming language (e.g., Python, Java, Scala)
• Hands-on production experience with distributed processing systems such as Spark
• Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
• Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
• Experience developing APIs with GraphQL
• Advanced understanding of OLTP vs. OLAP environments
• Graph database experience a plus
• Real-time event streaming experience a plus

This client requires that a background check be completed. A background check protects our company/client and its stakeholders by ensuring that we hire individuals with a trustworthy history, which helps maintain a safe and secure workplace. This proactive measure minimizes potential risks and promotes a culture of integrity within the organization.

Benefits Offered
Employer provides access to:
• 3 levels of medical insurance for you and your family
• Dental insurance for you and your family
• 401(k)
• Overtime
• California sick leave policy: accrue 1 hour for every 30 hours worked, up to 48 hours. If you are based in a different state, please inquire about that state's sick leave policy.