Umanist NA

Senior Data Engineer (Streaming & APIs)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Streaming & APIs) in Glendale, CA, onsite 4 days a week. The contract length is unspecified; the day rate is $640. Requires 5+ years of data engineering experience, proficiency in Python, and expertise with Spark, Snowflake, and Airflow.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
πŸ—“οΈ - Date
April 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Glendale, CA
-
🧠 - Skills detailed
#Programming #Databricks #GraphQL #Spark (Apache Spark) #Datasets #AWS (Amazon Web Services) #Delta Lake #Java #Data Modeling #Data Science #Snowflake #Python #Airflow #Cloud #Data Pipeline #Scala #Data Engineering #SQL (Structured Query Language) #Data Processing
Role description
Job Title: Sr Data Engineer
Location: Glendale, CA – Onsite 4 days a week

Platform / Stack
You will work with technologies that include Python, AWS, Spark, Snowflake, Databricks, and Airflow.

What You'll Do As a Sr Data Engineer
• Contribute to maintaining, updating, and expanding the existing Core Data platform data pipelines
• Build and maintain APIs to expose data to downstream applications
• Develop real-time streaming data pipelines (see the streaming sketch after this description)
• Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, and Snowflake
• Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
• Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
• Ensure high operational efficiency and quality of the Core Data platform datasets so that solutions meet SLAs and deliver reliability and accuracy to all stakeholders (Engineering, Data Science, Operations, and Analytics teams)

Qualifications
You could be a great fit if you have:
• 5+ years of data engineering experience developing large data pipelines
• Proficiency in at least one major programming language (e.g., Python, Java, Scala)
• Hands-on production experience with distributed processing systems such as Spark
• Hands-on production experience with data pipeline orchestration systems such as Airflow (see the orchestration sketch after this description)
• Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
• Experience developing APIs with GraphQL (see the GraphQL sketch after this description)
• An advanced understanding of OLTP vs. OLAP environments
• A strong background in at least one of the following: distributed data processing, software engineering of data services, or data modeling

Skills: pipelines, spark, sql, snowflake, databricks, airflow, data, python
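
For illustration, here is a minimal sketch of the kind of real-time pipeline the role describes: Spark Structured Streaming appending to a Delta Lake table. The listing does not name a source, so Kafka is assumed, and the broker address, topic, and paths are hypothetical; it also assumes a runtime (e.g. Databricks) with the Kafka and Delta connectors available.

```python
# Hypothetical sketch: Kafka -> Spark Structured Streaming -> Delta Lake.
# Broker, topic, and paths are placeholders, not from the listing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("core-data-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers raw bytes; cast the payload to strings for parsing downstream.
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/checkpoints/events")  # enables exactly-once recovery
    .outputMode("append")
    .start("/delta/events")
)

query.awaitTermination()
```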
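Likewise, a minimal orchestration sketch, assuming Airflow 2.4+ with the TaskFlow API. The DAG name and the extract/transform/load tasks are hypothetical stubs, not the employer's actual pipeline.

```python
# Hypothetical sketch of an Airflow DAG (TaskFlow API, Airflow 2.4+).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def core_data_daily():
    @task
    def extract() -> list[dict]:
        # Pull raw records from an upstream source (stubbed here).
        return [{"id": 1, "value": 42}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Apply business rules before loading.
        return [r for r in records if r["value"] is not None]

    @task
    def load(records: list[dict]) -> None:
        # In a real pipeline this would write to the warehouse (e.g. Snowflake).
        print(f"loaded {len(records)} records")

    # Chaining the calls defines the extract -> transform -> load dependency graph.
    load(transform(extract()))


core_data_daily()
```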
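Finally, a minimal GraphQL sketch. The listing names GraphQL but no framework, so this assumes the Strawberry library served through FastAPI (one common Python pairing); the Pipeline type and the sample data are hypothetical.

```python
# Hypothetical sketch: a GraphQL API exposing pipeline metadata via Strawberry + FastAPI.
import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter


@strawberry.type
class Pipeline:
    name: str
    healthy: bool


@strawberry.type
class Query:
    @strawberry.field
    def pipelines(self) -> list[Pipeline]:
        # In production this would query the platform's metadata store.
        return [Pipeline(name="events_ingest", healthy=True)]


schema = strawberry.Schema(query=Query)
app = FastAPI()
app.include_router(GraphQLRouter(schema), prefix="/graphql")
```

A client could then request, for example, `{ pipelines { name healthy } }` against `/graphql`.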