aKUBE

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Glendale, CA, offering a 12-month contract at up to $85/hr. Key requirements include 5+ years of data engineering experience and proficiency in Airflow, Spark, Databricks, SQL, and AWS, along with strong programming skills in Python, Java, or Scala.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
πŸ—“οΈ - Date
December 6, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Glendale, CA
-
🧠 - Skills detailed
#Kubernetes #Python #GraphQL #SQL (Structured Query Language) #Airflow #Databricks #Scrum #Programming #Datasets #Delta Lake #Scala #Agile #Data Modeling #AWS (Amazon Web Services) #Spark (Apache Spark) #Documentation #API (Application Programming Interface) #Data Quality #Data Engineering #Java
Role description
City: Glendale, CA
Onsite/Hybrid/Remote: Hybrid (3 days a week onsite, Friday remote)
Duration: 12 months
Rate Range: Up to $85/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
Work Authorization: GC, USC, all valid EADs except OPT, CPT, H1B

Must Have:
• 5+ years Data Engineering
• Airflow
• Spark DataFrame API
• Databricks
• SQL
• API integration
• AWS
• Python, Java, or Scala

Responsibilities:
• Maintain, update, and expand Core Data platform pipelines.
• Build tools for data discovery, lineage, governance, and privacy.
• Partner with engineering and cross-functional teams to deliver scalable solutions.
• Use Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS to build and optimize workflows.
• Support platform standards, best practices, and documentation.
• Ensure data quality, reliability, and SLA adherence across datasets.
• Participate in Agile ceremonies and continuous process improvement.
• Work with internal customers to understand needs and prioritize enhancements.
• Maintain detailed documentation that supports governance and quality.

Qualifications:
• 5+ years in data engineering with large-scale pipelines.
• Strong SQL and one major programming language (Python, Java, or Scala).
• Production experience with Spark and Databricks.
• Experience ingesting and interacting with API data sources.
• Hands-on Airflow orchestration experience.
• Experience developing APIs with GraphQL.
• Strong AWS knowledge and infrastructure-as-code familiarity.
• Understanding of OLTP vs. OLAP, data modeling, and data warehousing.
• Strong problem-solving and algorithmic skills.
• Clear written and verbal communication.
• Agile/Scrum experience.
• Bachelor's degree in a STEM field or equivalent industry experience.
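
For a concrete feel of the stack named above (Airflow orchestrating Spark DataFrame work against Delta tables), here is a minimal illustrative sketch. It is not part of the posting: every DAG id, path, column, and schedule below is a placeholder, and it assumes an Airflow 2.x deployment with PySpark and the delta-spark package available.

```python
# Hypothetical sketch only: a small Airflow DAG that runs a Spark DataFrame
# aggregation and writes the result as a Delta table. All names and paths are
# placeholders invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_daily_summary(**_context):
    # Import PySpark inside the task so the DAG file parses even on machines
    # without Spark installed. On Databricks a SparkSession already exists;
    # getOrCreate() simply reuses it.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_summary").getOrCreate()

    # Read a raw Delta table (placeholder path), keep today's events,
    # and count them per account using the DataFrame API.
    events = spark.read.format("delta").load("s3://example-bucket/raw/events")
    summary = (
        events
        .where(F.col("event_date") == F.current_date())
        .groupBy("account_id")
        .agg(F.count("*").alias("event_count"))
    )

    # Overwrite the output table; a production pipeline would more likely use
    # a partition overwrite or a Delta MERGE to stay idempotent.
    summary.write.format("delta").mode("overwrite").save(
        "s3://example-bucket/marts/daily_event_summary"
    )


with DAG(
    dag_id="core_daily_event_summary",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="build_daily_summary",
        python_callable=build_daily_summary,
    )
```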