

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Glendale, CA, for 12+ months at $80-$91.25/hr. Requires 5+ years of data engineering experience, proficiency in Python/Java/Scala, SQL, and experience with Spark, Airflow, Snowflake, and AWS. BA/BS in Comp Sci/IS required.
Country: United States
Currency: $ USD
Day rate: $728
Date discovered: July 12, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Glendale, CA
Skills detailed: #Databricks #Data Quality #Data Modeling #Spark (Apache Spark) #Python #Data Governance #Documentation #Scala #Delta Lake #Programming #SQL (Structured Query Language) #Datasets #Snowflake #Kubernetes #Infrastructure as Code (IaC) #Agile #Data Engineering #Data Pipeline #Airflow #Cloud #AWS (Amazon Web Services) #Data Science #Scrum #ETL (Extract, Transform, Load) #GraphQL #Data Processing #Java
Role description
Title: Senior Data Engineer
Industry: Entertainment
Location: Glendale, CA
Duration: 12+ months
Rate Range: $80-$91.25/hr
Work Requirements: US Citizens, GC Holders, or those Authorized to Work in the U.S.
Description:
• As a Senior Data Engineer, you will play a pivotal role in the transformation of data into actionable insights.
• Collaborate with our dynamic team of technologists to develop cutting-edge data solutions that drive innovation and fuel business growth. Your responsibilities will include managing complex data structures and delivering scalable and efficient data solutions.
• Your expertise in data engineering will be crucial in optimizing our data-driven decision-making processes.
Key Responsibilities:
• Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
• Build tools and services to support data discovery, lineage, governance, and privacy
• Collaborate with other software/data engineers and cross-functional teams
• Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Snowflake, Kubernetes, and AWS (a minimal sketch of this kind of pipeline follows the list)
• Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
• Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
• Ensure high operational efficiency and quality of Core Data platform datasets so that our solutions meet SLAs and deliver reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
• Be an active participant in and advocate of agile/scrum ceremonies, collaborating to improve processes for our team
• Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
• Maintain detailed documentation of your work and changes to support data quality and data governance requirements
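For illustration only, here is a minimal sketch of the kind of pipeline this stack implies: an Airflow DAG that submits a daily Spark job. It assumes Airflow 2.x with the Apache Spark provider installed; every ID, path, and name below is a hypothetical placeholder, not anything from the actual Core Data platform.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="core_data_daily_ingest",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    # Submit a PySpark job that lands raw events into a Delta Lake table.
    SparkSubmitOperator(
        task_id="ingest_events_to_delta",
        application="/opt/jobs/ingest_events.py",  # hypothetical job script
        conn_id="spark_default",
    )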
Basic Qualifications:
• 5+ years of data engineering experience developing large data pipelines
• Proficiency in at least one major programming language (e.g., Python, Java, Scala)
• Strong SQL skills and the ability to create queries to analyze complex datasets
• Hands-on production environment experience with distributed processing systems such as Spark (a brief PySpark sketch follows the list)
• Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
• Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
• Experience in developing APIs with GraphQL
• Deep understanding of AWS or other cloud providers, as well as infrastructure as code
• Familiarity with data modeling techniques and data warehousing standard methodologies and practices
• Strong algorithmic problem-solving expertise
• Advanced understanding of OLTP vs. OLAP environments
• Strong background in at least one of the following: distributed data processing, software engineering of data services, or data modeling
• Familiarity with Scrum and Agile methodologies
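As a purely illustrative sketch of the Spark and SQL items above (not part of this posting's requirements), the following PySpark snippet expresses a common windowed "top record per group" analysis; the dataset path and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("windowed-analysis-sketch").getOrCreate()

# Hypothetical event dataset; the path and columns are placeholders.
events = spark.read.parquet("s3a://example-bucket/events/")

# Keep each user's longest session per day: the kind of windowed
# aggregation the SQL requirement refers to.
w = Window.partitionBy("user_id", "event_date").orderBy(F.desc("duration_s"))
longest = (
    events.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .select("user_id", "event_date", "duration_s")
)
longest.show()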
Required Education:
• BA/BS degree in Computer Science, Information Systems, or a related field
Our benefits package includes:
• Comprehensive medical benefits
• Competitive pay
• 401(k) retirement plan
• ...and much more!
About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.