

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer, lasting 6+ months, with a pay rate of up to $86.20/hr. Located on the West Coast, it requires 5+ years of experience in data engineering and expertise in Python, PySpark, SQL, Databricks, and Snowflake.
Country
United States
Currency
$ USD
Day rate
688
Date discovered
June 18, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Glendale, CA
Skills detailed
#Scala #Scripting #Cloud #Automation #Big Data #Computer Science #Spark SQL #Azure #GCP (Google Cloud Platform) #GIT #ML (Machine Learning) #Spark (Apache Spark) #Airflow #Terraform #SQL (Structured Query Language) #Model Deployment #Data Engineering #AWS (Amazon Web Services) #Documentation #Data Science #Code Reviews #ETL (Extract, Transform, Load) #Monitoring #Data Pipeline #Shell Scripting #Version Control #Data Governance #Snowflake #Compliance #Security #Data Architecture #Data Modeling #Kafka (Apache Kafka) #Python #Datasets #Databricks #PySpark #SQL Queries #Deployment #ML Ops (Machine Learning Operations)
Role description
Industry: Mass Media & Entertainment
Job Title: Senior Data Engineer
Location: West Coast Preferred - Seattle, Los Angeles, Glendale, Burbank
Duration: 6 months +
Rate: up to $86.20/hr (Hourly rate is commensurate with experience and is an estimated range provided by WorkGenius.)
Job Description
Our client is looking for a Senior Data Engineer. Data is essential to all of their decision-making needs, whether it's related to product design, measuring advertising effectiveness, helping users discover new content, or building new businesses in emerging markets. This data is deeply valuable and yields insights into how they can keep improving their service for users, advertisers, and content partners. Their Audience team is seeking a highly motivated Data Engineer with a strong technical background who is passionate about diving deeper into Big Data to develop state-of-the-art data solutions.
About the Role: We are seeking a highly skilled Senior Data Engineer with expertise in Python, PySpark, SQL, Databricks, Snowflake, and Airflow to join our growing data team. This role involves building and maintaining scalable data pipelines, working cross-functionally with data science, product, and architecture teams, and enabling data-driven decision-making across the organization.
Key Responsibilities:
• Design, implement, and maintain ETL/ELT pipelines using Python, PySpark, Databricks, and Airflow to deliver clean, reliable, and timely data.
• Build optimized data models and develop advanced SQL queries within Snowflake and other cloud data platforms.
• Use Python and shell scripting to develop data transformation scripts, automation tools, and utilities to support data engineering and analytics.
• Collaborate closely with data scientists, product managers, data architects, and external engineering teams to understand requirements and deliver high-quality data solutions.
• Support data science workflows by providing well-structured datasets, enabling feature engineering, and integrating model outputs into production pipelines.
• Produce and maintain comprehensive documentation, including technical specifications, data flow diagrams, and pipeline monitoring procedures.
• Ensure high reliability, scalability, and performance of data systems, implementing best practices for testing, monitoring, and recovery.
• Participate in architecture reviews and provide input on strategic data platform decisions.
• Mentor junior team members and support peer code reviews.
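As a rough illustration of the transformation work described above, the sketch below shows the kind of cleaning logic such pipelines typically perform: dropping malformed rows, parsing timestamps, and deduplicating records. It is a toy example in plain Python with invented field names; in a real pipeline this would be expressed as PySpark DataFrame operations and scheduled as an Airflow task.

```python
from datetime import datetime

def clean_events(raw_events):
    """Toy ETL transform: drop malformed rows, parse timestamps, dedupe by event id.

    Field names ("event_id", "timestamp", "user") are invented for illustration;
    production code would use PySpark instead of plain Python lists.
    """
    seen = set()
    cleaned = []
    for row in raw_events:
        event_id = row.get("event_id")
        ts = row.get("timestamp")
        if event_id is None or ts is None:
            continue  # drop malformed rows
        if event_id in seen:
            continue  # deduplicate on event id
        seen.add(event_id)
        cleaned.append({
            "event_id": event_id,
            "ts": datetime.fromisoformat(ts),
            "user": row.get("user", "unknown"),
        })
    return cleaned

raw = [
    {"event_id": 1, "timestamp": "2025-06-18T10:00:00", "user": "a"},
    {"event_id": 1, "timestamp": "2025-06-18T10:00:00", "user": "a"},  # duplicate
    {"event_id": 2, "timestamp": None},                                # malformed
    {"event_id": 3, "timestamp": "2025-06-18T11:30:00"},
]
print(len(clean_events(raw)))  # 2 rows survive cleaning
```

The same shape of logic maps directly onto PySpark (`dropna`, `dropDuplicates`, `to_timestamp`) once the data volume outgrows a single machine.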
Required Skills & Qualifications:
• 5+ years of experience in data engineering, software development, or related fields.
• Strong expertise in PySpark, Python, and SQL.
• Hands-on experience with Databricks, Snowflake, and Airflow in production environments.
• Solid understanding of data warehousing, data modeling, and pipeline orchestration in cloud environments (e.g., AWS, Azure, or GCP).
• Experience working with data science teams, understanding ML lifecycle requirements and model deployment strategies.
• Strong collaboration and communication skills with technical and non-technical stakeholders.
• Excellent documentation skills and experience producing technical design artifacts.
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Nice to Have:
• Familiarity with Kafka or other streaming technologies.
• Experience with MLOps, feature stores, or deploying machine learning models in production.
• Understanding of data governance, security, and compliance standards.
• Experience with CI/CD, version control (Git), and infrastructure-as-code tools (e.g., Terraform).
Education Required: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field