

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 3-7+ years of experience in AWS services (Glue, Kinesis, Lambda) and orchestration tools. It is a hybrid position in Burbank, CA, with a contract-to-hire arrangement and a competitive pay rate.
Country: United States
Currency: $ USD
Day rate: $704
Date discovered: September 17, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Burbank, CA
Skills detailed: #Lambda (AWS Lambda) #Data Governance #RDS (Amazon Relational Database Service) #Redshift #BI (Business Intelligence) #Data Pipeline #ML (Machine Learning) #Data Engineering #Cloud #Scala #Data Quality #Batch #Monitoring #ETL (Extract, Transform, Load) #Scripting #Informatica #Data Science #Anomaly Detection #Snowflake #Metadata #Airflow #SQL (Structured Query Language) #Spark (Apache Spark) #Databricks #AI (Artificial Intelligence) #S3 (Amazon Simple Storage Service) #DynamoDB #Data Transformations #PySpark #Data Architecture #Data Catalog #Python #Deployment #AWS (Amazon Web Services)
Role description
Contract-to-hire Data Engineer opportunity with a large entertainment and media corporation. The role is hybrid, 3 days a week onsite in Burbank, CA (local candidates only).
• NO C2C
• Must have experience developing batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks.
The Senior Data Engineer plays a hands-on role within the Platform Pod, ensuring data pipelines, integrations, and services are performant, reliable, and reusable. This role partners closely with Data Architects, Cloud Architects, and application pods to deliver governed, AI/ML-ready data products.
Job Responsibilities / Typical Day in the Role
Design & Build Scalable Data Pipelines
• Lead development of batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks (see the Lambda sketch after this list).
• Implement best practices for monitoring, resilience, and cost optimization in high-scale pipelines.
• Collaborate with architects to translate canonical and semantic data models into physical implementations.
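As a rough illustration of this kind of pipeline work, here is a minimal sketch of a Lambda function consuming a Kinesis stream and landing records in S3. The bucket name and key layout are hypothetical, not details from this posting.

```python
# Hypothetical example: bucket name and key layout are illustrative only.
import base64
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Decode a micro-batch of Kinesis records and land it in S3 as JSON lines."""
    rows = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))
    if rows:
        # Key the object by the first record's sequence number so a retry
        # of the same batch overwrites rather than duplicates.
        seq = event["Records"][0]["kinesis"]["sequenceNumber"]
        body = "\n".join(json.dumps(r) for r in rows)
        s3.put_object(
            Bucket="example-data-lake",  # hypothetical bucket
            Key=f"landing/{seq}.jsonl",
            Body=body.encode("utf-8"),
        )
    return {"processed": len(rows)}
```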
Enable Analytics & AI/ML Workflows
• Build pipelines that deliver clean, well-structured data to analysts, BI tools, and ML pipelines.
• Work with data scientists to enable feature engineering and deployment of ML models into production environments (a PySpark sketch follows this list).
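As a sketch of what enabling feature engineering can look like in practice, the following PySpark job rolls raw events up into per-user features. The paths and column names are invented for illustration.

```python
# Hypothetical example: paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

events = spark.read.parquet("s3://example-data-lake/clean/events/")

# Aggregate raw events into per-user features that a downstream
# ML pipeline can consume directly.
features = (
    events.groupBy("user_id")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("session_id").alias("session_count"),
        F.max("event_ts").alias("last_seen"),
    )
)

features.write.mode("overwrite").parquet("s3://example-data-lake/features/user/")
```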
Ensure Data Quality & Governance
• Embed validation, lineage, and anomaly detection into pipelines (see the validation sketch after this list).
• Contribute to the enterprise data catalog and enforce schema alignment across pods.
• Partner with governance teams to implement role-based access, tagging, and metadata standards.
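A minimal sketch of a validation gate embedded in a PySpark pipeline is below. The paths, column names, and 1% threshold are assumptions; a production pipeline might use a dedicated framework such as Deequ or Great Expectations instead.

```python
# Hypothetical example: paths, columns, and the 1% threshold are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.parquet("s3://example-data-lake/staging/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()

# Fail fast if the batch is empty or too many rows lack a key,
# so bad data never reaches the validated zone.
if total == 0 or null_keys / total > 0.01:
    raise ValueError(f"Data quality gate failed: {null_keys}/{total} null order_id")

df.write.mode("overwrite").parquet("s3://example-data-lake/validated/orders/")
```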
Mentor & Collaborate Across Pods
• Guide junior data engineers, sharing best practices in pipeline design and coding standards.
• Participate in pod ceremonies (backlog refinement, sprint reviews) and program-level architecture syncs.
• Promote reusable services and reduce fragmentation by advocating platform-first approaches.
Years of experience:
• 3-7+ years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
Must Have Skills / Requirements
1. Data engineering experience, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
2. Proven ability to optimize pipelines for both batch and streaming use cases.
3. Knowledge of data governance practices, including lineage, validation, and cataloging.
Technology Requirements:
1. Experience with data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions); a minimal Airflow DAG sketch follows this list.
2. Strong skills in SQL, Python, PySpark, and scripting for data transformations.
3. Experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).
4. Proven ability to optimize pipelines for both batch and streaming use cases.
5. Knowledge of data governance practices, including lineage, validation, and cataloging.
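To illustrate the orchestration requirement, here is a minimal Airflow DAG that chains a Glue job to a follow-up validation task. The DAG id, Glue job name, and validation logic are placeholders, not details from this role.

```python
# Hypothetical example: DAG id, Glue job name, and validation are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

def check_output(**_):
    # Stand-in validation hook; a real task would run row-count or
    # schema checks against the Glue job's output.
    print("validating Glue job output")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue = GlueJobOperator(
        task_id="run_glue_job",
        job_name="example-transform-job",  # hypothetical Glue job
    )
    validate = PythonOperator(
        task_id="validate_output",
        python_callable=check_output,
    )

    run_glue >> validate
```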
Nice to Have Skills / Preferred Requirements
1. Proven ability to optimize pipelines for both batch and streaming use cases.
2. Knowledge of data governance practices, including lineage, validation, and cataloging.
3. Strong collaboration and mentoring skills; ability to influence pods and domains.