

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 3–5 years of experience in SQL, Python, and AWS services (Glue, Lambda, Kinesis). The contract length is unspecified, the pay rate is $45.00 to $55.00/hr, and the position is hybrid, based in Burbank, CA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date discovered
September 12, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Compliance #Data Architecture #IAM (Identity and Access Management) #Lambda (AWS Lambda) #Data Catalog #Databricks #Monitoring #SQL (Structured Query Language) #Airflow #Python #Security #Data Science #Datasets #AWS (Amazon Web Services) #Snowflake #Data Engineering #Data Quality #S3 (Amazon Simple Storage Service) #Informatica #Redshift #Agile #Debugging
Role description
Pay Rate: $45.00 to $55.00/hr on W2
Location: Burbank, CA - Hybrid
Job Responsibilities / Typical Day in the Role
Build & Maintain Pipelines
• Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions); a minimal sketch follows this list.
• Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
• Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
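For illustration only: a minimal sketch, assuming a Lambda function subscribed to a Kinesis stream, of the kind of ingestion work described above. The bucket name and key layout are hypothetical; a production job would add error handling, batching, and schema checks.

```python
import base64
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Decode a Kinesis micro-batch, tag each record, and land it in S3."""
    rows = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Enrich with the stream-side arrival time for freshness tracking.
        payload["ingested_at"] = record["kinesis"]["approximateArrivalTimestamp"]
        rows.append(payload)
    # Land the batch as newline-delimited JSON for downstream Glue jobs.
    s3.put_object(
        Bucket="example-raw-zone",  # hypothetical bucket
        Key=f"events/{context.aws_request_id}.jsonl",
        Body="\n".join(json.dumps(r) for r in rows).encode("utf-8"),
    )
    return {"records_processed": len(rows)}
```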
Support Data Architecture & Models
• Implement data models defined by architects into physical schemas.
• Contribute to pipeline designs that align with canonical and semantic standards.
• Collaborate with application pods to deliver pipelines tailored to product features.
Ensure Data Quality & Governance
• Apply validation rules and monitoring to detect and surface data quality issues (see the sketch after this list).
• Tag, document, and register new datasets in the enterprise data catalog.
• Follow platform security and compliance practices (e.g., Lake Formation, IAM).
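As a concrete, purely illustrative example of the validation bullet above, here is a self-contained sketch of batch-level quality rules. The field names and freshness SLA are assumptions, and a real pipeline would emit these issues to a monitoring system rather than return them to the caller.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rule set: required fields and a freshness SLA for one dataset.
REQUIRED_FIELDS = {"user_id", "event_type", "event_time"}
FRESHNESS_SLA = timedelta(hours=2)

def validate(rows):
    """Return human-readable data quality issues found in a batch of rows."""
    issues = []
    for i, row in enumerate(rows):
        missing = REQUIRED_FIELDS - {k for k, v in row.items() if v is not None}
        if missing:
            issues.append(f"row {i}: missing required fields {sorted(missing)}")
    # Freshness check: the newest event must fall inside the SLA window.
    times = [datetime.fromisoformat(r["event_time"]) for r in rows if r.get("event_time")]
    if times and datetime.now(timezone.utc) - max(times) > FRESHNESS_SLA:
        issues.append(f"batch is stale: newest event at {max(times).isoformat()}")
    return issues

# Example batch: one clean row, one with a null required field.
batch = [
    {"user_id": "u1", "event_type": "click", "event_time": "2025-09-12T16:00:00+00:00"},
    {"user_id": None, "event_type": "view", "event_time": "2025-09-12T16:05:00+00:00"},
]
print(validate(batch))
```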
Collaborate in Agile Pods
• Actively participate in sprint ceremonies and backlog refinement.
• Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
• Promote reuse of pipelines and shared services across pods.
Must Have Skills / Requirements
• 3–5 years of experience as a Data Engineer or in a related role.
• 3–5 years of hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• 3–5 years working with orchestration tools (Airflow, Step Functions) and CI/CD workflows; see the Airflow sketch below.
Technology requirements:
• Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
• Strong problem-solving and debugging skills for pipeline operations.
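To make the orchestration requirement concrete, a minimal Airflow 2.x sketch follows; the DAG id, schedule, and task callables are hypothetical stand-ins for real extract/transform/load logic.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull the raw batch from the source system")

def transform():
    print("clean and enrich the extracted batch")

def load():
    print("write the curated batch to the warehouse")

# Hypothetical daily ETL DAG showing task ordering and retry policy.
with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 9, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract must finish before transform, then load.
    t_extract >> t_transform >> t_load
```

The same dependency chain could equally be expressed with AWS Step Functions; Airflow is used here only because it is named in the requirement.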