Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown", offering a pay rate of "unknown". It requires 3–5 years of experience with SQL, Python, and AWS services, with a focus on building and optimizing data pipelines. The role is hybrid, based in Burbank, CA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date discovered
September 13, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#Cloud #Monitoring #Data Governance #Data Quality #Lambda (AWS Lambda) #SQL (Structured Query Language) #Databricks #IAM (Identity and Access Management) #Datasets #Scala #Batch #Data Engineering #Snowflake #Data Catalog #Python #Automation #Agile #Data Architecture #AWS (Amazon Web Services) #Informatica #Compliance #Documentation #Airflow #Security #Debugging #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Redshift #Lean #Data Pipeline #Data Science
Role description
What We Do / Project
As part of our transformation, we are evolving how finance, business, and technology collaborate, shifting to small, user-centric, product-oriented delivery teams (PODs) that deliver integrated, intelligent, scalable solutions and bring together engineers, product owners, designers, data architects, and domain experts. Each pod is empowered to own outcomes end to end: refining requirements, building solutions, testing, and delivering in iterative increments. We emphasize collaboration over handoffs, working software over documentation alone, and shared accountability for delivery. Engineers write code and also contribute to design reviews, backlog refinement, and retrospectives, ensuring decisions are transparent and scalable across pods. We prioritize reusability, automation, and continuous improvement, balancing rapid delivery with long-term maintainability.

The Data Engineer is an integral member of the Platform Pod, focused on building, maintaining, and optimizing data pipelines that deliver trusted data to product pods, analysts, and data scientists. This role works closely with the Senior Data Engineer, Data Architect, and Cloud Architect to implement pipelines aligned with enterprise standards and program goals.

Job Responsibilities / Typical Day in the Role
Build & Maintain Pipelines
• Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions); a minimal sketch of this kind of job appears after the requirements below.
• Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
• Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
Support Data Architecture & Models
• Implement data models defined by architects into physical schemas.
• Contribute to pipeline designs that align with canonical and semantic standards.
• Collaborate with application pods to deliver pipelines tailored to product features.
Ensure Data Quality & Governance
• Apply validation rules and monitoring to detect and surface data quality issues.
• Tag, document, and register new datasets in the enterprise data catalog.
• Follow platform security and compliance practices (e.g., Lake Formation, IAM).
Collaborate in Agile Pods
• Actively participate in sprint ceremonies and backlog refinement.
• Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
• Promote reuse of pipelines and shared services across pods.
Must Have Skills / Requirements
• 3–5 years of experience as a Data Engineer or in a related role.
• Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.
Nice To Have Skills / Preferred Requirements
• Proven ability to optimize pipelines for both batch and streaming use cases.
• Knowledge of data governance practices, including lineage, validation, and cataloging.
• Strong collaboration and mentoring skills; ability to influence across pods and domains.
Soft Skills
• Collaborative mindset: willingness to work in agile pods and contribute to cross-functional outcomes.
Technology Requirements
• Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows (a minimal orchestration sketch also follows below).
• Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
• Strong problem-solving and debugging skills for pipeline operations.
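To make the pipeline responsibilities above concrete, here is a minimal, purely illustrative sketch, not part of the posting or the employer's actual codebase: it assumes a Lambda-style Python entry point, hypothetical bucket names (raw-zone-example, curated-zone-example), and a made-up validation rule on an order_id field, and uses boto3 plus the standard csv module to read an object from S3, drop rows that fail a basic data-quality check, and write the cleaned result back.

```python
"""Illustrative sketch only: a minimal batch ingestion/validation step of the
kind described under "Build & Maintain Pipelines". Bucket names, the object
key layout, and the validation rule are hypothetical."""
import csv
import io

import boto3  # AWS SDK for Python, assumed available in the runtime

s3 = boto3.client("s3")

SOURCE_BUCKET = "raw-zone-example"      # hypothetical raw/landing bucket
TARGET_BUCKET = "curated-zone-example"  # hypothetical curated bucket


def transform_row(row):
    """Apply a simple data-quality check and enrichment; return None to drop."""
    if not row.get("order_id"):                 # required-field validation
        return None
    row["amount_usd"] = round(float(row.get("amount") or 0), 2)
    return row


def handler(event, context):
    """Lambda-style entry point: read a CSV object, clean it, write it back."""
    key = event["key"]                          # object key passed by the trigger
    obj = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)
    rows = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))

    cleaned = [r for r in (transform_row(row) for row in rows) if r is not None]
    if not cleaned:                             # nothing valid: report, don't write
        return {"rows_written": 0}

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(cleaned[0].keys()))
    writer.writeheader()
    writer.writerows(cleaned)
    s3.put_object(Bucket=TARGET_BUCKET, Key=key, Body=out.getvalue().encode("utf-8"))
    return {"rows_written": len(cleaned)}
```

In practice a job like this would also emit metrics for pipeline-health monitoring and register the curated dataset in the data catalog, as the governance bullets describe.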
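For the orchestration tooling requirement (Airflow, Step Functions), the following is a minimal, hypothetical Airflow sketch, assuming a recent Airflow 2.x release; the DAG id, schedule, and task bodies are placeholders, and a real pipeline would invoke Glue jobs, Lambda functions, or Step Functions rather than printing.

```python
"""Illustrative sketch only: a minimal Airflow DAG wiring extract -> transform
-> load tasks. All names and the schedule are hypothetical."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: locate or pull the raw data (e.g. an S3 object or API feed).
    return "s3://raw-zone-example/orders/latest.csv"


def transform(**context):
    # Placeholder: clean and enrich the data located by the extract task.
    source_key = context["ti"].xcom_pull(task_ids="extract")
    print(f"transforming {source_key}")


def load(**context):
    # Placeholder: publish the curated dataset for downstream pods and analysts.
    print("loading curated dataset")


with DAG(
    dag_id="orders_daily_pipeline",   # hypothetical DAG name
    start_date=datetime(2025, 9, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```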
Additional Notes
• Hybrid – 3 days on-site in Burbank, CA
#DICE