Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7+ years of experience, focusing on AWS services and data governance. It is a hybrid position in Burbank, CA; the contract length and pay rate are not specified.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 13, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#Cloud #Monitoring #Data Governance #Scripting #Data Quality #Lambda (AWS Lambda) #Metadata #SQL (Structured Query Language) #Databricks #Scala #Anomaly Detection #Batch #Data Engineering #PySpark #Snowflake #Data Catalog #RDS (Amazon Relational Database Service) #Deployment #Spark (Apache Spark) #Python #Automation #Agile #Data Architecture #DynamoDB #AWS (Amazon Web Services) #Informatica #Documentation #Airflow #S3 (Amazon Simple Storage Service) #BI (Business Intelligence) #Data Transformations #ETL (Extract, Transform, Load) #Redshift #Lean #ML (Machine Learning) #AI (Artificial Intelligence) #Data Pipeline #Data Science
Role description
As part of our transformation, we are evolving how finance, business, and technology collaborate, shifting to lean-agile, user-centric, small, product-oriented delivery teams (pods) that deliver integrated, intelligent, scalable solutions and bring together engineers, product owners, designers, data architects, and domain experts. Each pod is empowered to own outcomes end to end: refining requirements, building solutions, testing, and delivering in iterative increments. We emphasize collaboration over handoffs, working software over documentation alone, and shared accountability for delivery. Engineers contribute not only code but also take part in design reviews, backlog refinement, and retrospectives, ensuring decisions are transparent and scalable across pods. We prioritize reusability, automation, and continuous improvement, balancing rapid delivery with long-term maintainability.

The Senior Data Engineer plays a hands-on role within the Platform Pod, ensuring data pipelines, integrations, and services are performant, reliable, and reusable. This role partners closely with Data Architects, Cloud Architects, and application pods to deliver governed, AI/ML-ready data products.

Job Responsibilities / Typical Day in the Role

Design & Build Scalable Data Pipelines
• Lead development of batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks.
• Implement best practices for monitoring, resilience, and cost optimization in high-scale pipelines.
• Collaborate with architects to translate canonical and semantic data models into physical implementations.

Enable Analytics & AI/ML Workflows
• Build pipelines that deliver clean, well-structured data to analysts, BI tools, and ML pipelines.
• Work with data scientists to enable feature engineering and deployment of ML models into production environments.

Ensure Data Quality & Governance
• Embed validation, lineage, and anomaly detection into pipelines (see the pipeline sketch after the skills sections below).
• Contribute to the enterprise data catalog and enforce schema alignment across pods.
• Partner with governance teams to implement role-based access, tagging, and metadata standards.

Mentor & Collaborate Across Pods
• Guide junior data engineers, sharing best practices in pipeline design and coding standards.
• Participate in pod ceremonies (backlog refinement, sprint reviews) and program-level architecture syncs.
• Promote reusable services and reduce fragmentation by advocating platform-first approaches.

Must Have Skills / Requirements
• 7+ years of data engineering experience, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
• Proven ability to optimize pipelines for both batch and streaming use cases.
• Knowledge of data governance practices, including lineage, validation, and cataloging.

Soft Skills
• Strong collaboration and mentoring skills; ability to influence pods and domains.
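To make the pipeline and data-quality responsibilities above concrete, here is a minimal PySpark sketch of a batch job with an embedded validation step, in the spirit of the Glue- and Spark-based work this role describes. It is illustrative only: the S3 paths, column names, and row-count thresholds are hypothetical placeholders, not details from this listing.

```python
# Minimal sketch: batch transformation with an embedded row-count check.
# All paths, columns, and thresholds below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw landing-zone data (hypothetical bucket/path).
raw = spark.read.json("s3://example-landing/orders/2025-09-13/")

# Normalize types and drop rows missing required keys.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .dropna(subset=["order_id", "order_ts", "amount"])
)

# Embedded validation: fail fast if volume falls outside an expected band,
# a simple stand-in for the anomaly detection the role calls for.
row_count = clean.count()
if not (10_000 <= row_count <= 5_000_000):
    raise ValueError(f"Row count {row_count} outside expected range")

# Publish curated output partitioned by date for downstream consumers.
(clean.withColumn("ds", F.to_date("order_ts"))
      .write.mode("overwrite")
      .partitionBy("ds")
      .parquet("s3://example-curated/orders/"))
```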
Technology Requirements
• Experience with data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions); a minimal orchestration sketch appears at the end of this listing.
• Strong skills in SQL, Python, PySpark, and scripting for data transformations.
• Experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).
• Proven ability to optimize pipelines for both batch and streaming use cases.
• Knowledge of data governance practices, including lineage, validation, and cataloging.

Additional Notes
• Hybrid: 3 days per week on-site in Burbank, CA.

#DICE
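As an illustration of the orchestration tooling named in the technology requirements, here is a minimal Airflow sketch chaining extract, transform, and load tasks. It assumes Airflow 2.4+; the DAG id, schedule, and task bodies are hypothetical placeholders rather than anything specified by this role.

```python
# Minimal Airflow sketch: a daily extract -> transform -> load sequence.
# DAG id, schedule, and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the landing bucket")

def transform():
    print("run the PySpark transformation job")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2025, 9, 1),
    schedule="@daily",  # Airflow 2.4+; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in order.
    t1 >> t2 >> t3
```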