

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: 680
Date discovered: September 16, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Burbank, CA
Skills detailed:
#Snowflake #Batch #S3 (Amazon Simple Storage Service) #Security #SQL (Structured Query Language) #Data Architecture #DynamoDB #Anomaly Detection #RDS (Amazon Relational Database Service) #Python #Spark (Apache Spark) #Data Pipeline #Semantic Models #Agile #Data Science #Informatica #Data Catalog #Scripting #Airflow #Data Quality #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #ML (Machine Learning) #Data Governance #Databricks #BI (Business Intelligence) #Metadata #Lambda (AWS Lambda) #Lean #Monitoring #Scala #AWS (Amazon Web Services) #Data Engineering #Redshift #PySpark
Role description
About the Role
As part of our economics transformation, we are reimagining how finance, business, and technology collaborate. Our teams are shifting to lean-agile, product-oriented delivery pods that integrate engineers, product owners, designers, data architects, and domain experts. Each pod owns outcomes end-to-end, with engineers contributing not only to the codebase but also to design reviews, backlog refinement, and retrospectives.
The Senior Data Engineer plays a hands-on role in building high-performance data pipelines, integrations, and services that power analytics, AI/ML workflows, and enterprise-wide data products. This role is ideal for an experienced engineer who thrives in collaborative, agile environments and can balance rapid delivery with long-term scalability and governance.
Key Responsibilities
Design & Build Scalable Data Pipelines
• Lead development of batch and streaming pipelines using AWS-native services (Glue, Lambda, Step Functions, Kinesis); a brief sketch of such a batch job appears after this list.
• Implement best practices for monitoring, resilience, security, and cost optimization in high-scale pipelines.
• Collaborate with architects to translate canonical and semantic models into physical implementations.
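For illustration only, the sketch below shows a minimal AWS Glue batch job in PySpark of the kind these responsibilities describe. The bucket paths, column names, and job arguments are hypothetical placeholders rather than details from this posting, and the script assumes it runs inside a Glue job environment where the awsglue library is available.

```python
# Minimal AWS Glue batch job sketch (PySpark). All names below (buckets,
# columns, job arguments) are illustrative placeholders, and the awsglue
# imports only resolve inside a Glue job environment.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw landing-zone data from S3 (placeholder path).
raw = spark.read.parquet("s3://example-landing-bucket/orders/")

# Light transformation: deduplicate and derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated output back to S3, partitioned for downstream consumers.
(
    clean.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-bucket/orders/")
)

job.commit()
```

A streaming counterpart would typically swap the S3 read for a Kinesis source consumed by Glue streaming or Lambda, but the read, transform, write shape stays the same.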
Enable Analytics & AI/ML Workflows
• Build pipelines delivering clean, structured data for BI tools, analytics, and ML pipelines.
• Partner with data scientists to support feature engineering and deploy ML models into production.
Ensure Data Quality & Governance
• Embed validation, anomaly detection, and lineage into pipelines (see the validation sketch after this list).
• Contribute to the enterprise data catalog and enforce schema alignment across pods.
• Implement role-based access controls, metadata standards, and tagging in collaboration with governance teams.
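As a rough illustration of embedding validation and a simple anomaly check directly in a pipeline step, the sketch below uses PySpark; the column names, minimum row count, and three-standard-deviation threshold are assumptions made for the example.

```python
# In-pipeline validation sketch (PySpark). Column names, the minimum row
# count, and the 3-sigma threshold are assumptions made for illustration.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def validate_orders(df: DataFrame, min_rows: int = 1_000) -> DataFrame:
    """Fail fast on basic quality problems before publishing downstream."""
    row_count = df.count()
    if row_count < min_rows:
        raise ValueError(f"Row count {row_count} is below the expected minimum {min_rows}")

    null_ids = df.filter(F.col("order_id").isNull()).count()
    if null_ids:
        raise ValueError(f"{null_ids} rows are missing order_id")

    # Crude anomaly check: flag days whose totals sit far from the mean.
    daily = df.groupBy("order_date").agg(F.sum("amount").alias("daily_total"))
    stats = daily.agg(
        F.avg("daily_total").alias("mean"),
        F.stddev("daily_total").alias("std"),
    ).first()
    if stats["std"] is not None:
        outliers = daily.filter(
            F.abs(F.col("daily_total") - stats["mean"]) > 3 * stats["std"]
        ).count()
        if outliers:
            print(f"WARNING: {outliers} day(s) show anomalous order totals")

    return df
```

In practice such checks would usually emit metrics and lineage events to the data catalog rather than raising or printing locally, but the pattern of validating before publishing is the same.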
Mentor & Collaborate Across Pods
• Provide technical guidance to junior data engineers, sharing best practices in design and coding standards.
• Participate in pod ceremonies (standups, backlog refinement, sprint reviews) and cross-program architecture syncs.
• Advocate for reusable services and platform-first approaches to reduce fragmentation.
Must-Have Qualifications
• 7+ years of experience in data engineering with expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3).
• Hands-on experience with orchestration tools such as Airflow or Step Functions (a minimal Airflow sketch follows this list).
• Strong skills in SQL, Python, PySpark, and scripting for data transformation.
• Experience working with modern data platforms (e.g., Snowflake, Databricks, Redshift, Informatica).
• Proven ability to design and optimize pipelines for both batch and streaming use cases.
• Knowledge of data governance practices including lineage, validation, and cataloging.
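To make the orchestration requirement concrete, a minimal Airflow DAG wiring a daily extract-transform-load sequence might look like the sketch below; the DAG id, schedule, and task bodies are placeholders, and the imports assume Airflow 2.x.

```python
# Minimal Airflow DAG sketch for a daily extract/transform/load run.
# The dag_id, schedule, and task bodies are placeholders; imports assume
# Airflow 2.x (the `schedule` argument requires 2.4 or newer).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull source data, e.g. from an API or an S3 landing zone")


def transform(**context):
    print("clean and reshape the extracted data")


def load(**context):
    print("publish curated data to the warehouse")


with DAG(
    dag_id="example_daily_orders",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

On AWS, the same sequence could instead be expressed as a Step Functions state machine invoking Glue jobs or Lambda functions.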
Nice-to-Have Qualifications
• Experience collaborating across matrixed, pod-based teams in agile environments.
• Strong mentoring and collaboration skills with the ability to influence pods and domains.
• Familiarity with large-scale AI/ML-driven workflows.
Benefits:
• This role is eligible to enroll in both Mondo's health insurance plan and retirement plan. Mondo defers to the applicable state or local law for paid sick leave eligibility.