Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
πŸ—“οΈ - Date discovered
September 16, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#Snowflake #Batch #S3 (Amazon Simple Storage Service) #Security #SQL (Structured Query Language) #Data Architecture #Python #Data Pipeline #Agile #Cloud #Debugging #Data Science #IAM (Identity and Access Management) #Informatica #Data Catalog #Airflow #Data Quality #ETL (Extract, Transform, Load) #Data Governance #Databricks #Datasets #Lambda (AWS Lambda) #Lean #Monitoring #Scala #AWS (Amazon Web Services) #Data Engineering #Compliance #Redshift
Role description
About the Role
We are reimagining how finance, business, and technology collaborate. Our approach is built on lean-agile, user-centric, product-oriented pods that deliver integrated, scalable solutions. The Data Engineer will be an integral member of these pods, focused on building, maintaining, and optimizing data pipelines that deliver trusted, governed data to product pods, analysts, and data scientists. You'll work alongside Senior Data Engineers, Data Architects, and Cloud Architects, contributing to the development of scalable data solutions while participating fully in agile delivery practices. This role is hands-on, collaborative, and outcome-driven: a strong fit for someone who thrives both in building reliable pipelines and in shaping how data is leveraged across the enterprise.

Key Responsibilities

Build & Maintain Pipelines
• Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions).
• Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
• Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness and reliability are met.

Support Data Architecture & Models
• Implement data models defined by architects as physical schemas.
• Design and deliver pipelines that align with canonical and semantic standards.
• Partner with application pods to deliver pipelines tailored to product features.

Ensure Data Quality & Governance
• Apply validation rules and monitoring to detect and surface data quality issues.
• Tag, document, and register datasets in the enterprise data catalog.
• Follow platform security and compliance practices (e.g., IAM, Lake Formation).

Collaborate in Agile Pods
• Actively participate in sprint ceremonies, backlog refinement, and retrospectives.
• Work closely with developers, analysts, and data scientists to clarify requirements and unblock dependencies.
• Promote reuse of pipelines and shared services across pods to improve scalability and efficiency.

(Illustrative sketches of this kind of pipeline, validation, and orchestration work appear at the end of this posting.)

Must-Have Qualifications
• 3–5 years of experience as a Data Engineer (or in a related role).
• Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• Familiarity with orchestration tools such as Airflow or Step Functions.
• Experience with CI/CD workflows for data pipelines.
• Strong problem-solving and debugging skills for pipeline operations.

Nice-to-Have Qualifications
• Proven ability to optimize pipelines for both batch and streaming use cases.
• Knowledge of data governance practices (lineage, validation, cataloging).
• Exposure to modern data platforms (Snowflake, Databricks, Redshift, Informatica).
• Strong collaboration and mentoring skills, with the ability to influence across pods and domains.

Benefits
• This role is eligible to enroll in both Mondo's health insurance plan and retirement plan. Mondo defers to the applicable state or local law for paid sick leave eligibility.
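As a rough illustration of the pipeline-development work described above (not a requirement of the posting), the following is a minimal sketch of an AWS Lambda handler that reads a CSV object from S3, applies a simple enrichment, and writes the result to a curated prefix. The bucket name, prefixes, and field names are hypothetical placeholders.

```python
# Minimal sketch of an S3-to-S3 transform Lambda, assuming the event is an
# S3 "ObjectCreated" notification. Bucket, prefix, and field names are hypothetical.
import csv
import io

import boto3

s3 = boto3.client("s3")

CURATED_BUCKET = "example-curated-bucket"  # hypothetical target bucket


def handler(event, context):
    # Pull the source object location out of the S3 event notification.
    record = event["Records"][0]["s3"]
    src_bucket = record["bucket"]["name"]
    src_key = record["object"]["key"]

    # Read the raw CSV into memory (fine for small batch files).
    body = s3.get_object(Bucket=src_bucket, Key=src_key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return {"rows_processed": 0}

    # Simple transformation/enrichment: normalize one field, derive another.
    for row in rows:
        row["department"] = row.get("department", "").strip().upper()
        row["amount_usd"] = round(float(row.get("amount", 0) or 0), 2)

    # Write the curated output under a parallel prefix in the curated bucket.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(
        Bucket=CURATED_BUCKET,
        Key=f"curated/{src_key}",
        Body=out.getvalue().encode("utf-8"),
    )
    return {"rows_processed": len(rows), "output_key": f"curated/{src_key}"}
```

In practice, larger batch or streaming volumes would be better served by Glue or Kinesis jobs, with Lambda reserved for lightweight, event-driven transforms like this one.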
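The data quality and governance bullets mention applying validation rules to detect and surface issues. Here is a minimal, hypothetical sketch of a rule-based check that could run against a curated batch before it is published; the column names and rules are assumptions, not part of the posting.

```python
# Minimal sketch of rule-based data quality checks on a batch of row dicts.
# Column names ("invoice_id", "amount_usd", "department") are hypothetical.
from typing import Iterable


def validate_rows(rows: Iterable[dict]) -> list[str]:
    """Return a list of human-readable data quality issues (empty means clean)."""
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: required columns must be present and non-empty.
        for col in ("invoice_id", "amount_usd", "department"):
            if not row.get(col):
                issues.append(f"row {i}: missing value for '{col}'")
        # Uniqueness: invoice_id should not repeat within the batch.
        if row.get("invoice_id") in seen_ids:
            issues.append(f"row {i}: duplicate invoice_id {row['invoice_id']}")
        seen_ids.add(row.get("invoice_id"))
        # Validity: amounts should be non-negative numbers.
        try:
            if float(row.get("amount_usd", 0)) < 0:
                issues.append(f"row {i}: negative amount_usd")
        except (TypeError, ValueError):
            issues.append(f"row {i}: non-numeric amount_usd")
    return issues
```

A check like this would typically surface issues to monitoring or fail the pipeline run rather than silently dropping rows, in line with the "detect and surface" wording above.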
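Finally, since the must-have qualifications mention orchestration with Airflow or Step Functions, below is a minimal sketch of an Airflow DAG (assuming Airflow 2.4+ and the TaskFlow API) chaining hypothetical ingest, validate, and publish steps on a daily schedule. The DAG name and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch: ingest -> validate -> publish, daily.
# DAG id and task bodies are placeholders; real tasks would trigger Glue jobs,
# SQL transforms, or helpers like the sketches above.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="finance_curated_daily",  # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2025, 1, 1),
    catchup=False,
    tags=["finance", "curated"],
)
def finance_curated_daily():
    @task
    def ingest() -> str:
        # In practice: trigger a Glue job or land raw files in a staging prefix.
        return "s3://example-curated-bucket/curated/latest.csv"

    @task
    def validate(path: str) -> str:
        # In practice: run rule-based checks (see validate_rows above) and fail
        # the task if issues are found, so bad data never reaches consumers.
        return path

    @task
    def publish(path: str) -> None:
        # In practice: register/refresh the dataset in the catalog and notify pods.
        print(f"published {path}")

    publish(validate(ingest()))


finance_curated_daily()
```

An equivalent flow could be expressed as a Step Functions state machine; the key point is that ingestion, validation, and publication are separate, observable steps with clear failure points.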