Tential Solutions

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 3+ years of experience in SQL, Python, and AWS services (Glue, Lambda, Kinesis). The contract is hybrid, with 3 days on-site in Burbank, CA, and offers a competitive pay rate.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#Data Pipeline #Databricks #S3 (Amazon Simple Storage Service) #Airflow #Cloud #Snowflake #Debugging #IAM (Identity and Access Management) #Agile #Datasets #Monitoring #Redshift #Automation #Data Engineering #Scala #Lean #Security #Data Science #Data Architecture #Data Quality #Lambda (AWS Lambda) #Documentation #AWS (Amazon Web Services) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Compliance #Informatica #Data Catalog
Role description
What We Do / Project
As part of our transformation, we are evolving how finance, business, and technology collaborate, shifting to lean-agile, user-centric, small, product-oriented delivery teams (pods) that deliver integrated, intelligent, scalable solutions and bring together engineers, product owners, designers, data architects, and domain experts. Each pod is empowered to own outcomes end to end: refining requirements, building solutions, testing, and delivering in iterative increments. We emphasize collaboration over handoffs, working software over documentation alone, and shared accountability for delivery. Engineers contribute not only code but also design reviews, backlog refinement, and retrospectives, ensuring decisions are transparent and scalable across pods. We prioritize reusability, automation, and continuous improvement, balancing rapid delivery with long-term maintainability.

The Data Engineer is an integral member of the Platform Pod, focused on building, maintaining, and optimizing data pipelines that deliver trusted data to product pods, analysts, and data scientists. This role works closely with the Senior Data Engineer, Data Architect, and Cloud Architect to implement pipelines aligned with enterprise standards and program goals.

Job Responsibilities / Typical Day in the Role
Build & Maintain Pipelines
• Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions); see the sketch after this section.
• Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
• Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
Support Data Architecture & Models
• Implement data models defined by architects into physical schemas.
• Contribute to pipeline designs that align with canonical and semantic standards.
• Collaborate with application pods to deliver pipelines tailored to product features.
Ensure Data Quality & Governance
• Apply validation rules and monitoring to detect and surface data quality issues.
• Tag, document, and register new datasets in the enterprise data catalog.
• Follow platform security and compliance practices (e.g., Lake Formation, IAM).
Collaborate in Agile Pods
• Actively participate in sprint ceremonies and backlog refinement.
• Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
• Promote reuse of pipelines and shared services across pods.

Must Have Skills / Requirements
• 3+ years of experience as a data engineer.
• 3+ years of hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• 3+ years of familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.

Nice To Have Skills / Preferred Requirements
• None.

Soft Skills
• Strong problem-solving and debugging skills for pipeline operations.
• Collaborative mindset: willingness to work in agile pods and contribute to cross-functional outcomes.

Technology Requirements
• Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.
• Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.

Additional Notes
• Sourcing in Burbank, CA.
• Hybrid requirement: 3 days on-site.
#DICE
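Illustrative Sketch
For illustration only, and not part of the listing: a minimal Python sketch of the kind of Kinesis-triggered ingestion and validation step described under Build & Maintain Pipelines. The bucket name, field names, and validation rule are assumptions, not details from the posting.

import base64
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical landing bucket; a real pipeline would take this from configuration.
LANDING_BUCKET = "example-landing-bucket"

def handler(event, context):
    """Lambda entry point for a Kinesis-triggered ingestion step.

    Decodes each stream record, applies a simple validation rule, and
    writes valid payloads to S3 for downstream Glue/ETL jobs to pick up.
    """
    valid, rejected = [], []
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Assumed validation rule: every event must carry an "event_id".
        if payload.get("event_id"):
            valid.append(payload)
        else:
            rejected.append(payload)

    if valid:
        key = f"raw/{context.aws_request_id}.json"
        s3.put_object(
            Bucket=LANDING_BUCKET,
            Key=key,
            Body=json.dumps(valid).encode("utf-8"),
        )

    # Surface data quality issues rather than silently dropping records.
    return {"ingested": len(valid), "rejected": len(rejected)}

In this role, a step like this would also route rejected records to monitoring and register the resulting dataset in the enterprise data catalog, per the governance responsibilities above.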