Aloola.io

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length and pay rate are not yet specified. Key skills required include Amazon Redshift, dbt, Apache Airflow, and Google Cloud Dataflow. 5+ years of relevant experience is essential, preferably in healthcare.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #Data Modeling #DevOps #Data Analysis #Data Quality #Dataflow #Data Engineering #GIT #Amazon Redshift #Documentation #Apache Beam #Redshift #Batch #Monitoring #Scala #Version Control #dbt (data build tool) #Data Warehouse #BigQuery #Apache Airflow #Data Pipeline #Data Vault #Observability #Airflow #ETL (Extract, Transform, Load) #Deployment #Terraform #Cloud #Schema Design #Python #Vault #GCP (Google Cloud Platform) #SQL (Structured Query Language)
Role description
About the Role
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work cross-functionally with analytics, engineering, and business teams to ensure reliable, high-quality data flows that power decision-making across the organization.

Responsibilities
• Design, develop, and maintain robust ELT/ETL pipelines using dbt, with Apache Airflow for orchestration
• Build and optimize data workflows using Google Cloud Dataflow for large-scale stream and batch processing
• Manage and optimize the Amazon Redshift data warehouse, including schema design, query performance tuning, and cluster maintenance
• Collaborate with data analysts and scientists to model data in a way that supports self-service analytics and reporting
• Implement and enforce data quality checks, monitoring, and alerting across pipelines
• Develop and maintain dbt models, tests, and documentation to ensure data consistency and lineage transparency
• Partner with platform and DevOps teams on infrastructure-as-code, CI/CD pipelines, and deployment of data assets
• Contribute to the development of data engineering best practices, standards, and reusable frameworks

Requirements
• 5+ years of experience in a data engineering role
• Strong proficiency with Amazon Redshift: schema design, query optimization, distribution/sort keys, and workload management (a DDL sketch follows below)
• Hands-on experience with dbt (Core or Cloud): building models, writing tests, managing sources, and maintaining documentation
• Experience orchestrating workflows with Apache Airflow (Cloud Composer or self-managed), including DAG development, scheduling, and dependency management (a DAG sketch follows below)
• Experience building pipelines with Google Cloud Dataflow (Apache Beam), including both batch and streaming use cases (a Beam sketch follows below)
• Proficiency in SQL and Python
• Familiarity with data modeling concepts (star schema, Kimball, data vault)
• Experience with version control (Git) and collaborative development workflows

Nice to Have
• Experience with Google Cloud Platform (BigQuery, GCS, Pub/Sub)
• Familiarity with Terraform or other infrastructure-as-code tools
• Knowledge of data observability tools (Monte Carlo, Great Expectations, etc.)
• Experience in a healthcare or regulated data environment
• Exposure to streaming architectures (Kafka, Pub/Sub)

What We're Looking For
A self-sufficient engineer who takes ownership end to end, from pipeline design through production monitoring, and communicates clearly with both technical and non-technical stakeholders.
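The Redshift requirement above calls out distribution and sort keys. The sketch below is a minimal illustration of how those are typically declared; the table, columns, cluster host, and credentials are all placeholders invented for the example, not details from this role, and psycopg2 is used only because Redshift speaks the PostgreSQL wire protocol.

```python
import psycopg2  # Redshift accepts PostgreSQL-protocol clients such as psycopg2

# Hypothetical fact table showing a distribution key and a sort key.
# Every identifier and credential below is a placeholder.
DDL = """
CREATE TABLE IF NOT EXISTS analytics.fct_orders (
    order_id    BIGINT         NOT NULL,
    customer_id BIGINT         NOT NULL,
    order_date  DATE           NOT NULL,
    amount_usd  DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows that join on customer_id
SORTKEY (order_date);   -- speed up date-range scans
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REPLACE_ME",
)
with conn, conn.cursor() as cur:  # 'with conn' commits the transaction on success
    cur.execute(DDL)
conn.close()
```

Keying the table on the most common join column and sorting on the usual filter column is the standard starting point; the right choices depend on actual query patterns.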
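As a minimal sketch of the Airflow orchestration described in the requirements (assuming Airflow 2.x), the DAG below runs an extract step and then builds and tests dbt models. The dag_id, schedule, script names, and dbt project path are assumptions made for the example, not values from this posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily ELT DAG: extract raw data, then run and test dbt models.
default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2026, 1, 1),
    schedule="0 6 * * *",  # once a day at 06:00 UTC (Airflow >= 2.4 syntax)
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(task_id="extract_raw", bash_command="python extract.py")
    dbt_run = BashOperator(task_id="dbt_run", bash_command="dbt run --project-dir /opt/dbt")
    dbt_test = BashOperator(task_id="dbt_test", bash_command="dbt test --project-dir /opt/dbt")

    # Dependencies: each task runs only after the one before it succeeds.
    extract >> dbt_run >> dbt_test
```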
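For the Google Cloud Dataflow (Apache Beam) requirement, here is a minimal batch pipeline sketch. The bucket paths and record layout are assumptions for illustration; with no flags it runs on the local DirectRunner, and passing the usual DataflowRunner options would execute the same code on Dataflow.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str):
    # Assumed record layout: "user_id,event_name"; adjust to the real schema.
    user_id, event = line.split(",", 1)
    return user_id, event


def run():
    # Add --runner=DataflowRunner --project=... --region=... --temp_location=gs://...
    # on the command line to run this same pipeline on Google Cloud Dataflow.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
            | "ParseLines" >> beam.Map(parse_line)
            | "CountPerUser" >> beam.combiners.Count.PerKey()
            | "FormatRows" >> beam.MapTuple(lambda user, n: f"{user},{n}")
            | "WriteCounts" >> beam.io.WriteToText("gs://example-bucket/output/user_counts")
        )


if __name__ == "__main__":
    run()
```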