

Elios, Inc.
Senior Platform Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a Senior Platform Data Engineer role on a year-long contract, offering a competitive pay rate. Key skills required include Python, SQL, dbt, Apache Airflow, and experience with AWS or Databricks. A background in financial data modeling is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 28, 2026
Duration: More than 6 months
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #OpenSearch #IAM (Identity and Access Management) #Kubernetes #JSON (JavaScript Object Notation) #Terraform #REST (Representational State Transfer) #Data Engineering #Metadata #Migration #Apache Airflow #dbt (data build tool) #Deployment #ETL (Extract, Transform, Load) #Data Quality #pydantic #Code Reviews #Cloud #Classification #Delta Lake #Infrastructure as Code (IaC) #JDBC (Java Database Connectivity) #Data Modeling #Monitoring #Scala #AWS (Amazon Web Services) #AWS S3 (Amazon Simple Storage Service) #Redshift #Airflow #Docker #SaaS (Software as a Service) #SQL (Structured Query Language) #Agile #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Pytest #Normalization #Snowflake #Fivetran #Kafka (Apache Kafka) #Slowly Changing Dimensions #Data Lineage #Python #Data Science #Databricks
Role description
Senior Platform Data Engineer
We are seeking a Senior Platform Data Engineer to support a high-visibility, year-long platform transformation initiative. This is a production engineering role focused on building and operating a centralized, enterprise-grade data platform that institutionalizes proprietary knowledge and delivers data as a product, not a side effect.
You will design, build, and operate models, pipelines, integrations, and infrastructure across the full medallion architecture (Bronze → Silver → Gold). This role requires true end-to-end ownership: design, build, test, deploy, monitor, and continuously improve.
What You’ll Do
• Build and maintain ingestion pipelines from raw Bronze landing through curated Gold domain models
• Develop transformation models using dbt with testing, Slim CI, and exposure tracking
• Author and manage production-grade Airflow DAGs with SLA monitoring and backfill strategies
• Design and implement streaming and event-driven pipelines (Kafka, Glue Streaming, Delta Live Tables)
• Collaborate with data scientists, platform engineers, and product teams to ship reliable data products
• Implement data quality frameworks and enforce governance standards
• Containerize workloads and deploy to Kubernetes (EKS)
• Contribute to infrastructure as code (Terraform provisioning for S3, IAM, Glue, etc.)
• Document lineage, metadata, ownership, and sensitivity classification
This is not a “pipeline-only” role; it is platform ownership at scale.
Core Technical Skills
Python
• Production-level standards (type hints, Pydantic, pytest, Ruff, mypy)
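To make "production-level standards" concrete, here is a minimal sketch of the kind of typed, fail-fast record handling this implies. It uses stdlib dataclasses to stay dependency-free; in practice a Pydantic model would do the parsing and validation. All names (`TradeRecord`, `parse_row`, the fields) are illustrative, not from the posting.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class TradeRecord:
    """A typed record for a hypothetical Bronze-layer ingest row."""
    trade_id: str
    amount_usd: float
    trade_date: date

    def __post_init__(self) -> None:
        # Fail fast on bad input, the way a Pydantic model would on parse.
        if not self.trade_id:
            raise ValueError("trade_id must be non-empty")
        if self.amount_usd < 0:
            raise ValueError("amount_usd must be non-negative")


def parse_row(raw: dict) -> TradeRecord:
    """Coerce a raw dict (e.g. parsed JSON) into a validated record."""
    return TradeRecord(
        trade_id=str(raw["trade_id"]),
        amount_usd=float(raw["amount_usd"]),
        trade_date=date.fromisoformat(raw["trade_date"]),
    )
```

Code like this is what type hints, mypy, and pytest then verify mechanically: the annotations give mypy something to check, and the `ValueError` paths give unit tests something to assert on.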
SQL
• Advanced analytical queries, window functions, optimization
• Experience with Redshift, Snowflake, or Databricks SQL
dbt
• Gold-layer modeling and testing
• Slim CI, exposures, meta configurations
Orchestration
• Apache Airflow (DAG authoring, task dependencies, SLAs, backfills)
• Prefect or Dagster is a plus
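The Airflow items above (DAG authoring, dependencies, SLAs, backfills) can be sketched in one Airflow 2.x-style DAG file. This is an illustrative configuration sketch, not code from the role; the dag_id, schedule, and task callables are all hypothetical.

```python
# Sketch of a production-style Airflow 2.x DAG: explicit schedule, SLA,
# retries, and catchup enabled so historical runs can be backfilled.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def land_bronze(**context):
    ...  # pull from the source system, write raw files to the Bronze landing zone


def build_silver(**context):
    ...  # apply typing/deduplication to produce Silver tables


with DAG(
    dag_id="bronze_to_silver_daily",
    start_date=datetime(2026, 1, 1),
    schedule="0 6 * * *",           # daily at 06:00 UTC
    catchup=True,                   # enables `airflow dags backfill`
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "sla": timedelta(hours=2),  # breaches surface as SLA misses/alerts
    },
) as dag:
    land = PythonOperator(task_id="land_bronze", python_callable=land_bronze)
    silver = PythonOperator(task_id="build_silver", python_callable=build_silver)
    land >> silver   # explicit task dependency
```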
Streaming & Event Pipelines
• Kafka topic consumption
• Bronze landing jobs
Cloud Data Platforms (hands-on with at least one)
• Databricks (Delta Lake, Unity Catalog, Auto Loader, DLT)
• AWS (S3, Glue, Lake Formation, Redshift)
Data Quality
• Great Expectations, dbt tests, or equivalent
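For a sense of what such checks do, here are plain-Python stand-ins for the two most common dbt schema tests (`not_null` and `unique`), applied to rows as dicts. This is a minimal sketch for illustration; in practice these checks would be declared in dbt YAML or a Great Expectations suite rather than hand-written.

```python
def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]


def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)
```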
Docker & Kubernetes
• Containerized jobs
• Kubernetes Jobs and CronJobs
• EKS deployments
Infrastructure as Code
• Terraform familiarity
Data Modeling
• Dimensional modeling
• Slowly Changing Dimensions
• Medallion architecture design
• Financial or time-series modeling experience preferred
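The Slowly Changing Dimensions item refers to the Type 2 pattern: instead of overwriting a dimension row when an attribute changes, the current version is closed out and a new "current" row is appended, preserving history. A minimal in-memory sketch (column and function names are illustrative):

```python
from datetime import date


def scd2_upsert(dim_rows, key, new_attrs, today):
    """Apply one change to a list of dimension rows (dicts), SCD Type 2 style."""
    current = next(
        (r for r in dim_rows if r["key"] == key and r["is_current"]), None
    )
    if current and all(current.get(k) == v for k, v in new_attrs.items()):
        return dim_rows  # no attribute change: keep history as-is
    if current:
        # Close out the existing version rather than overwriting it.
        current["is_current"] = False
        current["valid_to"] = today
    dim_rows.append(
        {"key": key, **new_attrs, "valid_from": today,
         "valid_to": None, "is_current": True}
    )
    return dim_rows
```

In a warehouse this same logic would typically be expressed as a dbt snapshot or a `MERGE` statement rather than row-by-row Python.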
Data Platform-Specific Experience
• SaaS ingestion via Fivetran, Airbyte, or custom connectors (REST, SFTP, JDBC)
• Change Data Capture (Debezium or connector-based CDC modes)
• Handling semi-structured and unstructured data (PDF, Excel, JSON)
• Designing Bronze schemas for fidelity and downstream flexibility
• Vector embedding pipelines (chunking strategies, embeddings, pgvector/OpenSearch)
• Data lineage tooling (OpenLineage, DataHub, Unity Catalog)
• Governance standards: PII handling, sensitivity classification, column masking, partition-level access
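On the vector embedding pipeline item: the first stage (the "chunking strategy") can be as simple as a fixed-size splitter with overlap, whose output then goes to an embedding model and an index such as pgvector or OpenSearch. A minimal sketch, assuming character-based sizes for simplicity (real pipelines often chunk by tokens):

```python
def chunk_text(text, size=500, overlap=50):
    """Split `text` into overlapping fixed-size chunks for embedding."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    # Overlap lets a sentence cut at a chunk boundary still appear whole
    # in the neighboring chunk, which helps retrieval quality.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```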
Ways of Working
• Agile sprint participation (planning, standups, retros)
• Treat pipelines as production software (tests, runbooks, SLAs, alerts)
• Migration-safe schema changes (expand/contract pattern)
• Proactive data quality ownership
• Rigorous peer code reviews
• Close collaboration with data science and platform engineering teams
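The expand/contract pattern mentioned above means a schema change ships in two phases: first expand (add the new column alongside the old, backfill, dual-write) and only later contract (drop the old column once no reader depends on it), so no deploy ever breaks a live consumer. A sketch using stdlib sqlite3, with illustrative table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, amt_cents INTEGER)")
con.execute("INSERT INTO accounts VALUES (1, 1050), (2, 200)")

# EXPAND: add the new column alongside the old one and backfill it.
con.execute("ALTER TABLE accounts ADD COLUMN amount_usd REAL")
con.execute("UPDATE accounts SET amount_usd = amt_cents / 100.0")

# ...dual-write period: writers populate both columns; readers migrate...

# CONTRACT: once nothing reads amt_cents, drop it (needs SQLite 3.35+).
if sqlite3.sqlite_version_info >= (3, 35, 0):
    con.execute("ALTER TABLE accounts DROP COLUMN amt_cents")
```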
Nice to Have
• Migration of legacy data assets (Excel, Access, shared drives)
• ML feature engineering and point-in-time correct pipelines
• Exposure to financial reporting, chart of accounts normalization, or diligence data
• Experience in private equity, investment banking, asset management, or professional services
• Open-source data tooling contributions
Qualifications
• Bachelor’s degree or equivalent experience
• 5–8+ years of experience developing data products and platforms
• Strong background in Agile software development
• Demonstrated success in prior engineering roles
• Strong analytical and problem-solving capabilities
What We’re Looking For
• Clear communicator who can document complex data flows
• Results-driven and impact-focused
• Proactive and self-directed
• Comfortable interfacing across technical and business stakeholders
• Entrepreneurial mindset with a willingness to innovate
If you’re a production-minded data engineer who thrives on building scalable platforms and owning pipelines end-to-end, we’d love to connect.






