

SilverSearch, Inc.
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with an 8+ year background in data engineering, focusing on Azure and Databricks. Contract length is unspecified, with a competitive pay rate. Key skills include Azure Data Factory, Python, and Data Vault 2.0.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
February 14, 2026
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#Data Lake #Apache Airflow #Automation #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Azure ADLS (Azure Data Lake Storage) #Azure Data Factory #ADLS (Azure Data Lake Storage) #Vault #Scala #dbt (data build tool) #Fivetran #Databricks #SQL (Structured Query Language) #ADF (Azure Data Factory) #Data Vault #Documentation #Replication #Spark (Apache Spark) #DevOps #Datasets #Airflow #Data Engineering #PySpark #Storage #Python #Azure #Monitoring #Agile #Data Quality
Role description
Senior Data Engineer
About the Role
We are seeking a highly skilled, hands-on Senior Data Engineer to build and scale modern lakehouse and analytics solutions in Azure and Databricks. This role will focus on developing high-performance data pipelines, implementing scalable data models, and delivering trusted datasets that power enterprise reporting and advanced analytics.
You'll partner closely with architects, analysts, and stakeholders to translate business requirements into production-grade data solutions while contributing to platform evolution and engineering best practices.
What You'll Do
• Design and build scalable ETL/ELT pipelines using Azure Data Factory (ADF), Databricks, and dbt
• Implement batch and near real-time processing using Delta Live Tables (DLT)
• Orchestrate workflows using Apache Airflow, Databricks LakeFlow, and ADF
• Develop ingestion and replication pipelines using Fivetran, SQDR, Rivery, or custom Python/PySpark solutions
• Design and implement Data Vault 2.0 models to support enterprise analytics
• Write optimized SQL, Python, and PySpark for complex transformations and performance tuning
• Ensure data quality, reliability, monitoring, and operational excellence across pipelines
• Contribute to CI/CD, DevOps automation, documentation, and Agile delivery practices
• Provide technical mentorship and promote strong engineering standards
What We're Looking For
• 8+ years of experience in data or software engineering
• 5+ years building enterprise ETL/ELT and analytics solutions
• Strong experience with Azure Data Factory and Azure Data Lake Storage
• Hands-on experience with Databricks, Delta Live Tables, and Unity Catalog
• Advanced SQL expertise (T-SQL, performance tuning, optimization)
• Strong Python and PySpark development experience
• Experience with dbt and Apache Airflow
• Experience implementing Data Vault 2.0 modeling
• Familiarity with CI/CD and DevOps practices
Must-Have Technical Stack
• Azure Data Factory (ADF)
• Azure Data Lake Storage
• Databricks
• Delta Live Tables (DLT)
• Unity Catalog
• dbt
• Apache Airflow
• Python & PySpark
• Advanced SQL
• Data Vault 2.0
We are not accepting third party submissions for this role.






