

Pantheon Inc
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Azure, Snowflake, Delta Lake, Databricks, and CI/CD. Experience with AI/ML workloads and data governance is essential.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
January 28, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
The Woodlands, TX
-
Skills detailed
#ACID (Atomicity, Consistency, Isolation, Durability) #Data Quality #Agile #Python #ML (Machine Learning) #Deployment #Snowflake #PySpark #Spark (Apache Spark) #Data Architecture #Cloud #Automation #Databricks #ETL (Extract, Transform, Load) #Version Control #Data Engineering #Storage #Data Governance #AI (Artificial Intelligence) #Data Science #Datasets #Scala #Azure #Data Pipeline #Delta Lake #SQL (Structured Query Language)
Role description
Overview
The Senior Data Engineer will lead the design, development, automation, and optimization of scalable cloud and on-premises data infrastructure supporting enterprise applications, advanced analytics, and AI/ML workloads. Operating within an agile environment, this role requires close collaboration with software engineering, infrastructure, and data science teams to deliver secure, reliable, and high-performance data systems aligned with enterprise standards.
Responsibilities
• Design, build, and optimize scalable ETL/ELT pipelines using Databricks, Delta Lake, Python, SQL, and related data engineering technologies.
• Demonstrate expert-level proficiency in PySpark and Databricks Lakehouse architectures, including schema evolution, ACID transactions, data quality enforcement, and cost- and performance-optimized handling of large, complex datasets.
• Improve data ecosystem efficiency by implementing Lakehouse-based bronze/silver/gold architectures, reducing processing times through optimization techniques, enforcing data quality frameworks such as Great Expectations, and enhancing storage and retrieval patterns.
• Collaborate with cross-functional teams (data scientists, engineers, product owners, and business stakeholders) to translate business requirements into robust technical designs supporting AI/ML enablement, feature engineering, real-time analytics, and production-grade data workflows.
• Strengthen CI/CD and data governance practices through automation, version control, testing frameworks, and scalable deployment patterns for data pipelines and platform components.
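As a rough illustration of the bronze/silver/gold (medallion) pattern and data quality enforcement named above, the sketch below uses plain Python dictionaries as a stand-in for what would be PySpark DataFrames and Delta tables in practice; the field names and quality rules are hypothetical, not taken from the role.

```python
# Conceptual medallion-flow sketch. Plain Python stands in for
# PySpark/Delta Lake; record fields and rules are illustrative only.

RAW_EVENTS = [  # "bronze": raw records landed as-is, bad rows included
    {"order_id": "1001", "amount": "25.50", "region": "TX"},
    {"order_id": "1002", "amount": "not-a-number", "region": "TX"},
    {"order_id": None, "amount": "10.00", "region": "CA"},
]

def to_silver(bronze):
    """Cleanse and validate: drop rows that fail basic quality rules,
    the kind of checks a framework like Great Expectations would enforce."""
    silver = []
    for row in bronze:
        if row["order_id"] is None:
            continue  # rule: order_id must not be null
        try:
            amount = float(row["amount"])  # rule: amount must be numeric
        except ValueError:
            continue
        silver.append({"order_id": row["order_id"],
                       "amount": amount,
                       "region": row["region"]})
    return silver

def to_gold(silver):
    """Aggregate into a business-level table: revenue per region."""
    totals = {}
    for row in silver:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(to_gold(to_silver(RAW_EVENTS)))  # {'TX': 25.5, 'CA': 10.0}
```

In a real Lakehouse, each stage would be persisted as its own Delta table, so downstream consumers read validated silver or aggregated gold data rather than raw bronze.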
Qualifications
Strong background across Azure, Snowflake, Delta Lake, Databricks, and CI/CD automation, bringing the cross-cloud, governance-minded, and automation-driven engineering approach required for the organization's integrated enterprise-scale data architecture.
Top 3 Skills:
• Azure
• Data knowledge of Snowflake, Delta Lake, and Databricks
• CI/CD
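The CI/CD emphasis above typically means pipeline transformations are covered by automated tests that run on every commit. A minimal, hypothetical sketch using Python's built-in unittest (the function and test names are illustrative, not from the posting):

```python
import unittest

def dedupe_orders(rows):
    """Illustrative pipeline transformation: keep the first record per order_id."""
    seen, out = set(), []
    for row in rows:
        if row["order_id"] not in seen:
            seen.add(row["order_id"])
            out.append(row)
    return out

class DedupeOrdersTest(unittest.TestCase):
    def test_removes_duplicate_order_ids(self):
        rows = [{"order_id": 1}, {"order_id": 1}, {"order_id": 2}]
        self.assertEqual(dedupe_orders(rows),
                         [{"order_id": 1}, {"order_id": 2}])

# In CI this would run via `python -m unittest` (or pytest) on each commit.
```

Version-controlling both the transformation and its test lets the deployment pipeline block a release when a change breaks expected behavior.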






