Santcore Technologies

Enterprise Data Engineer (Databricks)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Enterprise Data Engineer (Databricks) in a hybrid location (Austin, TX) for 5 months at $65/hr. Requires 5+ years in Data Engineering; expertise in Databricks, Delta Lake, Python, and SQL; and experience with ServiceNow and ApptioOne integrations.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
April 15, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Azure DevOps #Monitoring #Microsoft Power BI #Azure #Data Quality #Data Engineering #Scala #Data Access #Data Architecture #Storage #Data Modeling #Data Governance #ADLS (Azure Data Lake Storage) #Delta Lake #PySpark #DevOps #BI (Business Intelligence) #Automation #Data Ingestion #Spark (Apache Spark) #AI (Artificial Intelligence) #Spark SQL #GitHub #Agile #Databricks #Python #Documentation #ETL (Extract, Transform, Load) #ML (Machine Learning) #MLflow #Datasets #SQL (Structured Query Language) #Data Pipeline
Role description
Job Title: Enterprise Data Engineer (Databricks)
Location: Hybrid – Austin, TX
Duration: 5 Months
Pay Rate: $65/hr on 1099

Role Summary
The Enterprise Data Engineer will design, build, and operate scalable data pipelines within an Azure-based Databricks Lakehouse architecture. The primary focus is delivering and maintaining a software-driven data model for analytics and data consumption. This is a hands-on, execution-focused role responsible for engineering reliable data ingestion from multiple sources, performing transformations, implementing data quality checks, and delivering curated datasets integrated with ServiceNow (ITSM/ITSLM) and ApptioOne (ITFM). The role involves close collaboration with data architects, platform teams, providers, and stakeholders to translate architectural designs into scalable, governed, production-ready data solutions. The work follows Agile software engineering practices, including GitHub-based workflows and CI/CD-driven SDLC processes.

Key Responsibilities
• Design, build, and maintain data models supporting data consumption, integration, semantic analytics, reporting, and executive dashboards.
• Develop scalable data ingestion and transformation pipelines using Azure PaaS services, Databricks, Delta Lake, Python, and Spark SQL (see the ingestion sketch below).
• Implement integrations for ServiceNow operational data (SLAs, incidents, CMDB) and ApptioOne financial and cost allocation data.
• Develop and enforce data quality checks, validation rules, and monitoring mechanisms for end-to-end pipeline reliability.
• Apply Unity Catalog governance, including data access control, lineage management, and schema enforcement per architectural standards (see the governance sketch below).
• Optimize Databricks Lakehouse performance, including pipeline efficiency, storage layout, and query optimization.
• Support CI/CD pipelines and DevOps automation for data engineering workflows using Azure DevOps and GitHub Actions.
• Collaborate with architects, stakeholders, Capgemini teams, and service providers to deliver reporting and analytics solutions.
• Troubleshoot production data issues and ensure the operational stability of analytics and reporting systems.
• Maintain documentation, runbooks, and operational standards for Databricks data pipelines.

Required Skills & Experience
• 5+ years of experience in Data Engineering or Analytics Engineering roles.
• Hands-on experience with Databricks, Delta Lake, and Spark-based data pipelines.
• Strong understanding of the Medallion Architecture, especially Gold/Platinum layer implementation (see the Gold-layer sketch below).
• Proficiency in Python, SQL, and Spark (PySpark or Spark SQL).
• Experience integrating enterprise systems such as ServiceNow (SLA, incident, and CMDB data).
• Experience with financial or cost management platforms (e.g., ApptioOne or similar ITFM tools).
• Strong understanding of data modeling techniques and methodologies.
• Familiarity with Unity Catalog for data governance and access control.
• Experience with Power BI or similar BI tools consuming Lakehouse datasets.
• Experience with Azure data services (e.g., ADLS Gen2, orchestration tools, integration patterns), Azure DevOps, and GitHub-based CI/CD pipelines.

Preferred Qualifications
• Experience supporting public sector data initiatives.
• Familiarity with the ITIL 4 framework and SLA-based reporting.
• Experience with financial systems, SLA analytics, operational KPIs, or cost transparency dashboards.
• Exposure to MLflow, Feature Store, or AI/ML pipelines (implementation support, not architecture ownership).
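To make the ingestion and data quality responsibilities concrete, here is a minimal bronze-to-silver sketch of the kind of pipeline described above. It assumes a Databricks runtime where `spark` is predefined; the catalog, schema, table, path, and column names (`itsm.bronze.incidents`, `sys_id`, `opened_at`) are hypothetical placeholders, not this role's actual environment.

```python
from pyspark.sql import functions as F

# Bronze: land raw ServiceNow incident extracts as-is, stamped with
# ingestion metadata. Path and table names below are hypothetical.
raw = (
    spark.read.json("/Volumes/itsm/landing/servicenow_incidents/")
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("itsm.bronze.incidents")

# Silver: enforce types, deduplicate on the natural key, and apply a
# basic data quality rule (non-null sys_id) before publishing.
silver = (
    spark.table("itsm.bronze.incidents")
    .filter(F.col("sys_id").isNotNull())            # validation rule
    .withColumn("opened_at", F.to_timestamp("opened_at"))
    .dropDuplicates(["sys_id"])                     # idempotent reload
)
silver.write.format("delta").mode("overwrite").saveAsTable("itsm.silver.incidents")
```

In practice a production pipeline would add schema enforcement, incremental loads, and monitoring hooks; this only illustrates the medallion flow the responsibilities refer to.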
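For the Gold/Platinum layer and Power BI consumption mentioned under Required Skills, a curated SLA summary table might be sketched as follows. It builds on the hypothetical silver table above, and `made_sla` is assumed to be a boolean SLA flag; none of these names come from the posting itself.

```python
# Gold: aggregate curated incidents into a monthly SLA summary that a
# BI tool such as Power BI could consume directly from the Lakehouse.
gold = spark.sql("""
    SELECT
        date_trunc('MONTH', opened_at)            AS report_month,
        priority,
        COUNT(*)                                  AS incident_count,
        AVG(CASE WHEN made_sla THEN 1 ELSE 0 END) AS sla_attainment
    FROM itsm.silver.incidents
    GROUP BY date_trunc('MONTH', opened_at), priority
""")
gold.write.format("delta").mode("overwrite").saveAsTable("itsm.gold.sla_monthly")
```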
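The Unity Catalog access-control responsibility could, under the same assumptions, look like a few grants issued from a notebook. The catalog, schema, and group names are again hypothetical.

```python
# Grant a hypothetical analyst group read-only access to the curated
# Gold layer; USE privileges make the catalog and schema discoverable,
# and SELECT at schema level is inherited by its tables.
spark.sql("GRANT USE CATALOG ON CATALOG itsm TO `analytics_readers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA itsm.gold TO `analytics_readers`")
spark.sql("GRANT SELECT ON SCHEMA itsm.gold TO `analytics_readers`")
```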