

Integrated Resources, Inc. (IRI)
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Houston, TX, offering a 7-month contract at a competitive pay rate. Key skills include Databricks administration, data governance, and strong Python/PySpark expertise. Experience in large-scale enterprise data platforms is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
760
-
🗓️ - Date
April 10, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Observability #DevOps #Automated Testing #Airflow #Azure #Compliance #Data Ingestion #Spark (Apache Spark) #GCP (Google Cloud Platform) #Data Engineering #Data Modeling #Data Quality #Security #GitHub #PySpark #Monitoring #Data Governance #Data Access #Databricks #AI (Artificial Intelligence) #Documentation #ML (Machine Learning) #Azure DevOps #Delta Lake #SQL (Structured Query Language) #AWS (Amazon Web Services) #Python
Role description
Job Title: Platform Data Engineer (W2 only)
Job Location: Houston, TX – 77079 (Hybrid – 2 to 3 days)
Job Duration: 7-month contract (with high possibility of extension)
Job Description:
About the Role
• We are seeking a Platform Engineer with deep expertise in Databricks administration, data governance, and platform‐level engineering standards. This role enables multiple analytics and AI teams to build safely, efficiently, and consistently on a shared Databricks platform by enforcing data quality, ingestion standards, security policies, and cost governance.
• You will be the technical owner of platform guardrails, operational stability, access patterns, and cost controls—ensuring the platform scales reliably across business teams.
Key Responsibilities
Platform Administration & Governance
• Administer Databricks workspaces, clusters, jobs, Unity Catalog, compute policies, environment configuration, and platform guardrails.
• Implement and maintain RBAC and ABAC access controls for secure, compliant data access.
• Define and enforce data ingestion standards, naming conventions, schema rules, Delta Lake design patterns, and data quality expectations.
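To illustrate the RBAC side of these responsibilities, here is a minimal sketch of generating group-based Unity Catalog GRANT statements. The table and group names are hypothetical, and a real platform would execute these via `spark.sql` or a governance pipeline rather than printing them.

```python
# Sketch: generating Unity Catalog GRANT statements for group-based (RBAC)
# access. Table and group names below are hypothetical examples.

def build_grants(table: str, grants: dict[str, list[str]]) -> list[str]:
    """Return one GRANT statement per (group, privileges) pair.

    `table` is a fully qualified Unity Catalog name (catalog.schema.table);
    `grants` maps a workspace group to the privileges it should receive.
    """
    statements = []
    for group, privileges in grants.items():
        privs = ", ".join(sorted(privileges))
        statements.append(f"GRANT {privs} ON TABLE {table} TO `{group}`")
    return statements

# Example: analysts get read access, engineers get read/write.
stmts = build_grants(
    "main.sales.orders",
    {"analysts": ["SELECT"], "engineers": ["SELECT", "MODIFY"]},
)
for s in stmts:
    print(s)
```

Centralizing grant generation like this keeps access patterns auditable and consistent across teams, rather than scattering ad-hoc GRANTs per workspace.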
Data Quality & Ingestion Standards
• Set platform‐wide standards for ingestion pipelines, Delta architecture, lineage, versioning, and validation.
• Review and approve onboarded pipelines for compliance with platform requirements.
• Partner with data engineering teams to uplift patterns and enforce consistency.
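A pipeline-review gate of the kind described above might include automated naming checks. The sketch below assumes a hypothetical convention (snake_case names ending in a medallion-layer suffix); the actual standards would be whatever the platform defines.

```python
# Sketch: naming-convention check for onboarded tables, under an assumed
# convention of snake_case ending in a medallion layer (bronze/silver/gold).
import re

LAYER_SUFFIXES = ("bronze", "silver", "gold")  # assumed layer names
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)+$")

def table_name_ok(name: str) -> bool:
    """True if the name is snake_case and ends with a known layer suffix."""
    return (
        bool(NAME_PATTERN.match(name))
        and name.rsplit("_", 1)[-1] in LAYER_SUFFIXES
    )

print(table_name_ok("sales_orders_silver"))  # prints True
print(table_name_ok("SalesOrders"))          # prints False (wrong case, no layer)
```

Running checks like this in CI for every onboarded pipeline is one way to "enforce consistency" without manual review of every table name.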
Security, Compliance & Access Controls
• Manage workspace and catalog permissions, row/column‐level policies, attribute‐based filtering, and workspace isolation.
• Collaborate with security teams to maintain compliance and enforce global data protection standards.
Cost Management & Monitoring
• Implement cost thresholds, alerts, compute policies, and usage dashboards to prevent overspend.
• Monitor job and cluster costs, detect anomalies, and recommend optimization actions.
• Provide visibility into SKU‐level spend and workspace cost patterns.
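The anomaly-detection bullet above can be sketched as a trailing-window check: flag any day whose spend exceeds a multiple of the recent baseline. The spend figures, window size, and threshold factor here are illustrative, not real billing data.

```python
# Sketch: flag days whose cluster spend exceeds a multiple of the trailing
# mean. Window and factor are assumed tuning parameters, not platform defaults.
from statistics import mean

def flag_anomalies(daily_spend: list[float], window: int = 7,
                   factor: float = 2.0) -> list[int]:
    """Return indices of days whose spend > factor x trailing-window mean."""
    anomalies = []
    for i in range(window, len(daily_spend)):
        baseline = mean(daily_spend[i - window:i])
        if daily_spend[i] > factor * baseline:
            anomalies.append(i)
    return anomalies

spend = [100, 105, 98, 102, 110, 95, 100, 310, 99]  # day 7 spikes to 310
print(flag_anomalies(spend))  # → [7]
```

In practice the input would come from billing/usage system tables, and flagged days would feed the alerting and dashboards described above.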
Operational Stability & Observability
• Ensure platform reliability through automated testing, CI/CD templates, and code governance.
• Build dashboards to track code compliance, data access, pipeline health, schema drift, and cost thresholds.
• Resolve platform incidents and prevent recurrence by strengthening guardrails and configurations.
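The schema-drift tracking mentioned above reduces to comparing a registered (expected) schema against what is currently observed on a table. A minimal sketch, with hypothetical column names and types:

```python
# Sketch: schema-drift report comparing an expected schema (as registered)
# against the observed schema. Columns and types are hypothetical.

def schema_drift(expected: dict[str, str],
                 observed: dict[str, str]) -> dict[str, list[str]]:
    """Report columns that were added, removed, or changed type."""
    return {
        "added": sorted(set(observed) - set(expected)),
        "removed": sorted(set(expected) - set(observed)),
        "retyped": sorted(
            c for c in expected.keys() & observed.keys()
            if expected[c] != observed[c]
        ),
    }

expected = {"order_id": "bigint", "amount": "decimal(18,2)", "ts": "timestamp"}
observed = {"order_id": "bigint", "amount": "double", "region": "string"}
print(schema_drift(expected, observed))
```

A dashboard job could run this per table and surface any non-empty result as a drift incident, feeding the guardrail-strengthening loop described above.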
Enablement & Best Practices
• Define “handrails” for building on the platform: ingestion, Delta conventions, CI/CD, observability, and AI/ML patterns.
• Coach data/analytics teams on compliant onboarding and optimal platform usage.
• Maintain internal documentation, patterns, code templates, and guidance.
Required Skills & Experience
• 5+ years in data engineering or platform engineering, with at least 2–3 years in Databricks administration.
• Expert knowledge of Unity Catalog, cluster policies, Delta Lake, Spark, workspace configuration, and jobs.
• Strong grounding in data governance, data modeling, ingestion frameworks, schema enforcement, versioning, and lineage.
• Proven experience implementing RBAC and ABAC in Databricks or similar platforms.
• Experience with cost optimization, monitoring, billing logs, and compute governance.
• Strong Python/PySpark and SQL skills; familiarity with DLT, Airflow, or Databricks Workflows.
• Strong communication skills with ability to set standards and influence teams diplomatically.
Preferred Qualifications
• Experience in large-scale enterprise data platforms (Azure/AWS/GCP).
• Familiarity with Trading & Supply or other high‐stakes analytical environments.
• Experience creating dashboards for governance, cost, compliance, and pipeline health.
• Experience with CI/CD, GitHub Actions, Azure DevOps, or similar tools.
Success Indicators
• Teams consistently follow ingestion and data standards.
• Platform costs stabilized and predictable.
• Strong adoption of platform guardrails, templates, and operational dashboards.
• Reduced incidents related to access, cost, or ingestion quality.






