

New York Technology Partners
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a day rate of $800 USD. It is remote, with onsite presence required in Houston, TX, twice a month. Key skills include Databricks, SQL, and data governance experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
April 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #SQL (Structured Query Language) #Data Enrichment #Kafka (Apache Kafka) #BI (Business Intelligence) #Data Engineering #dbt (data build tool) #Cloud #Microsoft Power BI #Databricks #Data Governance #Data Management #Data Quality #Automation #Metadata #Scala #AWS (Amazon Web Services) #Data Pipeline
Role description
We are seeking a Senior Data Engineer to support the advancement of data governance and data management capabilities within a modern Databricks Lakehouse environment. This is a hands-on engineering role focused on building, automating, and operationalizing scalable data solutions using code and AI-driven approaches.
This role is ideal for someone who enjoys turning frameworks into real, production-ready systems that improve data quality, trust, and accessibility across the organization.
Remote, but requires someone willing to come onsite two times per month (Houston, TX).
Key Responsibilities
• Implement data governance and data management solutions within Databricks, including Unity Catalog, access controls, metadata, and data quality frameworks
• Translate governance frameworks into scalable, production-ready engineering solutions
• Build and optimize advanced data pipelines using Databricks, dbt, and Kafka
• Develop reusable templates, automation, and standardized patterns to enable broader adoption across teams
• Design and implement AI/GenAI-enabled solutions (e.g., metadata enrichment, automated data quality checks, intelligent workflows)
• Monitor and report on data quality and governance health using SQL and reporting tools such as Power BI
• Collaborate with engineering, architecture, and business stakeholders to drive adoption and maturity of data practices
• Troubleshoot, optimize, and performance-tune data pipelines and streaming systems
Required Qualifications
• Strong experience as a Data Engineer or Software Engineer in a production environment
• Hands-on experience with Databricks, including Unity Catalog
• Experience building and optimizing data pipelines using tools such as dbt and Kafka
• Strong SQL skills and experience working with large-scale data systems
• Experience with data governance concepts, including metadata management, access control, and data quality frameworks
• Experience with performance tuning in Databricks and/or Kafka environments
• Familiarity with AWS or similar cloud platforms
• Strong communication skills with the ability to work directly with stakeholders and drive initiatives independently
