Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Houston, TX, on a contract basis. Requires 12+ years of experience, proficiency in SQL, Python, and big data technologies, and expertise in Databricks Unity Catalog and data modeling.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 25, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Security #Data Science #BigQuery #Data Engineering #AWS (Amazon Web Services) #Data Governance #Data Pipeline #Databricks #Data Warehouse #DevOps #Computer Science #Data Modeling #ETL (Extract, Transform, Load) #Azure #Hadoop #Programming #SQL (Structured Query Language) #Data Lake #Scala #Snowflake #Big Data #Kafka (Apache Kafka) #Data Quality #Spark (Apache Spark) #Luigi #Cloud #Datasets #Java #Airflow #Version Control #GCP (Google Cloud Platform) #GIT #Python #Redshift
Role description
• Job Title: Senior Data Engineer
• Location: Houston, TX (On-site)
• Employment Type: Contract
• Experience: 12+ years

Job Summary

We are looking for an experienced Senior Data Engineer with 12+ years of experience to design, develop, and optimize large-scale data pipelines and solutions. The ideal candidate will have extensive experience building robust data systems, expertise in Databricks Unity Catalog and data modeling, and a track record of mentoring junior engineers and collaborating with cross-functional teams to enable data-driven decision-making.

Key Responsibilities

• Design, build, and maintain scalable, efficient, and secure data pipelines and ETL processes.
• Architect, implement, and optimize data lake and data warehouse solutions.
• Implement and manage data governance, security, and access control using Databricks Unity Catalog.
• Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets and reporting solutions.
• Ensure data quality, reliability, and security across systems.
• Evaluate and implement new tools, technologies, and best practices for data engineering.
• Troubleshoot and resolve issues related to data workflows, performance, and availability.
• Mentor and guide junior data engineers, promoting a culture of excellence and continuous improvement.
• Document data flows, system architectures, and operational procedures.

Required Qualifications

• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 12+ years of experience in data engineering or related roles.
• Strong proficiency in SQL and programming languages such as Python, Java, or Scala.
• Hands-on experience with big data technologies (e.g., Hadoop, Spark, Hive, Kafka).
• Expertise with cloud platforms (AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake, Databricks).
• Proven experience with Databricks Unity Catalog for data governance, cataloging, and security.
• Strong expertise in data modeling, including conceptual, logical, and physical modeling.
• Strong understanding of ETL/ELT frameworks and data warehousing concepts.
• Knowledge of workflow orchestration tools (e.g., Airflow, Luigi).
• Familiarity with DevOps practices, CI/CD pipelines, and version control (Git).
• Excellent problem-solving, analytical, and communication skills.