Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer in Houston, TX; the contract length and pay rate are both listed as unknown. It requires 6-8 years of Data Engineering experience, including 2-3 years in Snowflake, strong SQL and Python skills, and expertise in data pipelines and data management.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Houston, TX
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Pipeline #Scripting #Data Warehouse #GitHub #Storage #SQL (Structured Query Language) #Data Lake #Automation #Observability #Python #Jenkins #Scala #Data Storage #Data Management #Metadata #Data Science #Snowflake #Data Engineering #Data Quality #AI (Artificial Intelligence) #ML (Machine Learning)
Role description
Designation – Lead Data Engineer / Sr. Data Engineer
Work arrangement: Hybrid, 3 days per week onsite (Monday through Thursday); the first week of each month is 4 or 5 days onsite
Work Location: Houston, TX (Houston locals only)
Experience: 6-8+ years in Data Engineering, including 2-3 years in Snowflake
Core Skills: Strong proficiency in Snowflake, SQL, and Python scripting
Key Responsibilities:
• Design and implement scalable data pipelines
• Build and manage data warehouses and data lakes
• Ensure data quality and implement data management best practices
• Optimize data storage and retrieval processes
• Collaborate closely with data scientists, analysts, and product teams to support analytics and machine learning initiatives
• Use CI/CD orchestration and automation tools such as Jenkins and GitHub
• Monitor and tune Snowflake query performance, warehouse usage, and credit consumption (see the first sketch after this list)
• Design and enforce row-level access policies and dynamic masking in Snowflake for sensitive data fields such as PII and financials (second sketch)
• Enable data sharing with external teams using secure shares and reader accounts while maintaining strict RBAC controls (third sketch)
• Work with ETL/scheduler tools
• Communicate effectively, in writing and verbally, across teams and stakeholders
• Design semantic layers, aggregate tables, and data models (star/snowflake schemas) to support a scalable, governed, business-friendly analytics architecture (fourth sketch)
Good to have:
• Machine learning and AI/LLM model training and implementation
• Background in data observability, lineage tracking, or metadata management tools
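For context on the monitoring responsibility, work like this typically starts from Snowflake's built-in SNOWFLAKE.ACCOUNT_USAGE views. A minimal sketch of two common checks; the 7-day and 24-hour windows and the LIMIT are arbitrary choices, and these views lag real time by up to a few hours:

```sql
-- Credit consumption by warehouse over the last 7 days.
SELECT warehouse_name,
       SUM(credits_used) AS credits_7d
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_7d DESC;

-- Slowest queries of the last 24 hours, as tuning candidates.
-- total_elapsed_time is reported in milliseconds.
SELECT query_id,
       user_name,
       warehouse_name,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20;
```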
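Row-level security and dynamic masking map onto Snowflake's native ROW ACCESS POLICY and MASKING POLICY objects. A minimal sketch, assuming a hypothetical entitlements mapping table and placeholder table, column, and role names (transactions, customers.ssn, FINANCE_ADMIN, PII_READER):

```sql
-- Row access policy: FINANCE_ADMIN sees every region; other roles
-- see only rows for regions they are entitled to.
CREATE OR REPLACE ROW ACCESS POLICY region_policy
  AS (region STRING) RETURNS BOOLEAN ->
    CURRENT_ROLE() = 'FINANCE_ADMIN'
    OR EXISTS (
      SELECT 1
      FROM entitlements e            -- hypothetical role-to-region mapping table
      WHERE e.role_name = CURRENT_ROLE()
        AND e.region = region
    );

ALTER TABLE transactions
  ADD ROW ACCESS POLICY region_policy ON (region);

-- Dynamic masking: PII_READER sees the raw SSN; all other roles
-- see a masked value showing only the last four digits.
CREATE OR REPLACE MASKING POLICY mask_ssn
  AS (val STRING) RETURNS STRING ->
    CASE WHEN CURRENT_ROLE() = 'PII_READER' THEN val
         ELSE '***-**-' || RIGHT(val, 4)
    END;

ALTER TABLE customers
  MODIFY COLUMN ssn SET MASKING POLICY mask_ssn;
```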
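For the secure-sharing responsibility, a provider-side sketch: a secure view exposed through a share, plus a reader account for consumers without their own Snowflake account. All object names and the consumer account identifier are placeholders:

```sql
-- Only secure views (not plain views) can be exposed through a share.
CREATE OR REPLACE SECURE VIEW analytics.published.daily_kpis AS
  SELECT kpi_date, region, revenue
  FROM analytics.core.fact_revenue;   -- hypothetical source table

CREATE SHARE partner_share;
GRANT USAGE ON DATABASE analytics TO SHARE partner_share;
GRANT USAGE ON SCHEMA analytics.published TO SHARE partner_share;
GRANT SELECT ON VIEW analytics.published.daily_kpis TO SHARE partner_share;

-- Consumer with its own Snowflake account (org_name.account_name):
ALTER SHARE partner_share ADD ACCOUNTS = partner_org.partner_account;

-- Consumer without one: provision a reader account instead.
CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = partner_admin,
  ADMIN_PASSWORD = 'ChangeMe-123!',   -- placeholder; rotate immediately
  TYPE = READER;
```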
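And for the dimensional-modeling bullet, a minimal star-schema fragment with the kind of aggregate table a governed semantic layer would sit on top of; all table and column names are illustrative:

```sql
-- Dimension: one row per customer, with a surrogate key.
CREATE TABLE dim_customer (
  customer_key  NUMBER AUTOINCREMENT PRIMARY KEY,
  customer_id   STRING NOT NULL,     -- natural key from the source system
  customer_name STRING,
  segment       STRING
);

-- Fact: one row per sale, joined to dimensions via surrogate keys.
CREATE TABLE fact_sales (
  sale_date    DATE NOT NULL,
  customer_key NUMBER NOT NULL REFERENCES dim_customer (customer_key),
  quantity     NUMBER,
  net_amount   NUMBER(18, 2)
);

-- Aggregate table for business-friendly, pre-summarized reporting.
CREATE TABLE agg_sales_by_segment AS
SELECT d.segment,
       f.sale_date,
       SUM(f.net_amount) AS net_amount
FROM fact_sales f
JOIN dim_customer d USING (customer_key)
GROUP BY d.segment, f.sale_date;
```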