Digitive

Data Engineering Tech Lead (Contract-to-Hire)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineering Tech Lead (Contract-to-Hire) in a hybrid location (Houston), offering a competitive pay rate. Candidates must have 8+ years of data engineering experience, strong AWS skills, and a background in supply chain data.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
πŸ“ - Location detailed
Houston, TX
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Observability #Python #AWS (Amazon Web Services) #"ETL (Extract, Transform, Load)" #Data Architecture #Monitoring #Batch #Data Quality #Lambda (AWS Lambda) #Data Access #BigQuery #dbt (data build tool) #Deployment #Database Performance #Scala #Aurora #Data Modeling #Data Engineering #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Redshift #Kafka (Apache Kafka) #Data Pipeline
Role description
Role: Data Engineering Tech Lead (Contract-to-Hire)
Location: Hybrid (Houston)

Role Summary
We are seeking a hands-on Data Engineering Tech Lead to strengthen and modernize our Data Platform supporting outbound supply chain analytics and real-time data flows. This role will lead technical design, elevate engineering standards, and improve platform reliability, performance, and scalability across batch and near real-time workloads. The ideal candidate combines strong architectural thinking with deep hands-on engineering experience and a passion for building stable, high-quality data systems.

Key Responsibilities
· Lead design and development of scalable data pipelines (batch and near real-time) using AWS technologies (see the first sketch after this listing)
· Establish and enforce data architecture standards, modeling best practices, and engineering guardrails
· Drive improvements in platform stability, monitoring, observability, and alerting
· Improve CI/CD practices and automate deployment processes for data workloads
· Optimize Redshift and database performance, including cost and workload management (see the second sketch after this listing)
· Strengthen data quality controls, validation logic, and governance practices
· Review code, mentor engineers, and raise overall technical standards across the team
· Support modernization efforts, including refactoring legacy ETL workflows
· Collaborate with cross-functional teams to ensure reliable and secure data access

Required Skills
· 8+ years in Data Engineering
· Strong hands-on experience with:
  o AWS (Redshift, Aurora Postgres, S3, Lambda)
  o SQL performance tuning
  o Python-based ETL
  o Data modeling (operational + analytical)
· Experience implementing CI/CD for data platforms
· Strong understanding of observability and monitoring frameworks
· Experience working with cross-functional architecture teams

Preferred
· Kafka / streaming exposure
· dbt experience
· GCP BigQuery/Streams
· Experience modernizing legacy ETL systems
· Exposure to supply chain/transportation data
· Experience leading offshore teams
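
To give a flavor of the pipeline work in the first responsibility, here is a minimal sketch of the common S3-to-Redshift batch pattern: a Lambda handler validates an incoming CSV, then delegates the load to a Redshift COPY. This is an illustration, not this team's actual code; the table name, environment variables, and event shape are hypothetical, and psycopg2 is assumed to be packaged with the function.

```python
import csv
import io
import os

import boto3
import psycopg2  # assumed to be bundled with the Lambda deployment package

s3 = boto3.client("s3")

REDSHIFT_DSN = os.environ["REDSHIFT_DSN"]          # hypothetical connection string
COPY_ROLE_ARN = os.environ["REDSHIFT_COPY_ROLE"]   # IAM role Redshift assumes for COPY
TARGET_TABLE = "analytics.outbound_shipments"      # hypothetical target table


def count_invalid_rows(body: bytes) -> int:
    """Cheap structural gate before loading: count rows missing a shipment_id."""
    reader = csv.DictReader(io.StringIO(body.decode("utf-8")))
    return sum(1 for row in reader if not row.get("shipment_id"))


def handler(event, context):
    """Entry point for an S3 put event on the landing bucket."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Read the file once for validation; the COPY below streams it
    # into Redshift directly from S3.
    obj = s3.get_object(Bucket=bucket, Key=key)
    invalid = count_invalid_rows(obj["Body"].read())
    if invalid:
        raise ValueError(f"{invalid} invalid rows in s3://{bucket}/{key}")

    # Redshift does the heavy lifting; the function only orchestrates.
    copy_sql = f"""
        COPY {TARGET_TABLE}
        FROM 's3://{bucket}/{key}'
        IAM_ROLE '{COPY_ROLE_ARN}'
        FORMAT AS CSV IGNOREHEADER 1;
    """
    with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
        cur.execute(copy_sql)
    return {"loaded": key, "invalid_rows": invalid}
```

Delegating the load to COPY rather than issuing row-by-row inserts keeps the function well within Lambda's timeout and is the standard pattern for S3-to-Redshift batch ingestion.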
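
For the Redshift tuning responsibility, one lightweight tactic is to run EXPLAIN on a candidate query and flag join steps Redshift marks as broadcasting or redistributing data (DS_BCAST_INNER, DS_DIST_BOTH), a frequent cause of slow joins. The sketch below assumes only psycopg2 and a Redshift connection; the DSN and query are placeholders.

```python
import psycopg2

# Redistribution labels Redshift attaches to join steps in EXPLAIN output;
# broadcasting the inner table or redistributing both sides usually points
# at missing or mismatched distribution keys.
SUSPECT_STEPS = ("DS_BCAST_INNER", "DS_DIST_BOTH")


def explain_flags(dsn: str, query: str) -> list[str]:
    """Return the plan lines of `query` that contain suspect join steps."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("EXPLAIN " + query)
        plan_lines = [row[0] for row in cur.fetchall()]
    return [line for line in plan_lines if any(s in line for s in SUSPECT_STEPS)]


if __name__ == "__main__":
    # Hypothetical DSN and query, for illustration only.
    for line in explain_flags(
        "host=example-cluster dbname=analytics user=etl",
        "SELECT * FROM shipments s JOIN carriers c ON s.carrier_id = c.carrier_id",
    ):
        print("suspect join step:", line)
```

A check like this can run in CI against the most expensive scheduled queries, which ties directly into the CI/CD and cost-management responsibilities above.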