TEK Staffing Solutions Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position in Westlake, TX, offered as a W2 contract. It requires 4–10+ years of experience and expertise in Apache Iceberg, AWS, Kafka, and Python, with strong skills in ETL pipelines and data modeling. Local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
February 25, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas-Fort Worth Metroplex
-
🧠 - Skills detailed
#Metadata #Data Processing #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Cloud #Data Lake #Data Pipeline #Scala #Lambda (AWS Lambda) #PySpark #Data Modeling #ETL (Extract, Transform, Load) #Automation #Spark (Apache Spark) #Python #Data Engineering #IAM (Identity and Access Management) #Apache Iceberg
Role description
Title: Data Engineer
Location: Westlake, TX (Local candidates only)
Type: W2 contract, No C2C
Visa: Must be a US Citizen or a Green Card Holder only

Role Summary
We are seeking a highly skilled Data Engineer to design, build, and optimize our modern data platform, leveraging Apache Iceberg on AWS with strong expertise in Spark, Kafka, and Python. The ideal candidate has deep experience building scalable, high-quality data pipelines, distributed data processing systems, and table-format-based lakehouse architectures. This role is ideal for engineers who love building robust data foundations, enabling fast and reliable analytics, and working with cutting-edge open data lake technologies.

Required Qualifications
• 4–10+ years of experience in Data Engineering or similar roles.
• Strong hands-on experience with Apache Iceberg (table design, evolution, metadata, partitioning).
• Deep experience with the AWS data stack: S3, EMR, Lambda, Glue, IAM, Step Functions, CloudWatch.
• Strong proficiency in Kafka (producers/consumers, schema registry, partitioning strategies).
• Fluency in Python for data pipelines, automation, and APIs.
• Experience with distributed engines such as Spark, Flink, or PySpark.
• Expertise in scalable ETL/ELT pipelines and real-time streaming architectures.
• Strong SQL and data modeling expertise.

If this opportunity is a fit for your skills and experience, apply today or email resumes to dkumar@tekstaffingsolutions.com.