

W2 Only :: 15+ Years Senior Data Lake Engineer :: TX
Featured Role | Apply directly with Data Freelance Hub
This role is for a W2 Senior Data Lake Engineer with 15+ years of experience, focused on AWS Data Lake technologies. The contract length is unspecified and the pay rate is listed as "TBD." Candidates must have strong Python skills and must have lived and worked in the U.S. for at least 7 years.
Country: United States
Currency: $ USD
Day rate: TBD
Date discovered: August 1, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Texas, United States
Skills detailed: #Infrastructure as Code (IaC) #GitLab #Data Lake #Alteryx #Data Pipeline #DynamoDB #DevOps #Aurora #Scripting #AWS Glue #XML (eXtensible Markup Language) #Data Modeling #Terraform #JSON (JavaScript Object Notation) #Spark (Apache Spark) #Kafka (Apache Kafka) #PySpark #AWS (Amazon Web Services) #Python #Docker #Security #Data Engineering #Big Data #Database Design #IAM (Identity and Access Management) #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Kubernetes #Programming #Redshift #Lambda (AWS Lambda) #Cloud
Role description
• Make sure the LinkedIn profile is at least 2 years old.
• Strong verbal and written communication skills are non-negotiable.
• Must have lived and worked in the U.S. for at least 7 years.
• No West Coast candidates will be considered.
1. Core Technical Skills (Must-Have)
• AWS Data Lake Expertise:
• Lake Formation, S3, Glue (Crawler, Catalog, Glue Jobs), Step Functions, DynamoDB, IAM, Lambda.
• Programming & Data Engineering:
• Strong Python (AWS SDK, Lambda development, scripting).
• PySpark for big data ETL pipelines (see the sketch after this list).
• DevOps & Infrastructure as Code:
• GitLab pipelines, Terraform, CloudFormation, or Serverless frameworks.
• ETL & Data Pipelines:
• Proven experience building, optimizing, and automating ETL flows in cloud environments.
• Data Formats & Processing:
• Proficiency with JSON, Parquet, XML, and other structured/semi-structured data formats.
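For a sense of what these must-haves look like in practice, here is a minimal PySpark sketch of the ETL pattern the list describes: reading semi-structured JSON from S3, applying a light transform, and writing partitioned Parquet back to the lake. The bucket paths and column names (orders, order_id, order_ts) are illustrative placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative S3 paths only -- not taken from the posting.
SOURCE = "s3://example-raw-zone/orders/"
TARGET = "s3://example-curated-zone/orders/"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read semi-structured JSON from the raw zone.
raw = spark.read.json(SOURCE)

# Light transform: parse the timestamp, derive a partition column,
# and deduplicate on the business key.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Write partitioned Parquet back to the curated zone.
curated.write.mode("overwrite").partitionBy("order_date").parquet(TARGET)
```

The same read-transform-write shape carries over to Glue Jobs, where the Glue runtime supplies the Spark session.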
2. Nice-to-Haves (Highly Desired)
• Aurora, Redshift, AWS Glue Schema Registry, Kafka, Alteryx.
• Strong understanding of Data Modeling, Database Design, and CI/CD Pipelines.
• Experience with containerization (Docker, Kubernetes) and security practices.
3. Experience Level
• 15+ years of total IT experience (the team felt 10 years was "too mid-level").
• Deep background in Data Engineering/Development (not just analysis).
• Ability to write Lambda functions and automate workflows in Python (a minimal sketch follows below).
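To make the last bullet concrete, below is a minimal sketch of the kind of Python Lambda function the role implies: a handler that reacts to an S3 ObjectCreated event notification and starts a Glue job via boto3. The Glue job name and argument names are hypothetical, chosen only for illustration.

```python
import json

import boto3

glue = boto3.client("glue")

# Hypothetical Glue job name -- not from the posting.
GLUE_JOB_NAME = "curate-orders"


def handler(event, context):
    """Start a Glue job run for each object in an S3 event notification."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Hand the new object's location to the Glue job as job arguments.
        response = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
        print(json.dumps({"job_run_id": response["JobRunId"], "key": key}))

    return {"statusCode": 200}
```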