

Senior Data Lake Engineer - W2
Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Lake Engineer for a long-term contract in Dallas, TX, offering competitive pay. Requires 15+ years of IT experience, including 7+ in Data Engineering, with expertise in AWS, Python, and ETL solutions.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 2, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed:
#Infrastructure as Code (IaC) #Agile #Data Modeling #PySpark #S3 (Amazon Simple Storage Service) #Database Design #Data Lake #AI (Artificial Intelligence) #XML (eXtensible Markup Language) #Lambda (AWS Lambda) #Data Architecture #Alteryx #Data Pipeline #Terraform #Automation #Kafka (Apache Kafka) #Python #Scala #ETL (Extract, Transform, Load) #Cloud #Data Engineering #Security #AWS (Amazon Web Services) #Aurora #ML (Machine Learning) #Spark (Apache Spark) #DynamoDB #JSON (JavaScript Object Notation) #AWS Glue #Compliance #GitLab #Redshift
Role description
Title: Senior Data Lake Engineer
Location: Dallas, TX (Hybrid Onboarding)
Type: Long-term Contract
About the Role
Southwest Airlines is seeking a Senior Data Lake Engineer to join our AI Data Delivery Team. This role focuses on architecting and building cloud-native data engineering solutions, with a strong emphasis on AWS, Python, and serverless technologies. The ideal candidate is a seasoned data engineer with 15+ years of experience designing and implementing large-scale data pipelines and ETL solutions.
Responsibilities
• Architect, build, and optimize AWS-based data lake solutions (Lake Formation, S3, Glue).
• Develop and maintain scalable ETL pipelines using Python and PySpark (a minimal sketch follows this list).
• Implement serverless workflows with Lambda, Step Functions, and DynamoDB.
• Automate CI/CD pipelines using GitLab and Infrastructure as Code (Terraform, CloudFormation).
• Collaborate with cross-functional teams (AI/ML, Data, and Product) to deliver real-time analytics solutions.
• Work with stakeholders to troubleshoot data-related issues and improve data reliability and quality.
• Ensure security, compliance, and performance tuning of data pipelines.
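To give candidates a concrete sense of the PySpark ETL work named above, here is a minimal, hypothetical sketch of a raw-to-curated pipeline step. The bucket names, prefixes, and columns are placeholders for illustration, not details from this posting:

```python
# Hypothetical sketch only: bucket names, prefixes, and columns are
# placeholders, not details taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw JSON records landed in the lake's raw zone.
raw = spark.read.json("s3://example-raw-zone/orders/")

# Normalize and deduplicate before publishing to the curated zone.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write query-friendly Parquet, partitioned to support partition pruning.
(
    cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/")
)
```

On AWS Glue, logic like this would typically run inside a Glue job; the plain SparkSession form is used here only to keep the sketch self-contained.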
Required Qualifications
• 15+ years of IT experience, with 7+ years in Data Engineering.
• Expert-level AWS Data Lake skills (Lake Formation, S3, Glue).
• Advanced Python (Lambda functions, automation) and PySpark expertise (see the Lambda sketch after this list).
• Strong experience with data formats (Parquet, JSON, XML, etc.) and ETL design.
• Hands-on experience with CI/CD tools (GitLab) and Infrastructure as Code (Terraform).
• Strong knowledge of data modeling and database design principles.
• Familiarity with Agile methodologies and cloud security best practices.
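As a small illustration of the Lambda-plus-DynamoDB pattern the role calls for, the sketch below shows a handler that validates an incoming record and persists it. The table name and event shape are assumptions made for the example, not requirements from the posting:

```python
# Hypothetical sketch: an AWS Lambda handler that validates a record
# and persists it to DynamoDB. Table name and event shape are assumed.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders-events")  # hypothetical table name

def handler(event, context):
    # Assumes an API Gateway proxy-style event with a JSON body.
    record = json.loads(event["body"])
    if "order_id" not in record:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    table.put_item(Item=record)
    return {"statusCode": 200,
            "body": json.dumps({"stored": record["order_id"]})}
```

In practice the handler, its IAM role, and the table would be declared with Terraform or CloudFormation rather than created by hand, in line with the IaC responsibilities above.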
Preferred Qualifications
• Experience with Amazon Aurora, Redshift, Kafka, AWS Glue Schema Registry, and Alteryx.
• Background in AI/ML pipelines or streaming data architecture.
• Experience with serverless frameworks (Serverless Framework, Stacker).