AWS Data Lead

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Lead based in Fort Mill, SC (Hybrid); the contract length and pay rate are unspecified. Key skills include AWS services, ETL/ELT processes, and leadership experience in data engineering.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Columbia, SC
🧠 - Skills detailed
#Kubernetes #Data Lake #AWS (Amazon Web Services) #Data Modeling #Spark (Apache Spark) #Docker #Data Processing #"ETL (Extract, Transform, Load)" #Agile #Leadership #PySpark #Data Science #Scala #Athena #S3 (Amazon Simple Storage Service) #Deployment #Business Analysis #Lambda (AWS Lambda) #Data Framework #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Big Data #Security #Redshift #Data Warehouse #AWS Glue #Snowflake #Cloud #Data Governance #Python
Role description
Job Title: AWS Tech Lead
Location: Fort Mill, SC (Hybrid/Onsite as required)

Job Summary
We are looking for a highly experienced Data Engineer – AWS Tech Lead to design and lead scalable cloud-based data solutions. This role combines hands-on engineering with leadership responsibilities: guiding a team of data engineers while collaborating with architects, product owners, and business stakeholders. The ideal candidate will have deep expertise in AWS data services, ETL/ELT pipelines, and big data frameworks, with a proven record of leading technical delivery.

Key Responsibilities
• Lead the design, development, and deployment of large-scale data pipelines and solutions on AWS.
• Provide technical leadership and mentorship to a team of data engineers, ensuring best practices in coding, architecture, and performance optimization.
• Architect data lake and data warehouse solutions leveraging AWS Glue, Redshift, S3, Athena, EMR, and Lambda.
• Drive development of ETL/ELT processes using PySpark, Python, and SQL.
• Implement and enforce data governance, quality checks, and security standards across the platform.
• Collaborate with business analysts, data scientists, and product teams to translate business requirements into scalable solutions.
• Optimize cost and performance of AWS-based data workloads.
• Troubleshoot production issues and ensure high availability of data pipelines.
• Participate in sprint planning, backlog refinement, and Agile ceremonies as a technical lead.

Required Qualifications
• 13+ years of experience in data engineering, with at least 4 years in a technical leadership role.
• Strong expertise in AWS data services: Glue, S3, Redshift, Athena, Lambda, Step Functions, EMR.
• Proficiency in PySpark, Python, and SQL for large-scale data processing.
• Solid experience in data modeling (star, snowflake, dimensional) and data lake/warehouse architectures.
• Proven track record of leading and mentoring data engineering teams.
• Strong understanding of Agile methodologies and CI/CD practices for data pipelines.

Preferred Qualifications
• Experience in the BFSI / Wealth Management domain.
• AWS certification (Data Analytics Specialty, Solutions Architect, or Big Data).
• Familiarity with containerization (Docker, Kubernetes) and orchestration frameworks.