Tuppl

13+ Years Only :: Lead AWS Glue Data Engineer :: Fort Mill, SC or New York City, NY (Hybrid)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead AWS Glue Data Engineer in Fort Mill, SC or New York City, NY (Hybrid) with a contract length of "unknown." Candidates must have 12+ years of data engineering experience, including 5+ years with AWS services, and strong skills in PySpark, data modeling, and ETL. Pay rate is "unknown."
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 26, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
New York, NY
-
🧠 - Skills detailed
#Visualization #Business Analysis #Python #Data Warehouse #Scrum #Microsoft Power BI #AWS (Amazon Web Services) #Redshift #SQL (Structured Query Language) #Security #GitHub #Storage #Data Transformations #Datasets #Jenkins #Data Modeling #Monitoring #Data Lake #Data Engineering #Data Pipeline #BI (Business Intelligence) #Lambda (AWS Lambda) #Agile #Data Security #AWS Glue #Athena #Compliance #Computer Science #Data Quality #ETL (Extract, Transform, Load) #Data Science #Snowflake #Tableau #Spark (Apache Spark) #Scala #S3 (Amazon Simple Storage Service) #Data Storage #PySpark #Cloud
Role description
Job Title: AWS Glue Data Engineer
Location: Fort Mill, SC or New York City, NY (Hybrid)

Job Summary
We are seeking a highly skilled AWS Glue Data Engineer to design, develop, and optimize large-scale data pipelines and ETL workflows on AWS. The ideal candidate will have strong expertise in AWS cloud-native data services, data modeling, and pipeline orchestration, with hands-on experience building robust, scalable data solutions for enterprise environments.

Key Responsibilities
• Design, develop, and maintain ETL pipelines using AWS Glue, Glue Studio, and the Glue Data Catalog.
• Ingest, transform, and load large datasets from structured and unstructured sources into AWS data lakes and warehouses.
• Work with S3, Redshift, Athena, Lambda, and Step Functions for data storage, querying, and orchestration.
• Build and optimize PySpark/Scala scripts within AWS Glue for complex transformations.
• Implement data quality checks, lineage, and monitoring across pipelines.
• Collaborate with business analysts, data scientists, and product teams to deliver reliable data solutions.
• Ensure compliance with data security, governance, and regulatory requirements (BFSI experience preferred).
• Troubleshoot production issues and optimize pipeline performance.

Required Qualifications
• 12+ years of experience in data engineering, including at least 5 years with AWS cloud data services.
• Strong expertise in AWS Glue, S3, Redshift, Athena, Lambda, Step Functions, and CloudWatch.
• Proficiency in PySpark, Python, and SQL for ETL and data transformations.
• Experience in data modeling (star, snowflake, and dimensional models) and performance tuning.
• Hands-on experience with data lake/data warehouse architecture and implementation.
• Strong problem-solving skills and the ability to work in Agile/Scrum environments.

Preferred Qualifications
• Experience in the BFSI / Wealth Management domain.
• AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect certification.
• Familiarity with CI/CD pipelines for data engineering (CodePipeline, Jenkins, GitHub Actions).
• Knowledge of BI/visualization tools such as Tableau, Power BI, or QuickSight.

Education
• Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
• Master's degree preferred.