

Sr AWS Data Engineer (Spark, AWS, Glue) - W2 Requirement (Need USC, GC, H4 EAD, L2 EAD, or H1 Transfer)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr AWS Data Engineer (Spark, AWS, Glue) on a W2 contract basis in Fort Mill, SC (Hybrid), requiring 10+ years of experience. Key skills include AWS, Spark, Glue, Python, and data pipeline development.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 4, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Fort Mill, SC
Skills detailed
#AWS Glue #Python #SQL (Structured Query Language) #AWS (Amazon Web Services) #Redshift #API (Application Programming Interface) #Spark (Apache Spark) #PySpark #Data Design #Snowflake #Data Engineering #Aurora #Big Data #Cloud #ETL (Extract, Transform, Load) #Data Pipeline #Lambda (AWS Lambda) #Datasets #REST (Representational State Transfer)
Role description
Title: AWS Data Engineer (Spark, AWS, Glue)
Location: Fort Mill, SC (Hybrid)
Job Type: Contract-W2
Minimum 10 years of experience in data engineering.
Must-have skills:
- Work with development teams and other project leaders/stakeholders to provide technical solutions that enable business capabilities
- Design and develop data applications using big data technologies (AWS, Spark) to ingest, process, and analyze large disparate datasets
- Build robust data pipelines on the Cloud using AWS Glue, Aurora Postgres, EKS, Redshift, PySpark, Lambda, and Snowflake.
- Build REST-based data APIs using Python and Lambda.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS "big data" technologies.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Implement architectures to handle large-scale data and its organization
- Execute strategies that inform data design and architecture, partnering with enterprise standards
- Work across teams to deliver meaningful reference architectures that outline architecture principles and best practices for technology advancement
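As an illustration of the "REST-based data APIs using Python and Lambda" responsibility above, here is a minimal sketch of an API Gateway proxy-style Lambda handler. The in-memory `datasets` dictionary and the `dataset` path parameter are hypothetical placeholders standing in for a real backing store such as Aurora or Redshift; this is not part of the posting itself.

```python
import json


def lambda_handler(event, context):
    """Minimal AWS Lambda handler for a REST-style data API.

    `event` follows the API Gateway Lambda proxy integration shape.
    The dataset name and the lookup table are hypothetical placeholders
    standing in for a real data store (e.g., Aurora Postgres or Redshift).
    """
    # Hypothetical in-memory dataset; a real pipeline would query a database.
    datasets = {"orders": [{"id": 1, "total": 42.0}]}

    # Path parameters may be absent on some routes, so default to an empty dict.
    name = (event.get("pathParameters") or {}).get("dataset")
    if name not in datasets:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(datasets[name])}
```

In a real deployment the handler would be wired to an API Gateway route such as `GET /datasets/{dataset}`, with the proxy integration supplying `pathParameters`.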
Best Regards,
Rajesh(Ken)
Cymansys Solutions LLC
Email: ken@cymansys.com
Tel. 512.503.2202