

Adroit Innovative Solutions Inc
Data Engineer (AWS)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (AWS) with 8+ years of experience, located in California or Boston, offering $50/hr on a C2C basis. Key skills include AWS, Python, PySpark, SQL, and ETL/ELT processes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
January 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California / Boston, MA
-
🧠 - Skills detailed
#Datasets #Spark (Apache Spark) #Python #AWS S3 (Amazon Simple Storage Service) #Scala #Data Pipeline #Redshift #AWS (Amazon Web Services) #Security #PySpark #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #EC2 #SQL Queries #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Data Quality #Data Engineering #Big Data
Role description
Position: Data Engineer
Experience: 8+ Years
Location: California / Boston
Relocation: Open
Client: Infosys
Rate: $50/hr on C2C
Visa: GC / USC only
Required Skills
Strong experience in AWS (S3, EC2, Glue, Lambda, Redshift, EMR)
Proficiency in Python
Hands-on experience with PySpark
Strong knowledge of SQL
Experience in building and optimizing data pipelines
Experience with ETL/ELT processes
Knowledge of data warehousing and big data concepts
Ability to work with large-scale datasets
Strong problem-solving and communication skills
Responsibilities
Design, develop, and maintain scalable data pipelines on AWS
Process structured and unstructured data using PySpark
Optimize SQL queries for performance
Collaborate with cross-functional teams to deliver data solutions
Ensure data quality, security, and performance best practices
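For context on the day-to-day work, below is a minimal PySpark sketch of the kind of ETL pipeline described in the responsibilities above. Bucket names, paths, and column names are illustrative placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative sketch only: buckets, paths, and columns are placeholders.
spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw JSON events from S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3a://example-raw-bucket/events/2026/01/")

# Transform: basic cleanup and a daily aggregation.
daily_counts = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .count()
)

# Load: write partitioned Parquet back to S3 for downstream consumers
# (e.g. Redshift Spectrum or an Athena table).
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/daily_event_counts/"))

spark.stop()
```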
Pay: $50.00 per hour
Expected hours: 40 per week
Work Location: In person






