

Visionary Innovative Technology Solutions LLC
AWS Data Engineer with SageMaker
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with SageMaker in Reston, VA, for 6+ months. Key skills include AWS services, PySpark, CI/CD, and scripting. Experience in building scalable ETL pipelines and integrating ML workflows is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 1, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reston, VA
-
🧠 - Skills detailed
#Spark (Apache Spark) #Python #Monitoring #PostgreSQL #Aurora PostgreSQL #Scala #GitHub #Automation #SQS (Simple Queue Service) #Java #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Data Engineering #ML (Machine Learning) #ETL (Extract, Transform, Load) #Redshift #Data Processing #Jenkins #Shell Scripting #Scripting #PySpark #Deployment #SNS (Simple Notification Service) #Aurora #SageMaker #Batch #BitBucket
Role description
Position: AWS Data Engineer with SageMaker
Location: Reston, VA (Day 1 onsite)
Duration: 6+ months
Job Description:
We are seeking a motivated Senior Data Engineer with strong AWS and data platform experience to design, build, and operationalize scalable data processing and ML-ready pipelines. The ideal candidate is hands-on with PySpark, Redshift, and Glue, and automates delivery through CI/CD and scripting.
Key Responsibilities
• Design and implement scalable ETL/ELT pipelines on AWS for batch and near-real-time workloads.
• Build and optimize data processing jobs using PySpark on EMR and Glue (see the sketch after this list).
• Develop and manage Redshift schemas, queries, and Spectrum external tables for querying data in S3.
• Integrate machine learning workflows with SageMaker and Lambda-driven orchestration.
• Automate deployments and testing using CI/CD tools and source control (Jenkins, UCD, Bitbucket, GitHub).
• Create and maintain operational scripts and tooling (Shell, Python) for monitoring, troubleshooting, and performance tuning.
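To give a concrete flavor of the PySpark-on-Glue work described above, a minimal job sketch follows. All bucket names, columns, and parameters are illustrative placeholders, not details from this engagement.

# Minimal Glue PySpark sketch: read raw JSON from S3, cleanse it, and write
# partitioned Parquet suitable for Redshift Spectrum external tables.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)
spark = glue_context.spark_session

raw = spark.read.json("s3://example-raw-bucket/events/")  # placeholder path

# Cleanse: drop rows missing the key, normalize the timestamp, deduplicate.
cleaned = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Partitioning by date keeps downstream Spectrum scans cheap.
(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))  # placeholder path

job.commit()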
Must-have Skills
• AWS services (EMR, SageMaker, Lambda, Redshift, Glue, SNS, SQS); a minimal integration sketch follows this list
• PySpark and data processing frameworks
• Shell scripting and Python development
• CI/CD tooling experience (Jenkins, UCD)
• Source control experience with Bitbucket and GitHub
• Experience building and maintaining scripts/tools for automation
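As one illustration of tying these services together, the sketch below shows a Lambda handler that scores SQS messages against a SageMaker endpoint and fans results out via SNS; the endpoint name and topic ARN are hypothetical.

# Lambda sketch: SQS-triggered scoring against a SageMaker endpoint,
# with predictions published to SNS. All names/ARNs are placeholders.
import json
import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")
sns = boto3.client("sns")

ENDPOINT_NAME = "example-model-endpoint"                        # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-topic"  # placeholder

def handler(event, context):
    # An SQS trigger delivers a batch of records; each body is assumed JSON.
    for record in event.get("Records", []):
        response = sagemaker_runtime.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="application/json",
            Body=record["body"],
        )
        prediction = json.loads(response["Body"].read())
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(prediction))
    return {"statusCode": 200}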
Nice-to-have
• Familiarity with AWS ECS
• Experience with Aurora PostgreSQL (see the connection sketch below)
• Java for tooling or pipeline components
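For the Aurora PostgreSQL nice-to-have, here is a minimal connection sketch using IAM database authentication; the host, user, database, and query are placeholders.

# Sketch: connect to Aurora PostgreSQL with a short-lived IAM auth token.
# Host, user, database, and query are placeholders, not project values.
import boto3
import psycopg2

HOST = "example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # placeholder
PORT = 5432
USER = "etl_user"  # placeholder; the user must be granted rds_iam in the DB

rds = boto3.client("rds")
token = rds.generate_db_auth_token(DBHostname=HOST, Port=PORT, DBUsername=USER)

conn = psycopg2.connect(
    host=HOST, port=PORT, user=USER, password=token,
    dbname="analytics", sslmode="require",  # placeholder database name
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM events WHERE event_date = %s",
                ("2025-10-01",))
    print(cur.fetchone())
conn.close()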