

Visionary Innovative Technology Solutions LLC
AWS Data Engineer - Local Candidates Preferred
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Herndon, VA, on a contract basis of unspecified duration. Key skills include AWS services (EMR, SageMaker), PySpark, CI/CD tooling (Jenkins), and scripting (Shell, Python). Experience with ETL processes and data engineering is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 1, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Herndon, VA
-
🧠 - Skills detailed
#Spark (Apache Spark) #Python #Monitoring #Scala #GitHub #Automation #SQS (Simple Queue Service) #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Data Engineering #ML (Machine Learning) #ETL (Extract, Transform, Load) #Redshift #Data Processing #Jenkins #Shell Scripting #Scripting #PySpark #Deployment #SNS (Simple Notification Service) #SageMaker #Batch #BitBucket
Role description
Role : AWS Data Engineer
Location : Herndon, VA (Day 1 onsite)
Duration: Contract
Job Description:
• AWS services (EMR, SageMaker, Lambda, Redshift, Glue, SNS, SQS)
• PySpark and data processing frameworks
• Shell scripting and Python development
• CI/CD tooling experience (Jenkins, UCD)
• Source control experience with Bitbucket and GitHub
• Experience building and maintaining scripts/tools for automation
Key Responsibilities
• Design and implement scalable ETL/ELT pipelines on AWS for batch and near-real-time workloads.
• Build and optimize data processing jobs using PySpark on EMR and Glue.
• Develop and manage Redshift schemas, queries, and Redshift Spectrum for external table access.
• Integrate machine learning workflows with SageMaker and Lambda-driven orchestration.
• Automate deployments and testing using CI/CD tools and source control (Jenkins, UCD, Bitbucket, GitHub).
• Create and maintain operational scripts and tooling (Shell, Python) for monitoring, troubleshooting, and performance tuning.
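The Lambda-driven orchestration mentioned above can be sketched as a small handler that consumes SQS-delivered records and reports only the failed messages back for retry. This is a minimal illustration, not the employer's actual code: the `handler` name and the `id`/`value` payload fields are hypothetical assumptions, while the `batchItemFailures` return shape follows SQS's partial-batch-failure reporting convention.

```python
import json

def handler(event, context):
    """Sketch of an SQS-triggered AWS Lambda handler.

    Parses each SQS record body as JSON, applies a trivial transform,
    and returns the IDs of messages that failed so SQS redelivers only
    those (partial batch response). Payload schema is hypothetical.
    """
    failures = []
    processed = 0
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # Hypothetical normalization step for the record payload.
            _ = {"id": body["id"], "value": float(body["value"])}
            processed += 1
        except (KeyError, TypeError, ValueError):
            # json.JSONDecodeError subclasses ValueError, so it is covered.
            failures.append({"itemIdentifier": record.get("messageId", "")})
    print(f"processed={processed} failed={len(failures)}")
    return {"batchItemFailures": failures}
```

Returning a partial batch response like this means SQS re-queues only the failed messages rather than the whole batch, which keeps near-real-time pipelines from reprocessing already-handled records.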