AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a 6-12 month remote contract with an undisclosed pay rate. Key skills required include AWS, Databricks, PySpark, CDC, and Azure DevOps, along with experience in data migration and optimization.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Bash #Data Pipeline #Scripting #IAM (Identity and Access Management) #DevOps #Logging #Azure DevOps #Web Services #Lambda (AWS Lambda) #Databricks #Monitoring #Data Processing #SQL (Structured Query Language) #Spark SQL #Automation #Version Control #Azure #Azure Repos #Big Data #Data Integration #Infrastructure as Code (IaC) #Python #Scala #Data Migration #Security #DMS (Data Migration Service) #Deployment #RDS (Amazon Relational Database Service) #GIT #Terraform #AWS (Amazon Web Services) #Databases #Data Analysis #AWS CloudWatch #Cloud #Data Engineering #S3 (Amazon Simple Storage Service) #Database Migration #VPC (Virtual Private Cloud) #Migration #Spark (Apache Spark) #PySpark #EC2 #AWS DMS (AWS Database Migration Service)
Role description
Role: AWS Data Engineer
Duration: 6–12 months
Location: Remote (EST hours)

Proficient in AWS, Databricks, and Azure DevOps, with strong analytical skills in PySpark, Delta Live Tables, Change Data Capture (CDC), and on-premises-to-AWS data migration.

Technical Skills

AWS (Amazon Web Services):
• Core Services: Proficiency with core AWS services such as EC2, S3, RDS, Lambda, and VPC.
• Data Services: Experience with AWS data services such as Glue and EMR.
• AWS DMS: Knowledge of AWS Database Migration Service (DMS) for migrating databases to AWS.
• CDC: Understanding of Change Data Capture (CDC) techniques for capturing changes from source databases and replicating them to target databases (a sketch combining CDC with Delta Live Tables follows this description).
• Security: Understanding of AWS security best practices, IAM, and encryption.

Databricks:
• PySpark & Spark SQL: Strong analytical skills in PySpark and Spark SQL for big data processing and analysis.
• Delta Live Tables: Expertise in using Delta Live Tables to build reliable, scalable data pipelines.
• Notebooks: Strong use of Databricks Notebooks for data analysis.
• Workflows: Setting up and monitoring Databricks Workflows.
• Data Integration: Experience integrating Databricks with AWS services.

DevOps Principles:
• CI/CD Pipelines: Building CI/CD pipelines with Azure Pipelines.
• Version Control: Proficiency with Azure Repos and Git.
• Automation: Scripting and automation with PowerShell, Bash, or Python to automate build, test, and deployment processes.

Infrastructure as Code (IaC):
• Terraform: Experience with Terraform for managing AWS and Azure infrastructure.

On-Premises Integration with AWS:
• Integrating on-premises data with AWS and Databricks.
• Thoroughly testing and validating migrated data to ensure it has been transferred correctly and is fully functional (a validation sketch follows this description).

Optimization and Monitoring:
• Optimizing AWS services and Databricks for performance and cost-efficiency.
• Setting up monitoring and logging with tools such as AWS CloudWatch to track the performance and health of the complete data flow (a monitoring sketch follows this description).
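For illustration only, here is a minimal sketch of how a Delta Live Tables pipeline might apply CDC records landed in S3 by AWS DMS. The bucket path, table names, key column, and the DMS operation/timestamp columns are assumptions, not details from this posting.

```python
# Hypothetical Delta Live Tables pipeline applying CDC records from AWS DMS.
# All names (S3 path, tables, columns) are illustrative assumptions.
import dlt
from pyspark.sql import functions as F
# Note: `spark` is provided by the DLT runtime inside a pipeline notebook.

@dlt.table(comment="Raw CDC change records landed in S3 by AWS DMS")
def customers_cdc_raw():
    return (
        spark.readStream.format("cloudFiles")            # Databricks Auto Loader
        .option("cloudFiles.format", "parquet")
        .load("s3://example-landing-bucket/customers/")  # assumed landing path
    )

# Target streaming table that receives the applied changes.
dlt.create_streaming_table("customers_silver")

# Apply inserts, updates, and deletes in commit order, keyed on the primary key.
dlt.apply_changes(
    target="customers_silver",
    source="customers_cdc_raw",
    keys=["customer_id"],                  # assumed primary key column
    sequence_by=F.col("commit_ts"),        # assumed ordering column from DMS
    apply_as_deletes=F.expr("op = 'D'"),   # assumed DMS operation flag column
)
```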
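Similarly, a minimal sketch of the kind of post-migration validation the description calls for, comparing row counts and a simple aggregate between an on-premises source and the migrated Delta table; the JDBC URL, credentials, and table names are placeholders.

```python
# Hypothetical post-migration validation: compare row counts and a numeric
# aggregate between the on-prem source table and the migrated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")  # assumed source
    .option("dbtable", "dbo.customers")
    .option("user", "reader")
    .option("password", "<secret>")  # placeholder; use a secret scope in practice
    .load()
)
target_df = spark.read.table("silver.customers")  # assumed migrated Delta table

checks = {
    "row_count_match": source_df.count() == target_df.count(),
    "id_sum_match": source_df.agg(F.sum("customer_id")).first()[0]
    == target_df.agg(F.sum("customer_id")).first()[0],
}
print(checks)
```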
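Finally, a minimal sketch of CloudWatch-based monitoring with boto3, assuming AWS credentials are already configured; the region, namespace, metric, dimension, and Lambda function names are illustrative.

```python
# Hypothetical CloudWatch monitoring: publish a custom pipeline metric and
# read recent error counts for an ingestion Lambda.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

# Publish a custom metric, e.g. rows processed by a Databricks job run.
cloudwatch.put_metric_data(
    Namespace="DataPipeline/Example",  # hypothetical namespace
    MetricData=[{
        "MetricName": "RowsProcessed",
        "Value": 120000,
        "Unit": "Count",
        "Dimensions": [{"Name": "Pipeline", "Value": "customers_cdc"}],
    }],
)

# Pull the last hour of error counts for a (hypothetical) ingestion Lambda.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "example-ingest-fn"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])
```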