AWS Cloud Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Cloud Data Engineer; the contract length is unknown and the pay rate is listed as "$X/hour." Required skills include SQL, Python, Spark, and experience with AWS services. Experience with IoT data streams and unstructured data is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 27, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Atlanta, GA
🧠 - Skills detailed
#Cloud #Spark (Apache Spark) #DynamoDB #NoSQL #ML (Machine Learning) #SageMaker #IoT (Internet of Things) #AI (Artificial Intelligence) #AWS (Amazon Web Services) #SQL (Structured Query Language) #Data Engineering #BI (Business Intelligence) #Athena #Data Science #Python #ETL (Extract, Transform, Load) #Data Lake #Data Warehouse #Redshift
Role description
Core Role:
• Primary Focus: Cloud-native data engineering (ETL/ELT pipelines, data lakes, data warehouses).
• Cloud Platform: AWS-heavy stack (Glue, Redshift, Athena, Step Functions, Lake Formation, DynamoDB).
• Skill Set: SQL, Python, Spark, orchestration (CI/CD), relational and NoSQL databases.
• Output Users: BI teams, data scientists, and downstream apps.

Unique Elements:
• IoT: ingestion and processing of real-time IoT data streams (device telemetry, events); a minimal ingestion sketch follows this list.
• Unstructured Data: handling non-traditional data types (images, video, audio, documents) beyond structured tabular data.
• AI/ML: not a pure ML engineer role, but you'd be expected to enable ML workflows (using SageMaker, Rekognition, Comprehend, etc.) and collaborate with data scientists to productionize ML models.
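
To make the IoT element concrete, here is a minimal sketch of the kind of stream-to-data-lake ingestion this role describes, assuming a hypothetical Kinesis stream (`device-telemetry`) and S3 bucket (`example-iot-data-lake`). None of these names come from the posting, and a production pipeline would more likely use Kinesis Data Firehose, a Glue streaming job, or the Kinesis Client Library rather than a hand-rolled loop.

```python
"""Illustrative sketch only: read one batch of IoT telemetry from Kinesis
and land it in an S3 data lake as JSON Lines. All names are hypothetical."""
import json
import time
from datetime import datetime, timezone

import boto3

# Hypothetical resource names -- not taken from the job posting.
STREAM_NAME = "device-telemetry"
LAKE_BUCKET = "example-iot-data-lake"

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")


def ingest_once(shard_id: str = "shardId-000000000000") -> None:
    """Pull a single batch of telemetry records and write them to S3."""
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    # Assumes producers publish JSON-encoded telemetry payloads.
    records = [json.loads(r["Data"]) for r in batch["Records"]]
    if not records:
        return

    # Partition by ingestion date so Athena/Glue can prune scans later.
    now = datetime.now(timezone.utc)
    key = f"telemetry/dt={now:%Y-%m-%d}/batch-{int(time.time())}.jsonl"
    body = "\n".join(json.dumps(rec) for rec in records)
    s3.put_object(Bucket=LAKE_BUCKET, Key=key, Body=body.encode("utf-8"))
    # A continuous consumer would keep reading via batch["NextShardIterator"].


if __name__ == "__main__":
    ingest_once()
```

In practice, a landing zone like this would then be cataloged with Glue and queried through Athena or loaded into Redshift, which is where the rest of the stack listed above comes in.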