

HAN Staffing
Cloud Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Cloud Engineer with an unspecified contract length, offering a day rate of $576 USD. It requires AWS data engineering expertise, proficiency in Python and PySpark, advanced SQL skills, and experience in data warehousing and governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
576
-
🗓️ - Date
March 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New Jersey, United States
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #Data Lake #DevOps #Data Governance #Programming #Scala #AWS (Amazon Web Services) #PySpark #Spark (Apache Spark) #Datasets #Batch #Cloud #dbt (data build tool) #Data Quality #SQL (Structured Query Language) #Data Pipeline #Distributed Computing #Jenkins #ETL (Extract, Transform, Load) #Deployment #Data Engineering #Storage #Redshift #Agile #Alation #Python #GitLab #Databricks #IAM (Identity and Access Management) #Lambda (AWS Lambda)
Role description
Sr. AWS Data Engineer - New Jersey - Hybrid required (2 days a week onsite)
AWS Data Engineering Expertise — Hands‑on experience with Glue, Lambda, S3, Redshift, Databricks, IAM, and distributed computing.
Large‑Scale Data Pipeline Development — Proven ability to build, optimize, and maintain batch data pipelines and architectures.
Python & PySpark Proficiency — Strong programming skills for ETL, transformations, and scalable processing.
Advanced SQL & Query Optimization — Ability to write and tune complex SQL for high‑volume datasets.
Data Warehousing & Data Lake Design — Experience designing secure, scalable, high‑performance storage layers.
DevOps & CI/CD — Familiarity with Jenkins, GitLab, automated deployments, and Agile delivery.
Data Governance & Quality — Knowledge of Alation, Glue Data Quality, data mesh concepts, and governance frameworks.
Solution Architecture Skills — Ability to propose end‑to‑end data solutions across diverse tech stacks.
Nice‑to‑Have: Real‑Time Streaming — Kafka, Spark Streaming, and event‑driven ingestion.
Nice‑to‑Have: dbt — Experience with dbt for modeling and transformation.
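The batch pipeline work described above follows a common extract–transform–quality-gate pattern. A minimal, dependency-free Python sketch of that pattern is below; in the actual role this logic would run in Glue or PySpark, and every record shape, field name, and threshold here is illustrative rather than taken from the posting.

```python
# Minimal batch ETL sketch: extract -> transform -> data-quality gate.
# All field names and thresholds are illustrative, not from the job posting.

def extract(raw_rows):
    """Parse raw CSV-like rows into dicts (stand-in for reading from S3 via Glue)."""
    header, *rows = raw_rows
    cols = header.split(",")
    return [dict(zip(cols, row.split(","))) for row in rows]

def transform(records):
    """Normalize types and drop incomplete records."""
    clean = []
    for rec in records:
        if not rec.get("amount"):
            continue  # data-quality rule: amount is required
        clean.append({"id": rec["id"], "amount": float(rec["amount"])})
    return clean

def quality_gate(records, min_rows=1):
    """Fail the batch if too few rows survive transformation."""
    if len(records) < min_rows:
        raise ValueError("batch failed data-quality gate")
    return records

raw = ["id,amount", "1,10.5", "2,", "3,4.0"]
clean = quality_gate(transform(extract(raw)))
print(clean)  # the record with a missing amount is dropped
```

In a Spark deployment the same three stages map onto `spark.read`, DataFrame transformations, and a Glue Data Quality or custom assertion step before the write to Redshift.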
