

Enterprise Solutions Inc.
Cloud Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Whippany, NJ, for 6-12 months, at a listed day rate of $480. Key skills include AWS services, Python, Kafka, and data pipeline design. Requires 5+ years of relevant experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date
December 16, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Whippany, NJ
🧠 - Skills detailed
#AWS Lambda #Data Quality #Data Engineering #Lambda (AWS Lambda) #Data Ingestion #Cloud #Compliance #Data Modeling #Security #Data Pipeline #Scala #Data Science #Terraform #Apache Kafka #AWS (Amazon Web Services) #SQL (Structured Query Language) #Kafka (Apache Kafka) #Python #Automation #Redshift #IAM (Identity and Access Management) #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC)
Role description
Role: AWS Data Engineer
Location: Whippany, NJ (Onsite)
Duration: 6-12 Months (with possible extension)
Job Summary:
We are looking for a highly skilled AWS Data Engineer with strong expertise in Python, Kafka, AWS Lambda, and Kinesis. The ideal candidate will design and implement scalable, real-time data pipelines and ensure reliable data ingestion, transformation, and delivery across the analytics ecosystem.
Key Responsibilities:
• Design, build, and optimize data pipelines and streaming solutions using AWS services (Kinesis, Lambda, Glue, S3, Redshift, etc.).
• Develop and maintain real-time data ingestion frameworks leveraging Kafka and Kinesis.
• Write clean, efficient, and reusable Python code for ETL and data transformation.
• Implement event-driven and serverless architectures using AWS Lambda (a minimal Python sketch follows this list).
• Collaborate with data scientists, analysts, and application teams to deliver high-quality data solutions.
• Monitor and troubleshoot data flow and performance issues in production environments.
• Ensure data quality, security, and compliance with enterprise standards.
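For illustration only, not part of the client's requirements: a minimal Python sketch of the kind of event-driven, serverless ingestion described above, assuming a Lambda function triggered by a Kinesis stream that decodes the records and lands them in S3. The bucket name and key prefix are hypothetical placeholders.

import base64
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical destination; not taken from the posting.
DEST_BUCKET = "example-analytics-landing"
DEST_PREFIX = "kinesis-events/"


def handler(event, context):
    """Decode Kinesis records from the Lambda event and land them in S3 as one JSON object."""
    records = []
    for record in event.get("Records", []):
        # Kinesis payloads arrive base64-encoded in the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        records.append(json.loads(payload))

    if not records:
        return {"written": 0}

    # Key the batch by the last sequence number so repeated invocations do not collide.
    key = DEST_PREFIX + event["Records"][-1]["kinesis"]["sequenceNumber"] + ".json"
    s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return {"written": len(records)}

In a real pipeline on the stack listed above, the load step would more likely feed Glue or Redshift than raw S3 objects; the sketch only shows the event-driven shape of the work.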
Required Skills & Experience:
• 5+ years of experience as a Data Engineer or similar role.
• Strong hands-on expertise with AWS cloud services (Kinesis, Lambda, Glue, S3, IAM, Redshift).
• Proficiency in Python for data engineering and automation tasks.
• Experience with Apache Kafka for streaming and messaging pipelines (see the consumer sketch after this list).
• Strong understanding of data modeling, ETL workflows, and distributed data systems.
• Working knowledge of SQL and data warehousing concepts.
• Familiarity with CI/CD pipelines and infrastructure as code (IaC) using CloudFormation or Terraform is a plus.
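Again for illustration only, assuming the kafka-python client is available and using a hypothetical "orders" topic: a minimal consumer loop showing the kind of Kafka-based streaming transformation the posting asks for. The broker address, topic, group id, and field names are placeholders.

import json

from kafka import KafkaConsumer  # kafka-python; assumed available in the runtime

# Hypothetical topic and broker for illustration only.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["broker-1:9092"],
    group_id="etl-example",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # Minimal transformation step: keep only the fields a downstream table expects.
    row = {"order_id": event.get("order_id"), "amount": event.get("amount")}
    print(row)  # stand-in for a load step into Redshift or S3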






