RevereIT LLC

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include AWS services, Python, PySpark, and SQL, along with experience building ETL/ELT pipelines. Industry experience in cloud-based environments is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Berkeley Heights, NJ
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #AI (Artificial Intelligence) #Security #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #PySpark #Data Pipeline #Hadoop #Scala #Terraform #Data Engineering #Data Warehouse #Apache Spark #Data Architecture #Data Science #Data Modeling #Microsoft Power BI #Batch #ML (Machine Learning) #Data Processing #Data Lake #Spark (Apache Spark) #TensorFlow #Snowflake #Python #Redshift #BI (Business Intelligence) #Monitoring #Athena #Big Data #Infrastructure as Code (IaC) #SageMaker #Data Quality #AWS (Amazon Web Services) #Cloud
Role description
Job Description

We are seeking an experienced AWS Data Engineer to design, build, and maintain scalable data pipelines and data platforms. The role involves working with large-scale distributed systems and cloud-based data architectures to support both batch and real-time data processing. The engineer will be responsible for building and optimizing data lakes and data warehouses, implementing efficient data modeling strategies, and integrating AI/ML capabilities into data pipelines. This role requires close collaboration with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions and actionable insights.

Required Skills
• Strong experience with AWS services: S3, Redshift, Glue, Lambda, EMR, Athena
• Expertise in Python, PySpark, and SQL
• Hands-on experience with Big Data technologies such as Hadoop and Apache Spark
• Experience building and maintaining ETL/ELT data pipelines (see the PySpark sketch after this description)
• Strong understanding of data modeling techniques: Star Schema, Snowflake Schema, Dimensional Modeling
• Experience with Infrastructure as Code tools such as Terraform
• Experience developing dashboards using Power BI
• Knowledge of data lake and data warehouse architectures

Required Qualifications
• Proven experience designing scalable data pipelines for batch and real-time processing
• Hands-on experience with distributed data processing systems
• Strong understanding of data quality, governance, and security best practices
• Experience working in cloud-based data environments, preferably AWS
• Ability to collaborate with cross-functional teams and stakeholders

Additional Skills
• Exposure to AI/ML frameworks such as SageMaker or TensorFlow
• Experience integrating machine learning models into data pipelines
• Strong problem-solving and performance optimization skills
• Experience monitoring and optimizing cloud-based data systems
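For candidates gauging fit, here is a minimal sketch of the kind of batch ETL work this posting emphasizes: reading raw data from S3, cleansing and enriching it in PySpark, and writing partitioned Parquet back to a curated zone that Athena or Redshift Spectrum can query. The bucket names, paths, and column names are hypothetical placeholders for illustration, not details from this role.

```python
# Minimal PySpark batch ETL sketch (extract -> transform -> load).
# All bucket names, paths, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON events from an S3 landing zone
# (the s3:// scheme assumes an EMR- or Glue-managed connector).
raw = spark.read.json("s3://example-raw-bucket/orders/2026-04-29/")

# Transform: deduplicate, filter bad records, and derive columns.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("status").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
       .withColumn("total_usd", F.col("quantity") * F.col("unit_price"))
)

# Load: write partitioned Parquet to the curated zone for
# downstream querying via Athena or Redshift Spectrum.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```

In practice a job like this would typically run as an AWS Glue job or an EMR step on a schedule, with partitioning chosen to match the most common query filters.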