

Intellectt Inc
Lead Data Engineer (W2 Only)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer (W2 Only) in Berkeley Heights, NJ; the contract length is unspecified. Key skills include advanced SQL, Python, PySpark, AWS, and Terraform. Experience with data modeling and big data concepts is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Berkeley Heights, NJ
-
🧠 - Skills detailed
#Data Pipeline #Data Quality #Scala #Big Data #SQL Queries #Datasets #AWS (Amazon Web Services) #Cloud #PySpark #Redshift #Infrastructure as Code (IaC) #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #Terraform #Snowflake #Data Modeling #Data Engineering #Hadoop #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language)
Role description
Role: Data Engineer
Location: Berkeley Heights, NJ (Onsite)
Skills Required:
• Data Modeling (Star/Snowflake)
• SQL (advanced queries, optimization)
• Python, PySpark
• Data Pipelines (ETL/ELT)
• AWS (S3, Glue, EMR, Redshift)
• Terraform (Infrastructure as Code)
• Hadoop & Big Data concepts
Responsibilities:
• Build and maintain data pipelines
• Process large datasets using PySpark
• Design scalable data models
• Optimize SQL queries
• Work with AWS cloud services
• Ensure data quality and performance
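To illustrate two of the listed skills, star/snowflake data modeling and SQL aggregation, here is a minimal, hedged sketch. It uses Python's stdlib sqlite3 purely as a stand-in engine; the role itself targets Redshift/Snowflake on AWS, and all table and column names below are hypothetical examples, not from the job description.

```python
import sqlite3

# Minimal star-schema sketch: one fact table joined to one dimension table.
# sqlite3 is a stand-in only; the role targets Redshift/Snowflake on AWS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    amount REAL
);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# A typical analytical query: aggregate the fact table, grouped by a dimension.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('widget', 15.0), ('gadget', 7.5)]
conn.close()
```

In a star schema like this, facts (measurable events) sit in a narrow central table keyed to descriptive dimension tables, which keeps aggregation queries simple and fast; a snowflake schema further normalizes the dimensions.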






