

iPivot
AWS Data Engineer with Databricks || USC Only || W2 Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with Databricks in Princeton, NJ, on a long-term W2 contract. Requires 5+ years of data engineering experience and proficiency in Databricks, Spark, Python, and AWS services. U.S. citizens only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 16, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Princeton, NJ
-
🧠 - Skills detailed
#Hadoop #Apache Spark #Data Engineering #Lambda (AWS Lambda) #Computer Science #Data Modeling #Security #Scala #Databricks #AWS (Amazon Web Services) #SQL (Structured Query Language) #Kafka (Apache Kafka) #AWS Glue #Python #Spark (Apache Spark) #Big Data #Redshift #Airflow #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load)
Role description
AWS Data Engineer with Databricks
Princeton, NJ – Hybrid (local or nearby candidates only)
Duration: Long Term
Due to federal project requirements, this position is available only to U.S. citizens.
Key Responsibilities
• Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data (see the sketch below the qualifications list).
• Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
• Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
• Document processes and mentor junior team members on best practices.
Required Qualifications
• Bachelor's degree in Computer Science, Engineering, or related field.
• 5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
• Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
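For context on the pipeline work described in the responsibilities above, here is a minimal PySpark sketch of one ETL step: read raw events from S3, apply a simple transformation, and write a partitioned Delta table. The bucket paths, column names, and table layout are hypothetical placeholders, not details from this posting, and the Delta format assumes a Databricks runtime or the delta-spark package.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("example-etl-sketch")
    .getOrCreate()
)

# Extract: raw, semi-structured events landed in S3 (placeholder path).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Transform: basic cleanup and derivation of a partition column
# (event_id and event_timestamp are assumed example fields).
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_timestamp"))
       .filter(F.col("event_date").isNotNull())
)

# Load: write a partitioned Delta table for downstream consumers
# (e.g. a Redshift load or Databricks SQL queries).
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-curated-bucket/events_delta/")
)
```

In practice, a step like this would typically be scheduled as a Databricks job or orchestrated from Airflow or Step Functions, as noted in the responsibilities above.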