

Sr. Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer with 10+ years of experience, including 5+ years in ETL development using Python/Scala on Databricks/Spark. Located in Princeton, NJ (Hybrid), the contract runs 6+ months and requires AWS, SQL, and Agile skills.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: July 31, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Princeton, NJ
Skills detailed: #Kafka (Apache Kafka) #Tableau #Python #Spark (Apache Spark) #AWS (Amazon Web Services) #Databases #Oracle #Scala #Data Modeling #Visualization #Cloud #Databricks #Java #R #ETL (Extract, Transform, Load) #Data Engineering #Containers #Agile #Linux #Scripting #SQL (Structured Query Language)
Role description
Job Title: Sr. Data Engineer (Databricks & Spark)
Location: Princeton, NJ (Hybrid)
Duration: 6+ months (Contract to Hire)
What We're Looking For:
• 10+ years of working experience in technology (application development and production support).
• 5+ years of experience developing pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals.
• 3+ years of experience developing and supporting ETL jobs using Python/Scala on the Databricks/Spark platform (a minimal sketch of this kind of pipeline follows this list).
• Experience with Python, Spark, and Hive, and an understanding of data warehousing and data modeling techniques.
• Knowledge of industry-standard visualization and analytics tools (e.g., Tableau, R).
• Strong data engineering skills on the AWS cloud platform.
• Experience with streaming frameworks such as Kafka.
• Knowledge of Core Java, Linux, SQL, and at least one scripting language.
• Experience working with relational databases, preferably Oracle.
• Experience with continuous delivery through CI/CD pipelines, containers, and orchestration technologies.
• Experience working in an Agile development environment.
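
For illustration only (this sketch is not part of the client's posting): a minimal PySpark ETL of the kind the role describes, assuming hypothetical source and target table names.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() returns it there and builds a local one elsewhere.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order events (hypothetical source table).
raw = spark.read.table("raw_db.orders")

# Transform: keep completed orders and aggregate daily revenue.
daily_revenue = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("revenue"),
           F.countDistinct("order_id").alias("orders"),
       )
)

# Load: write the curated result to a managed table (hypothetical name),
# keeping it queryable from SQL and BI tools such as Tableau.
daily_revenue.write.mode("overwrite").saveAsTable("curated_db.daily_revenue")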