

Python Data Engineer - 10 Yrs Exp
Featured Role | Apply direct with Data Freelance Hub
This role is for a Python Data Engineer with 10 years of experience, focusing on data pipelines in AWS and Databricks. It requires strong SQL skills, Python programming, and financial domain experience; the role is based in Columbus, OH or Jersey City, NJ.
Country: United States
Currency: $ USD
Day rate: 600
Date discovered: July 23, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Jersey City, NJ
Skills detailed: #Data Engineering #SQL (Structured Query Language) #Programming #Databricks #Cloud #PySpark #Python #AWS (Amazon Web Services) #Data Lake #Data Pipeline #Spark (Apache Spark)
Role description
Job Title: Python PySpark / Data Engineer (10 years of experience required)
Location: Columbus, OH and Jersey City, NJ (5 days per week onsite)
Long-term project
Note: Candidates should hold degrees from well-known universities, and financial domain experience is mandatory.
Job Description:
• 6+ years of professional experience designing and implementing data pipelines in a cloud environment is required.
• 5+ years of experience migrating and developing data solutions in the AWS cloud is required.
• 2+ years of experience building and implementing data pipelines using Databricks or a similar cloud data platform.
• Expert-level knowledge of SQL, writing complex, highly optimized queries across large volumes of data.
• Hands-on object-oriented programming experience using Python is required.
• Professional experience building real-time data streams using Spark (a minimal illustrative sketch follows this list).
• Knowledge of or experience with architectural best practices for building data lakes.
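As a rough illustration of the kind of work described above, the sketch below shows a minimal PySpark Structured Streaming job that reads JSON events from a landing zone, applies a simple transform, and writes to a Delta table. The schema, S3 paths, and table name are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal, illustrative sketch only: a streaming pipeline of the general shape
# described in the requirements. All paths, schema fields, and table names are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("trade-events-stream").getOrCreate()

# Hypothetical schema for incoming trade events (financial-domain flavour).
event_schema = StructType([
    StructField("trade_id", StringType()),
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read newline-delimited JSON files as a stream from a landing zone in S3.
events = (
    spark.readStream
    .schema(event_schema)
    .json("s3://example-bucket/landing/trades/")  # placeholder path
)

# Simple transform: add a processing timestamp and drop obviously bad rows.
cleaned = (
    events
    .withColumn("ingested_at", F.current_timestamp())
    .filter(F.col("price") > 0)
)

# Write the stream to a Delta table, with a checkpoint location for recovery.
query = (
    cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/trades/")  # placeholder path
    .toTable("bronze.trades")  # hypothetical table name
)

query.awaitTermination()
```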