

PySpark/Data Engineer in Columbus, OH and Jersey City, NJ
Featured Role | Apply direct with Data Freelance Hub
This role is for a Python PySpark/Data Engineer with 12 years of experience, requiring expertise in AWS, SQL, and Apache Spark. It is a long-term, on-site position based in Columbus, OH, and Jersey City, NJ; the pay rate is not disclosed.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: July 26, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: New Jersey, United States
Skills detailed: #Data Engineering #Data Pipeline #PySpark #SQL Queries #Data Lake #Cloud #Databricks #Python #Spark (Apache Spark) #Programming #SQL (Structured Query Language) #Apache Spark #AWS (Amazon Web Services)
Role description
Job Title: Python PySpark/Data Engineer (12 years of experience required)
Location: Columbus, OH and Jersey City, NJ (5 days onsite)
Duration: Long-term project
Job Description:
• 6+ years of professional experience designing and implementing data pipelines in a cloud environment
• 5+ years of experience migrating/developing data solutions in the AWS cloud
• 2+ years of experience building/implementing data pipelines using Databricks or similar cloud platforms
• Expert-level proficiency in writing complex, highly optimized SQL queries on large data volumes
• Hands-on object-oriented programming experience with Python
• Experience building real-time data streams using Apache Spark
• Knowledge of or experience with architectural best practices for building data lakes
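The requirements above describe cloud data-pipeline and streaming work in general terms. As a rough illustration of that kind of task, the sketch below reads a JSON event stream with PySpark Structured Streaming, applies a schema, and appends the result to a data-lake path. Every name in it (broker address, topic, bucket, schema fields) is a hypothetical placeholder, not taken from the posting, and the Kafka source assumes the spark-sql-kafka connector is available.

```python
# Minimal PySpark Structured Streaming sketch -- illustrative only.
# All paths, topics, and schema fields below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType
)

spark = SparkSession.builder.appName("example-stream").getOrCreate()

# Hypothetical event schema.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream; Kafka is one common source (the posting does not name one).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

# Parse the JSON payload and keep only the typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append to a data-lake path (e.g. S3) as Parquet, with checkpointing.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/events/")           # placeholder bucket
    .option("checkpointLocation", "s3a://example-bucket/_chk/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```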