

New York Technology Partners
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a Data Engineer contract position (length and pay rate unspecified) in a client-facing environment. Key skills include PySpark, SQL, AWS, and experience with data pipelines and MPP data warehouses.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 22, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Irvine, CA
Skills detailed: #Data Warehouse #PySpark #SQL (Structured Query Language) #Scala #Cloud #Data Modeling #Data Pipeline #Data Engineering #Kafka (Apache Kafka) #Spark SQL #Redshift #Datasets #Databricks #AWS (Amazon Web Services) #Python #Airflow #Spark (Apache Spark) #Version Control #GIT
Role description
Overview
We're looking for a Data Engineer to build and scale modern data solutions in a client-facing environment. This role focuses on developing high-performance data pipelines, working with large datasets, and leveraging cloud and distributed technologies.
Key Responsibilities
• Design and maintain scalable data pipelines and processing frameworks
• Work with large-scale data using distributed systems
• Partner with cross-functional teams and clients to deliver data-driven solutions
• Optimize data workflows across platforms like Databricks and Kafka
• Apply best practices in data modeling, warehousing, and governance
• Support collaboration with offshore teams and global delivery efforts
Required Skills & Experience
• Strong hands-on experience with PySpark, SQL, Python, and Hive
• Cloud experience (preferably AWS)
• Experience with Airflow or similar orchestration tools
• Familiarity with version control systems (Git-based)
• Experience with MPP data warehouses (e.g., Redshift)
• Solid understanding of data warehousing concepts
• Exposure to Databricks
• Strong communication skills and client engagement experience