

New York Technology Partners
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer, contract length unspecified, offering a competitive pay rate. Key skills include PySpark, SQL, AWS, and experience with data pipelines. Familiarity with Airflow, Databricks, and MPP data warehouses is required.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
May 1, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Irvine, CA
-
Skills detailed
#Databricks #AWS (Amazon Web Services) #Data Pipeline #GIT #Data Warehouse #Spark (Apache Spark) #PySpark #Kafka (Apache Kafka) #Redshift #Scala #Data Engineering #GitHub #Version Control #SQL (Structured Query Language) #BitBucket #Cloud #Data Modeling #Python #Datasets #Airflow
Role description
We're looking for a Data Engineer to design, build, and optimize scalable data pipelines in a modern cloud environment. This role requires strong technical expertise and the ability to collaborate effectively with cross-functional and client-facing teams.
Key Responsibilities
β’ Design, develop, and maintain scalable data pipelines and processing systems
β’ Work with large datasets using distributed frameworks
β’ Partner with internal teams and client stakeholders to deliver data solutions
β’ Optimize performance across platforms such as Databricks and Kafka
β’ Apply best practices in data modeling, warehousing, and governance
β’ Support global delivery through offshore coordination
Required Skills & Experience
β’ Strong hands-on experience with PySpark, Hive, SQL, and Python
β’ Experience with cloud platforms (preferably AWS)
β’ Familiarity with workflow orchestration tools like Airflow
β’ Experience with MPP data warehouses (e.g., Redshift)
β’ Understanding of data warehousing concepts and best practices
β’ Exposure to Databricks
β’ Experience with Git-based version control (GitHub, Bitbucket)
β’ Strong communication and client-facing skills
Nice to Have
β’ Experience working with Kafka or similar streaming platforms
β’ Prior experience collaborating with offshore teams






