New York Technology Partners
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown". Key skills include PySpark, SQL, AWS, and data pipeline optimization; experience with cloud platforms and MPP data warehouses is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Version Control #Redshift #Azure #Scala #Databricks #Airflow #GIT #Data Engineering #Data Pipeline #Datasets #Data Processing #Data Warehouse #SQL (Structured Query Language) #Data Modeling #Spark (Apache Spark) #Python #AWS (Amazon Web Services) #Synapse #Kafka (Apache Kafka) #PySpark #Cloud
Role description
We are looking for a Data Engineer to build and optimize scalable data pipelines and support advanced analytics in a modern cloud environment. This role requires strong technical expertise and the ability to work closely with cross-functional teams and client stakeholders.
Key Responsibilities
• Design, develop, and maintain robust data pipelines and processing systems
• Work with large-scale datasets using distributed processing frameworks
• Collaborate with internal teams and clients to deliver data-driven solutions
• Optimize performance across data platforms and streaming systems
• Implement best practices in data modeling, warehousing, and governance
• Support global delivery efforts and coordinate with offshore teams
Qualifications
• Hands-on experience with PySpark, Hive, SQL, and Python
• Experience with cloud platforms, preferably AWS
• Familiarity with workflow orchestration tools (e.g., Airflow)
• Knowledge of MPP data warehouses (e.g., Redshift, Azure Synapse)
• Understanding of data warehousing concepts and best practices
• Exposure to Databricks and real-time data processing (e.g., Kafka)
• Experience with version control systems (e.g., Git)
• Strong communication skills and client-facing experience