GraceMark Solutions

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in New York (Hybrid), offering $55 per hour for a contract position. Key skills include Python, PySpark, Apache Airflow, and advanced SQL. Experience in ETL, data warehousing, and API integration is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Rate
$55 per hour
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Data Warehouse #Data Processing #Qlik #Microsoft Power BI #JSON (JavaScript Object Notation) #Triggers #SQL Server #Data Engineering #Apache Spark #Scala #Data Modeling #Airflow #Databases #PySpark #RDBMS (Relational Database Management System) #Tableau #SSIS (SQL Server Integration Services) #BI (Business Intelligence) #REST (Representational State Transfer) #SQL Queries #Oracle #Spark (Apache Spark) #Programming #Agile #Python #MDM (Master Data Management) #ETL (Extract, Transform, Load) #Data Lake #Apache Airflow #Datasets #SQL (Structured Query Language) #Data Pipeline
Role description
Role: Data Engineer
Location: New York (Hybrid)
Rate: $55 USD per hour

We're looking for a hands-on Data Engineer to build and scale modern data pipelines in a fast-paced, agile environment. This role is focused on end-to-end data engineering, from designing ETL pipelines to enabling analytics and reporting across the business.

Responsibilities
• Design, build, and optimize ETL/ELT pipelines using Python and PySpark (a minimal pipeline sketch follows this description)
• Develop and manage Airflow DAGs for orchestration and scheduling (see the DAG sketch below)
• Write efficient, scalable SQL queries across multiple databases (SQL Server, Oracle, etc.)
• Build and maintain data models (data warehouse, data lake, MDM concepts)
• Integrate systems using REST, SOAP, and ETL frameworks (SSIS or similar); see the API example below
• Develop reusable, production-grade code for data processing workflows
• Work with structured and semi-structured data (JSON, large-scale datasets)
• Support reporting and analytics via BI tools (Power BI, Tableau, Qlik)
• Collaborate within an Agile SDLC team to deliver data solutions end-to-end
• Continuously improve data performance, reliability, and scalability

Requirements (Must-Haves)
• Strong experience in Python for data engineering (non-negotiable)
• Hands-on expertise with PySpark / Apache Spark
• Experience building pipelines with Apache Airflow (DAGs)
• Advanced SQL skills across relational databases (SQL Server, Oracle, DB2, etc.)
• Solid understanding of data engineering fundamentals (ETL, data warehousing, data lakes)
• Experience with RDBMS concepts and database programming (stored procedures, triggers, DML)
• Experience integrating systems via APIs (REST/SOAP) or ETL tools
• Strong understanding of data modeling (schemas, ER models)
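To give a concrete sense of the PySpark pipeline work described above, here is a minimal ETL sketch. The bucket paths and the column names (created_at, customer_id, amount) are illustrative assumptions, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read semi-structured JSON from a raw zone (path is a placeholder)
orders = spark.read.json("s3a://example-bucket/raw/orders/")

# Transform: derive a partition date and aggregate per customer per day
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write partitioned Parquet to a curated zone
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_totals/"
)
```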
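Orchestration of jobs like the one above happens in Apache Airflow. A minimal DAG sketch, with assumed task names (extract, transform, load) and placeholder bodies; it uses the `schedule` argument introduced in Airflow 2.4, so older versions would need `schedule_interval` instead:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real pipeline steps
def extract():
    print("pull raw data")

def transform():
    print("clean and aggregate")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    t_extract >> t_transform >> t_load
```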
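REST API integration in a role like this often means pulling paginated JSON into the pipeline. A sketch using the requests library, assuming a hypothetical endpoint with page-number pagination (the URL and the page/per_page parameters are assumptions):

```python
import requests

def fetch_all(base_url: str, endpoint: str, page_size: int = 100) -> list:
    """Pull every page from a paginated REST endpoint.

    The pagination scheme (page/per_page query parameters, empty
    page signalling the end) is an illustrative assumption.
    """
    records, page = [], 1
    while True:
        resp = requests.get(
            f"{base_url}/{endpoint}",
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()  # surface HTTP errors instead of bad data
        batch = resp.json()
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records
```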