

KPG99 INC
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer in Manhattan, NY, offering a 12-month+ contract at a day rate of $560 USD. It requires a Bachelor's in Computer Science, 7+ years in data engineering, and expertise in Azure, Databricks, PySpark, SQL, and ETL/ELT pipelines.
Country: United States
Currency: $ USD
Day rate: 560
Date: October 8, 2025
Duration: More than 6 months
Location: On-site
Contract: 1099 Contractor
Security: Unknown
Location detailed: Manhattan, NY
Skills detailed: #GIT #Delta Lake #DevOps #Cloud #Spark (Apache Spark) #Azure Data Factory #SQL (Structured Query Language) #Python #Data Modeling #Data Engineering #Distributed Computing #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Data Lake #PySpark #Azure DevOps #Azure #Storage #Azure SQL #Computer Science #Scala #Databricks #Jenkins
Role description
Title: Sr. Data Engineer
Location: Manhattan, NY (3 days a week on site)
Duration: 12-month+ contract
Required Qualifications
• Bachelor's in Computer Science, Engineering, or a related field.
• 7+ years in data engineering with hands-on cloud experience (Azure preferred).
• Strong experience with Databricks, PySpark, Azure Data Factory, and SQL.
• Strong Python experience.
• Proven ability to design and optimize scalable ETL/ELT pipelines (see the sketch below).
• Knowledge of Delta Lake, data modeling, and distributed computing.
• Familiarity with DevOps/CI/CD practices and Git-based workflows.
• Excellent communication and collaboration skills.
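To make the pipeline expectation concrete, here is a minimal, hypothetical PySpark/Delta Lake sketch of the kind of ETL/ELT work described above. It is not the client's actual code: the storage paths, column names, and table layout are illustrative assumptions, and writing Delta format requires a Databricks runtime or the delta-spark package.

```python
# Minimal ETL sketch, assuming Databricks (or local Spark + delta-spark).
# Paths and columns are hypothetical, for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV, e.g. landed by an Azure Data Factory copy activity (assumed path).
raw = (
    spark.read.option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Transform: basic typing, deduplication, and a derived partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a Delta table partitioned by date for downstream SQL/BI use.
(
    clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders_delta/")
)
```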
Tech Stack
• Databricks, PySpark, Delta Lake
• Azure Data Factory, Azure Data Lake, Blob Storage, Azure SQL
• SQL (advanced), Python (for pipelines/integrations)
• DevOps tools (Azure DevOps, Jenkins), Git (see the test sketch below)
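In practice, the DevOps/CI/CD expectation usually means pipeline code is covered by automated tests that run in Azure DevOps or Jenkins on every Git push. The sketch below is a hypothetical pytest example against a local SparkSession; the function and columns are invented for illustration and are not part of the posting.

```python
# CI-friendly test sketch, assuming pytest and a local SparkSession.
# add_order_date and its columns are hypothetical examples.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_date(df):
    """Derive an order_date column from an order_ts timestamp column."""
    return df.withColumn("order_date", F.to_date("order_ts"))


@pytest.fixture(scope="session")
def spark():
    # Small local session so the test can run in a build agent.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_order_date(spark):
    df = spark.createDataFrame(
        [("o1", "2025-10-08 12:30:00")], ["order_id", "order_ts"]
    ).withColumn("order_ts", F.to_timestamp("order_ts"))

    result = add_order_date(df).select("order_date").first()[0]
    assert str(result) == "2025-10-08"
```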