

Kforce Inc
Data Engineer - PySpark / Databricks - Hybrid
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer specializing in PySpark and Databricks, with an unknown contract length and a listed day rate of $800 USD. The role is based in New York City; key skills include Terraform, AWS services, SQL, and ETL development.
Country: United States
Currency: $ USD
Day rate: $800
Date: January 8, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: New York, NY
Skills detailed: #Aurora #Spark (Apache Spark) #Data Modeling #PySpark #ETL (Extract, Transform, Load) #Storage #Data Migration #Cloud #DevSecOps #Jenkins #Terraform #Python #Data Quality #Delta Lake #Data Catalog #SQL (Structured Query Language) #Data Engineering #S3 (Amazon Simple Storage Service) #BI (Business Intelligence) #Data Pipeline #Data Architecture #Lambda (AWS Lambda) #Databricks #RDS (Amazon Relational Database Service) #Security #GitLab #AI (Artificial Intelligence) #Infrastructure as Code (IaC) #Migration #AWS (Amazon Web Services) #Version Control #Athena #Compliance #Scala
Role description
Responsibilities
Kforce is seeking a highly skilled Data Engineer to join an enterprise-level data team supporting one of our top financial services clients in New York City. This role focuses on modernizing data infrastructure, building scalable pipelines, and creating a centralized data platform that drives data-driven decision-making across multiple business functions.

Summary: The Data Engineer will be a key contributor in migrating and modernizing on-premises data into Databricks on AWS, leveraging Terraform, PySpark, and AWS services to deliver a robust and efficient data platform. The team needs engineers with experience in PySpark for ETL development, Spark SQL, and Databricks, as well as with AWS resources such as S3, Glue, Athena, and Aurora PostgreSQL.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines for ingestion, transformation, and delivery of enterprise data
• Support modernization initiatives, migrating on-premises data into Databricks to enable enterprise-scale analytics
• Implement Infrastructure as Code (IaC) for data environments using Terraform
• Collaborate with data architects and platform engineers to optimize AWS compute, database, and storage resources
• Develop and manage ETL/ELT workflows using Databricks (PySpark, Python, SQL); a brief sketch of this pattern follows this list
• Ensure data quality, security, and compliance across all environments
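For illustration only, here is a minimal sketch of the kind of PySpark ETL job described above: read raw CSV landed in S3, apply a simple transformation, and write a Delta table. It assumes a Databricks-style environment with Delta Lake available; the bucket paths, table layout, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Ingest: read raw CSV files landed in S3 (hypothetical bucket).
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/trades/")
)

# Transform: deduplicate, normalize types, and derive a column.
cleaned = (
    raw.dropDuplicates(["trade_id"])
    .withColumn("trade_date", F.to_date("trade_date"))
    .withColumn("notional_usd", F.col("quantity") * F.col("price"))
    .filter(F.col("trade_date").isNotNull())
)

# Deliver: write a Delta table partitioned by date.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("trade_date")
    .save("s3://example-curated-bucket/trades_delta/")
)
```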
Requirements
• Infrastructure as Code (IaC): Strong hands-on experience with Terraform
• Databricks Expertise: Building notebooks with PySpark and Python; solid knowledge of Databricks architecture and provisioning; familiarity with Medallion Architecture (Delta Lake, Lakehouse); a minimal medallion-layer sketch follows this list
• AWS Cloud Services: Practical experience with Glue, Lambda, Step Functions, RDS, and related AWS resources
• SQL Proficiency: Strong SQL skills for data cataloging, ingestion, and transformations
• ETL Development: Proven experience building complex ETL processes beyond simple lift-and-shift
• Data Migration & Modeling: Knowledge of on-premises-to-cloud migrations and modern data modeling practices
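As a rough illustration of the Medallion Architecture named above, the sketch below promotes records from a bronze (raw) Delta table to a silver (validated) table. The table names and data-quality rules are assumptions for illustration, not details from the posting.

```python
# Hedged sketch of a bronze -> silver promotion step in a medallion layout.
# Assumes Databricks with Delta Lake; table names and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-silver").getOrCreate()

# Bronze: raw ingested records, kept as-is for replay and auditing.
bronze = spark.read.table("bronze.customer_events")

# Silver: enforce basic data-quality rules before exposing to consumers.
silver = (
    bronze.filter(F.col("event_id").isNotNull())
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .saveAsTable("silver.customer_events")
)
```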
Preferred/Nice to have:
• CI/CD Pipelines: Experience with GitLab and/or Jenkins for version control, code commits, and automated pipelines
• Shift-Left Testing/DevSecOps: Familiarity with SAST/DAST tools and security-first development practices; a small unit-test sketch follows this list
• Exposure to analytics tools and BI platforms
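To illustrate the shift-left testing idea from the list above, here is a hedged pytest sketch that unit-tests a small PySpark transformation against a local SparkSession. The transform and the sample data are hypothetical; in practice, a suite like this would run in the GitLab or Jenkins pipeline alongside SAST/DAST scans.

```python
# Shift-left testing sketch: unit-testing a PySpark transform with pytest
# on a local SparkSession. Function and data are hypothetical examples.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_notional(df):
    """Hypothetical transform under test: derive notional as qty * price."""
    return df.withColumn("notional_usd", F.col("quantity") * F.col("price"))


@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()


def test_add_notional(spark):
    df = spark.createDataFrame(
        [("t1", 10, 2.5), ("t2", 4, 100.0)],
        ["trade_id", "quantity", "price"],
    )
    result = {r["trade_id"]: r["notional_usd"] for r in add_notional(df).collect()}
    assert result == {"t1": 25.0, "t2": 400.0}
```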
The pay range represents the lowest to highest compensation we reasonably and in good faith believe we would pay for this role at the time of posting. We may ultimately pay more or less than this range. Employee pay is based on factors such as relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using keywords such as STOP.






