

Senior Spark Developer (Python, AWS, SQL) - In-Person Interview, Local Candidates Only
Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Spark Developer contract position in New York City, requiring 8+ years of data engineering experience. Key skills include Python, SQL, and AWS. In-person interviews are mandatory, and local candidates are required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 5, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: New York, United States
Skills detailed: #Data Pipeline #Python #Terraform #Agile #Kafka (Apache Kafka) #Vault #Spark (Apache Spark) #Snowflake #Apache Spark #Data Engineering #Data Extraction #PySpark #Lambda (AWS Lambda) #SQL (Structured Query Language) #Programming #Cloud #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #Scala #Batch #SQL Queries #Data Vault #Kubernetes #Automation #Redshift #AWS (Amazon Web Services) #Data Transformations #Docker #Data Modeling #Data Quality #Databricks
Role description
K&K Global Talent Solutions Inc is an international recruiting agency that has been providing technical resources in the USA since 1993.
This position is with one of our clients in the USA, who is actively hiring to expand their teams.
Role: Senior Spark Developer (Python, AWS, SQL)
Employment type: Contract
Technology: Python, AWS, SQL
Location: New York City, New York, US
An in-person interview is mandatory; local candidates only.
Responsibilities
Design, develop, and maintain scalable data pipelines using Apache Spark (batch and/or streaming); see the sketch after this list.
Build, optimize, and manage ETL/ELT workflows integrating multiple data sources.
Develop data solutions in Python for data transformations, automation, and orchestration.
Leverage AWS services (S3, EMR, Glue, Lambda, Redshift, Kinesis, etc.) to implement cloud-native data platforms.
Write efficient SQL queries for data extraction, transformation, and reporting.
Ensure data quality, lineage, and governance across pipelines.
Collaborate with data engineers, architects, and analysts to deliver end-to-end data solutions.
Troubleshoot performance bottlenecks and optimize Spark jobs for speed and cost-efficiency.
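By way of illustration, here is a minimal sketch of the kind of batch pipeline the first responsibility describes. It is not from the client: the bucket names, paths, and event schema (event_id, user_id, event_ts, amount) are hypothetical, and it assumes an environment such as EMR where Spark can read s3:// paths directly.

```python
# Minimal, hypothetical batch ETL: raw S3 events -> cleaned -> daily rollup.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Read raw CSV events from a (hypothetical) landing bucket.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-landing-bucket/events/2025-09-05/")
)

# Basic data-quality gate: drop rows missing keys, deduplicate on event id.
clean = (
    raw.dropna(subset=["event_id", "user_id"])
       .dropDuplicates(["event_id"])
)

# Aggregate to one row per user per day.
daily = (
    clean.withColumn("event_date", F.to_date("event_ts"))
         .groupBy("user_id", "event_date")
         .agg(
             F.count("*").alias("event_count"),
             F.sum("amount").alias("total_amount"),
         )
)

# Write partitioned Parquet to a curated zone; Redshift Spectrum or a
# COPY job could pick it up from there.
(
    daily.write
         .mode("overwrite")
         .partitionBy("event_date")
         .parquet("s3://example-curated-bucket/daily_user_events/")
)

spark.stop()
```

The same shape extends to streaming by swapping spark.read for spark.readStream, as in the sketch after the nice-to-have list below.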
Must-have skills
• 8+ years of experience in data engineering or backend development.
• Hands-on experience with Apache Spark (PySpark) in large-scale data environments.
• Strong proficiency in Python programming.
• Expertise in SQL, including advanced queries, performance tuning, and optimization (see the example after this list).
• Experience working with AWS services such as S3, Glue, EMR, Lambda, Redshift, or Kinesis.
• Understanding of data warehousing concepts and ETL best practices.
• Strong problem-solving skills and the ability to work in an agile, collaborative environment.
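As a small illustration of the SQL expertise called for above, here is a hypothetical "latest order per user" query using a window function, run through Spark SQL so the whole example stays in Python; the orders table, its columns, and the toy rows are invented.

```python
# Hypothetical window-function query via Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# Toy data standing in for an orders table.
spark.createDataFrame(
    [
        (1, "alice", "2025-09-01", 120.0),
        (2, "alice", "2025-09-03", 80.0),
        (3, "bob",   "2025-09-02", 200.0),
    ],
    ["order_id", "user_id", "order_date", "amount"],
).createOrReplaceTempView("orders")

# ROW_NUMBER per user, newest first; keep only the latest order per user.
latest = spark.sql("""
    SELECT order_id, user_id, order_date, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id ORDER BY order_date DESC
               ) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1
""")
latest.show()
```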
Nice-to-have skills
• Experience with Databricks or similar Spark-based platforms.
• Knowledge of streaming frameworks such as Kafka or Flink (a streaming sketch follows this list).
• Familiarity with CI/CD pipelines, Docker, Kubernetes, and Terraform.
• Exposure to data modeling (star schema, snowflake, data vault).
• Experience in financial services / capital markets.
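For the streaming nice-to-have, a hypothetical Structured Streaming sketch consuming from Kafka. The broker address, topic, and checkpoint path are placeholders, and the spark-sql-kafka-0-10 connector is assumed to be on the classpath (e.g. via --packages).

```python
# Hypothetical streaming variant: count Kafka events per one-minute window.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Subscribe to a Kafka topic (placeholder broker and topic names).
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string and count per minute.
counts = (
    stream.selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

# Console sink for demonstration; a real job would sink to S3, Redshift,
# or another topic.
query = (
    counts.writeStream
          .outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/events-stream-checkpoint")
          .start()
)
query.awaitTermination()
```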