

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in NYC, NY, with a contract length of "unknown" and a pay rate of "$$$". Key skills include Python, Spark, AWS data lake services, and experience with NoSQL databases. Agile methodology experience is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 3, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
New York, NY
Skills detailed
#Spark (Apache Spark) #Oracle #Airflow #Java #Batch #AWS (Amazon Web Services) #Data Storage #Documentation #Programming #Cloud #JSON (JavaScript Object Notation) #Storage #Vault #Data Lakehouse #Data Lake #Infrastructure as Code (IaC) #AI (Artificial Intelligence) #Agile #GIT #Python #Terraform #Jenkins #Scala #Data Engineering #Data Vault #Databricks #MongoDB #NoSQL #Hadoop
Role description
Title: Sr Data Engineer
Location: NYC, NY
Job Description:
Required Skills:
• Proficiency in a data engineering programming language (preferably Python; alternatively Scala or Java)
• Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
• Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
• Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
• Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and streaming), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
• Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
Preferred skills:
• Experience using AWS Bedrock APIs
• Knowledge of Generative AI concepts (such as RAG, vector embeddings, model fine-tuning, agentic AI)
• Experience in IaC (preferably Terraform; alternatively AWS CloudFormation)