

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer position in NYC, NY, on a contract basis. It requires 12+ years of experience; proficiency in Python or Scala, Spark, and AWS data lake services; and strong organizational skills. An in-person interview is mandatory.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 18, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: New York City Metropolitan Area
Skills detailed: #Agile #Data Lakehouse #Scala #MongoDB #Cloud #Oracle #JSON (JavaScript Object Notation) #Data Vault #NoSQL #Vault #AI (Artificial Intelligence) #GIT #Data Lake #Storage #Spark (Apache Spark) #Airflow #Terraform #Data Engineering #Jenkins #Data Storage #AWS (Amazon Web Services) #Documentation #Java #Batch #Python #Hadoop #Databricks #Infrastructure as Code (IaC) #Programming
Role description
Job Title: Sr. Data Engineer
Location: NYC, NY
Job Type: Contract
In-person interview: Mandatory
Experience Level: 12+ Years
Job Description:
Required Skills:
• Proficiency in a data engineering programming language (preferably Python; alternatively Scala or Java)
• Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm); a PySpark sketch follows this list
• Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, DynamoDB, MongoDB, or similar)
• Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar); an Airflow sketch also follows this list
• Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and streaming), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
• Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
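To make the core stack concrete, here is a minimal, purely illustrative PySpark batch job in the posting's preferred language (Python): it reads raw JSON events, derives a partition date, and writes Parquet to a lake path. The bucket, paths, and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_batch").getOrCreate()

# Read raw JSON events from the lake's landing zone (hypothetical path).
events = spark.read.json("s3://example-bucket/raw/events/")

# Derive a daily partition column from an assumed event_ts timestamp field.
daily = events.withColumn("event_date", F.to_date("event_ts"))

# Write curated Parquet, partitioned by day, to a hypothetical curated zone.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```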
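And for the orchestration bullet, a minimal Airflow sketch: a single daily task that would submit a batch job like the one above. The DAG id, schedule, and callable are assumptions for illustration (a real deployment might use SparkSubmitOperator or a managed-service operator), and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_events_batch():
    # Placeholder: in practice this would submit the Spark job,
    # e.g. via SparkSubmitOperator or an EMR/Databricks operator.
    print("submitting events batch job")

with DAG(
    dag_id="daily_events_batch",   # hypothetical DAG id
    start_date=datetime(2025, 6, 1),
    schedule="@daily",             # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    PythonOperator(task_id="submit_batch", python_callable=submit_events_batch)
```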
Preferred Skills:
• Experience using AWS Bedrock APIs; a minimal invocation sketch follows this list
• Knowledge of generative AI concepts (such as RAG, vector embeddings, model fine-tuning, and agentic AI)
• Experience in IaC (preferably Terraform; alternatively AWS CloudFormation)
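For the Bedrock bullet above, a minimal boto3 sketch of invoking a model through the bedrock-runtime client. The region, model id, and request body are illustrative assumptions; the body schema varies by model family, so check the model's documented request format before reusing this.

```python
import json

import boto3

# "bedrock-runtime" is the client for model invocation (as opposed to
# "bedrock", which manages the service itself). Region is an assumption.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize this table schema."}],
    }),
)

# The response body is a streaming blob; parse it as JSON and inspect it.
print(json.loads(response["body"].read()))
```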