Sr. Data Engineer (W2 Contract)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer (W2 Contract) in NYC, NY, requiring 8+ years of experience. Key skills include proficiency in Python, Spark, AWS, and data modeling techniques. Local candidates from NY & NJ preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Python #Data Lake #Terraform #Infrastructure as Code (IaC) #Storage #Java #AI (Artificial Intelligence) #Programming #Oracle #Batch #Vault #Spark (Apache Spark) #Agile #JSON (JavaScript Object Notation) #Data Storage #NoSQL #GIT #Databricks #Documentation #MongoDB #Airflow #Data Vault #AWS (Amazon Web Services) #Data Engineering #Hadoop #Jenkins #Scala #Data Lakehouse #Cloud
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Donato Technologies Inc, is seeking the following. Apply via Dice today!

Hi, Greetings! My name is Nikhil Gabriel and I am a Staffing Specialist at Donato Technologies, Inc. I am reaching out to you about an exciting job opportunity with one of our clients.

Job Title: Sr Data Engineer
Job Location: NYC, NY (Onsite)
Experience: 8+ years
Need someone local to NY & NJ

Job Description:
• Proficiency in data engineering programming languages (preferably Python, alternatively Scala or Java)
• Proficiency in at least one cluster computing framework (preferably Spark, alternatively Flink or Storm)
• Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
• Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)
• Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and streaming), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
• Strong organizational, problem-solving, and critical thinking skills; strong documentation skills

Preferred skills:
• Experience using AWS Bedrock APIs
• Knowledge of Generative AI concepts (such as RAG, vector embeddings, model fine-tuning, agentic AI)
• Experience in IaC (preferably Terraform, alternatively AWS CloudFormation)
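
The core stack the description names (Python, Spark, JSON and Parquet, batch processing) can be illustrated with a minimal PySpark sketch. The bucket paths, column names, and schema below are illustrative assumptions, not details from the posting:

```python
# Minimal PySpark batch job: read raw JSON events, write curated Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-batch").getOrCreate()

# JSON is one of the serialization formats named in the posting.
events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

# Derive a partition column from an assumed event timestamp field.
daily = events.withColumn("event_date", F.to_date("event_ts"))

# Parquet is one of the big-data storage formats named in the posting;
# partitioning by date suits the batch processing pattern listed.
(daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```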
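The posting also asks for a scheduling/orchestration tool, preferably Airflow. A minimal DAG that runs a batch job like the one above once a day might look as follows; the DAG id, schedule, and submit command are assumptions for illustration:

```python
# Minimal Airflow 2.x DAG sketch. Names and commands are hypothetical;
# a SparkSubmitOperator or an EMR/Databricks operator would typically
# replace the BashOperator in a real deployment.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="events_batch_daily",      # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_batch = BashOperator(
        task_id="spark_submit_events",
        bash_command="spark-submit events_batch.py",  # hypothetical script
    )
```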
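Among the preferred skills, "AWS Bedrock APIs" in Python generally means boto3's bedrock-runtime client. A minimal invocation sketch, assuming an Anthropic model and its Bedrock message schema; the model id, region, and prompt are placeholders:

```python
# Minimal Amazon Bedrock call via boto3's bedrock-runtime client.
# Model id, region, and request body are illustrative assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize this dataset schema."}],
    }),
)

# The response body is a streaming payload; parse the JSON it contains.
payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```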