

Sr. Data Engineer (W2 Contract)
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Donato Technologies Inc, is seeking the following. Apply via Dice today!
Job Title: Sr. Data Engineer
Job Location: New York City, NY
Job Duration: 12+ Months
Experience: 10+ Years
Need candidates within 35 miles of NYC, NY
Job Description:
Required Skills:
• Proficiency in data engineering programming languages (preferably Python, alternatively Scala or Java)
• Proficiency in at least one cluster computing framework (preferably Spark, alternatively Flink or Storm)
• Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
• Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)
• Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and streaming), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
• Strong organizational, problem-solving, and critical thinking skills; Strong documentation skills
Preferred skills:
• Experience using AWS Bedrock APIs
• Knowledge of Generative AI concepts (such as RAG, Vector embeddings, Model fine-tuning, Agentic AI)
• Experience in IaC (preferably Terraform, alternatively AWS CloudFormation)