Sr. Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with a 10+ year experience requirement, offering a competitive pay rate. Key skills include proficiency in Python, Spark, AWS, and data modeling techniques. Familiarity with Generative AI and IaC is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Infrastructure as Code (IaC) #Batch #Programming #Hadoop #MongoDB #Data Vault #Databricks #Jenkins #Data Storage #Agile #Storage #Vault #Data Lakehouse #Spark (Apache Spark) #HTTP & HTTPS (Hypertext Transfer Protocol & Hypertext Transfer Protocol Secure) #Python #AWS (Amazon Web Services) #Cloud #Terraform #Java #Oracle #JSON (JavaScript Object Notation) #HTML (Hypertext Markup Language) #Documentation #NoSQL #Data Lake #Scala #AI (Artificial Intelligence) #Airflow #Data Engineering #GIT #Data Processing
Role description

Job Description:

Experience: 10+ Years

Required Skills

   • Proficiency in data engineering programming languages (preferably Python, alternatively Scala or Java)

   • Proficiency in at least one cluster computing framework (preferably Spark, alternatively Flink or Storm)

   • Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, DynamoDB, MongoDB, or similar)

   • Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)

   • Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batch, and streaming), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git); a brief Python/Spark sketch illustrating several of these follows this list

   • Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
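
To make the core requirements concrete, here is a minimal sketch combining several of the skills listed above (Python, Spark, JSON serialization, Parquet storage, and batch processing). The bucket, paths, and field names are hypothetical illustrations, not details from the posting.

```python
# Minimal sketch of a batch Spark job in Python: reads JSON events,
# writes Parquet to a data lake path. Paths and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-batch").getOrCreate()

# Read raw JSON events (hypothetical S3 location)
events = spark.read.json("s3://example-bucket/raw/events/")

# Simple batch transformation: filter one event type and stamp a load date
daily = (
    events
    .filter(F.col("event_type") == "purchase")
    .withColumn("load_date", F.current_date())
)

# Persist in a columnar big-data storage format, partitioned for the lakehouse
daily.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-bucket/curated/purchases/"
)

spark.stop()
```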

Preferred Skills

   • Experience using AWS Bedrock APIs (a brief sketch follows this list)

   • Knowledge of Generative AI concepts (such as RAG, vector embeddings, model fine-tuning, agentic AI)

   • Experience with IaC (preferably Terraform, alternatively AWS CloudFormation)
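
For the Bedrock item above, here is a minimal sketch of invoking a model through the Bedrock runtime API with boto3. The region, model ID, and prompt are assumptions, and the exact request/response body varies by model family.

```python
# Minimal sketch of calling an AWS Bedrock API via boto3.
# Region, model ID, and prompt are assumptions; substitute a model
# enabled in your own account.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
    }),
)

# The response body is a stream; read and decode the JSON payload
print(json.loads(response["body"].read()))
```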

While others say it, we do it: we care. We have great people and we do great work. Just as importantly, we have great relationships with an impressive clientele. Over 1,000 talented, diverse, and career-minded professionals are carving out their role and experiencing a good mix of challenges and opportunities - and we're rooting for them along the way, every day. For more, click: https://www.mindteck.com/career/life-at-mindteck.html

Mindteck is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, sexual orientation, gender identity, age, status as a protected veteran, status as a qualified individual with a disability, or any other trait protected by law.