HireTalent - Diversity Staffing & Recruiting Firm

Senior Big Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Big Data Engineer with 8+ years of experience, focusing on AWS, Hadoop, and AI/ML integration. Contract duration is 6-12 months, with a pay rate of "TBD". Work location is hybrid, based in Rockville, MD; Tysons, VA; Jersey City, NJ; or New York, NY.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Rockville, MD
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Data Engineering #Python #AWS Glue #S3 (Amazon Simple Storage Service) #Athena #HDFS (Hadoop Distributed File System) #Kubernetes #AWS S3 (Amazon Simple Storage Service) #Hadoop #Scrum #AWS (Amazon Web Services) #Presto #Metadata #Data Access #Spark (Apache Spark) #Datasets #ML (Machine Learning) #Scala #Shell Scripting #Scripting #Data Pipeline #Trino #Data Lake #Cloud #Jenkins #Grafana #SQL (Structured Query Language) #Data Catalog #Big Data
Role description
Title: Senior Big Data Engineer
Location: Rockville, MD; Tysons, VA; Jersey City, NJ; or New York, NY (Hybrid)
Duration: 6-12 Months Contract

Project Description
Sr. Big Data Engineer – AWS, Hadoop, Athena, Glue (Catalog), Lake Formation, Trino, EKS, AI, CI/CD & MCP Server

We are looking for a Senior Big Data Engineer who thinks beyond tickets and task execution. This role is for someone who questions the "why," modernizes tech stacks, optimizes performance, and drives outcomes rather than simply executing tasks.

What You'll Do
• Design and build scalable data lakes and pipelines on AWS using cloud-native, automated solutions.
• Enable fast, federated analytics with Amazon Athena and Trino, including performance tuning for large-scale queries.
• Manage metadata, schemas, and data discovery with the AWS Glue Data Catalog.
• Implement fine-grained data access and governance using AWS Lake Formation, KMS encryption, and SSL.
• Build and operate data services on EKS (Kubernetes).
• Work with the Hadoop ecosystem (Spark, Hive, HDFS), applying partitioning, bucketing, and columnar formats (Parquet, ORC).
• Troubleshoot and resolve complex big data issues across pipelines, clusters, and queries.
• Design, implement, and maintain CI/CD pipelines using Jenkins or similar tools.
• Monitor pipelines and clusters using CloudWatch and Grafana.
• Prepare high-quality datasets for AI/ML use cases.
• Build, configure, and operate an MCP server for AI/ML integration.
• Collaborate in Scrum teams; proactively identify gaps and propose out-of-the-box, scalable solutions.

What We're Looking For
• 8+ years of experience in Big Data / Data Engineering.
• Strong hands-on experience with AWS (S3, Glue Data Catalog, Lake Formation, Athena, EMR, EKS).
• Proven experience with Trino (or Presto) and query performance optimization.
• Working knowledge of Kubernetes / EKS for data workloads.
• Strong SQL, Python, and shell scripting skills.
• Experience with CI/CD pipelines and Jenkins.
• Experience building and configuring an MCP server for AI/ML integration.
• Ownership mindset: a problem solver, not a task executor.

Good to Have
• AI/ML data pipeline exposure
• Cloud-native data modernization experience