

Optomi
Big Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with a contract length of "unknown" and a day rate of $520. It requires expertise in Hadoop, Spark, Kubernetes, and AWS services, along with hands-on experience in data pipeline management and AI development tools.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
April 28, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Tysons Corner, VA
🧠 - Skills detailed
#Data Processing #Docker #AWS (Amazon Web Services) #Data Engineering #Data Pipeline #Data Integration #Kubernetes #S3 (Amazon Simple Storage Service) #GitHub #Athena #Lambda (AWS Lambda) #Trino #ChatGPT #SQL (Structured Query Language) #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #Hadoop #AI (Artificial Intelligence) #Amazon EMR (Amazon Elastic MapReduce) #Spark (Apache Spark) #Big Data #Scala
Role description
Open to candidates located in Virginia or Maryland!
Overview:
• We are seeking a highly skilled and experienced Big Data Engineer to design, develop, and optimize large-scale data processing systems. You will work closely with cross-functional teams to architect data pipelines, implement data integration solutions, and ensure the performance, scalability, and reliability of big data platforms.
Job Must Haves:
• Experience with Big Data technologies such as Hadoop, Spark, Hive, and Trino
• Strong experience with Kubernetes architecture
• Hands-on experience with Amazon EMR on EKS
• Deep understanding of Spark's core architecture
• Experience with AWS services like S3, EMR, EMR on EKS, Glue, Lambda, Athena
• Proficiency with SQL window functions, multi-table joins, and aggregations (see the sketch after this list)
• Hands-on experience with AI development tools (GitHub Copilot, Q Developer, ChatGPT, Claude, etc.)
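As a rough illustration of the Spark and SQL must-haves above, here is a minimal PySpark sketch, assuming a working Spark environment (e.g., EMR); the table and column names are invented for the example, not taken from this posting:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("window-demo").getOrCreate()

    # Hypothetical order events; in practice these would be read from S3,
    # e.g. spark.read.parquet("s3://<bucket>/orders/")
    orders = spark.createDataFrame(
        [("c1", "2026-04-01", 120.0), ("c1", "2026-04-03", 80.0),
         ("c2", "2026-04-02", 200.0)],
        ["customer_id", "order_date", "amount"],
    )

    # Window function: rank each customer's orders by recency
    w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
    latest = orders.withColumn("rn", F.row_number().over(w)).filter("rn = 1")

    # Aggregation joined back to each customer's latest order
    totals = orders.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))
    latest.join(totals, "customer_id").show()

The same pattern expressed in Spark SQL, ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...), covers the window-function bullet directly.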
Job Nice to Haves:
• Experience with managing production data pipelines/ETL systems
• Experience with CI/CD pipelines
• Experience with Infrastructure as Code (a sketch follows this list)
• Experience with Docker and container image optimization
• Knowledge of service mesh technologies
• AWS certifications
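As a hedged sketch of the Infrastructure-as-Code and AWS items above, here is a minimal AWS CDK v2 app in Python (assumes aws-cdk-lib and constructs are installed; the stack and bucket names are invented for the example):

    from aws_cdk import App, Stack, Duration, RemovalPolicy
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DataLakeStack(Stack):
        """Hypothetical stack for a raw-data landing zone."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Versioned bucket whose objects tier down after 30 days
            s3.Bucket(
                self, "RawZoneBucket",
                versioned=True,
                removal_policy=RemovalPolicy.RETAIN,
                lifecycle_rules=[s3.LifecycleRule(
                    transitions=[s3.Transition(
                        storage_class=s3.StorageClass.INTELLIGENT_TIERING,
                        transition_after=Duration.days(30),
                    )],
                )],
            )

    app = App()
    DataLakeStack(app, "DataLakeStack")
    app.synth()

Running cdk deploy from a CI/CD pipeline applies the same definition repeatably, which is the point of the IaC and CI/CD bullets.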






