

Optomi
Big Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer on a hybrid contract, 3 days a week on site in Tysons Corner, VA, or Rockville, MD. Key skills include Hadoop, Spark, SQL, AWS, and AI tools. Experience with financial data is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
456
🗓️ - Date
February 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Tysons Corner, VA
🧠 - Skills detailed
#Spark (Apache Spark) #Programming #AWS (Amazon Web Services) #S3 (Amazon Simple Storage Service) #Hadoop #Big Data #Complex Queries #ETL (Extract, Transform, Load) #Data Processing #Scala #ChatGPT #Trino #Data Pipeline #SQL (Structured Query Language) #AI (Artificial Intelligence) #Python #Apache Spark #Data Engineering #Agile #GitHub #Cloud
Role description
Hybrid 3 days a week in either location: Tysons Corner, VA or Rockville, MD!
About the Position / Current Initiatives: The position involves designing, developing, and optimizing large-scale data processing systems. The role requires working on petabyte-scale data with Big Data technologies, troubleshooting resource limitations, and leveraging AI tools for prompt engineering and workflow improvements.
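For context on the "troubleshooting resource limitations" point, here is a minimal PySpark sketch of the session-level tuning such work usually involves. All values below are illustrative assumptions for the sketch, not settings from this posting:

    # Illustrative only: common Spark resource knobs for large-scale jobs.
    # The specific values are assumptions, not project settings.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("resource-tuning-sketch")
        # More executor memory plus explicit overhead eases OOM failures.
        .config("spark.executor.memory", "8g")
        .config("spark.executor.memoryOverhead", "2g")
        # Extra shuffle partitions spread petabyte-scale joins more evenly.
        .config("spark.sql.shuffle.partitions", "2000")
        # Adaptive execution coalesces partitions and mitigates skew at runtime.
        .config("spark.sql.adaptive.enabled", "true")
        .getOrCreate()
    )

In practice, tuning like this would be driven by the Spark UI and job metrics rather than fixed values.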
Job Must Haves:
• Experience with Big Data technologies such as Hadoop, Spark, Hive, and Trino
• Strong SQL skills, including complex queries, window functions, and multi-table joins (see the SQL sketch after this list)
• Cloud experience, specifically with AWS services like S3, Glue, and EMR
• Proficiency in AI tools (e.g., GitHub Copilot, ChatGPT) and prompt engineering
• Hands-on experience with Apache Spark development, internals, and tuning
• Programming expertise in Python or Scala
• Agile methodology and CI/CD experience; financial data experience preferred
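To make the SQL bullet concrete, here is a small Spark SQL sketch combining a window function with a multi-table join; the trades/accounts tables and their columns are hypothetical, chosen only to illustrate the skills named above:

    # Hypothetical tables and columns, purely to illustrate the SQL bullet.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    running_totals = spark.sql("""
        SELECT t.account_id,
               t.trade_date,
               SUM(t.amount) OVER (
                   PARTITION BY t.account_id   -- window function:
                   ORDER BY t.trade_date       -- per-account running total
               ) AS running_total
        FROM trades t
        JOIN accounts a                        -- multi-table join
          ON a.account_id = t.account_id
        WHERE a.status = 'active'
    """)
    running_totals.show()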
Job Nice to Haves:
• AWS certifications
• Experience managing production data pipelines/ETL systems
• Experience writing test cases (see the test sketch after this list)
• Knowledge of serverless technologies and EKS
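On the "writing test cases" bullet, a minimal pytest-style sketch for a pipeline transform; the function and fields are hypothetical:

    # Hypothetical transform + test, sketching what pipeline test cases look like.
    def dollars_to_cents(rows):
        """Toy ETL step: add an integer-cents field from a float USD amount."""
        return [{**r, "amount_cents": round(r["amount_usd"] * 100)} for r in rows]

    def test_dollars_to_cents_preserves_fields_and_rounds():
        out = dollars_to_cents([{"id": 1, "amount_usd": 12.34}])
        assert out[0]["amount_cents"] == 1234   # float dollars become exact cents
        assert out[0]["id"] == 1                # original fields are preserved

Run with pytest; real pipeline tests would also cover nulls, schema drift, and bad records.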





