

Big Data Developer
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 9, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Virginia, United States
Skills detailed: #Data Ingestion #SQL (Structured Query Language) #Hadoop #Data Processing #Data Management #AWS (Amazon Web Services) #Spark (Apache Spark) #ML (Machine Learning) #Scala #Java #Python #Big Data #Databricks #Cloud #NiFi (Apache NiFi) #Data Quality #Data Pipeline #Airflow #Monitoring #Data Engineering
Role description
Role: Big Data Engineer
Location: Arlington, Virginia (Day 1 Onsite)
Contract
Job Description:
Must Have: Big Data skills, Hadoop, Scala, Spark
- Responsible for both the "what" and the "how"
- Should be able to collaborate with the Software Engineering team on design as well as data contracts
o Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
o Experience building robust and efficient data pipelines end-to-end with a strong focus on data quality (see the sketch after this list)
o High proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
o Experience in building and deploying production-level data-driven applications and data processing workflows/pipelines, and/or
o Implementing machine learning systems at scale in Java, Scala, or Python and delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presentation
- Cloud knowledge (Databricks or the AWS ecosystem) is a plus.
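For context on the pipeline bullet above, here is a minimal Scala/Spark sketch of the kind of end-to-end batch pipeline with a simple data-quality gate that the role describes. The paths, column names, and the 1% drop threshold are illustrative assumptions, not details from this posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object EventPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline-sketch")
      .getOrCreate()

    // Ingest: raw events previously landed by an upstream tool (e.g. NiFi or Sqoop).
    // The path and columns are hypothetical examples.
    val raw = spark.read.parquet("hdfs:///data/raw/events")

    // Data-quality gate: require key columns and fail the job if too many rows are dropped.
    val clean = raw.filter(F.col("event_id").isNotNull && F.col("event_ts").isNotNull)
    val total = raw.count()
    val dropped = total - clean.count()
    require(dropped <= total * 0.01, s"Data-quality check failed: $dropped of $total rows dropped")

    // Publish: a date-partitioned, query-ready table for downstream Hive/SQL consumers.
    clean
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/events")

    spark.stop()
  }
}
```

In practice a job like this would typically be scheduled by an Airflow DAG and monitored on drop counts and output row volumes, which is the kind of end-to-end ownership the bullets above point to.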