Big Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Hadoop, Scala, Spark, Python, and SQL. Experience in building data pipelines and deploying data-driven applications is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Pipeline #Hadoop #NiFi (Apache NiFi) #Databricks #Monitoring #Data Processing #SQL (Structured Query Language) #Airflow #Python #Big Data #Scala #Data Management #Java #AWS (Amazon Web Services) #Cloud #Data Quality #Data Engineering #Spark (Apache Spark) #Data Ingestion #ML (Machine Learning)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Valueprosite, is seeking the following. Apply via Dice today!

Must-Have Skills: Big Data, Hadoop, Scala, Spark
• Responsible for both the "what" and the "how"
• Able to collaborate with the Software Engineering team on design as well as data contracts

Job Description:
• Ability to move easily between business, data management, and technical teams; able to quickly intuit the business use case and identify technical solutions to enable it
• Experience building robust and efficient end-to-end data pipelines with a strong focus on data quality
• High proficiency with Python or Scala, Spark, Hadoop platform tools (Hive, Airflow, NiFi, Sqoop), and SQL to build Big Data products and platforms
• Experience building and deploying production-level data-driven applications and data processing workflows/pipelines, and/or implementing machine learning systems at scale in Java, Scala, or Python, delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presentation
• Cloud knowledge (Databricks or the AWS ecosystem) is a plus but not required