

Prospance Inc
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Pipeline Engineer IV in Mountain View, CA, with a contract duration of more than 6 months, offered on a W2 contract or full-time basis. Key skills required include Apache Spark, Kafka, and REST APIs, along with a minimum of 3 years' experience in Big Data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
720
-
🗓️ - Date
October 11, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Mountain View, CA
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Data Modeling #BI (Business Intelligence) #Big Data #AWS (Amazon Web Services) #API (Application Programming Interface) #Apache Spark #Cloud #Unix #Datasets #Data Quality #MLflow #Data Engineering #Computer Science #Azure #Predictive Modeling #PyTorch #Documentation #AI (Artificial Intelligence) #Security #Python #Automation #Scripting #FastAPI #AWS EMR (Amazon Elastic MapReduce) #TensorFlow #Clustering #Django #Scala #Spark (Apache Spark) #Programming #Code Reviews #Classification #YARN (Yet Another Resource Negotiator) #Data Pipeline #Kafka (Apache Kafka) #REST API #Apache Iceberg #Linux #Regression #Scrum #REST (Representational State Transfer) #ETL (Extract, Transform, Load) #Flask #Airflow #Data Science #ML (Machine Learning)
Role description
Data Pipeline Engineer IV
📍 Location: Mountain View, CA (on-site; local candidates only) | W2 contract or full-time only
🕓 Contract Role through our engineering services firm
About the Role
⚡ Now Hiring: Data Pipeline Engineer IV – Cloud Data Services ✨
We’re collaborating with a global technology leader to hire a Data Pipeline Engineer IV — a key player in building and optimizing large-scale, cloud-based data ecosystems that power intelligent applications and services worldwide.
In this role, you’ll architect, design, and implement robust, high-performance data pipelines that ingest, process, and deliver massive datasets across distributed systems. You’ll work hands-on with modern data technologies such as Apache Spark, Kafka, Airflow, and cloud platforms (AWS, GCP, or Azure) to ensure data reliability, scalability, and security across the enterprise.
You’ll collaborate closely with data scientists, platform engineers, and product teams to enable real-time analytics, predictive modeling, and machine learning at scale. Your work will directly influence data-driven decision-making across product, business, and engineering functions.
This position is ideal for data engineering experts passionate about big data, streaming architectures, and automation — professionals who thrive in solving complex data challenges and delivering seamless, high-quality data experiences that empower innovation.
General Description:
The Cloud Services group is looking for world-class engineers to join our technology innovation group, focused on the rapid development of cloud-based, end-to-end mobile applications and services.
This is a great opportunity for a talented Data Engineer to step up to the next level and build secure cloud services for users of the world’s best-selling mobile devices. The Samsung Knox Cloud Team is focused on productizing research projects and supporting their corresponding cloud-based service platforms and infrastructure.
Position Summary:
We are looking for world-class server software engineers with Big Data infrastructure and data warehousing experience to join our technology innovation group, focused on the rapid development of AI-driven, cloud-based, end-to-end mobile applications and services.
Responsibilities Include:
· Implement, maintain, and evolve the big data platform and infrastructure
· Design, implement, and maintain backend REST API services and ETL processes for predictive data modeling, machine learning, personalization, recommendation, and business intelligence systems
· Perform extensive research and analysis to make optimal architecture and design decisions
· Write large amounts of code, perform code reviews, and write unit tests and documentation
· Interface with other groups, including Product Management, QA, and Operations
· Create quick proof-of-concept prototypes
· Work in a scrum team with ML and Cloud Engineers
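As a miniature illustration of the ETL responsibilities above, here is a sketch of the extract-transform-load pattern using only the Python standard library. This is illustration only, not the team's actual stack (which centers on Spark and Kafka); the data, field names, and functions here are all hypothetical.

```python
import csv
import io
import json

# Hypothetical raw input, standing in for records arriving from a source
# system (e.g. a Kafka topic or an object store).
RAW_CSV = """device_id,event,duration_ms
d1,app_open,120
d2,app_open,
d1,app_close,95
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows with missing duration and cast types."""
    out = []
    for row in rows:
        if row["duration_ms"]:
            out.append({
                "device_id": row["device_id"],
                "event": row["event"],
                "duration_ms": int(row["duration_ms"]),
            })
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize to JSON lines, as if writing to a sink."""
    return "\n".join(json.dumps(r) for r in rows)

result = load(transform(extract(RAW_CSV)))
print(result)
```

At production scale the same three stages map onto a distributed engine: Spark reads and transforms partitioned datasets in parallel, and an orchestrator such as Airflow schedules and retries each stage.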
Requirements:
· Minimum of 3 years of hands-on experience with Big Data technologies such as Apache Iceberg, Spark, Spark ML, and Kafka
· Strong skills in statistical analysis: correlation analysis, regression analysis, and univariate and multivariate analysis
· Strong skills in algorithms, data structures, UNIX/Linux, scripting, machine learning (especially classification and unsupervised clustering), data modeling, data warehousing, and networking
· Hands-on experience building REST APIs
· Team player with strong communication skills and the ability to mentor ML and Cloud engineers in data engineering
· Desire to learn quickly and pick up the latest technologies
· MS in Computer Science or equivalent experience
Preferred:
· Programming Language: Python
· 7+ years of experience with Big Data technologies: AWS EMR, Iceberg, Spark, Kafka, and YARN
· Machine Learning: PyTorch, TensorFlow, Spark ML
· REST API: Flask, FastAPI, or Django
· Data Quality tools such as Monte Carlo, Great Expectations, or Databand
· Experience with teams using MLflow
📩 Apply now and be part of shaping the future of data-driven cloud intelligence.
#DataEngineering #BigData #CloudData #ETL #DataPipelines #Spark #Kafka #AWS #Airflow #HiringNow #MachineLearning #DataInfrastructure