

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12+ month contract, 100% remote, with a pay rate of $65 - $85/hr. Key skills include Apache Spark, Ray, Conductor, Kubernetes, and cloud platforms (AWS, GCP, Azure). A Bachelor's degree in Computer Science is required.
Country: United States
Currency: $ USD
Day rate: $680
Date discovered: August 15, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #GCP (Google Cloud Platform) #ML (Machine Learning) #Apache Spark #Big Data #Redis #Kubernetes #Datasets #Monitoring #Data Pipeline #Spark (Apache Spark) #AWS (Amazon Web Services) #Computer Science #Distributed Computing #Cloud #PySpark #Scala #Data Processing #Data Framework #Data Engineering #Data Quality #Batch #Azure
Role description
• 100% Remote
• Our client, an industry leader in technology and media, has an excellent opportunity for a Data Engineer on a 12+ month contract. The Data Engineer will design and build scalable data pipelines and distributed systems that process petabyte-scale datasets powering intelligent products and services. The ideal candidate has deep technical knowledge of big data frameworks, cloud infrastructure, caching systems, and orchestration tools such as Ray and Conductor.
We can work with both W2 and corp-to-corp consultants. For our W2 consultants, we offer a great benefits package that includes medical, dental, and vision benefits, a 401(k) with company matching, and life insurance.
Rate: $65 - $85/hr (W2)
Responsibilities:
· Design, implement, and maintain scalable and reliable data pipelines using Spark, Ray, and other distributed frameworks (a minimal sketch follows this list).
· Build orchestration workflows and manage dependencies using Conductor and Kubernetes.
· Develop and optimize data caching strategies and solutions to enhance performance and reduce latency across large-scale systems.
· Architect and deploy data infrastructure and solutions in cloud environments including AWS, GCP, and Azure.
· Ensure robust data quality, governance, and monitoring mechanisms are in place.
· Contribute to architectural decisions to improve scalability, maintainability, and performance of data platforms.
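As a rough illustration of the pipeline work described above, here is a minimal PySpark batch-job sketch. The bucket paths, column names, and aggregation are hypothetical placeholders, not details of the client's actual platform.

```python
# Minimal PySpark batch-pipeline sketch. The S3 paths and schema
# (user_id, event_type, ts) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-daily-rollup").getOrCreate()

# Read raw events from a (hypothetical) landing zone.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Roll events up to one row per day, user, and event type.
rollup = (
    events
    .withColumn("event_date", F.to_date("ts"))
    .groupBy("event_date", "user_id", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned output for downstream consumers, overwriting prior runs.
(rollup.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events_daily/"))
```

The same rollup could just as well be expressed in Ray or another distributed framework; Spark is used here only because the posting names PySpark for batch work of this shape.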
Requirements:
· Bachelor's degree in Computer Science or equivalent
· Ray: Experience using Ray for parallel and distributed computing, particularly in machine learning and data processing workloads (see the sketch after this list).
· Conductor: Proficiency in designing and managing asynchronous workflow orchestration using Netflix Conductor or similar tools.
· Apache Spark: Strong experience with batch and stream processing using PySpark or Scala.
· Cloud Platforms: Deep understanding of cloud-native services across AWS, Google Cloud Platform (GCP), and Azure, including data services, compute, orchestration, and serverless.
· Kubernetes: Hands-on experience in container orchestration, cluster management, and deploying data workloads on K8s.
· Caching Solutions: Demonstrated expertise in building and optimizing caching layers using technologies like Redis, Memcached, Apache Ignite, or similar.
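As a small, hedged example of the Ray requirement above, the sketch below fans a trivial per-shard transform out across Ray workers. The transform() function and in-memory shards are hypothetical stand-ins for real workloads.

```python
# Minimal Ray sketch: run a per-shard transform in parallel.
# transform() and the in-memory shards are hypothetical stand-ins.
import ray

ray.init()  # starts a local Ray instance if no cluster address is configured

@ray.remote
def transform(shard):
    # Placeholder per-record work (e.g., parsing or feature extraction).
    return [record.upper() for record in shard]

shards = [["a", "b"], ["c", "d"], ["e", "f"]]

# Launch one remote task per shard, then block until all results are back.
futures = [transform.remote(shard) for shard in shards]
print(ray.get(futures))  # [['A', 'B'], ['C', 'D'], ['E', 'F']]
```

On Kubernetes, code like this typically runs unchanged once ray.init() is pointed at a cluster (for example, one managed by KubeRay), which is where the Kubernetes requirement above connects to the Ray requirement.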
Please be advised: If anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address, and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact InfoSec@eliassen.com.
Skills, experience, and other compensable factors will be considered when determining pay rate. The pay range provided in this posting reflects a W2 hourly rate; other employment options may be available that may result in pay outside of the provided range.
W2 employees of Eliassen Group who are regularly scheduled to work 30 or more hours per week are eligible for the following benefits: medical (choice of 3 plans), dental, vision, pre-tax accounts, other voluntary benefits including life and disability insurance, 401(k) with match, and sick time if required by law in the worked-in state/locality.
JOB ID: JN -082025-103240