

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. It requires 5+ years of experience building scalable systems, proficiency in Java, Python, Scala, and API development, and familiarity with cloud technologies such as Azure or GCP.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 28, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Sunnyvale, CA
Skills detailed:
#Presto #Scala #AI (Artificial Intelligence) #Java #Trino #Debugging #Kubernetes #Airflow #Azure #Data Lake #API (Application Programming Interface) #Migration #Spark (Apache Spark) #Looker #Kafka (Apache Kafka) #Cloud #Data Engineering #Public Cloud #GCP (Google Cloud Platform) #Base #Data Migration #Hadoop #Python
Role description
• 5+ years of relevant experience building highly resilient, highly scalable systems.
• Experience with multiple technology stacks: Java, Python, Scala.
• Hands-on experience in API development, GQL, and Node.js.
• Hands-on experience with Hadoop, Hive, Spark using Scala, Vertex AI, Presto/Trino, Kubernetes, cloud platforms, Automic, Airflow, and data lake concepts.
• Solid knowledge of complex software design, distributed system design, design patterns, data structures, and algorithms.
• Skilled in data modeling and data migration protocols.
• Familiarity with public cloud technologies such as Azure or Google Cloud Platform.
• Knowledge of Kafka Connect, Druid, BigQuery, and Looker is an added advantage.
• Excellent technical debugging and production support skills.
• Extensive experience in the design, development, and delivery of software products with a large user base.
• Track record in an architect role delivering large-scale, data-backed services and applications.
• Ability to balance conflicting interests in a complex and fast-paced environment.