

GCP Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer on a contract basis, paying $50.11 - $65.53 per hour for 40 hours per week. Key skills include Python, GCP services, data modeling, and experience with Scala. A degree in a related field is required.
Country: United States
Currency: $ USD
Day rate: 520
Date discovered: September 30, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Mountain View, CA 94043
Skills detailed: #ML Ops (Machine Learning Operations) #Data Analysis #Jupyter #Code Reviews #Monitoring #Batch #Data Processing #Pandas #GCP (Google Cloud Platform) #Computer Science #Data Pipeline #Airflow #TensorFlow #GitHub #BigQuery #Storage #Deployment #Scala #API (Application Programming Interface) #Data Engineering #Apache Spark #Data Management #Python #ETL (Extract, Transform, Load) #Mathematics #Logging #Spark (Apache Spark) #Version Control #ML (Machine Learning) #Microservices #Dataflow #NumPy #Libraries #Data Science #Git #Java #Data Modeling #Cloud
Role description
What you'll do:
Develop and enhance Python frameworks and libraries to support data processing, quality, lineage, governance, analysis, and machine learning operations.
Design, build, and maintain scalable and efficient data pipelines on GCP.
Implement robust monitoring, logging, and alerting systems to ensure the reliability and stability of data infrastructure.
Build scalable batch pipelines leveraging BigQuery, Dataflow, and the Airflow/Composer scheduler/executor framework on Google Cloud Platform (see the illustrative sketch after this list).
Build data pipelines leveraging Scala, Pub/Sub, Akka, and Dataflow on Google Cloud Platform.
Design our data models for optimal storage and retrieval and to support machine learning modeling, using technologies such as Bigtable and Vertex AI Feature Store.
Contribute to shared Data Engineering tooling & standards to improve the productivity and quality of output for Data Engineers across the company.
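Purely as an illustration of the batch-pipeline work described above, here is a minimal sketch of an Airflow/Composer DAG that schedules a daily BigQuery job. The DAG name, project, dataset, and table are hypothetical placeholders and are not taken from this posting.

```python
# Minimal sketch of a daily BigQuery batch job scheduled via Airflow/Composer.
# All names (DAG id, project, dataset, tables) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_events_rollup",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": """
                    SELECT event_date, COUNT(*) AS event_count
                    FROM `my-project.analytics.events`  -- hypothetical source table
                    GROUP BY event_date
                """,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "analytics",
                    "tableId": "daily_event_counts",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
        location="US",
    )
```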
Minimum Basic Requirements:
Python Expertise: Write and maintain Python frameworks and libraries to support data processing and integration tasks.
Code Management: Use Git and GitHub for source control, code reviews, and version management.
GCP Proficiency: Extensive experience working with GCP services (e.g., BigQuery, Cloud Dataflow, Pub/Sub, Cloud Storage).
Python Mastery: Proficient in Python with experience in writing, maintaining, and optimizing data processing frameworks and libraries.
Software Engineering: Strong understanding of software engineering best practices, including version control (Git), collaborative development (GitHub), code reviews, and CI/CD.
Data Management: Deep knowledge of data modeling, ETL/ELT, and data warehousing concepts.
Problem-Solving: Excellent problem-solving skills with the ability to tackle complex data engineering challenges.
Communication: Strong communication skills, including the ability to explain complex technical details to non-technical stakeholders.
Data Science Stack: Proficiency in data analysis and familiarity with tools such as Jupyter Notebook, pandas, NumPy, and other Python data analysis libraries (a minimal example follows this requirements list).
Frameworks/Tools: Familiarity with machine learning and data processing tools and frameworks such as TensorFlow, Apache Spark, and scikit-learn.
Bachelor's or master's degree in Computer Science, Engineering, Computer Information Systems, Mathematics, Physics, or a related field, or completion of a software development training program.
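As a purely illustrative sketch of the Python data-processing and data-quality work referenced above (pandas and NumPy), the snippet below runs a few basic checks against a hypothetical events file; the file path and column names are assumptions, not details from the posting.

```python
# Minimal data-quality sketch using pandas/NumPy; the file path and columns
# (event_id, event_date, amount) are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("events.csv", parse_dates=["event_date"])  # hypothetical input

# Basic checks of the kind a shared data-quality framework might standardize.
null_counts = df[["event_id", "amount"]].isna().sum()
duplicate_keys = int(df["event_id"].duplicated().sum())
negative_amounts = int((df["amount"].to_numpy() < 0).sum())

print(f"Nulls per column:\n{null_counts}")
print(f"Duplicate event_ids: {duplicate_keys}")
print(f"Negative amounts: {negative_amounts}")
```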
Preferred Qualifications:
Experience in Scala, Java, and/or another functional language. We code primarily in Scala, so you'll be excited either to ramp up on it or to continue working with it.
Experience in microservices architecture, messaging patterns, and deployment models
Experience in API design and building robust, extensible client/server contracts
Job Type: Contract
Pay: $50.11 - $65.53 per hour
Expected hours: 40 per week
Work Location: In person