

RSC Solutions
Senior GCP Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior GCP Data Engineer on a 6-to-12-month contract, working in the EST time zone. Key skills include Python, SQL, GCP, Kafka, and CI/CD tools. Requires 5 years of data pipeline experience and healthcare domain knowledge.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 11, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Deployment #Automation #AI (Artificial Intelligence) #Data Engineering #REST API #REST (Representational State Transfer) #GraphQL #Langchain #ETL (Extract, Transform, Load) #Monitoring #Python #Batch #Data Ingestion #Java #Data Quality #Kubernetes #Observability #Storage #TensorFlow #AWS (Amazon Web Services) #Microservices #Data Access #Datasets #ML (Machine Learning) #Scala #Kafka (Apache Kafka) #Databases #Data Pipeline #NLP (Natural Language Processing) #Data Lake #Data Processing #Azure #Cloud #GitHub #Jenkins #Data Science #Transformers #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Dataflow #BigQuery #Argo
Role description
TITLE: Senior GCP Data Engineer
LOCATION: EST time zone
DURATION: 6-to-12-month contract with a probable extension
NO C2C OR 3RD-PARTY VENDORS!
JOB DESCRIPTION:
Responsibilities
• Develop robust data ingestion, transformation, and integration workflows using Python, SQL, and modern data engineering frameworks.
• Build and maintain batch and streaming data pipelines leveraging technologies such as Kafka (or similar pub/sub tools).
• Work with Google Cloud Platform (GCP) services, including Cloud Storage, Dataflow, Pub/Sub, BigQuery, Cloud Spanner, and Cloud Functions (see the pipeline sketch after this list).
• Develop and manage data APIs and interfaces (REST and GraphQL) to enable high-performance data access across microservices.
• Implement CI/CD automation for data pipelines using GitHub Actions, Argo CD, or equivalent tools.
• Collaborate with Data Scientists and MLOps teams to integrate ML/NLP models into data pipelines and production workflows.
• Build and operationalize NLP data pipelines for structured and unstructured data sources.
• Enable continuous learning and model-retraining workflows using Vertex AI, Kubeflow, or similar GCP-native tooling.
• Implement frameworks for observability and data quality, ensuring ML predictions, confidence scores, and fallback events are logged into data lakes or monitoring systems.
• Support distributed data systems and ensure reliability, performance, and scalability of data infrastructure.
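For illustration, a minimal sketch of the kind of streaming pipeline described above: Pub/Sub events landing in BigQuery via Apache Beam, the SDK behind Dataflow. The project, topic, table, and event fields are hypothetical, and the sketch assumes the destination table already exists.

```python
# Minimal sketch: stream JSON events from Pub/Sub into BigQuery with Apache Beam.
# All resource names (project, topic, table) and the event schema are assumptions.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message into a flat record matching the BigQuery schema."""
    event = json.loads(message.decode("utf-8"))
    return {"claim_id": event["claim_id"], "status": event["status"]}


options = PipelineOptions(streaming=True)  # runner, project, etc. come from CLI flags

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/rx-claims")
        | "ParseJSON" >> beam.Map(parse_event)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:claims.rx_events",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```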
Required Qualifications
• 5 years of experience building data pipelines or backend data workflows using Python, Java, or similar languages.
• 2 years of experience designing REST/GraphQL data services or integrating data APIs (see the sketch after this list).
• 2 years of experience with cloud platforms (GCP preferred; AWS or Azure acceptable).
• 2 years working with streaming platforms like Kafka or equivalent.
• 2 years of experience with databases (Postgres or similar relational systems).
• 2 years of experience with CI/CD tools (GitHub Actions, Jenkins, Argo CD, etc.).
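As a hedged illustration of the REST data-service qualification above, one plausible shape for such a service is FastAPI over Postgres with asyncpg. The table, columns, and connection string are invented for the example.

```python
# Illustrative sketch only: a small REST data API over Postgres.
# Table name, columns, and connection string are hypothetical.
import asyncpg
from fastapi import FastAPI, HTTPException

app = FastAPI()


@app.on_event("startup")
async def create_pool() -> None:
    # In a real deployment the DSN would come from config or a secret manager.
    app.state.pool = await asyncpg.create_pool(dsn="postgresql://localhost/claims")


@app.get("/claims/{claim_id}")
async def get_claim(claim_id: str) -> dict:
    row = await app.state.pool.fetchrow(
        "SELECT claim_id, status FROM rx_claims WHERE claim_id = $1", claim_id
    )
    if row is None:
        raise HTTPException(status_code=404, detail="claim not found")
    return dict(row)
```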
Preferred Qualifications
• Direct, hands-on experience with Google Cloud Platform, especially BigQuery, Dataflow, GKE, Composer, and Vertex AI.
• Knowledge of Kubernetes concepts and experience running data services or pipelines on GKE.
• Strong understanding of distributed systems, microservice patterns, and data-centric system design.
• Hands-on experience working with ML/AI model integration in production (e.g., Vertex AI Endpoints, TensorFlow Serving, ML REST APIs); see the sketch after this list.
• Experience handling structured and unstructured datasets, including healthcare data (Rx claims, clinical documents, NLP text).
• Familiarity with the end-to-end ML lifecycle: data ingestion, feature engineering, training, deployment, and real-time inference.
• Experience using Vertex AI, Kubeflow, or other ML orchestration platforms for model training and serving.
• Knowledge of GenAI pipelines, LLM prompt workflows, and agent orchestration frameworks (e.g., LangChain, transformers).
• Experience deploying Python-based ML/NLP services into microservice ecosystems using REST, gRPC, or sidecar architectures.
• Domain experience in healthcare, claim adjudication, or Rx data processing.
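To tie together the ML-integration and observability points above, here is a minimal sketch of calling a deployed Vertex AI endpoint from Python and logging the prediction with its confidence score, the kind of record that would feed a data lake or monitoring system. The project, region, endpoint ID, and response fields are placeholders; the real response shape depends on the deployed model.

```python
# Hedged sketch: call a Vertex AI endpoint and log prediction + confidence
# for monitoring. Resource names and the response fields are assumptions.
import logging

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-east4")
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-east4/endpoints/1234567890"
)

response = endpoint.predict(
    instances=[{"text": "Refill request for lisinopril 10 mg"}]
)
prediction = response.predictions[0]  # structure depends on the deployed model

# In production this record would be appended to BigQuery or a data lake table.
logging.info(
    "label=%s confidence=%s", prediction.get("label"), prediction.get("confidence")
)
```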