

Systems Integration Solutions
Python Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Python Developer in Cupertino, CA, on a long-term contract. Requires 10+ years of Python experience, strong backend system skills, and a BS in Computer Science. Familiarity with GCP, AWS, and data processing technologies preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cupertino, CA
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Storage #REST API #Data Processing #Programming #Data Storage #Data Access #ML (Machine Learning) #Maven #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #Computer Science #Redis #Data Pipeline #Scala #REST (Representational State Transfer) #Docker #Datasets #Python #AI (Artificial Intelligence) #Data Science #Jenkins
Role description
Sr Backend Software Engineer – Data & ML Innovation
📍 Location: Cupertino, CA
💼 W-2 Only | No C2C | No Sponsorship
⏳ Long-term Contract
Summary
Join a cutting-edge AI/ML organization focused on improving Foundation Models through large-scale data innovation. This team leverages data from a variety of sources including licensed, vendor, crawl, and internal datasets to optimize model training efficiency and performance.
We are seeking a Senior Backend Software Engineer with deep Python expertise and experience building scalable backend systems, pipelines, and data services supporting large-scale AI/LLM workflows.
Responsibilities
• Develop large-scale training data pipelines for AI/ML systems
• Convert raw data into formats compatible with training jobs across GCP and AWS environments
• Support large-scale inference workflows using fine-tuned LLMs
• Build scalable backend services and REST APIs for data access and inspection tools
• Process and filter massive, often messy datasets efficiently
• Leverage open-source and internal tooling to improve model training and inference performance
• Collaborate cross-functionally with engineering teams to drive backend innovation and scalability
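The first two responsibilities above, building training-data pipelines and converting raw data into training-ready formats, typically involve filtering messy records before they reach a training job. As a minimal sketch (the function name `clean_records` and the JSONL record shape are illustrative, not from the posting):

```python
import json

def clean_records(raw_lines):
    """Parse raw JSONL lines, dropping malformed or empty records.

    Yields dicts with a non-empty "text" field, mirroring the kind of
    filter step a training-data pipeline might apply before formatting
    examples for a training job.
    """
    for line in raw_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed rows rather than failing the batch
        text = record.get("text", "").strip()
        if text:
            yield {"text": text, "source": record.get("source", "unknown")}

raw = [
    '{"text": "hello world", "source": "crawl"}',
    'not json at all',
    '{"text": "   "}',
    '{"text": "second example"}',
]
cleaned = list(clean_records(raw))
```

The generator style keeps memory flat even on very large inputs, which matters when the datasets are, as the posting puts it, massive and often messy.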
Minimum Qualifications
• BS in Computer Science, Electrical Engineering, or equivalent technical degree (firm requirement; a data science degree is not an acceptable substitute)
• 10+ years of professional Python development experience
• Strong understanding of:
• Concurrency & parallelism
• Functional programming
• Decorators
• Data structures & algorithms
• Object-oriented programming
• Experience building scalable applications and distributed systems
• Proficiency with REST APIs, Redis, vector databases, or other large-scale data storage technologies
• Experience designing, testing, reviewing, and deploying production-scale software systems
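Several of the fundamentals listed above (decorators, concurrency, functional style) can be shown together in one small sketch; the names `timed` and `tokenize` are illustrative only, not part of the role:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

def timed(fn):
    """Decorator that records each call's wall-clock duration on the wrapper."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_duration = time.perf_counter() - start
        return result
    return wrapper

@timed
def tokenize(text):
    """Toy stand-in for a lightweight preprocessing step."""
    return text.split()

# Fan the work out across a thread pool, collecting results in input order.
texts = ["a b c", "d e", "f"]
with ThreadPoolExecutor(max_workers=3) as pool:
    token_lists = list(pool.map(tokenize, texts))
```

`pool.map` preserves input order regardless of which thread finishes first, a detail that matters whenever downstream code pairs outputs back up with their inputs.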
Preferred Qualifications
• Experience with streaming/data processing technologies such as Kafka
• Familiarity with tools such as Docker, Jenkins, Maven, and Gradle
• Exposure to GCP and AWS environments
• Experience supporting AI/ML or LLM-related infrastructure and workflows
• Strong architecture and system design experience
• Excellent communication and cross-functional collaboration skills
• Self-motivated, curious, and comfortable working in ambiguous, fast-paced environments