

Largeton Group
Data Analyst 3
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Analyst 3 with a contract length of "Unknown" and a pay rate of $680 per day. Key skills include Apache Spark, Python, SQL, and Airflow, plus experience migrating data from Elasticsearch to Snowflake. Hybrid work location.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
November 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Sunnyvale, CA
-
🧠 - Skills detailed
#Apache Spark #Spark (Apache Spark) #Kubernetes #SQL (Structured Query Language) #Data Engineering #Data Analysis #Data Processing #Storage #Spark SQL #Cloud #PySpark #Python #Deployment #Scala #Snowflake #S3 (Amazon Simple Storage Service) #Data Pipeline #Airflow #Elasticsearch #Migration
Role description
Job Summary (Data Analyst 3 – Intuitive Surgical)
• Migrate existing data pipelines from Elasticsearch (Python-based) to Snowflake and S3 architecture.
• Process and handle large volumes of log data generated from field devices (e.g., hospital/surgical robots).
• Reverse engineer and redesign existing Python data workflows for compatibility with the new data stack.
• Implement and productionize a privacy proof-of-concept previously developed by interns.
• Utilize tools and technologies such as Apache Spark (with Iceberg), Python (PySpark), SQL, Airflow, and Kubernetes for data processing, orchestration, and deployment (see the PySpark sketch after this list).
• Work with cloud storage solutions, specifically S3 buckets.
• Orchestrate and automate data pipelines using Airflow (see the Airflow DAG sketch after this list).
• Collaborate in a hybrid work setting (3 days onsite, 2 days remote).
• Ensure smooth migration and reliability of data pipelines from legacy to new architecture.
• Focus on producing scalable, maintainable, and privacy-compliant data engineering solutions.
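To give a feel for the stack described above, here is a minimal PySpark sketch of landing exported device-log data into an Iceberg table on S3. The catalog name, bucket paths, table name, and column names are illustrative assumptions, not details from the role description, and the Iceberg Spark runtime package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

# Sketch only: Spark session configured for an Iceberg catalog backed by S3.
# "lake", the warehouse path, and the table name below are placeholders.
spark = (
    SparkSession.builder
    .appName("device-log-migration")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

# Assumes log documents have already been exported from Elasticsearch to S3 as JSON.
logs = spark.read.json("s3a://example-bucket/raw/device_logs/")

# Light normalization before landing the data: hypothetical event_id / event_ts columns.
cleaned = logs.dropDuplicates(["event_id"]).filter("event_ts IS NOT NULL")

# Write to an Iceberg table that downstream tools (Spark SQL, Snowflake) can query.
cleaned.writeTo("lake.logs.device_events").createOrReplace()
```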
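Likewise, the orchestration piece might look roughly like the Airflow DAG below. This is a minimal sketch assuming Airflow 2.4+; the DAG id, task callables, and schedule are placeholders, and a real migration pipeline could just as well use Spark-submit or Kubernetes-based operators instead of PythonOperator.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies: each step would call into the actual migration code.
def export_from_elasticsearch(**_):
    ...  # pull raw log documents from Elasticsearch and stage them on S3

def transform_with_spark(**_):
    ...  # run the PySpark/Iceberg job sketched above

def load_into_snowflake(**_):
    ...  # register or copy the curated data into Snowflake

with DAG(
    dag_id="device_log_migration",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    export = PythonOperator(task_id="export_es",
                            python_callable=export_from_elasticsearch)
    transform = PythonOperator(task_id="transform_spark",
                               python_callable=transform_with_spark)
    load = PythonOperator(task_id="load_snowflake",
                          python_callable=load_into_snowflake)

    # Simple linear dependency: export, then transform, then load.
    export >> transform >> load
```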






