

Kairos Technologies
W2 Only - Sr. Data Platform & Visualization Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a long-term contract for a W2 Sr Data Platform & Visualization Engineer in Los Altos, CA, paying $65.00 - $68.00 per hour. Requires 3-5 years of experience in Python, SQL, AWS, and data engineering, with strong collaboration skills.
Country: United States
Currency: $ USD
Day rate: 544
Date: February 5, 2026
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Los Altos, CA 94022
Skills detailed
#S3 (Amazon Simple Storage Service) #Storage #Libraries #Databases #EC2 #Metadata #AWS S3 (Amazon Simple Storage Service) #Data Engineering #React #AWS (Amazon Web Services) #ML (Machine Learning) #Visualization #Automation #Data Lineage #FastAPI #Python #MySQL #Cloud #SQL (Structured Query Language) #Datasets #Lambda (AWS Lambda) #Flask #Plotly
Role description
Greetings,
Role: Data Platform & Visualization Engineer
Location: 4440 El Camino Real, Los Altos, CA 94022
Duration: Long Term
Python-focused data platform engineer with experience building reliable ingestion pipelines, SQL-backed metrics systems, backend APIs, and web-based dashboards for ML and experimentation workflows in AWS environments.
The company collects a large amount of data on a monthly trip, and potentially from simulations, but currently handles little of it.
The main driver is that the monthly trip now collects a large amount of vehicle data.
Previously this was mostly just numeric vehicle data; they are starting to add substantial camera and LIDAR data, so the scale of the data is growing quickly.
Looking for someone junior to help with the existing ingestor system that ingests the
data and uploads it to S3.
Bandwidth is very limited (100 Mbps) because they go to a remote location, and
they are trying to upload 500 GB of data per night.
They want some sort of orchestration logic for sequencing and prioritizing certain
steps.
Once the data is in S3, they are looking for someone with front-end experience to build tooling to navigate, filter, and create metadata.
This will require some back-end experience as well to handle the data.
Current Process:
It's not very smooth, but they make it work.
If they cannot finish uploading the data on the spot, they have to wait hours and
then come back to the office, where there is a fast connection, to upload it.
Hoping to streamline and optimize the process.
Clarification:
There is a queuing system where a large backlog is stored locally and
must be drained gradually to cloud storage.
That system needs to be refined and maintained.
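As a rough illustration of the orchestration logic described above (sequencing and prioritizing uploads inside a limited overnight bandwidth budget), here is a minimal stdlib sketch. The priority tiers, file names, and budget figure are illustrative assumptions, not details from the posting:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class UploadTask:
    priority: int                    # lower number = upload sooner
    size_bytes: int                  # ties broken by size, smallest first
    path: str = field(compare=False)

def plan_uploads(tasks, budget_bytes):
    """Return the upload order that fits inside tonight's bandwidth budget.

    Small, high-priority artifacts (e.g. vehicle metrics) drain first;
    bulky camera/LIDAR captures fill whatever budget remains, and
    anything that does not fit is deferred to the next window.
    """
    heap = list(tasks)
    heapq.heapify(heap)
    plan, used = [], 0
    while heap:
        task = heapq.heappop(heap)
        if used + task.size_bytes > budget_bytes:
            continue  # defer oversized items to the next window
        plan.append(task.path)
        used += task.size_bytes
    return plan
```

In practice the budget would be derived from the link rate times the overnight window, and the actual transfer step would hand each path to an S3 uploader; this sketch only shows the sequencing decision.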
Data Sources
Data is primarily from actual vehicles.
The system is deployed in California.
Data Transfer Process
Currently, data transfer involves manually taking out a hard drive, pressing buttons, and
uploading.
TRI wants more sophisticated Lambda functions to be triggered once the data is present.
Location
The HQ is in Los Altos.
Data is collected approximately three hours away, near Willows, in Northern California.
The industrial system and the internet bottleneck are located there.
Role Summary
The speaker wants a data engineer to optimize the workflow of getting data from
vehicles/simulations into AWS.
The engineer should leverage AWS tooling for further processing, including metadata
and lambda functions.
The role involves pre and post-processing data and tagging it.
The speaker wants a dashboard to manage data, as they currently use Notepad.
This is described as digital asset management.
The role is more back-end heavy, but with some light front-end development for
usability.
SQL expectations are high.
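The Lambda-on-arrival pattern mentioned above (trigger processing once data lands in S3, derive metadata, tag it) could look roughly like the following. The record fields and the file-type rule are assumptions for illustration; the event shape follows the standard S3 event notification format:

```python
from datetime import datetime, timezone

def handler(event, context=None):
    """Sketch of an S3-triggered Lambda: for each newly arrived object,
    build a metadata record suitable for downstream tagging/indexing."""
    records = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        key = s3["object"]["key"]
        records.append({
            "bucket": s3["bucket"]["name"],
            "key": key,
            "size_bytes": s3["object"].get("size", 0),
            # Illustrative classification rule; the real pipeline would
            # have a richer mapping from file type to sensor kind.
            "kind": "lidar" if key.endswith(".bin") else "other",
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    # In the real pipeline these records would be written to the
    # metadata store; here we just return them.
    return records
```

Wiring this to an S3 `ObjectCreated` event notification is what makes the processing fire automatically as soon as the overnight upload delivers each file.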
Data Schema
The team has some schema defined for each of the datasets they are imagining.
They currently use Google Sheets, but it is a "little bit janky when it comes to like sorting
and retrieving the results."
Uploading is also a little janky.
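The Sheets pain point (sorting and retrieving) maps naturally onto a small relational store, which is presumably where the high SQL expectations come from. A hypothetical sketch using sqlite; table and column names are assumptions, not the team's actual schema:

```python
import sqlite3

# In-memory stand-in for the metadata database behind the dashboard.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataset_runs (
        run_id     TEXT PRIMARY KEY,
        collected  TEXT NOT NULL,   -- ISO date of the trip
        sensor     TEXT NOT NULL,   -- e.g. 'lidar', 'camera', 'telemetry'
        size_gb    REAL NOT NULL,
        s3_key     TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO dataset_runs VALUES (?, ?, ?, ?, ?)",
    [
        ("r1", "2026-01-10", "lidar",     120.0, "s3://bucket/r1"),
        ("r2", "2026-01-10", "camera",     80.0, "s3://bucket/r2"),
        ("r3", "2026-02-05", "telemetry",   0.5, "s3://bucket/r3"),
    ],
)

def largest_runs(conn, sensor):
    """The sorting-and-retrieving that is janky in Sheets: one query."""
    cur = conn.execute(
        "SELECT run_id, size_gb FROM dataset_runs "
        "WHERE sensor = ? ORDER BY size_gb DESC",
        (sensor,),
    )
    return cur.fetchall()
```

A FastAPI/Flask backend would expose queries like this to the React dashboard for navigating and filtering the datasets.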
Strong collaboration and communication, ownership mindset, analytical problem-solving, adaptability, attention to detail, proactive initiative, and effective time management.
Ability to multitask effectively and efficiently.
Technology They'll Be Working With:
Python, SQL, AWS (S3, EC2), FastAPI/Flask, React, relational databases (Postgres/MySQL), data visualization libraries (Plotly/Vega), custom orchestration pipelines, and LLM-based annotation tooling.
Years of Experience:
3-5 years of professional experience in software or data engineering roles would be reasonable.
Strong candidates with 2+ years but highly relevant projects (pipelines, dashboards, AWS, Python) could also be considered.
Nice-to-haves: ML experiment tracking, LLM-based automation, data lineage (typically seen in candidates with 2-5+ years of hands-on experience in data engineering or full-stack data platforms).
Thanks & Regards,
K Hemanth Kumar | Sr IT Technical Recruiter | Kairos Technologies Inc
E: hemanth@kairostech.com
Job Type: Contract
Pay: $65.00 - $68.00 per hour
Work Location: Hybrid remote in Los Altos, CA 94022





