Synergy Interactive

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist with 8+ years of experience, specializing in YOLO-based computer vision and big data analytics. The contract length is unspecified, and the day rate is $640 USD. Remote work is allowed. Key skills include Python, Docker, Kubernetes, and GCP.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Chicago, IL
🧠 - Skills detailed
#Datasets #Docker #Cloud #Databases #Deployment #BI (Business Intelligence) #Object Detection #Data Pipeline #Predictive Modeling #Data Mining #Data Science #Kubernetes #FastAPI #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Big Data #Python #Computer Science #Hadoop #ML (Machine Learning) #Scala #TypeScript #Data Engineering #MongoDB #NoSQL #GCP (Google Cloud Platform) #Kafka (Apache Kafka)
Role description
About the Role

We are seeking a Specialist Data Scientist with strong expertise in computer vision and big data analytics to design, optimize, and deploy scalable, production-grade data science solutions. This role blends hands-on machine learning, particularly YOLO-based object detection, with big data engineering and cloud-native deployment. You will work closely with cross-functional teams to turn complex datasets into actionable, real-time insights that drive measurable business impact.

Key Responsibilities
• Lead the design and implementation of scalable big data solutions and advanced analytics frameworks.
• Develop, fine-tune, and retrain YOLO-based computer vision models for object detection use cases.
• Optimize models for edge and production deployment using techniques such as quantization and distillation.
• Build and deploy machine learning models integrated into production systems for real-time insights.
• Design and optimize data models, pipelines, and algorithms to solve complex business challenges.
• Leverage big data technologies (Hadoop, Spark, NoSQL) to process and analyze large-scale datasets (an illustrative ETL sketch follows this description).
• Containerize FastAPI-based services using Docker and publish results to message brokers (see the service sketch after this description).
• Operate within Kubernetes-based production environments on Google Cloud Platform.
• Collaborate with stakeholders to translate business requirements into data-driven strategies.
• Continuously improve data science and analytics processes for efficiency, scalability, and impact.

Required Skills & Experience
• 8+ years of experience in data science, machine learning, or big data engineering.
• Strong experience with YOLO-based computer vision models and modern ML frameworks.
• Proficiency in Python; experience with TypeScript is a plus.
• Hands-on experience with Docker for containerization and service deployment.
• Working knowledge of Kubernetes and Google Cloud Platform in production environments.
• Experience with big data technologies such as Hadoop, Spark, Hive, and NoSQL databases (e.g., Cassandra, MongoDB).
• Strong background in statistical analysis, data mining, and predictive modeling.
• Experience building scalable architectures and optimizing ETL data pipelines.
• Strong problem-solving skills and the ability to make data-driven decisions in complex environments.

Top Skills Needed
• YOLO-based models & machine learning frameworks
• Docker & containerized deployments
• Kubernetes & Google Cloud Platform

Set Yourself Apart With
• A Master’s or PhD in Data Science, Computer Science, Engineering, or a related field.
• Experience deploying enterprise-scale big data or ML solutions (finance, healthcare, or retail a plus).
• Experience with real-time data and stream processing (Kafka, Flink).
• Familiarity with BI tools and presenting insights to non-technical stakeholders.
• Experience deploying data science solutions using Docker and Kubernetes.
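To make the service pattern in the responsibilities concrete, here is a minimal sketch, not part of the posting: a FastAPI endpoint that runs YOLO inference and publishes detections to Kafka. It assumes the ultralytics YOLO package and the kafka-python client; the checkpoint name, broker address, and topic are placeholders.

```python
# Illustrative sketch only: FastAPI + YOLO inference + Kafka publish.
# The checkpoint, broker address, and topic name are assumptions.
import json
import shutil
import tempfile

from fastapi import FastAPI, File, UploadFile
from kafka import KafkaProducer   # kafka-python client
from ultralytics import YOLO

app = FastAPI()
model = YOLO("yolov8n.pt")        # assumed pretrained checkpoint
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",                         # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.post("/detect")
async def detect(image: UploadFile = File(...)):
    # Persist the upload so the model can read it from disk.
    with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:
        shutil.copyfileobj(image.file, tmp)
        path = tmp.name
    result = model(path)[0]       # single-image object detection
    detections = [
        {
            "label": model.names[int(box.cls)],
            "confidence": float(box.conf),
            "box_xyxy": [float(v) for v in box.xyxy[0]],
        }
        for box in result.boxes
    ]
    producer.send("detections", detections)   # assumed topic name
    return {"detections": detections}
```

Containerizing a service like this is a standard Dockerfile running uvicorn; the sketch only shows the shape of the inference-to-broker flow.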
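Likewise, a minimal PySpark sketch of the kind of ETL step the description mentions. The paths, column names, and aggregation are invented for illustration and do not come from the posting.

```python
# Illustrative sketch only: a small Spark ETL step (extract, transform, load).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw detection events (path is an assumption).
events = spark.read.json("gs://raw-bucket/events/")

# Transform: drop low-confidence rows, aggregate per device per day.
daily = (
    events
    .where(F.col("confidence") >= 0.5)
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("device_id", "day")
    .agg(F.count("*").alias("detections"))
)

# Load: write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("day").parquet("gs://curated-bucket/daily/")

spark.stop()
```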