SynapOne

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist (12 months, remote) focusing on Generative AI/NLP or Fraud Detection & Time Series Analysis. Requires an MS or PhD, 3+ years of ML/DL experience, proficiency in Python, AWS, and NLP tools, and strong SQL skills.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 11, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Washington, DC
🧠 - Skills detailed
#PySpark #Tableau #NLG (Natural Language Generation) #Statistics #AWS (Amazon Web Services) #Elasticsearch #OpenSearch #Knowledge Graph #Visualization #Deep Learning #Jenkins #AWS SageMaker #SciPy #Computer Science #Docker #RDF (Resource Description Framework) #PyTorch #Keras #Kubernetes #AI (Artificial Intelligence) #NLTK (Natural Language Toolkit) #Anomaly Detection #BERT #GitLab #Python #HBase #SageMaker #TensorFlow #Jupyter #Scala #Spark (Apache Spark) #NLP (Natural Language Processing) #Programming #Data Pipeline #SQL (Structured Query Language) #Leadership #SpaCy #Mathematics #Databases #Forecasting #NumPy #Time Series #GIT #Langchain #Data Science #ML (Machine Learning)
Role description
Position: Data Scientist
Location: Remote (Washington, DC HQ – hybrid option available)
Duration: 12 months with possible extension
Interview Process: 2 rounds (direct manager contact)
Background Check: Yes

Overview
We are seeking hands-on Data Scientists to support next-generation AI initiatives. Two positions focus on Generative AI/NLP, and one focuses on Fraud Detection & Time Series Analysis. We are looking for PhD-level (preferred) candidates who are passionate about applied AI, NLP, and advanced modeling: hands-on builders rather than candidates seeking leadership roles.

Key Responsibilities
• Design and implement machine learning, deep learning, and AI models for real-world problems.
• Develop and fine-tune Generative AI and NLP applications using LLMs (GPT-4, Claude, Llama, Mistral, etc.).
• Apply RAG, LoRA, PEFT, and LangChain for retrieval augmentation and fine-tuning.
• Work with vector databases, knowledge graphs, and graph-based AI architectures.
• Handle structured and unstructured data using PySpark, AWS SageMaker, and related tools.
• Build and maintain CI/CD pipelines (Git, Jenkins, GitLab).
• Collaborate with cross-functional teams to translate ideas into scalable production AI systems.

Minimum Qualifications
• Education: MS in Computer Science, Statistics, Mathematics, or a related field (PhD highly preferred).
• 3+ years of experience building and deploying ML/DL models.
• Proficiency with:
  • Python (NumPy, SciPy, PySpark, Scikit-learn)
  • AWS SageMaker, Jupyter Notebooks
  • NLP tools: SpaCy, NLTK, BERT, RoBERTa, OpenAI APIs
  • Deep learning frameworks: TensorFlow, PyTorch, Keras
• Experience in Generative AI, NLP/NLG, and LLM fine-tuning.
• Strong SQL and data pipeline development skills.
• Familiarity with data visualization tools such as Tableau, Kibana, or QuickSight.

Preferred Qualifications
• Experience with LLM agents, agentic programming, and Human-in-the-Loop (HITL) systems.
• Background in fraud detection, anomaly detection, or time series forecasting.
• Experience with Docker, Kubernetes, Elasticsearch/OpenSearch.
• Exposure to GraphRAG, Chain-of-Thought (CoT), and Knowledge Graphs (OWL, RDF, SPARQL).

Ideal Candidate
• PhD or advanced MS with applied or published AI/ML research.
• Hands-on, self-starter mindset with strong problem-solving and experimentation skills.
• Passion for innovation in Generative AI, NLP, and predictive analytics.