

Biztec Global
Senior Data Scientist - GenAI
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Scientist - GenAI, remote (San Francisco, California), offering competitive pay. It requires 7 years of GenAI experience; expertise in Python, deep learning, and generative algorithms; and knowledge of AI ethics.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 14, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #AWS (Amazon Web Services) #Strategy #Monitoring #GIT #Scala #DevOps #Programming #Data Mining #Data Science #Data Pipeline #GCP (Google Cloud Platform) #Version Control #Deep Learning #Spark (Apache Spark) #ML (Machine Learning) #Python #S3 (Amazon Simple Storage Service) #Unsupervised Learning #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #Disaster Recovery #RNN (Recurrent Neural Networks) #Supervised Learning #Deployment
Role description
Role: Senior Data Scientist - GenAI
Location: Remote (San Francisco, California)
As a Data Scientist, you will apply strong expertise in AI, machine learning, data mining, and information retrieval to design, prototype, and build next-generation advanced analytics engines and services. You will collaborate with cross-functional teams and business partners to define the technical problem statement and the hypotheses to test. You will develop efficient and accurate analytical models that mimic business decisions, and you will incorporate those models into analytical data products and tools. You will also have the opportunity to shape current and future strategy by applying your analytical skills, ensuring business value, and communicating results.
Key Responsibilities
• Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate them with enterprise data infrastructure (a minimal illustrative sketch follows this list).
• Collaborate with cross-functional teams to integrate solutions into operational processes and systems supporting various functions.
• Stay up to date with industry advancements in AI and apply modern technologies and methodologies to our systems.
• Design, build, and maintain scalable and robust real-time data streaming pipelines using technologies such as GCP, Vertex AI, Amazon S3, AWS Bedrock, Spark Streaming, or similar.
• Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics.
• Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
• Implement best practices in reliability engineering, including redundancy, fault tolerance, and disaster recovery strategies.
• Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
• Mentor junior team members and engage in communities of practice to deliver high-quality data and AI solutions while promoting best practices, standards, and the adoption of reusable patterns.
• Apply AI solutions to insurance-specific data use cases and challenges.
• Partner with architects and stakeholders to influence and implement the vision of the AI and data pipelines while safeguarding the integrity and scalability of the environment.
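For illustration only, here is a minimal, self-contained sketch of the retrieval step in a RAG flow, assuming a toy in-memory corpus and a bag-of-words similarity in place of a real embedding model, vector store, and hosted LLM endpoint; none of these specifics come from the posting.

```python
# Minimal RAG retrieval sketch (illustrative only; corpus, embed helper, and prompt are hypothetical).
from collections import Counter
import math

# Hypothetical in-memory "enterprise" documents; a real system would query a vector store.
DOCUMENTS = [
    "Claims are processed within 10 business days of submission.",
    "Policyholders can update coverage limits once per renewal period.",
    "Catastrophe models estimate expected losses from hurricane events.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; stands in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt; the generation call itself is left abstract."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How long does claim processing take?"))
```

In practice, the prompt returned by `build_prompt` would be passed to whichever hosted model the team standardizes on (for example Bedrock or Vertex AI, both named in the responsibilities above).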
Requirements
• 7 years of experience working as a GenAI Data Scientist.
• Experience with Python, including functional programming, dependency and virtual-environment management, and version control with Git.
• Experience with sequential algorithms (e.g., LSTM, RNN, transformer).
• Experience with Bedrock, JumpStart, and Hugging Face (see the sketch after this list).
• Experience evaluating the ethical implications of AI and controlling for them (e.g., red-teaming).
• Expertise in supervised and unsupervised learning, along with experience in deep learning and transfer learning.
• Experience with generative algorithms (e.g., GAN, VAE) as well as pre-trained models (e.g., LLaMA, SAM).
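As a hedged illustration of working with pre-trained transformer models, the sketch below uses the Hugging Face `transformers` pipeline API; the `distilgpt2` checkpoint and the generation parameters are assumptions chosen for brevity, not requirements stated in the posting.

```python
# Illustrative use of a pre-trained transformer via Hugging Face (model choice is an assumption).
from transformers import pipeline

# Load a small text-generation pipeline; swap in the model your team actually uses.
generator = pipeline("text-generation", model="distilgpt2")

# Generate a short continuation; the parameters shown are typical defaults, not prescribed by the role.
outputs = generator(
    "Generative AI in insurance can help",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```

Running this downloads the checkpoint on first use; the same pipeline pattern extends to other pre-trained models referenced in the requirements.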