
Dylan Bouchard

Principal Researcher @ CVS Health
AI Safety | Uncertainty Quantification | Hallucination Detection | Bias & Fairness

I lead CVS Health's first AI research program, focusing on Responsible AI and AI Safety. My work bridges cutting-edge research with practical implementation through open-source toolkits.

🔓 Open-Source Projects

  • UQLM - State-of-the-art uncertainty quantification for LLM hallucination detection
  • LangFair - Context-aware LLM bias and fairness assessment framework
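The hallucination-detection idea behind UQLM's black-box scorers can be illustrated with a minimal, self-contained sketch: sample a model's answer several times and treat low agreement across samples as a signal of possible hallucination. This is a generic consistency heuristic for illustration only, not UQLM's actual API; the function name and exact-match normalization are assumptions.

```python
from collections import Counter

def consistency_score(sampled_answers: list[str]) -> float:
    """Fraction of sampled answers that agree with the modal answer.

    Low agreement across resampled generations is a common black-box
    signal of possible hallucination. (Illustrative heuristic; real
    scorers typically use semantic similarity, not exact match.)
    """
    if not sampled_answers:
        raise ValueError("need at least one sampled answer")
    normalized = [a.strip().lower() for a in sampled_answers]
    top_count = Counter(normalized).most_common(1)[0][1]
    return top_count / len(normalized)

# 3 of 4 samples agree on "paris" -> score 0.75
print(consistency_score(["Paris", "paris", "Paris", "Lyon"]))
```

A production scorer would compare answers by semantic similarity (e.g. embedding distance or NLI) rather than normalized exact match, since paraphrases of the same answer should count as agreement.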

πŸ“ Select Research

📫 Connect
