Principal Researcher @ CVS Health
AI Safety | Uncertainty Quantification | Hallucination Detection | Bias & Fairness
I lead CVS Health's first AI research program, focusing on Responsible AI and AI Safety. My work bridges cutting-edge research and practical implementation through open-source toolkits.
Open-source toolkits:
- UQLM - State-of-the-art uncertainty quantification for LLM hallucination detection
- LangFair - Context-aware LLM bias and fairness assessment framework

Selected publications:
- A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers (under review at TMLR)
- UQLM: A Python Package for Uncertainty Quantification in LLMs (under review at JMLR)
- An Actionable Framework for Assessing Bias and Fairness in LLM Use Cases (under review at ACM TIST)
- LangFair: A Python Package for Assessing Bias and Fairness in LLM Use Cases (Journal of Open Source Software)