# explainability-ai

Here are 9 public repositories matching this topic...

VISION is a framework for robust and interpretable code vulnerability detection using counterfactual data augmentation. It leverages graph neural networks (GNNs), LLM-generated counterfactuals, and graph-based explainability to mitigate spurious correlations and improve generalization on real-world vulnerabilities (CWE-20).

  • Updated Oct 19, 2025
  • Jupyter Notebook
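
The entry above combines a GNN over code graphs with LLM-generated, label-flipping counterfactuals. Below is a minimal, hypothetical sketch of that general recipe using PyTorch Geometric: a small GCN classifies a code graph as vulnerable or not, and a toy `counterfactual_pair` helper stands in for an LLM-generated counterfactual. This is not the VISION implementation; the model name, feature dimensions, and helper are assumptions for illustration only.

```python
# Illustrative sketch only; not the VISION code.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class VulnGCN(torch.nn.Module):
    """Two-layer GCN that pools node embeddings into a graph-level
    vulnerable / not-vulnerable prediction."""

    def __init__(self, num_node_features: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)  # binary label

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # one embedding per graph
        return self.head(x)


def counterfactual_pair(graph: Data, flipped_label: int) -> Data:
    # Toy stand-in for an LLM-generated counterfactual: same structure,
    # lightly perturbed node features, opposite label. A real pipeline
    # would edit the source code and re-extract its graph.
    return Data(
        x=graph.x + 0.01 * torch.randn_like(graph.x),
        edge_index=graph.edge_index.clone(),
        y=torch.tensor([flipped_label]),
    )


if __name__ == "__main__":
    # One synthetic "code graph": 4 nodes, 8-dim features, a small edge set.
    g = Data(
        x=torch.randn(4, 8),
        edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]]),
        y=torch.tensor([1]),
    )
    cf = counterfactual_pair(g, flipped_label=0)

    model = VulnGCN(num_node_features=8)
    batch = torch.zeros(g.num_nodes, dtype=torch.long)  # single-graph batch
    logits = model(g.x, g.edge_index, batch)
    print(logits.shape)  # torch.Size([1, 2])
```

Training on original/counterfactual pairs like `g` and `cf` is what pushes the classifier away from spurious surface cues toward the structural edits that actually flip the label.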
