This repository explores the application of compositional game theory, as introduced in the paper *Compositional Game Theory* [1], to the analysis and enhancement of neural networks. We represent neural network components as players in open games, aiming to leverage game-theoretic tools for improved training and understanding.
This repository includes:

- `cgtnnlib`, a PyTorch-based library for training neural networks
- `data/`, a directory with some of the data we use
- `doc/`, a directory with documentation
- `Notebooks/`, with the primary experimental notebooks
- `Examples/`, with miscellaneous notebooks
As of now, the main branch is in flux. Don't expect it to be stable. Most results/revisions are available on the releases page. Older releases are at Yandex Disk.
- Create a virtual environment:

  ```shell
  python -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Open a `*.ipynb` file with any `.ipynb` reader available to you and run it.
The library consists of classes (with filenames beginning with a capital letter) that represent the problem domain (`Dataset`, `Report`, etc.) and several procedural modules:

- `common.py`: main functions and evaluation
- `analyze.py`: reads report JSON files and plots graphs
- `datasets.py`: dataset definitions
- `training.py`: training procedures
- `plt_extras.py`: Matplotlib extensions
- `torch_extras.py`: PyTorch extensions
- etc.
The `nn` subdirectory contains the PyTorch modules and functions that represent the neural architectures we evaluate.
The `doc` subdirectory contains information about the datasets and a presentation.
Trained models are stored in the `pth/` directory (or another configured location). Along with each model, a corresponding JSON report file is created with properties such as:

- `started`: date of report creation
- `saved`: date of last update
- `model`: model parameters, such as the class name and hyperparameter values
- `dataset`: dataset info, including the type of learning task (regression/classification)
- `loss`: an array of loss values recorded at each training iteration, for analyzing loss curves
- `eval`: an object keyed by values of `noise_factor` (the amount of noise mixed into the input during evaluation), each mapping to the corresponding evaluation metric values: `r2` and `mse` for regression; `f1`, `accuracy`, and `roc_auc` for classification
- other experiment-specific keys
Typically, a report is created during model creation and initial training, then updated during evaluation. This two-step process produces the complete report, which is then analyzed by `analyze.py`.
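As a minimal sketch, a report with the structure described above can be inspected with nothing but the standard library. The inline `report` dictionary and the filename `report.json` are hypothetical examples; the key layout follows the description above.

```python
import json

# Hypothetical report matching the structure described above.
report = {
    "started": "2024-01-01T00:00:00",
    "saved": "2024-01-02T00:00:00",
    "model": {"classname": "ExampleNet", "p": 0.5},
    "dataset": {"task": "classification"},
    "loss": [0.9, 0.6, 0.4],
    "eval": {"0.0": {"f1": 0.91, "accuracy": 0.92, "roc_auc": 0.95}},
}

# In practice the report would be loaded from disk, e.g.:
# with open("report.json") as f:
#     report = json.load(f)

# Inspect the loss curve and the metrics under each noise factor.
final_loss = report["loss"][-1]
for noise_factor, metrics in report["eval"].items():
    print(noise_factor, metrics["accuracy"])
```

`analyze.py` performs this kind of reading and plotting over many report files at once.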
```shell
pip install pytest==8.4.1 pytest-cov  # pytest-cov provides the --cov flag
pytest tests/ --cov=.
```
These commands will format code for you:

```shell
pip install "black[jupyter]"
python -m black cgtnnlib/**/*.py
python -m black **/*.ipynb
```
1. N. Ghani, J. Hedges, V. Winschel, and P. Zahn. *Compositional Game Theory*. March 2016. https://arxiv.org/abs/1603.04641