
SAdam Reproducibility Study

Reproducibility study carried out on the paper SAdam: A Variant of Adam for Strongly Convex Functions, as part of the ML Reproducibility Challenge 2020.

Request Feature/Report Bug

Table of Contents

  1. Installation
  2. Usage
  3. Results
  4. Contributing
  5. License
  6. Contact
  7. Acknowledgements

Installation

  1. Clone the repo
    git clone https://github.com/naruarjun/SADAM-reproducibility.git
  2. Install virtualenv
    python3 -m pip install --user virtualenv
  3. Make a virtual environment and activate it
    virtualenv /path_to_env
    source /path_to_env/bin/activate
  4. Install the requirements
    pip install -r requirements.txt 

Usage

Arguments:

 --dataset       Dataset to use for the experiment             (mnist (default) / cifar10 / cifar100)

 --lr            Learning rate for the optimizer               (default: 1e-3)

 --batch_size    Batch size for the dataloader                 (default: 64)

 --model         Model architecture                            (logistic (default) / nn / resnet18)

 --epochs        Number of epochs to train the model           (default: 100)

 --optimizer     Optimizer to use                              (adam (default) / amsgrad / scrms / scadagrad / ogd / sadam)

 --convex        Whether to use the convex version of the optimizer (True / False (default))

 --decay         Regularization (weight decay) factor          (default: 1e-2)

 --beta1         beta1 hyperparameter for SAdam                (default: 0.9)

 --gamma         gamma hyperparameter for SAdam                (default: 0.9)

Neural Network Experiments

A sample command is given below; the parameters can be varied as desired to generate any combination of model, hyperparameters, dataset, optimizer and batch size.

   python3 train.py --dataset mnist --lr 0.001 --batch_size 64 --decay 0 --optimizer adam --epochs 100 --model nn

Regret Experiments

All the options mentioned above can also be used for the regret experiments; however, the model should be logistic and the convex flag should be set to True. A sample command is shown below:

    python3 train.py --dataset mnist --lr 0.001 --batch_size 64 --decay 1e-2 --optimizer adam --epochs 100 --model logistic --convex True

Use the Optimizers

Import the optimizers as shown below. Once imported, they can be used just like the standard optimizers provided by PyTorch, calling optimizer.zero_grad() and optimizer.step() whenever necessary.

    import custom_optimizers as OP
    """
    params       - model.parameters()
    lr           - learning rate to be used
    weight_decay - non-zero value if regularization is desired
    convex       - True / False, depending on the model being trained
    """
    optimizer = OP.SC_RMSprop(params, lr=lr, weight_decay=decay, convex=convex)
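For reference, a minimal training-loop sketch is given below. The model, data loader and epoch count are placeholders assumed to be defined elsewhere (they are not part of this snippet); the optimizer construction follows the call above.

    import torch
    import torch.nn.functional as F
    import custom_optimizers as OP

    # model, train_loader and num_epochs are assumed to be defined elsewhere,
    # e.g. a logistic-regression model and a torchvision MNIST DataLoader.
    optimizer = OP.SC_RMSprop(model.parameters(), lr=1e-3, weight_decay=1e-2, convex=True)

    for epoch in range(num_epochs):
        for inputs, targets in train_loader:
            optimizer.zero_grad()                            # clear gradients from the previous step
            loss = F.cross_entropy(model(inputs), targets)   # forward pass and loss
            loss.backward()                                  # back-propagate
            optimizer.step()                                 # update parameters with SC-RMSprop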

PennTree Bank Dataset Language Modelling Experiments

We used the source code provided in this repository to conduct our perplexity experiments, simply replacing the default optimizers in its training scripts with our optimizer implementations, imported as described above.
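As an illustration, the change amounts to swapping the optimizer construction in that repository's training script. The variable names below are placeholders, not the names actually used there:

    import custom_optimizers as OP

    # original (placeholder): optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # replaced with one of our implementations, e.g. SC-RMSprop:
    optimizer = OP.SC_RMSprop(model.parameters(), lr=lr, weight_decay=decay, convex=False)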

Results

Below are the links to our corresponding projects on Weights & Biases:

MNIST regret analysis

CIFAR-10 regret analysis

CIFAR-100 regret analysis

MNIST 4-layer CNN analysis

CIFAR-10 4-layer CNN analysis

CIFAR-100 4-layer CNN analysis

CIFAR-10 ResNet-18 analysis

Hyperparameter grid search for CIFAR-100

Hyperparameter grid search for MNIST and CIFAR-10

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/Feature)
  3. Commit your Changes (git commit -m 'Add some Feature')
  4. Push to the Branch (git push origin feature/Feature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Acknowledgements
