Note
Use this template generator when you are ready to start a machine learning project on Chameleon Cloud, or if you want to integrate reproducible experiment tracking into your existing project.
Reproducibility in machine learning means being able to rerun an experiment with the same code, data, and environment, and obtain the same results.
In research, this remains a significant problem: many papers lack proper experiment tracking, dependency capture, versioning, and artifact sharing. As a result, even the original authors often struggle to reproduce their own results several months later.
This weakens trust and slows scientific progress, since others cannot easily validate or build upon previous work.
In production, by contrast, MLOps practices (automation, versioning, monitoring) have proven to be key enablers of reproducible and trustworthy ML systems.
For researchers, however, adopting these tools and practices is full of friction: stitching everything together takes considerable time and effort.
ReproGen removes this barrier by providing ready-to-use templates with reproducibility baked in, so you can focus on spinning up your VMs and running experiments instead of assembling infrastructure.
Supporting Reproducibility with ReproGen:
- It makes it easier for researchers and scientists to adopt MLOps practices in their projects.
- It provides ready-to-launch templates with reproducibility baked in (experiment tracking, consistent environments, artifact persistence).
- It lowers the barrier to reproducible research on platforms like Chameleon Cloud.
Tip
You can use our mlflow-replay setup to revisit an experiment previously run on Chameleon Cloud.
This branch includes an MLflow setup that reads and loads the artifacts of a previous experiment, generated by our ReproGen template generator, from public Chameleon Cloud object store containers.
This allows you to reproduce, inspect, and extend experiments easily.
Note
You can try a hands-on demo tutorial with ReproGen 🦎.
Check out our LLM Experiment Demo where we
fine-tune and track a language model experiment on Chameleon Cloud using ReproGen's reproducible template. The tutorial shows how experiment tracking, MLflow logging, and artifact persistence work in practice. It’s a great starting point if you want to adapt ReproGen for your own deep learning workflows.
This repository is not intended to be cloned or forked directly. Instead, use Copier to generate a project from the template; it will prompt you for configuration.
You will learn more about this in Getting Started.
Caution
If you have Python pre-installed on your machine, make sure its version is ≥ 3.9 and your Copier version is ≥ 9.0.
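Before installing anything else, you can confirm the interpreter requirement above with a quick check. This is a minimal sketch; the version threshold is taken from the caution above.

```python
# Quick check that your interpreter meets the requirement above (Python >= 3.9).
import sys

assert sys.version_info >= (3, 9), "ReproGen requires Python 3.9 or newer"
print("Python OK:", ".".join(map(str, sys.version_info[:3])))
```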
Install Copier
pipx install copier
or
pip install copier
Note
You can also install Copier with regular pip. However, this places it in your main Python environment, which may lead to dependency conflicts (that you don't want). Using pipx is recommended instead, since it installs Copier in its own isolated environment.
Create a New Project with
copier copy --vcs-ref main https://github.com/A7med7x7/reprogen.git path/to/destination
Important
Ensure that path/to/destination points to an empty directory, and replace it with the path and directory name you want.
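Copier will refuse to overwrite unrelated files, so it helps to verify the destination is empty first. A minimal sketch (using a temporary directory as a stand-in for your own path/to/destination):

```python
# Sketch: check that the destination directory is empty before running
# `copier copy`. The temporary directory here is a stand-in for your own
# path/to/destination.
from pathlib import Path
import tempfile

dest = Path(tempfile.mkdtemp())  # replace with your destination directory
if any(dest.iterdir()):
    raise SystemExit(f"{dest} is not empty; choose an empty directory")
print("destination is empty: ok")
```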
Answer a few questions, and Copier will generate the project in that directory. See Setup Parameters and Their Roles below if you want to know the role of the questions and what they will generate.
Tip
If you're new to Chameleon Cloud, we recommend using Basic in setup_mode. It's beginner-friendly!
When the project is generated, a README.md file will be created at the root. It contains all the instructions to guide you through setting up your environment.
Your answers determine what the generated project contains. Below you can find each variable along with its description and implications.
Defines the overall setup approach for your environment and how much customization you want during server and Docker environment creation.
- Basic: minimal prompts. You will be asked for your project name (project_name), remote repository link (repo_url), and framework of your choice (ml_framework); recommended defaults are used for most other options.
- Advanced: lets you control the compute site (chameleon_site), GPU type (gpu_type), CUDA version (cuda_version), and storage site (bucket_site). The rest of the documentation explains these options and their implications.
- Type: single-select
- Default: "Basic"
- We recommend setting project_name as the prefix for the lease name.
- It is used everywhere your project is referenced:
  - Object store names (e.g., project-name-data, project-name-mlflow-artifacts).
  - Compute instances/servers include the project_name as their prefix:
    # when creating a server
    s = server.Server(f"{project_name}-node-{username}")
  - Your material on the compute instance lives under a directory named after your project_name.
  - The containerized environment looks for a directory named after the project_name.
  - Most commands and scripts assume a unified project_name.
- Rules: only letters, numbers, hyphen (-), and underscore (_). No spaces.
- Tip: choose something short and memorable; it will show up in multiple commands and URLs.
- Type: str
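The naming scheme above can be sketched in a few lines. The username and the exact name templates here are illustrative assumptions based on the examples above, not the template's literal code:

```python
# Sketch: how project_name (here "demo-ml") propagates into generated resource
# names. The username and exact templates are illustrative assumptions.
import re

project_name = "demo-ml"
username = "alice"  # hypothetical Chameleon username

# project_name must contain only letters, numbers, hyphens, and underscores
assert re.fullmatch(r"[A-Za-z0-9_-]+", project_name), "invalid project_name"

data_container = f"{project_name}-data"
artifact_container = f"{project_name}-mlflow-artifacts"
server_name = f"{project_name}-node-{username}"

print(data_container)      # demo-ml-data
print(artifact_container)  # demo-ml-mlflow-artifacts
print(server_name)         # demo-ml-node-alice
```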
- The remote Git repository where the generated project will live. We recommend creating a remote repository (e.g., GitHub or GitLab).
- Accepts HTTPS or SSH URLs (e.g., https://github.com/user/repo.git or git@gitlab.com:user/repo.git).
- After your project is generated, you need to push the code there (see the GitHub / GitLab guides on pushing code to a remote repository).
- Type: str
- Note: can be left blank if you need to set the repository later; you will then need to manually enter the remote repo in the create_server notebook.
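The two accepted URL shapes can be checked with a small helper. The regexes below are illustrative, not the template's actual validation logic:

```python
# Sketch: the two URL shapes repo_url accepts (HTTPS or SSH). These patterns
# are illustrative assumptions, not the template's real validation.
import re

HTTPS = re.compile(r"^https://[\w.-]+/[\w.-]+/[\w.-]+(\.git)?$")
SSH = re.compile(r"^git@[\w.-]+:[\w.-]+/[\w.-]+(\.git)?$")

def looks_like_repo_url(url: str) -> bool:
    return bool(HTTPS.match(url) or SSH.match(url))

print(looks_like_repo_url("https://github.com/user/repo.git"))  # True
print(looks_like_repo_url("git@gitlab.com:user/repo.git"))     # True
```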
The site where your leases and compute resources will be provisioned.
- This doesn't control the persistent storage location (that's bucket_site).
- CHI@TACC → most GPU bare metal nodes.
- CHI@UC → University of Chicago resources.
- KVM@TACC → VM (virtual machine)-based compute at TACC.
- Type: select
- Default: CHI@TACC
- This is where the object storage containers for your project will live.
- CHI@TACC: Texas Advanced Computing Center
- CHI@UC: University of Chicago
- auto is usually the best choice unless you have a reason to store data in a specific location. It matches your selected chameleon_site if object storage containers are available there; if not, it defaults to the CHI@TACC site.
- Type: select
- Default: CHI@TACC
- The type of GPU (or CPU-only) node you want to create and configure. This assumes that you have reserved a node and know which type it is: AMD, NVIDIA, or CPU.
- Configuring a server from a lease requires the gpu_type, since different GPUs have different setup processes; nvidia and amd require different container images too, so your choice determines which container images are selected.
- Type: multi-choice; you can select multiple types.
- Default: NVIDIA
- Note: when selecting chameleon_site = KVM@TACC, the GPU flavors run on NVIDIA hardware, as there is no AMD variant.
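The image-selection logic described above can be sketched as a simple lookup. The image names below are placeholders, not the template's actual images:

```python
# Hypothetical sketch of how gpu_type could drive container image selection;
# the image names here are placeholders, not the template's actual images.
BASE_IMAGES = {
    "nvidia": "an-image-built-for-cuda",  # CUDA build for NVIDIA nodes
    "amd": "an-image-built-for-rocm",     # ROCm build for AMD nodes
    "cpu": "a-cpu-only-image",            # no GPU runtime needed
}

def image_for(gpu_type: str) -> str:
    # NVIDIA and AMD nodes need different images (CUDA vs. ROCm builds)
    return BASE_IMAGES[gpu_type.lower()]

print(image_for("NVIDIA"))  # an-image-built-for-cuda
```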
- Selects the primary ML/deep learning framework for your environment.
- It will decide which container image to include and use for your Jupyter Lab.
- Custom training code for the selected ml_framework will be generated.
- pytorch: flexible, widely used deep learning library. Supports CUDA (NVIDIA) and ROCm (AMD).
- pytorch-lightning: high-level wrapper for PyTorch that simplifies training loops. Supports CUDA (NVIDIA) and ROCm (AMD).
- tensorflow: popular deep learning library with a strong ecosystem.
- scikit-learn: machine learning and data science stack (pandas, scikit-learn, matplotlib, etc.) without deep learning frameworks.
- Note: PyTorch and PyTorch Lightning will prompt for a CUDA/ROCm version if you select GPU types.
- Type: multi-choice; you can select multiple frameworks.
- Choose the CUDA version that matches your code and driver requirements.
- cuda11-latest: highly compatible with most GPUs in Chameleon Cloud.
- cuda12-latest: the latest version, designed to work with newer GPU architectures.
- Type: select
- Default: cuda11-latest
How would you like to configure your server (.env file generation and Docker Compose setup)? You have two options:
- SSH: SSH into your reserved compute instance and follow the README instructions to generate the .env file and launch your Docker Compose environment.
- Notebook: stay inside Chameleon JupyterHub and use a notebook (2_configure_server.ipynb), provided in the chi directory, to configure your server. This approach is more guided and beginner-friendly.
- Both methods generate the same .env file and launch the same Docker Compose environment. The only difference is where you perform the steps: via SSH (manual control) or the notebook (fully browser-based).
- Type: select
- Default: "notebook" if setup_mode == 'Basic'
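Whichever path you choose, the end state is the same generated .env file. A minimal sketch of what that generation step amounts to; the variable names below are assumptions, not the template's actual schema:

```python
# Illustrative only: both setup paths (SSH or notebook) end by generating a
# .env file like this. The variable names are assumptions, not the template's
# actual schema.
import os
import tempfile

env = {"PROJECT_NAME": "demo-ml", "GPU_TYPE": "nvidia"}
path = os.path.join(tempfile.mkdtemp(), ".env")
with open(path, "w") as f:
    for key, value in env.items():
        f.write(f"{key}={value}\n")

print(open(path).read().strip())
```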
If enabled, the environment is configured to include a Hugging Face token for seamless Hugging Face Hub access and caching of models/datasets.
- During server setup you will be prompted to enter a Hugging Face token.
- All models/datasets downloaded from Hugging Face will be stored under the mount point /mnt/data/.
- Type: bool
- Default: true
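The caching behavior above boils down to pointing the Hugging Face cache at the mounted data volume. HF_HOME is the standard Hugging Face cache-location variable; the huggingface subdirectory below is an illustrative assumption, since the docs only specify /mnt/data/:

```python
# Sketch: direct the Hugging Face cache to the mounted data directory, as the
# generated environment does when this option is enabled. The "huggingface"
# subdirectory is an assumption; the docs only specify /mnt/data/.
import os

os.environ["HF_HOME"] = "/mnt/data/huggingface"  # models/datasets cached here
print(os.environ["HF_HOME"])
```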
This project was supported by the 2025 Summer of Reproducibility.
Contributors: Ahmed Alghali, Mohamed Saeed, Fraida Fund.