This repository contains a collection of ready-to-deploy GPU application templates for Spheron. These templates are designed to make it easier for users to leverage Spheron's GPU capabilities for various AI and Web3 projects.
- Pre-installed PyTorch environment, ready for data science and machine learning tasks
- Web UI for testing and interacting with the various LLMs supported by Ollama
- Ollama server pre-configured with LLaMA 3.2, easily customizable to use any model from the Ollama registry
- Remote development environment with GPU support, ideal for building AI applications with Ollama; can be adapted to host your own AI application
- Remote development environment with GPU support, ideal for building AI applications with PyTorch; can be adapted to host your own AI application
- Pre-installed Unsloth Jupyter notebook for experimenting with fine-tuning using the Unsloth library
- NVIDIA CUDA image on Ubuntu 22.04 for building CUDA-based AI applications
- NVIDIA cuDNN image on Ubuntu 22.04 for building cuDNN-based AI applications
- Pre-installed PyTorch image for building AI applications with PyTorch
- Pre-installed TensorFlow image for building AI applications with TensorFlow
- Basic Ubuntu 22.04 image that can serve as a base for building AI applications
- Basic Ubuntu 24.04 image that can serve as a base for building AI applications
- Pre-installed WordPress for running a blog
To use these templates:

1. Clone this repository.
2. Choose the template that fits your needs.
3. Use the template's Spheron YAML configuration file to deploy it directly on the Spheron Console App, or follow Spheron's Deploy Your App documentation to learn about other ways to deploy your app.
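For orientation, below is a minimal sketch of what a template's Spheron YAML configuration might look like, assuming an SDL-style layout with `services`, `profiles`, and `deployment` sections. Every concrete value here is a placeholder: the service name `pytorch-notebook`, the container image, the port, the resource sizes, the GPU model, the `westcoast` region, and the `CST` token/pricing are all illustrative assumptions, and field names may differ between templates. Treat the YAML file shipped with each template as the authoritative version.

```yaml
version: "1.0"

services:
  pytorch-notebook:                                  # placeholder service name
    image: quay.io/jupyter/pytorch-notebook:latest   # placeholder image
    expose:
      - port: 8888                                   # Jupyter's default port
        as: 8888
        to:
          - global: true                             # make the port publicly reachable

profiles:
  compute:
    pytorch-notebook:
      resources:
        cpu:
          units: 4                                   # placeholder CPU count
        memory:
          size: 16Gi
        storage:
          - size: 100Gi
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
                - model: rtx4090                     # placeholder GPU model
  placement:
    westcoast:                                       # placeholder region name
      pricing:
        pytorch-notebook:
          token: CST                                 # placeholder payment token
          amount: 5                                  # placeholder price

deployment:
  pytorch-notebook:
    westcoast:
      profile: pytorch-notebook
      count: 1
```

In practice you would typically start from the YAML file that ships with the template you chose and only adjust the resource, region, and pricing values before pasting it into the Console App.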
We welcome contributions! If you have a template you'd like to add or improvements to existing ones, please submit a pull request.
For questions or issues, please open an issue in this repository or contact Spheron support on Discord.
This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.
This repository was initially used for educational and testing purposes during our early exploration of compute infrastructure under the Apache 2.0 license.
It is now archived and no longer maintained or used in production. The current Spheron codebase has been fully rewritten and migrated to: 👉 https://github.com/spheron-core/awesome-spheron
This migration aligns with our roadmap toward TGE, Foundation-based governance, and long-term code maintainability.
Note:
- This repository is now deprecated and archived.
- It is no longer maintained and is not used in any part of the production infrastructure.
- All original attributions have been preserved in compliance with the Apache 2.0 license.
- No active development will occur on this repository moving forward.
For the latest updates and active development, please refer to the new organization above.