
Commit fcf4213 ("Add readme")
1 parent c871a1c

File tree: 1 file changed, README.md (+32 -8 lines)
@@ -4,6 +4,30 @@
 
 This project is a chat application with a web interface developed using Streamlit and a backend developed with FastAPI. The backend loads the TinyLlama model directly to handle chat queries and generate responses to users' questions. The entire solution is containerized and can be deployed with both Docker Compose and Kubernetes.
 
+## Contents
+
+- [Features](#features)
+- [Technologies, Frameworks and Tools](#technologies-frameworks-and-tools)
+- [GitHub Actions CI/CD](#github-actions-cicd)
+- [Architecture](#architecture)
+- [Project Structure](#project-structure)
+- [Backend](#backend)
+- [Frontend](#frontend)
+- [Root Directory](#root-directory)
+- [Getting Started](#getting-started)
+- [Prerequisites](#prerequisites)
+- [Installation for Local Development](#installation-for-local-development)
+- [Frontend usage](#frontend-usage)
+- [Backend API Usage](#backend-api-usage)
+- [Building the Docker Image Locally](#building-the-docker-image-locally)
+- [Running the Docker Image Locally](#running-the-docker-image-locally)
+- [Deployment with Docker Compose](#deployment-with-docker-compose)
+- [Deployment with Kubernetes](#deployment-with-kubernetes)
+- [Running Tests](#running-tests)
+- [Documentation](#documentation)
+- [Contributing](#contributing)
+- [License](#license)
+
 ## Features
 
 - Chat Interface with TinyLlama Model: The chat interface uses a TinyLlama model integrated into the backend to respond to user queries in natural language, with a conversational tone and context. The model is not accessed through a hosted Hugging Face inference endpoint; instead it is downloaded and loaded directly in the backend for real-time response generation. See the [TinyLlama model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
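As context for the feature described above: TinyLlama-1.1B-Chat-v1.0 expects a Zephyr-style chat template, so a backend that loads the model directly has to format each request accordingly. A minimal sketch of that formatting step (the function name and single-turn structure are assumptions for illustration, not code from this repository):

```python
# Illustrative sketch: building the Zephyr-style prompt that
# TinyLlama-1.1B-Chat-v1.0 expects. The function name and single-turn
# structure are hypothetical; the real backend may differ.

def build_tinyllama_prompt(system: str, user: str) -> str:
    """Format a single-turn chat in TinyLlama-Chat's expected template."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_tinyllama_prompt("You are a helpful assistant.", "What is FastAPI?")
print(prompt)
```

The generated string ends with the `<|assistant|>` tag, so the model's continuation is the reply itself.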
@@ -183,7 +207,7 @@ The root directory contains configuration files and documentation for the overal
 - FastAPI. See installation instructions in the [FastAPI documentation](https://fastapi.tiangolo.com/). Not necessary if you install the dependencies from the requirements.txt file.
 - Streamlit. See installation instructions in the [Streamlit documentation](https://docs.streamlit.io/). Not necessary if you install the dependencies from the requirements.txt file.
 
-### Installation
+### Installation for Local Development
 
 1. Clone the repository:
@@ -254,6 +278,8 @@ To build the Docker image for the frontend Streamlit application, run the follow
 docker build -t llm-tinyllama-frontend:latest frontend
 ```
 
+### Running the Docker Image Locally
+
 To run the Docker image for the backend FastAPI application, run the following command:
 
 ```bash
@@ -317,7 +343,7 @@ To deploy the services with Docker Compose using GitHub Container Registry image
 docker-compose -f docker-compose-ghimages.yaml down
 ```
 
-### Accessing the Services
+#### Accessing the Services
 
 - To access the frontend in Docker Compose, go to:
 
@@ -381,7 +407,7 @@ To deploy the backend and frontend services to Kubernetes, follow these steps:
 kubectl delete deployment chatllm-frontend-deployment
 ```
 
-### Accessing the Services
+#### Accessing the Services
 
 - To access the frontend in Kubernetes, go to:
 
@@ -393,7 +419,7 @@ To deploy the backend and frontend services to Kubernetes, follow these steps:
 ```bash
 http://localhost:8000/docs
 ```
-### Running Tests
+## Running Tests
 
 To run the tests for the backend FastAPI application, run the following command from the root directory:
 

@@ -407,7 +433,7 @@ To run the tests for the frontend Streamlit application, run the following comma
407433
pytest --cov=frontend/api --cov-report=term-missing frontend/tests/test_main.py
408434
```
409435

410-
### Documentation
436+
## Documentation
411437

412438
For more information about Hubging Face LLM models, please refer to the [Hugging Face documentation](https://huggingface.co/).
413439

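The test commands above run pytest with coverage over the backend and frontend test suites. A backend unit test would typically stub the model so that no TinyLlama weights are downloaded during CI; a minimal sketch of that pattern (all names here are hypothetical, not from this repository's tests):

```python
# Hypothetical sketch of a backend unit test that stubs the LLM call,
# so tests run without downloading TinyLlama weights.

def generate_reply(prompt: str, generator) -> str:
    """Call a text-generation function and strip the echoed prompt, if present."""
    text = generator(prompt)
    return text[len(prompt):].strip() if text.startswith(prompt) else text.strip()

def fake_generator(prompt: str) -> str:
    # Stand-in for the real model: echoes the prompt plus a canned answer,
    # mimicking how text-generation models return prompt + continuation.
    return prompt + "FastAPI is a Python web framework."

def test_generate_reply_strips_prompt():
    reply = generate_reply("<|user|>\nWhat is FastAPI?</s>\n<|assistant|>\n", fake_generator)
    assert reply == "FastAPI is a Python web framework."

test_generate_reply_strips_prompt()
```

Injecting the generator as a parameter keeps the response logic testable without the real model in place.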
@@ -421,8 +447,6 @@ For more information on using Docker, please refer to the [Docker documentation]
 
 For more information on using GitHub Actions, please refer to the [GitHub Actions documentation](https://docs.github.com/en/actions).
 
-
-
 ## Contributing
 
 If you want to contribute to this project, please follow these steps:
@@ -436,4 +460,4 @@ If you want to contribute to this project, please follow these steps:
 
 ## License
 
-This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more information.
+This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more information.
