This project is a chat application with a web interface developed using Streamlit and a backend developed with FastAPI. The backend downloads and loads the TinyLlama model directly to handle chat queries and generate responses to users' questions. The entire solution is containerized and can be deployed with either Docker Compose or Kubernetes.
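As a rough illustration of this flow, the sketch below shows how the Streamlit frontend side might forward a user's question to the FastAPI backend over HTTP. This is a sketch only: the backend address and the `/chat` endpoint shape are assumptions for illustration, not details taken from this project.

```python
# Hypothetical sketch of the frontend half of the flow described above.
# BACKEND_URL and the /chat endpoint are assumptions, not confirmed here.
import requests
import streamlit as st

BACKEND_URL = "http://localhost:8000/chat"  # assumed host, port, and path

st.title("TinyLlama Chat")
prompt = st.text_input("Ask a question:")

if prompt:
    # Forward the question to the FastAPI backend, which runs the model.
    reply = requests.post(BACKEND_URL, json={"message": prompt}, timeout=120)
    st.write(reply.json().get("response", "(no response)"))
```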
## Contents
- [Features](#features)
- [Technologies, Frameworks and Tools](#technologies-frameworks-and-tools)
- [GitHub Actions CI/CD](#github-actions-cicd)
- [Architecture](#architecture)
- [Project Structure](#project-structure)
  - [Backend](#backend)
  - [Frontend](#frontend)
  - [Root Directory](#root-directory)
- [Getting Started](#getting-started)
  - [Prerequisites](#prerequisites)
  - [Installation for Local Development](#installation-for-local-development)
- [Frontend Usage](#frontend-usage)
- [Backend API Usage](#backend-api-usage)
- [Building the Docker Image Locally](#building-the-docker-image-locally)
- [Running the Docker Image Locally](#running-the-docker-image-locally)
- [Deployment with Docker Compose](#deployment-with-docker-compose)
- [Deployment with Kubernetes](#deployment-with-kubernetes)
- [Running Tests](#running-tests)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
## Features
- Chat Interface with TinyLlama Model: The chat interface uses a TinyLlama model integrated into the backend to respond to user queries in natural language, with a conversational tone and context awareness. Rather than calling a hosted inference API, the backend downloads the model and loads it directly for real-time response generation (see the sketch below). View the [TinyLlama model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
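A minimal sketch of what this direct-loading approach can look like, assuming the backend uses Hugging Face `transformers` and exposes a single `/chat` endpoint (assumptions for illustration; the actual implementation in this repository may differ):

```python
# Hypothetical backend sketch: download TinyLlama once at startup and serve
# chat completions from a FastAPI endpoint. The endpoint shape is assumed.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Downloads the weights on first run, then loads them into memory once.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest):
    # Recent transformers versions apply the model's chat template when the
    # pipeline is given a list of role/content messages.
    messages = [{"role": "user", "content": request.message}]
    output = generator(messages, max_new_tokens=256)
    # The pipeline returns the whole conversation; the last turn is the reply.
    return {"response": output[0]["generated_text"][-1]["content"]}
```

Loading the pipeline once at module import keeps per-request latency limited to generation time; reloading the model on every request would dominate the response time.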
### Prerequisites
- FastAPI. View installation instructions in the [FastAPI documentation](https://fastapi.tiangolo.com/). Not necessary if you install dependencies via the requirements.txt file.
- Streamlit. View installation instructions in the [Streamlit documentation](https://docs.streamlit.io/). Not necessary if you install dependencies via the requirements.txt file.
### Installation for Local Development
1. Clone the repository: