Commit 2482e84

Correct readme
1 parent fcf4213 commit 2482e84

3 files changed: +21, -5 lines changed

README.md (+21, -5)
@@ -221,20 +221,32 @@
 pip install -r backend/requirements.txt
 pip install -r frontend/requirements.txt
 ```
+3. Set the SERVICE_TOKEN environment variable with the service token for the backend API. You can set it in the terminal before running the application:
 
-3. Run the backend FastAPI application:
+(Linux/Mac)
+```bash
+export SERVICE_TOKEN="myllservicetoken2024"
+```
+or
+
+(Windows)
+```powershell
+$env:SERVICE_TOKEN="myllservicetoken2024"
+```
+
+4. Run the backend FastAPI application:
 
 ```bash
-uvicorn backend.api.main:app --reload
+uvicorn backend.api.main:api --reload
 ```
-4. Run the frontend Streamlit application:
+5. Run the frontend Streamlit application:
 
 ```bash
 streamlit run frontend/app/main.py
 ```
-5. Open your web browser and go to `http://localhost:8501` to access the chat interface of the Streamlit frontend.
+6. Open your web browser and go to `http://localhost:8501` to access the chat interface of the Streamlit frontend.
 
-6. Go to `http://localhost:8000/docs` to access the FastAPI Swagger documentation of the backend.
+7. Go to `http://localhost:8000/docs` to access the FastAPI Swagger documentation of the backend.
 
 ### Frontend usage
 
@@ -243,6 +255,8 @@
 3. The backend API will process the message and generate a response using the TinyLlama model.
 4. The response will be displayed in the chat interface of the Streamlit application.
 
+![Chat Example](./images/chat_llm.png)
+
 From the frontend interface, you can interact with the chatbot and view the responses in real time. The backend API handles the chatbot logic and interacts with the TinyLlama model to generate responses.
 
 You can adjust the following parameters in the Streamlit interface to control the chatbot responses; expand the "Config params" section:
@@ -251,6 +265,8 @@
 - **Top K**: The number of highest-probability vocabulary tokens to keep for top-k filtering.
 - **Top P**: The cumulative probability threshold for nucleus sampling.
 
+![Chat Config params](./images/chat_config_params.png)
+
 The interface sends the full chat history to the backend API to generate the response. The LLM is a conversational model, so it needs the context of the conversation to generate the response correctly.
 
 To clear the chat history, click the "New Chat" button.
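
For the new step 3, a minimal sketch of how a FastAPI backend could read and enforce `SERVICE_TOKEN`; the header name, the `/health` route, and the dependency wiring are illustrative assumptions, not this repository's actual code:

```python
import os

from fastapi import Depends, FastAPI, Header, HTTPException

api = FastAPI()  # matches the corrected uvicorn target backend.api.main:api


def verify_service_token(authorization: str = Header(default="")) -> None:
    # Hypothetical check: compare the bearer token against the SERVICE_TOKEN
    # environment variable set in step 3.
    expected = os.environ.get("SERVICE_TOKEN", "")
    if not expected or authorization != f"Bearer {expected}":
        raise HTTPException(status_code=401, detail="Invalid service token")


@api.get("/health", dependencies=[Depends(verify_service_token)])
def health() -> dict:
    # Example protected route; the real backend's routes may differ.
    return {"status": "ok"}
```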
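
The usage notes say the frontend sends the whole chat history to the backend on each turn. A hedged example of what such a request could look like; the `/chat` path, the payload keys, and the auth header are assumptions for illustration only (check `http://localhost:8000/docs` for the real schema):

```python
import requests

# Hypothetical request from the Streamlit frontend to the FastAPI backend.
history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help you?"},
    {"role": "user", "content": "Tell me a joke."},
]

response = requests.post(
    "http://localhost:8000/chat",  # assumed endpoint name
    json={
        "messages": history,       # full conversation context, not just the last turn
        "temperature": 0.7,
        "top_k": 50,
        "top_p": 0.9,
    },
    headers={"Authorization": "Bearer myllservicetoken2024"},  # the token from step 3
    timeout=60,
)
print(response.json())
```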
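
The Temperature, Top K, and Top P controls in the "Config params" section correspond to standard Hugging Face generation settings. A sketch of how a TinyLlama backend typically applies them; the model id is the public TinyLlama chat checkpoint, and the code is not taken from this repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Public TinyLlama chat checkpoint (assumed; the repo may pin a different one).
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Tell me a joke.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling must be on for the three knobs below to matter
    temperature=0.7,   # higher values -> more random responses
    top_k=50,          # keep only the 50 most likely tokens at each step
    top_p=0.9,         # nucleus sampling: smallest token set with cumulative prob 0.9
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```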

images/chat_config_params.png (27.9 KB)

images/chat_llm.png (45.1 KB)
