README.md
+1 -1
@@ -172,7 +172,7 @@ The root directory contains configuration files and documentation for the overal
From the frontend interface, you can interact with the chatbot and view the responses in real-time. The backend API handles the chatbot logic and interacts with the TinyLlama model to generate responses.
-You can adjust the following parameters in the Streamlit interface to control the chatbot responses:
+You can adjust the following parameters in the Streamlit interface (expand the "Config params" section) to control the chatbot responses:
- **Max Tokens**: The maximum number of tokens to generate.
- **Temperature**: The value used to control the randomness of the generated text.
- **Top K**: The number of highest probability vocabulary tokens to keep for top-k-filtering.
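
As a rough sketch (not the repository's actual code), the "Config params" expander in the Streamlit frontend might wire these parameters to the backend roughly as shown below. The backend URL, the `/chat` route, the request payload field names, and the `response` key in the reply are all assumptions for illustration and may differ from this project's API:

```python
import requests
import streamlit as st

# Hypothetical backend endpoint; the real host, port, and route depend on how
# the backend API in this project is actually deployed.
BACKEND_URL = "http://localhost:8000/chat"

st.title("TinyLlama Chatbot")

# Generation parameters collected inside the "Config params" expander.
with st.expander("Config params"):
    max_tokens = st.slider("Max Tokens", min_value=16, max_value=512, value=128)
    temperature = st.slider("Temperature", min_value=0.0, max_value=2.0, value=0.7)
    top_k = st.slider("Top K", min_value=1, max_value=100, value=40)

prompt = st.text_input("Your message")

if st.button("Send") and prompt:
    # Forward the prompt and the chosen generation parameters to the backend,
    # which runs the TinyLlama model and returns the generated reply.
    # The JSON field names here are illustrative, not the project's contract.
    reply = requests.post(
        BACKEND_URL,
        json={
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
            "top_k": top_k,
        },
        timeout=60,
    )
    st.write(reply.json().get("response", ""))
```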