Chat with LLM is a Streamlit-based web application that allows users to interact with large language models (LLMs) through a chat interface. You can choose between various models hosted on Hugging Face and adjust parameters such as temperature and max tokens for personalized responses. This app leverages the Hugging Face API to generate text-based responses and provides a user-friendly chat experience.
- Multiple Model Support: Choose between models like Mistral-7B, Gemma-3-1b, and DeepSeek-R1.
- Dynamic Settings: Adjust the temperature, model selection, and token limits from the sidebar.
- Real-Time Chat: Engage in real-time chat with the selected model.
- Environment Variable Setup: Easily connect your Hugging Face account for API key integration.
- Persistent Chat History: Maintain conversation history across sessions for a more natural experience.
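The persistent-history feature can be illustrated with a small sketch. In the actual app the history would live in Streamlit's `st.session_state` so it survives reruns; the helper names and the 20-turn limit below are illustrative assumptions, not the app's exact code.

```python
# Minimal sketch of chat-history bookkeeping (hypothetical helpers).
# In the real app the history list would be stored in st.session_state
# so Streamlit reruns do not wipe the conversation.

def append_message(history, role, content):
    """Add one chat turn as a role/content dict."""
    history.append({"role": role, "content": content})
    return history

def trim_history(history, max_turns=20):
    """Keep only the most recent turns to bound the prompt size."""
    return history[-max_turns:]

history = []
append_message(history, "user", "Hello, how are you today?")
append_message(history, "assistant", "I'm doing great, thank you for asking!")
print(len(history))  # 2
```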
- Python
- Streamlit (for building the web interface)
- Hugging Face API (for model inference)
- LangChain (for chat history management)
```
Chat-with-LLM/
│── __pycache__/   # Cached Python files
│── main.py        # Main Streamlit app file
│── Screenshots/   # Screenshots of sample chats
│── README.md      # Project documentation
```
- Prerequisites - Ensure you have Python 3.8+ installed.
- Clone the repository:

  ```
  git clone https://github.com/Uni-Creator/Chat-with-LLM.git
  cd Chat-with-LLM
  ```

- Install dependencies:

  ```
  pip install streamlit langchain huggingface_hub
  ```

- Run the app:

  ```
  streamlit run main.py
  ```
- Select Model - Choose the desired model from the sidebar.
- Adjust Settings - Modify the temperature, token limits, and other settings to fine-tune the responses.
- Start Chatting - Type your message in the text input field and hit enter to chat with the model.
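The settings step above can be sketched as a small helper that clamps the sidebar values into ranges the inference API will accept. The function name and the ranges here are illustrative assumptions, not the app's exact code.

```python
# Hypothetical helper: turn raw sidebar values into safe generation
# parameters. The ranges are illustrative, not taken from the app.

def build_generation_params(temperature, max_tokens):
    """Clamp user settings and return a kwargs dict for the API call."""
    temperature = min(max(temperature, 0.0), 2.0)    # typical valid range
    max_tokens = min(max(int(max_tokens), 1), 4096)  # keep within model limits
    return {"temperature": temperature, "max_new_tokens": max_tokens}

print(build_generation_params(3.5, 10_000))
# {'temperature': 2.0, 'max_new_tokens': 4096}
```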
User: Hello, how are you today?
Model: I'm doing great, thank you for asking! How about you?
Here are some screenshots showcasing the Chat with LLM app in action:
First view of the app showing the model selection and settings.
Another screenshot showing the chat interface.
Chat in progress with model responses.
User adjusting settings in the sidebar.
Final view with full chat history.
- Modify `main.py` to change the interface or add new functionality.
- Adjust model settings and parameters in the sidebar for different experiences.
- The user interacts with the interface built using Streamlit.
- The app sends requests to Hugging Face's inference API.
- The selected model generates a response based on the input prompt.
- The response is displayed in the app's chat interface.
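The four steps above can be sketched end to end. The prompt format (a plain role-prefixed transcript) and the model ID are assumptions for illustration; the real app may format prompts and choose models differently, and the API call requires a Hugging Face token in the environment.

```python
# Sketch of the request flow: build a prompt from the chat history,
# send it to the Hugging Face Inference API, and return the reply.
# Prompt format and model ID below are assumptions for illustration.

def format_prompt(history):
    """Flatten the chat history into a role-prefixed transcript."""
    lines = [f"{turn['role'].capitalize()}: {turn['content']}" for turn in history]
    lines.append("Assistant:")
    return "\n".join(lines)

def ask_model(history, model="mistralai/Mistral-7B-Instruct-v0.2",
              temperature=0.7, max_new_tokens=256):
    """Send the formatted prompt to the Inference API (needs an HF token)."""
    from huggingface_hub import InferenceClient  # imported lazily on purpose
    client = InferenceClient(model=model)
    return client.text_generation(
        format_prompt(history),
        temperature=temperature,
        max_new_tokens=max_new_tokens,
    )

history = [{"role": "user", "content": "Hello, how are you today?"}]
print(format_prompt(history))
```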
- Add more language models for broader usage.
- Create a user authentication system for saving chat history.
- Implement a logging feature for conversation analytics.
Contributions are welcome! Feel free to open an issue or submit a pull request.
This project is licensed under the MIT License.