Ollama integration as a new AI provider #1208
base: master
Conversation
✅ Deploy Preview for afmg ready!
Hello. I'm not against Ollama, but this PR looks AI-generated. It has both good and bad parts; at the very least I would expect a human to work on it before presenting it to the public. Also, I'm not sure why we should stick with Ollama and not, e.g., LM Studio. It would make more sense to allow ANY local model by simply letting the user populate the endpoint and other required fields. In that case I would also expect a linked tutorial on what it is and how to use it.
Well, I checked the code and it looks all right, but I'm new to coding, so I may have missed something. Ollama runs about 30% faster than LM Studio (you can check it yourself), which is why I used Ollama. Also, the Ollama API is not like the LM Studio API, so LM Studio would need a different implementation. I think it is explained in the pull request: the user just needs to write the model name instead of an API key and select Ollama, that's all.
I won't say so. It adds some mess that we don't need. The changelog is in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with window.
Okay, okay, I will review it all and get back to you. Sorry for making you lose time!
I've removed the Recent Changes section from the README.md to keep the changelog separate. I've also gone through the modules/ui/ai-generator.js file to simplify comments and ensure variable naming aligns with the original code. Crucially, I've removed the window.generateWithAi = generateWithAi; line, as it was indeed an oversight on my part and unnecessary. The function is now only exposed via modules.generateWithAi, respecting the existing module structure. Take a look and let me know what you think!
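For clarity, a minimal sketch of the exposure change described above (the surrounding module shape and the `modules` registry are assumed from the comment):

```js
// Hypothetical sketch of the change described in the comment above.
// Reverted approach: leaking the function onto the global object.
// window.generateWithAi = generateWithAi;

// Current approach: register it only on the shared `modules` object,
// matching how the project's other modules expose their public functions.
modules.generateWithAi = generateWithAi;
```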
Description
This pull request introduces Ollama integration as a new AI provider for the text generation feature within the Fantasy Map Generator. Users can now leverage locally running Ollama models (e.g., Llama 3, Mistral) to generate descriptive text for their map notes.
Motivation and Context:
The primary motivation was to offer users more flexibility and control over the AI models used for text generation, particularly by enabling the use of local models which can be beneficial for privacy, cost, and offline access. This also serves as an alternative to cloud-based AI providers.
NOTE: this will only work for people who are running the generator locally and have Ollama running on the same machine.
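For anyone unsure whether their local server is up, a quick reachability check could look like this (a sketch; /api/tags is Ollama's endpoint for listing installed models, and the helper name is illustrative):

```js
// Sketch: check that a local Ollama server is reachable before generating.
// GET /api/tags returns the list of locally installed models.
async function isOllamaRunning() {
  try {
    const response = await fetch("http://localhost:11434/api/tags");
    return response.ok;
  } catch {
    return false; // server not running or port unreachable
  }
}
```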
Summary of Changes:
Ollama Provider Implementation (modules/ui/ai-generator.js): Ollama was added as a new provider (in the PROVIDERS and MODELS constants), targeting the local endpoint http://localhost:11434/api/generate.
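As an illustration, the constant additions could look roughly like this (a sketch only; the actual shape of PROVIDERS and MODELS in ai-generator.js may differ, and the field names here are assumptions):

```js
// Sketch only: field names and the exact shape are illustrative assumptions.
const PROVIDERS = {
  // ...existing cloud providers...
  ollama: {
    label: "Ollama (enter model in key field)",
    endpoint: "http://localhost:11434/api/generate",
    keyRequired: false // no API key; the key field carries the model name instead
  }
};

const MODELS = {
  // ...existing provider model lists...
  // Ollama models are typed by the user (e.g. "llama3", "mistral"),
  // so no fixed list is registered here.
};
```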
A new function generateWithOllama was created to construct and send the request to the Ollama API, including the model name, prompt, system message, and temperature.
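A minimal sketch of such a request (the body fields match Ollama's documented /api/generate parameters; the function signature and the onContent callback are assumptions):

```js
// Sketch of a generateWithOllama-style request; the signature is assumed.
async function generateWithOllama({model, prompt, system, temperature, onContent}) {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({
      model, // e.g. "llama3" or "mistral", taken from the key field
      prompt,
      system,
      options: {temperature},
      stream: true // Ollama streams newline-delimited JSON objects
    })
  });
  if (!response.ok) throw new Error(`Ollama request failed: ${response.status}`);
  await handleStream(response, onContent); // parsing sketched below
}
```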
The handleStream function was updated to correctly parse the newline-delimited JSON objects streamed by the Ollama API.
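Since each network chunk may contain several JSON lines (or end mid-line), the parser has to buffer partial lines. A sketch, assuming an onContent callback that appends text to the result area:

```js
// Sketch: parse Ollama's newline-delimited JSON stream.
// Each complete line is an object like {"response": "text chunk", "done": false}.
async function handleStream(response, onContent) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";

  while (true) {
    const {done, value} = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, {stream: true});

    const lines = buffered.split("\n");
    buffered = lines.pop(); // keep a possibly incomplete trailing line
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.response) onContent(chunk.response); // append to the result area
      if (chunk.done) return; // final object marks the end of the stream
    }
  }
}
```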
AI Generator Dialog Enhancements (modules/ui/ai-generator.js): The update of the dialog's elements (updateDialogElements) and the setup of its internal event listeners (for the help button and the model selection dropdown) are now performed within the jQuery dialog's open event. This ensures that the DOM elements are available and ready before any manipulation attempts.
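A sketch of that pattern, assuming a jQuery UI dialog (element ids and handler names here are illustrative, not the project's actual ones):

```js
// Sketch: defer element updates and listener wiring to the dialog's open event,
// so the DOM is guaranteed to exist before it is touched. Ids are illustrative.
$("#aiGenerator").dialog({
  title: "AI Text Generator",
  open: function () {
    updateDialogElements(); // refresh provider/model fields
    document.getElementById("aiGeneratorHelp")?.addEventListener("click", showHelp);
    document.getElementById("aiGeneratorModel")?.addEventListener("change", onModelChange);
  }
});
```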
Notes Editor Integration (modules/ui/notes-editor.js): The openAiGenerator function, triggered by the "generate text for notes" button, was verified to correctly call the main generateWithAi function, ensuring the dialog opens as expected.
Prompt Adjustment: The prompt was adjusted so the generated text is returned as plain paragraphs (only <p> tags, no headings, no markdown). (Note: this change was present in the development process; the user may have reverted this specific prompt modification in their local version. The core functionality for Ollama integration remains.)
How it Works:
When a user selects "Ollama (enter model in key field)" from the AI generator dropdown:
The generateWithOllama function sends a POST request to http://localhost:11434/api/generate with the specified model, prompt, and other parameters.
The handleStream function processes the streamed response, extracting the content from each JSON object and appending it to the result text area in real time.
This integration allows for a seamless experience using local LLMs for content generation directly within the Fantasy Map Generator.