Ollama integration as a new AI provider #1208

Open · wants to merge 9 commits into master

Conversation


kroryan commented May 18, 2025

Description

This pull request introduces Ollama integration as a new AI provider for the text generation feature within the Fantasy Map Generator. Users can now leverage locally running Ollama models (e.g., Llama 3, Mistral) to generate descriptive text for their map notes.

Motivation and Context:

The primary motivation was to offer users more flexibility and control over the AI models used for text generation, particularly by enabling the use of local models which can be beneficial for privacy, cost, and offline access. This also serves as an alternative to cloud-based AI providers.

NOTE: this will only work for users who run the app locally and have Ollama running on the same machine.

Summary of Changes:

  1. Ollama Provider Implementation (modules/ui/ai-generator.js):

    • Added "Ollama" to the list of available AI providers (PROVIDERS and MODELS constants).
    • When "Ollama" is selected:
      • The "API Key" input field is repurposed to accept the Ollama model name (e.g., "llama3"). The placeholder text dynamically updates to guide the user.
      • The system makes requests to the local Ollama API endpoint: http://localhost:11434/api/generate.
    • A new function generateWithOllama was created to construct and send the request to the Ollama API, including the model name, prompt, system message, and temperature.
    • The handleStream function was updated to correctly parse the newline-delimited JSON objects streamed by the Ollama API (see the combined sketch after the "How it Works" section below).
    • Logic for saving and loading the Ollama model name (from the "API Key" field) to/from local storage was implemented, similar to how API keys for other providers are handled.
  2. AI Generator Dialog Enhancements (modules/ui/ai-generator.js):

    • Resolved an issue where the AI generator dialog would sometimes fail to initialize correctly, leading to errors when trying to access its elements (e.g., "Cannot set properties of null").
    • The initialization of the dialog's UI elements (updateDialogElements) and the setup of its internal event listeners (for the help button and the model-selection dropdown) are now performed within the jQuery dialog's open event, ensuring the DOM elements exist before any manipulation is attempted (a sketch follows this list).
  3. Notes Editor Integration (modules/ui/notes-editor.js):

    • The openAiGenerator function, triggered by the "generate text for notes" button, was verified to correctly call the main generateWithAi function, ensuring the dialog opens as expected.
    • The prompt sent to the AI was refined to give clearer instructions on the expected HTML output format (use of <p> tags, no headings, no markdown). (Note: this change was made during development; the author may have reverted this specific prompt modification in their local version. The core Ollama integration is unaffected.)
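
For reference, a minimal sketch of the initialization fix described in point 2 above, assuming a jQuery UI dialog. updateDialogElements is the helper named in this PR; attachDialogListeners is a hypothetical stand-in for the help-button and model-dropdown wiring:

```js
// Sketch: run all element lookups inside the dialog's `open` callback, so the
// dialog DOM is guaranteed to be attached before any manipulation attempts.
// This avoids the "Cannot set properties of null" errors mentioned above.
$("#aiGenerator").dialog({
  title: "AI text generator",
  open: function () {
    updateDialogElements();  // safe: the dialog's elements exist at this point
    attachDialogListeners(); // hypothetical: help button + model dropdown handlers
  }
});
```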

How it Works:

When a user selects "Ollama (enter model in key field)" from the AI generator dropdown:

  1. The "API Key" field's placeholder changes to "Enter Ollama model name (e.g., llama3)".
  2. The user enters their desired Ollama model name (which must be pulled/available in their local Ollama instance).
  3. Upon clicking "Generate", the generateWithOllama function sends a POST request to http://localhost:11434/api/generate with the specified model, prompt, and other parameters.
  4. The Ollama API streams back the response as a series of JSON objects, each on a new line.
  5. The handleStream function processes this stream, extracting the content from each JSON object and appending it to the result text area in real time (a sketch follows this list).
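
For concreteness, here is a rough combined sketch of steps 3-5. The endpoint and the request/response shapes follow Ollama's documented /api/generate behavior; the function signature and the onContent callback are illustrative, not the exact code in this PR:

```js
// Sketch: send a generation request to a local Ollama instance and consume
// its newline-delimited JSON stream. Each line carries a `response` fragment
// until a final object arrives with `done: true`.
async function generateWithOllama({model, prompt, system, temperature, onContent}) {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({model, prompt, system, stream: true, options: {temperature}})
  });
  if (!response.ok) throw new Error(`Ollama request failed: ${response.status}`);

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const {done, value} = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, {stream: true});

    // a single network chunk may contain several JSON lines, or end mid-line
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any trailing partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.response) onContent(chunk.response); // append to the result text area
      if (chunk.done) return; // generation finished
    }
  }
}
```

A call might look like generateWithOllama({model: "llama3", prompt, system, temperature: 0.7, onContent: t => resultTextArea.value += t}); again, the names are illustrative.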

This integration allows for a seamless experience using local LLMs for content generation directly within the Fantasy Map Generator.

Type of change

  • [x] New feature (Ollama integration for AI text generation)
  • [x] Bug fix (resolved issues with AI generator dialog initialization)
  • [ ] Refactoring / style
  • [ ] Documentation update / chore
  • [ ] Other (please describe)

Versioning

  • Version is updated
  • Changed files hash is updated

netlify bot commented May 18, 2025

Deploy Preview for afmg ready!

🔨 Latest commit: 8db9c1c
🔍 Latest deploy log: https://app.netlify.com/projects/afmg/deploys/683a4a6c1a86fd00088542ab
😎 Deploy Preview: https://deploy-preview-1208--afmg.netlify.app

Azgaar (Owner) commented May 18, 2025

Hello. I'm not against Ollama, but this PR looks AI-generated. It has both good and bad parts; at the least I would expect a human to work on it before presenting it to the public. Also, I'm not sure why we should stick with Ollama and not e.g. LM Studio. It would make more sense to allow ANY local model by just letting the user populate the endpoint and other required fields. In that case I would also expect a linked tutorial on what it is and how to use it.

@Azgaar Azgaar self-requested a review May 18, 2025 14:26
kroryan (Author) commented May 18, 2025

> … and bad parts; at the least I would expect a human to work on it before presenting it to the public. Also, I'm not sure why we should stick with Ollama and not e.g. LM Studio. It would make …

Well, I checked the code and it looks all right, but I'm new to coding so I may have missed something. Ollama runs about 30% faster than LM Studio (you can check it), which is why I used Ollama. Also, the Ollama API is not like the LM Studio API, so LM Studio would need a different implementation. As explained in the pull request, the user just needs to write the model name instead of the API key and select Ollama, that's all.

Azgaar (Owner) commented May 18, 2025

> Well, I checked the code and it looks all right

I won't say so. It adds some mess that we don't need. The changelog belongs in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with window.generateWithAi = generateWithAi.

kroryan (Author) commented May 19, 2025

> > Well, I checked the code and it looks all right
>
> I won't say so. It adds some mess that we don't need. The changelog belongs in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with window.generateWithAi = generateWithAi.

Okay, okay, I will review it all and get back to you. Sorry for making you lose time!

kroryan (Author) commented May 31, 2025

> > Well, I checked the code and it looks all right
>
> I won't say so. It adds some mess that we don't need. The changelog belongs in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with window.generateWithAi = generateWithAi.

I've removed the Recent Changes section from the README.md to keep the changelog separate.

I've also gone through the modules/ui/ai-generator.js file to simplify comments and ensure variable naming aligns with the original code.

Crucially, I've removed the window.generateWithAi = generateWithAi; line, as it was indeed an oversight on my part and unnecessary. The function is now only exposed via modules.generateWithAi, respecting the existing module structure.
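
For clarity, the change amounts to the following, assuming modules is the project's shared module registry (as the comment implies):

```js
// before (accidental global): window.generateWithAi = generateWithAi;
// after (existing convention): expose the function on the shared registry only
modules.generateWithAi = generateWithAi;
```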

Take a look and let me know what you think!
