suncloudsmoon edited this page Feb 4, 2025 · 3 revisions

Welcome to the LLM Helper Functions Wiki!

This repository provides a set of utilities for interacting with large language model (LLM) APIs, including support for both OpenAI and Ollama. The key functionalities include:

  • Context Window Management:
    Determine the number of tokens available in a model's context window, which is useful for sizing prompts and deciding how to chunk text.

  • Text Chunking and Token Estimation:
    Split long text into chunks based on estimated token counts and convert between token counts and character counts.

  • Ollama API Integration:
    Retrieve model details from an Ollama API endpoint and determine whether a collection of models is served by Ollama.
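To make the first two functionalities concrete, here is a minimal sketch of heuristic token estimation and character-based chunking. The helper names and the 4-characters-per-token ratio are illustrative assumptions, not the repository's actual API; see the ContextWindowHelper page for the real implementation.

```python
# Illustrative sketch only: the function names and the 4-chars-per-token
# heuristic are assumptions, not this repository's actual API.

CHARS_PER_TOKEN = 4  # rough average for English text; tune per model

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a string from its character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def tokens_to_chars(tokens: int) -> int:
    """Convert a token budget into an approximate character budget."""
    return tokens * CHARS_PER_TOKEN

def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split text into chunks that each fit within the given token budget."""
    max_chars = tokens_to_chars(max_tokens)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```

A production version would use a real tokenizer (model-specific) rather than a character heuristic, but the heuristic keeps the helpers dependency-free.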

Wiki Pages

  • ContextWindowHelper
    Documentation on how to obtain context window sizes, convert between tokens and characters, and chunk text.

  • Ollama API Integration
    Details on how to interact with the Ollama API for retrieving model details and detecting model providers.
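As a rough illustration of the provider-detection idea, the sketch below queries Ollama's model-listing endpoint (`/api/tags`) and checks whether a collection of model names is served there. The function names and the overall shape are assumptions for illustration, not the repository's actual API.

```python
# Illustrative sketch: /api/tags is Ollama's model-listing route, but the
# function names here are assumptions, not this repository's actual API.
import json
import urllib.request

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the names of the models served by an Ollama instance."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def served_by_ollama(model_names,
                     base_url: str = "http://localhost:11434") -> bool:
    """Check whether every model in the collection is served by Ollama."""
    try:
        available = set(list_ollama_models(base_url))
    except OSError:
        return False  # Ollama is not reachable at this endpoint
    return all(name in available for name in model_names)
```

Treating an unreachable endpoint as "not served by Ollama" lets callers fall back to another provider without special-case error handling.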

Getting Started

  1. Clone the Repository:
    Get a local copy of the repository to explore the source code and run examples.

  2. Review the Wiki:
    Use the pages on this wiki to understand the purpose of each component and how to integrate them into your projects.

  3. Try the Examples:
    Each wiki page includes code examples to help you integrate these helper functions into your LLM applications quickly.

Contributing

Contributions are welcome! If you have improvements or additional features to suggest, please open an issue or submit a pull request.

License

This project is licensed under the MIT License.

Happy coding!
