UXI-LLM

A next-generation, modular, symbolic + neural hybrid large language model framework aimed at three concrete pain points: local fine-tuning, full-language interoperability, and composable reasoning.

🔥 Key Features

  • 🧠 Hybrid AI Core: Integrates neural transformers with symbolic reasoning.
  • 🧩 True Modularity: Swap or extend any model component (tokenizer, attention, reasoning).
  • 🖥️ Fully Local: Train and run offline on your own hardware, with no cloud dependency.
  • 🔌 Plugin System: Extend capabilities with hot-swappable modules (sketched after this list).
  • 📡 API Ready: Full REST, WebSocket, and CLI interfaces.
  • 📊 Evaluation Tools: Benchmarks, logic tests, and memory diagnostics included.
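
As a rough sketch of what a hot-swappable module registry could look like (the PluginRegistry class and registration names here are illustrative, not UXI-LLM's shipped API):

# Hypothetical plugin registry sketch; not UXI-LLM's actual API.
from typing import Callable, Dict

class PluginRegistry:
    """Maps plugin names to factories so components (tokenizer, attention, reasoning) can be swapped at runtime."""

    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[], object]] = {}

    def register(self, name: str, factory: Callable[[], object]) -> None:
        self._factories[name] = factory

    def create(self, name: str) -> object:
        return self._factories[name]()

registry = PluginRegistry()
# Toy stand-in for a real tokenizer plugin.
registry.register("tokenizer.whitespace", lambda: (lambda text: text.split()))
tokenize = registry.create("tokenizer.whitespace")
print(tokenize("hello symbolic world"))  # -> ['hello', 'symbolic', 'world']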

🚀 Why UXI-LLM?

Modern LLMs are huge, closed, cloud-dependent, and rigid. UXI-LLM is the opposite:

  • Local-first, open, and customizable.
  • Built for hackers, researchers, and devs who want control.
  • Designed to evolve with the community.

📦 Tech Stack

  • Python 3.10+
  • PyTorch (for neural layers)
  • SymPy / Z3 / MiniKanren (for symbolic logic)
  • FastAPI (for API layer)
  • YAML (for modular configs; example below)
  • Docker (for CI/infra)
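
To make the YAML-driven modularity concrete, here is a minimal sketch of loading a config with PyYAML; the field names (tokenizer, attention, reasoning) are assumptions, not the project's actual schema:

# Hypothetical modular config; field names are illustrative, not the real schema.
import yaml  # PyYAML

config_text = """
model:
  tokenizer: bpe
  attention: standard
  reasoning: z3
training:
  batch_size: 8
  learning_rate: 3.0e-4
"""

config = yaml.safe_load(config_text)
print(config["model"]["reasoning"])      # -> z3
print(config["training"]["batch_size"])  # -> 8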

📂 Project Layout (Preview)

/core         - neural model code
/symbolic     - rule engine, logic DSL
/plugins      - custom extensions
/api          - FastAPI + CLI tools
/configs      - YAML config files
/tests        - unit + integration tests
/docs         - Markdown docs + architecture

🧠 Symbolic Reasoning Capabilities

UXI-LLM can:

  • Apply user-defined logic rules to guide generations
  • Mix symbolic + statistical inference at runtime (see the sketch after this list)
  • Embed reasoning modules inside transformer attention
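
A minimal sketch of the symbolic half of that mix, assuming Z3 is one of the rule engines (the specific rule and candidate-filtering flow are illustrative, not UXI-LLM's actual interface):

# Illustrative only: use Z3 to keep neural candidates that satisfy a user-defined rule.
from z3 import Int, Solver, sat

def satisfies_rule(candidate: int) -> bool:
    """User-defined rule: the answer must be a positive even integer."""
    x = Int("x")
    solver = Solver()
    solver.add(x == candidate, x > 0, x % 2 == 0)
    return solver.check() == sat

# Pretend these came from the neural sampler; keep only rule-satisfying outputs.
neural_candidates = [7, 12, -4, 20]
print([c for c in neural_candidates if satisfies_rule(c)])  # -> [12, 20]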

⚙️ Installation

# Requires Python 3.10+
git clone https://github.com/BryceWDesign/UXI-LLM.git
cd UXI-LLM
pip install -r requirements.txt
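
Since the API layer is FastAPI-based, a local server could then be started with uvicorn; the module path api.main:app below is a guess at the layout, not a documented entry point:

# Hypothetical launch script; "api.main:app" is an assumed module path.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("api.main:app", host="127.0.0.1", port=8000, reload=True)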

🛠️ Development Status

Currently under heavy development. The MVP will support:

  • Training on small local datasets
  • Interoperable plugin injection
  • Symbolic parsing + attention hooks (illustrated below)
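
The attention-hook idea can be pictured with a plain PyTorch forward hook; the layer and hook below are generic stand-ins, not UXI-LLM's actual hook API:

# Illustrative PyTorch forward hook on an attention layer; not UXI-LLM code.
import torch
import torch.nn as nn

attention = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)

def inspect_attention(module, inputs, output):
    # output is (attn_output, attn_weights); a symbolic module could read or rewrite these here.
    attn_output, attn_weights = output
    print("attention weights shape:", tuple(attn_weights.shape))

handle = attention.register_forward_hook(inspect_attention)

x = torch.randn(1, 4, 16)   # (batch, sequence, embedding)
attention(x, x, x)          # toy self-attention pass triggers the hook
handle.remove()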

📄 License

MIT — Free for all use.


Built for the open future. No permission needed. Fork, improve, and share.
