
Memori AI v1.0.0


[1.0.0] - 2025-08-03

🎉 Production-Ready Memory Layer for AI Agents

A complete, professional-grade memory system with modular architecture, comprehensive error handling, and configuration management.

✨ Core Features

  • Universal LLM Integration: Works with ANY LLM library (LiteLLM, OpenAI, Anthropic)
  • Pydantic-based Intelligence: Structured memory processing with validation
  • Automatic Context Injection: Relevant memories automatically added to conversations (sketched after this list)
  • Multiple Memory Types: Short-term, long-term, rules, and entity relationships
  • Advanced Search: Full-text search with semantic ranking
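
The idea behind automatic context injection, reduced to a sketch: before a prompt reaches the model, relevant memories are retrieved and prepended as a system message. Everything below (function name, signature, formatting) is illustrative, not the library's internal API.

from typing import Callable

def inject_context(messages: list[dict], search_memories: Callable[[str], list[str]]) -> list[dict]:
    # Gather the user's text, look up related memories, and prepend them
    # as an extra system message so the model sees them as context.
    user_text = " ".join(m["content"] for m in messages if m["role"] == "user")
    memories = search_memories(user_text)
    if not memories:
        return messages
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": context}, *messages]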

🏗️ Architecture

  • Modular Design: Separated concerns with clear component boundaries
  • SQL Query Centralization: Dedicated query modules for maintainability
  • Configuration Management: Pydantic-based settings with auto-loading
  • Comprehensive Error Handling: Context-aware exceptions with sanitized logging (see the sketch after this list)
  • Production Logging: Structured logging with multiple output targets
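
"Context-aware exceptions with sanitized logging" means errors carry structured context while secrets never reach the logs. A minimal sketch of that pattern; the class and key names are illustrative, not necessarily the real ones.

import logging

SENSITIVE_KEYS = {"openai_api_key", "password", "token"}

class MemoriError(Exception):
    # Illustrative: an exception that carries structured context alongside its message.
    def __init__(self, message: str, context: dict | None = None):
        super().__init__(message)
        self.context = context or {}

    def sanitized_context(self) -> dict:
        # Redact anything that looks like a secret before it is logged.
        return {k: ("***" if k in SENSITIVE_KEYS else v) for k, v in self.context.items()}

try:
    raise MemoriError("database connection failed",
                      {"db": "sqlite:///my_memory.db", "openai_api_key": "sk-..."})
except MemoriError as exc:
    logging.error("%s | context=%s", exc, exc.sanitized_context())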

🗄️ Database Support

  • Multi-Database: SQLite, PostgreSQL, MySQL connectors (connection examples after this list)
  • Query Optimization: Indexed searches and connection pooling
  • Schema Management: Version-controlled migrations and templates
  • Full-Text Search: FTS5 support for advanced text search
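
The connectors share the URL-style connection string already used in the Quick Start below; assuming the same convention extends to PostgreSQL and MySQL, the hosts, users, and database names here are placeholders.

from memoriai import Memori

# SQLite - file-based, good for local development
memori = Memori(database_connect="sqlite:///my_memory.db")

# PostgreSQL (placeholder credentials)
memori = Memori(database_connect="postgresql://user:password@localhost:5432/memori")

# MySQL (placeholder credentials)
memori = Memori(database_connect="mysql://user:password@localhost:3306/memori")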

🔧 Developer Experience

  • Type Safety: Full Pydantic validation throughout
  • Simple API: One-line enablement with memori.enable()
  • Flexible Configuration: File, environment, or programmatic setup (see the sketch after this list)
  • Rich Examples: Basic usage, personal assistant, advanced config
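
"File, environment, or programmatic" can be pictured as three routes into one Pydantic settings object. The sketch below uses pydantic-settings for the environment route; the MemoriSettings class, its fields, and the MEMORI_ prefix are assumptions for illustration, not the documented interface.

from pydantic_settings import BaseSettings, SettingsConfigDict

class MemoriSettings(BaseSettings):
    # Illustrative settings model; field and env-var names are assumptions.
    model_config = SettingsConfigDict(env_prefix="MEMORI_")

    database_connect: str = "sqlite:///my_memory.db"
    conscious_ingest: bool = True
    openai_api_key: str | None = None

# 1. Programmatic, as in the Quick Start
settings = MemoriSettings(database_connect="postgresql://user:password@localhost/memori")

# 2. Environment: export MEMORI_DATABASE_CONNECT=... and let auto-loading pick it up
settings = MemoriSettings()

# 3. File: load a JSON/YAML config yourself and pass the values through
# settings = MemoriSettings(**json.load(open("memori.json")))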

📊 Memory Processing

  • Entity Extraction: People, technologies, projects, skills (a record sketch follows this list)
  • Smart Categorization: Facts, preferences, skills, rules, context
  • Importance Scoring: Multi-dimensional relevance assessment
  • Relationship Mapping: Entity interconnections and memory graphs
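
The output of that pipeline can be pictured as a single validated record; every field name and value below is illustrative of the shape, not the library's actual schema.

from typing import Literal
from pydantic import BaseModel, Field

class ProcessedMemory(BaseModel):
    # Illustrative shape of one processed memory; not the real schema.
    content: str
    category: Literal["fact", "preference", "skill", "rule", "context"]
    entities: list[str] = Field(default_factory=list)        # people, technologies, projects, skills
    importance: float = Field(ge=0.0, le=1.0)                # relevance collapsed to a single score
    related_memory_ids: list[int] = Field(default_factory=list)  # relationship mapping

memory = ProcessedMemory(
    content="User prefers FastAPI for backend services",
    category="preference",
    entities=["FastAPI"],
    importance=0.8,
)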

🔌 Integrations

  • LiteLLM Native: Uses LiteLLM's official callback system (recommended)
  • OpenAI/Anthropic: Clean wrapper classes for direct usage
  • Tool Support: Memory search tools for function calling (see the example after this list)
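
A memory search tool can be exposed to the model through the standard function-calling tools format; the tool name and parameters below are illustrative, not Memori's documented interface.

from litellm import completion

# Illustrative tool definition in the standard function-calling schema
memory_search_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",  # hypothetical name
        "description": "Search stored memories for facts relevant to the current request.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to look for"},
            },
            "required": ["query"],
        },
    },
}

response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Which database did I say I was using?"}],
    tools=[memory_search_tool],
)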

🛡️ Security & Reliability

  • Input Sanitization: Protection against injection attacks
  • Error Context: Detailed error information without exposing secrets
  • Graceful Degradation: Continues operation when components fail (pattern sketched after this list)
  • Resource Management: Automatic cleanup and connection pooling
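
Graceful degradation here means a failure in the memory layer should never break the underlying LLM call. As a pattern, with all names illustrative:

import logging

def record_conversation_safely(record_fn, messages, response) -> None:
    # Illustrative: recording failures are logged and swallowed,
    # so the caller still gets the LLM response.
    try:
        record_fn(messages, response)
    except Exception:
        logging.exception("memory recording failed; continuing without it")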

📁 Project Structure

memoriai/
β”œβ”€β”€ core/              # Main memory interface and database
β”œβ”€β”€ config/            # Configuration management system
β”œβ”€β”€ agents/            # Pydantic-based memory processing
β”œβ”€β”€ database/          # Multi-database support and queries
β”œβ”€β”€ integrations/      # LLM provider integrations
β”œβ”€β”€ utils/             # Helpers, validation, logging
└── tools/             # Memory search and retrieval tools

🎯 Philosophy Alignment

  • A second memory for LLM work: Never repeat context again
  • Flexible database connections: Production-ready adapters
  • Simple, reliable architecture: Just works out of the box
  • Conscious context injection: Intelligent memory retrieval

⚡ Quick Start

from memoriai import Memori

memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,
    openai_api_key="sk-..."
)
memori.enable()  # Start recording all LLM conversations

# Use any LLM library - context automatically injected!
from litellm import completion
response = completion(model="gpt-4", messages=[...])

📚 Documentation

  • Clean, focused README aligned with project vision
  • Essential examples without complexity bloat
  • Configuration guides for development and production
  • Architecture documentation for contributors

🗂️ Archive Management

  • Moved outdated files to archive/ folder
  • Updated .gitignore to exclude archive from version control
  • Preserved development history while cleaning main structure

💡 Breaking Changes from Pre-1.0

  • Moved from enum-driven to Pydantic-based processing
  • Simplified API surface with focus on enable()/disable() (see the snippet after this list)
  • Restructured package layout for better modularity
  • Enhanced configuration system replaces simple parameters
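
A minimal sketch of the 1.0 lifecycle around those two calls, reusing the constructor shown in the Quick Start; the comments describe the behavior stated in these notes, not verified internals.

from memoriai import Memori

memori = Memori(database_connect="sqlite:///my_memory.db", conscious_ingest=True)
memori.enable()    # start recording conversations and injecting context

# ... run LLM calls through LiteLLM, OpenAI, or Anthropic ...

memori.disable()   # stop recording when memory should no longer be involved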