# Memori AI v1.0.0

## [1.0.0] - 2025-08-03

### Production-Ready Memory Layer for AI Agents
Complete professional-grade memory system with modular architecture, comprehensive error handling, and configuration management.
### Core Features
- Universal LLM Integration: Works with ANY LLM library (LiteLLM, OpenAI, Anthropic)
- Pydantic-based Intelligence: Structured memory processing with validation
- Automatic Context Injection: Relevant memories automatically added to conversations
- Multiple Memory Types: Short-term, long-term, rules, and entity relationships
- Advanced Search: Full-text search with semantic ranking
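Automatic context injection means retrieved memories are placed in front of the model before each call. A minimal sketch of the idea, assuming a simple prepend-as-system-message strategy (the function name and message shape here are illustrative, not the library's API):

```python
# Hypothetical sketch: relevant memories are prepended as a system
# message so the model sees prior context without the user repeating it.
def inject_context(messages, relevant_memories):
    """Return messages with retrieved memories prepended as context."""
    if not relevant_memories:
        return messages
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in relevant_memories)
    return [{"role": "system", "content": context}] + messages

msgs = [{"role": "user", "content": "Continue the deployment plan"}]
augmented = inject_context(msgs, ["User deploys with Docker", "Prefers PostgreSQL"])
print(augmented[0]["role"])  # system
```

The real system decides *which* memories are relevant (see Memory Processing below); this sketch only shows where they land in the conversation.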
### Architecture
- Modular Design: Separated concerns with clear component boundaries
- SQL Query Centralization: Dedicated query modules for maintainability
- Configuration Management: Pydantic-based settings with auto-loading
- Comprehensive Error Handling: Context-aware exceptions with sanitized logging
- Production Logging: Structured logging with multiple output targets
### Database Support
- Multi-Database: SQLite, PostgreSQL, MySQL connectors
- Query Optimization: Indexed searches and connection pooling
- Schema Management: Version-controlled migrations and templates
- Full-Text Search: FTS5 support for advanced text search
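FTS5 is SQLite's built-in full-text index, and most Python builds ship it with the standard `sqlite3` module (availability depends on how SQLite was compiled). A self-contained sketch of the kind of ranked search the memory layer relies on; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: content is tokenized and indexed automatically.
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories(content) VALUES (?)",
    [("User prefers PostgreSQL in production",),
     ("Project uses Docker for deployment",)],
)
# MATCH runs a full-text query; ORDER BY rank sorts by BM25 relevance.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("postgresql",),
).fetchall()
print(rows[0][0])
```

Note that FTS5's default tokenizer is case-insensitive, so `postgresql` matches `PostgreSQL`.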
### Developer Experience
- Type Safety: Full Pydantic validation throughout
- Simple API: One-line enablement with `memori.enable()`
- Flexible Configuration: File, environment, or programmatic setup
- Rich Examples: Basic usage, personal assistant, advanced config
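Layered configuration usually means an explicit argument beats an environment variable, which beats a default. A stdlib sketch of that precedence; the `MEMORI_DATABASE` variable name is an assumption for illustration, not a documented key:

```python
import os

def resolve_database_url(explicit=None):
    """Explicit argument > environment variable > built-in default."""
    # MEMORI_DATABASE is a hypothetical env var name for this sketch.
    return explicit or os.environ.get("MEMORI_DATABASE") or "sqlite:///memori.db"

os.environ.pop("MEMORI_DATABASE", None)
print(resolve_database_url())                       # sqlite:///memori.db
print(resolve_database_url("postgresql://db/mem"))  # postgresql://db/mem
```

The actual settings system is Pydantic-based and also reads config files; this only illustrates the precedence order.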
### Memory Processing
- Entity Extraction: People, technologies, projects, skills
- Smart Categorization: Facts, preferences, skills, rules, context
- Importance Scoring: Multi-dimensional relevance assessment
- Relationship Mapping: Entity interconnections and memory graphs
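Each processed conversation is distilled into a validated, structured record. A stdlib-only sketch using dataclasses to illustrate the shape such a record might take (the real system uses Pydantic models; these field names are illustrative, not the library's):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessedMemory:
    """Illustrative record: one categorized, scored memory."""
    category: str                  # fact | preference | skill | rule | context
    summary: str
    entities: List[str] = field(default_factory=list)
    importance: float = 0.5        # relevance score in [0.0, 1.0]

mem = ProcessedMemory(
    category="preference",
    summary="User prefers PostgreSQL for production workloads",
    entities=["PostgreSQL"],
    importance=0.8,
)
print(mem.category, mem.importance)
```

With Pydantic instead of dataclasses, the same shape gains runtime validation (e.g. rejecting an importance of 1.5 or an unknown category).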
### Integrations
- LiteLLM Native: Uses official callback system (recommended)
- OpenAI/Anthropic: Clean wrapper classes for direct usage
- Tool Support: Memory search tools for function calling
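Exposing memory search to a model via function calling means registering a JSON-schema tool definition. A hedged sketch of what such a definition could look like in the common OpenAI-style tool format; the tool name and parameters are illustrative, not Memori's actual schema:

```python
# Hypothetical memory-search tool definition in the OpenAI-style
# function-calling format; names and fields are illustrative.
memory_search_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",
        "description": "Search stored memories for context relevant to the query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
                "limit": {"type": "integer", "description": "Max results", "default": 5},
            },
            "required": ["query"],
        },
    },
}
print(memory_search_tool["function"]["name"])
```

The model can then emit a `search_memory` call, the application runs the actual search, and the results flow back as a tool message.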
### Security & Reliability
- Input Sanitization: Protection against injection attacks
- Error Context: Detailed error information without exposing secrets
- Graceful Degradation: Continues operation when components fail
- Resource Management: Automatic cleanup and connection pooling
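The standard defense behind "protection against injection attacks" is parameterized queries: user input travels as a bound value, never as SQL text. A self-contained sketch using stdlib `sqlite3` (table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (content TEXT)")

# A classic injection payload: harmless when bound as a parameter,
# because the driver never interpolates it into the SQL string.
user_input = "'; DROP TABLE memories; --"
conn.execute("INSERT INTO memories (content) VALUES (?)", (user_input,))

count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(count)  # 1 -- table intact, payload stored as inert text
```

The same principle applies across the PostgreSQL and MySQL connectors: placeholders, not string concatenation.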
### Project Structure
```
memoriai/
├── core/          # Main memory interface and database
├── config/        # Configuration management system
├── agents/        # Pydantic-based memory processing
├── database/      # Multi-database support and queries
├── integrations/  # LLM provider integrations
├── utils/         # Helpers, validation, logging
└── tools/         # Memory search and retrieval tools
```
### Philosophy Alignment
- Second-memory for LLM work: Never repeat context again
- Flexible database connections: Production-ready adapters
- Simple, reliable architecture: Just works out of the box
- Conscious context injection: Intelligent memory retrieval
### Quick Start
```python
from memoriai import Memori

memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,
    openai_api_key="sk-...",
)
memori.enable()  # Start recording all LLM conversations

# Use any LLM library - context automatically injected!
from litellm import completion

response = completion(model="gpt-4", messages=[...])
```
### Documentation
- Clean, focused README aligned with project vision
- Essential examples without complexity bloat
- Configuration guides for development and production
- Architecture documentation for contributors
### Archive Management
- Moved outdated files to the `archive/` folder
- Updated `.gitignore` to exclude the archive from version control
- Preserved development history while cleaning the main structure
### Breaking Changes from Pre-1.0
- Moved from enum-driven to Pydantic-based processing
- Simplified API surface focused on `enable()`/`disable()`
- Restructured package layout for better modularity
- Enhanced configuration system replaces simple parameters