Releases: Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM
CRoM-EfficientLLM v1.0.2
[1.0.2] - 2025-09-17
🐞 Fixed
- Critical SyntaxError in `SafeCrossEncoderManager.get_status_for_response`
  - Separated a docstring and return statement that were incorrectly on the same line
  - Resolved import failures that prevented the `crom_efficientllm.cross_encoder` module from loading
- FastAPI server ImportError for `enhanced_greedy_pack`
  - Implemented lazy loading with `importlib` to fix a circular import dependency
  - Corrected the module structure so `enhanced_greedy_pack` is properly exported from `budget_packer`
  - Resolved FastAPI server startup failures
- Version consistency issues across components
  - FastAPI server now correctly displays `v1.0.2`
  - Fixed dynamic version loading from `pyproject.toml` and package metadata
  - Synchronized version references across the codebase
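The circular-import fix above can be sketched as a lazy resolver. The pattern below is illustrative; the commented module and attribute names are taken from the release notes, but the exact CRoM API is an assumption.

```python
import importlib


def lazy_import(module_name: str, attr: str):
    """Resolve `attr` from `module_name` only when first requested.

    Importing at call time (rather than at module top level) lets the
    importing module finish loading first, which breaks circular
    import dependencies.
    """
    module = importlib.import_module(module_name)
    return getattr(module, attr)


# Illustrative use matching the fix above (names assumed from the notes):
# enhanced_greedy_pack = lazy_import("crom_efficientllm.budget_packer",
#                                    "enhanced_greedy_pack")
```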
⚡ Improved
- More robust error handling and import resilience across core modules
- Refined module architecture preventing circular dependencies
- Clearer debugging output for Cross-Encoder initialization
- Dynamic version management system with proper fallback mechanisms
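A dynamic version lookup with a fallback, as described above, might look like this minimal sketch; the distribution name `crom-efficientllm` and the fallback string are assumptions, not the project's actual values.

```python
from importlib.metadata import PackageNotFoundError, version


def resolve_version(dist_name: str = "crom-efficientllm",
                    fallback: str = "0.0.0+unknown") -> str:
    """Read the installed version from package metadata.

    Falls back to a placeholder for source checkouts where the
    distribution is not installed, so callers never crash on lookup.
    """
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return fallback
```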
✨ Added
- Comprehensive integration tests (`tests/test_integration.py`)
  - End-to-end validation with 10 test cases
  - Verified version consistency across all components
  - Interoperability tests (CrossEncoder + BudgetPacker + Logger)
  - Performance testing with 1,000+ documents
  - Edge-case error handling and fallback validation
  - FastAPI server startup and endpoint checks
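A version-consistency check of the kind listed above could be as simple as the sketch below; the component names and values are illustrative, not the actual contents of `tests/test_integration.py`.

```python
def versions_consistent(reported: dict) -> bool:
    """True when every component reports the same version string."""
    return len(set(reported.values())) == 1


# Versions as they might be collected from package metadata, the FastAPI
# server, and the changelog header (values illustrative):
assert versions_consistent({"package": "1.0.2",
                            "server": "1.0.2",
                            "changelog": "1.0.2"})
```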
🧪 Testing
- All core module imports function correctly
- FastAPI server launches without errors and shows correct version
- Cross-Encoder manager handles errors gracefully
- Integration tests validate full system functionality
- Performance benchmarks confirm scalability with large datasets
CRoM v1.0.1 – Context Rot Mitigation Core Release
Release Date: 2025-09-06
Tag: v1.0.1
🚀 What's New
First functional release of the CRoM system — a context efficiency layer for Retrieval-Augmented LLM systems. This version brings token-aware injection, cross-encoder fallback, and full FastAPI orchestration.
🔧 Features
- `budget_packer.py` – Smart prompt packing under token budgets
- Cross-Encoder Fallback – Embedding-based scoring when LLMs are uncertain
- Capsule Logger – Transparent logging of injected/pruned context
- FastAPI Interface – Live endpoint access for simulation, injection, and logging
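As a rough illustration of what packing under a token budget involves, here is a minimal greedy sketch; it is not the actual `budget_packer.py` or `enhanced_greedy_pack` implementation, and the chunk format is assumed.

```python
def greedy_pack(chunks, budget):
    """Select (text, score, tokens) chunks under a token budget.

    Greedy by relevance score: walk chunks from highest to lowest score
    and keep each one that still fits within the remaining budget.
    """
    packed, used = [], 0
    for text, _score, tokens in sorted(chunks, key=lambda c: c[1], reverse=True):
        if used + tokens <= budget:
            packed.append(text)
            used += tokens
    return packed


# With a 70-token budget, the top-scoring chunk and the small cheap
# chunk fit, while the mid-scoring 60-token chunk is pruned:
# greedy_pack([("a", 0.9, 50), ("b", 0.8, 60), ("c", 0.5, 10)], 70)
```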
⚠️ Known Issues
- Persistent vector cache (FAISS) not yet implemented
- Evaluation tests are internal only
- Single-thread fallback slows batch performance
🔭 Roadmap
- Vector memory cache
- Context summarizer & history pruning
- Performance benchmarking suite
🔌 Install & Run
git clone https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM.git
cd CRoM-Context-Rot-Mitigation--EfficientLLM
pip install -r requirements.txt
uvicorn app.main:app --reload