SceneIntruderMCP is a revolutionary AI-driven interactive storytelling platform that combines traditional text analysis with modern AI technology, providing users with an unprecedented immersive role-playing and story creation experience.
- Multi-dimensional Parsing: Automatically extract scenes, characters, items, and plot elements
- Bilingual Support: Intelligent recognition and processing of both Chinese and English content
- Deep Analysis: Professional-grade text type identification based on literary theory
- Emotional Intelligence: 8-dimensional emotional analysis (emotion, action, expression, tone, etc.)
- Character Consistency: Maintain long-term memory and personality traits
- Dynamic Interaction: Intelligently triggered automatic dialogues between characters
- Character Memory: Persistent knowledge base that characters remember across interactions
- Relationship Mapping: Dynamic relationship tracking between characters
- Personality Modeling: Comprehensive personality profiles affecting dialogue and behavior
- Non-linear Narrative: Support complex story branching and timeline management
- Intelligent Choice Generation: AI dynamically creates 4 types of choices based on context (Action/Dialogue/Investigation/Strategy)
- Story Rewind: Complete timeline rollback and state management
- Branch Visualization: Visual representation of story branches and pathways
- Progressive Storytelling: Continuous story development across sessions
- Context Preservation: Maintain story context when returning to scenes
- Timeline Management: Sophisticated handling of non-linear story timelines
- Inventory System: Rich object management with interactive items
- Skill System: User-defined abilities affecting story outcomes
- Character Relationships: Track evolving relationships between characters
- World Building: Dynamic scene and location management
- Quest Tracking: Mission and objective management system
- Achievement System: Recognition for story exploration and interaction milestones
- User Customization: Custom items and skills system
- Creativity Control: 3-level creativity control (Strict/Balanced/Expansive)
- Progress Tracking: Real-time story completion and statistical analysis
- Custom Items: Users can define unique items with customizable properties
- Custom Skills: Users can create and manage skills with different effects and levels
- Property System: Items can have multiple properties (attack, defense, magic, durability, etc.)
- Rarity Levels: Items support different rarity tiers: common, rare, epic, legendary
- Skill Trees: Hierarchical skill system with prerequisites and requirements
- Character Interaction: Items and skills can affect character interactions and story outcomes
- API Integration: Full CRUD operations available via API for managing user-defined content
- OpenAI GPT: GPT-4.1/4o/5-chat series
- Anthropic Claude: Claude-3.5/4.5 series
- DeepSeek: DeepSeek-chat series
- Google Gemini: Gemini-2.5/3.0 series
- Grok: xAI's Grok-4/3 series
- Mistral: Mistral-large/small series
- Qwen: Alibaba Cloud Qwen3 series
- GitHub Models: Via GitHub Models platform (GPT-4o/4.1, etc.)
- OpenRouter: Open source model aggregation platform with free tiers
- GLM: Zhipu AI's GLM-4/4-plus series
> ℹ️ **Tip**: The `default_model` value for the active provider is now respected across the backend. Any AI call that doesn't explicitly pass a model name will automatically fall back to this configuration, so you can centrally switch models without touching code.
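If you prefer to switch models through the API instead of the settings page, the `PUT /api/llm/config` endpoint listed later in this README can be used. The request body below is a plausible shape, not a confirmed schema:

```javascript
// Switch the active provider's default model centrally.
// NOTE: the request body fields (provider, default_model) are illustrative assumptions.
const response = await fetch('/api/llm/config', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    provider: 'openrouter',
    default_model: 'mistralai/devstral-2512:free'
  })
});
console.log('LLM config updated:', await response.json());
```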
```
SceneIntruderMCP/
├── cmd/
│   └── server/              # Application entry point
│       └── main.go
├── internal/
│   ├── api/                 # HTTP API routes and handlers
│   ├── app/                 # Application core logic
│   ├── config/              # Configuration management
│   ├── di/                  # Dependency injection
│   ├── llm/                 # LLM provider abstraction layer
│   │   └── providers/       # Various LLM provider implementations
│   ├── models/              # Data model definitions
│   ├── services/            # Business logic services
│   └── storage/             # Storage abstraction layer
├── frontend/
│   └── dist/                # Built frontend assets
├── data/                    # Data storage directory
│   ├── scenes/              # Scene data
│   ├── stories/             # Story data
│   ├── users/               # User data
│   └── exports/             # Export files
└── logs/                    # Application logs
```
- Backend: Go 1.21+, Gin Web Framework
- AI Integration: Multi-LLM provider support with unified abstraction interface
- Storage: File system-based JSON storage with database extension support
- Frontend: React, responsive design
- Deployment: Containerization support, cloud-native architecture
- Scene deletion cleanup – `DELETE /api/scenes/{id}` now synchronously removes the matching `data/stories/<scene_id>` timeline, ensuring no orphaned story files remain after a scene is removed.
- GitHub Models fallback fixes – Provider bootstrap now respects the configured `default_model` even when only GitHub Models credentials are supplied, eliminating the previous “connection failed” errors.
- Operational readiness upgrades – Documented the persistent encryption key (`data/.encryption_key`), refreshed the API/deployment guides, and added a pre-release data cleanup checklist so release artifacts stay tidy.
Before packaging a new build or resetting a shared demo environment, wipe transient data while preserving configuration secrets.
- `data/scenes/*` – per-scene caches, characters, and context files
- `data/stories/*` – story timelines (v1.2.0+ deletes these automatically alongside scenes)
- `data/items/*` – scene item caches
- `data/exports/*` – exported archives and interaction summaries
- `data/stats/usage_stats.json` – accumulated telemetry
- `temp/*` – temporary uploads and scratch files
- `logs/*.log` – runtime logs (archive first if you need them)
Keep the following intact:

- `data/config.json` – persisted runtime settings and encrypted API keys
- `data/.encryption_key` – AES-GCM key required to decrypt stored LLM credentials; deleting it forces you to re-enter every API key
- `data/users/*.json` – built-in accounts such as `admin.json` and `console_user.json`
> ℹ️ Scenes deleted prior to v1.2.0 may have left residual `data/stories/scene_*` folders. You can safely remove those directories manually to reclaim disk space.
- Go 1.21 or higher
- At least one LLM API key (OpenAI/Claude/DeepSeek, etc.)
- 2GB+ available memory
- Operating System: Windows/Linux/macOS
- Clone the Project

  ```bash
  git clone https://github.com/Corphon/SceneIntruderMCP.git
  cd SceneIntruderMCP
  ```

- Install Dependencies

  ```bash
  go mod download
  ```

- Configure Environment

  On first start, the server initializes a configuration file at `data/config.json` (or `${DATA_DIR}/config.json`).
  You can configure the LLM provider/API key either:

  - via the Settings UI: http://localhost:8080/settings, or
  - by editing `data/config.json` directly.

- Start Service

  ```bash
  # Development mode
  go run cmd/server/main.go

  # Production mode
  go build -o sceneintruder cmd/server/main.go
  ./sceneintruder
  ```

- Access Application

  Open browser: http://localhost:8080
```json
{
  "port": "8080",
  "data_dir": "data",
  "static_dir": "frontend\\dist\\assets",
  "templates_dir": "frontend\\dist",
  "log_dir": "logs",
  "debug_mode": true,
  "llm_provider": "openrouter",
  "llm_config": {
    "default_model": "mistralai/devstral-2512:free",
    "base_url": "",
    "api_key": ""
  },
  "encrypted_llm_config": {
    "api_key": "<encrypted_api_key_here>"
  }
}
```

- When `CONFIG_ENCRYPTION_KEY` isn't provided, the backend generates a random 32-byte key and stores it in `data/.encryption_key` so encrypted API keys keep working between restarts.
- The file must stay alongside `data/config.json`; deleting it invalidates every encrypted credential until you re-enter them through the settings UI.
- To rotate the key intentionally, delete the file, restart the server, and immediately update the API keys; new data will be re-encrypted with the regenerated key.
- Keep `.encryption_key` out of version control and deployment artefacts that are meant to be shared publicly.
- Upload Text: Support various text formats including novels, scripts, stories
- AI Analysis: System automatically extracts characters, scenes, items, and other elements
- Scene Generation: Create interactive scene environments
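This workflow maps onto the analysis endpoints in the API reference (`POST /api/upload`, `POST /api/analyze`, `GET /api/progress/{taskID}`). The sketch below is illustrative; the multipart field name and the task/progress field names are assumptions, not the exact schema:

```javascript
// Sketch: upload text, request analysis, and poll progress.
// NOTE: the multipart field name ('file') and the payload/response
// field names (text, task_id, percent) are illustrative assumptions.
const rawText = 'Chapter 1. The rain had not stopped for three days...';

const form = new FormData();
form.append('file', new Blob([rawText], { type: 'text/plain' }), 'story.txt');
const upload = await (await fetch('/api/upload', { method: 'POST', body: form })).json();
console.log('Uploaded:', upload);

const analysis = await (await fetch('/api/analyze', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: rawText })
})).json();

// Poll the analysis task until it finishes
let progress;
do {
  await new Promise(resolve => setTimeout(resolve, 1000));
  progress = await (await fetch(`/api/progress/${analysis.task_id}`)).json();
  console.log('Analysis progress:', progress);
} while (progress.percent < 100);
```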
- Select Character: Choose interaction targets from analyzed characters
- Natural Dialogue: Engage in natural language conversations with AI characters
- Emotional Feedback: Observe character emotions, actions, and expression changes
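A minimal sketch of an emotion-aware chat call via `POST /api/chat/emotion`: the request mirrors the basic chat example later in this README, while the response fields read at the end are assumptions:

```javascript
// Chat with emotion analysis.
// NOTE: the response field names (message, emotion, action, expression) are illustrative assumptions.
const emotionChat = await fetch('/api/chat/emotion', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    scene_id: 'scene123',
    character_id: 'char456',
    message: 'You seem worried. What happened?'
  })
});
const reply = await emotionChat.json();
console.log(reply.message, reply.emotion, reply.action, reply.expression);
```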
- Dynamic Choices: AI generates 4 types of choices based on current situation
- Story Development: Advance non-linear story plots based on choices
- Branch Management: Support story rewind and multi-branch exploration
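For example, exploring an alternate branch usually means fetching the branch tree and rewinding to an earlier node via `GET /api/scenes/{id}/story/branches` and `POST /api/scenes/{id}/story/rewind` (both listed in the API reference); the rewind request body used here is an assumed shape:

```javascript
// Inspect available branches, then rewind to an earlier story node.
// NOTE: the rewind request body (node_id) is an illustrative assumption.
const branches = await (await fetch('/api/scenes/scene123/story/branches')).json();
console.log('Available branches:', branches);

const rewound = await fetch('/api/scenes/scene123/story/rewind', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ node_id: 'story_node_1' })
});
console.log('Rewind result:', await rewound.json());
```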
- Interaction Records: Export complete dialogue history
- Story Documents: Generate structured story documents
- Statistical Analysis: Character interaction and story progress statistics
- Multiple Formats: Export data in JSON, Markdown, HTML, TXT, CSV, and PDF formats
- Comprehensive Scene Data: Export full scene information including characters, locations, items, themes, atmosphere, and settings
- Character Interactions: Export detailed interaction records between characters with timestamps and emotional context
- Story Branches: Export complete story trees with all possible branches, choices, and outcomes
- Conversation History: Export all character conversations with metadata
- Progress Statistics: Export story progress metrics, interaction statistics, and timeline data
- User Preferences: Export user customization settings, items, and skills
- Batch Export: Support for exporting multiple scenes or stories simultaneously
- Scheduled Exports: Option for automated periodic exports
- Filtered Exports: Export based on time range, character participation, or interaction type
- Rich Metadata: Include timestamps, version information, and export configuration
- Export Status Tracking: Monitor ongoing export tasks with progress indicators
- Export History: Maintain history of all performed exports
- File Organization: Automatic organization of exported files in structured directories
- Export Quality Assurance: Validation of exported data integrity
- Performance Optimization: Efficient export processing for large datasets
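The export endpoints listed in the API reference below accept a format selector. A minimal sketch, assuming the `?format=` query parameter shown in the story export example also applies to the scene and interaction exports:

```javascript
// Export scene data, interaction records, and the story document in different formats.
// NOTE: applying ?format= to the scene and interaction exports mirrors the story export example
// and is an assumption here.
const sceneJson = await fetch('/api/scenes/scene123/export/scene?format=json');
const interactionsCsv = await fetch('/api/scenes/scene123/export/interactions?format=csv');
const storyPdf = await fetch('/api/scenes/scene123/export/story?format=pdf');

// The export response envelope includes file_path, content, format, and size
// (see the export response format section below).
console.log(await sceneJson.json());
console.log(interactionsCsv.status, storyPdf.status);
```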
```
GET /api/scenes # Get scene list
POST /api/scenes # Create scene
GET /api/scenes/{id} # Get scene details
GET /api/scenes/{id}/characters # Get scene characters
GET /api/scenes/{id}/conversations # Get scene conversations
GET /api/scenes/{id}/aggregate # Get scene aggregate data
```

```
GET /api/scenes/{id}/story # Get story data
POST /api/scenes/{id}/story/choice # Make story choice
POST /api/scenes/{id}/story/advance # Advance story
POST /api/scenes/{id}/story/rewind # Rewind story to specific node
GET /api/scenes/{id}/story/branches # Get story branches
```

```
GET /api/scenes/{id}/export/scene # Export scene data
GET /api/scenes/{id}/export/interactions # Export interactions
GET /api/scenes/{id}/export/story # Export story document
```

```
POST /api/interactions/aggregate # Process aggregated interactions
GET /api/interactions/{scene_id} # Get character interactions
GET /api/interactions/{scene_id}/{character1_id}/{character2_id} # Get character-to-character interactions
```

```
GET /api/scenes/{id}/aggregate # Get comprehensive scene data with options
POST /api/scenes/{id}/story/batch # Batch story operations
```

```
GET /api/users/{user_id} # Get user profile
PUT /api/users/{user_id} # Update user profile
GET /api/users/{user_id}/preferences # Get user preferences
PUT /api/users/{user_id}/preferences # Update user preferences

# User Items
GET /api/users/{user_id}/items # Get user items
POST /api/users/{user_id}/items # Add user item
GET /api/users/{user_id}/items/{item_id} # Get specific item
PUT /api/users/{user_id}/items/{item_id} # Update user item
DELETE /api/users/{user_id}/items/{item_id} # Delete user item
# User Skills
GET /api/users/{user_id}/skills # Get user skills
POST /api/users/{user_id}/skills # Add user skill
GET /api/users/{user_id}/skills/{skill_id} # Get specific skill
PUT /api/users/{user_id}/skills/{skill_id} # Update user skill
DELETE /api/users/{user_id}/skills/{skill_id} # Delete user skill
```

```
GET /api/config/health # Get configuration health status
GET /api/config/metrics # Get configuration metrics
GET /api/settings # Get system settings
POST /api/settings # Update system settings
POST /api/settings/test-connection # Test connection
```

```
GET /api/ws/status # Get WebSocket connection status
POST /api/ws/cleanup # Clean up expired WebSocket connections
```

```
POST /api/analyze # Analyze text content
GET /api/progress/{taskID} # Get analysis progress
POST /api/cancel/{taskID} # Cancel analysis task
POST /api/upload # Upload file
```

```
POST /api/chat # Basic chat with characters
POST /api/chat/emotion # Chat with emotion analysis
POST /api/interactions/trigger # Trigger character interactions
POST /api/interactions/simulate # Simulate character dialogue
POST /api/interactions/aggregate # Aggregate interaction processing
GET /api/interactions/{scene_id} # Get interaction history
GET /api/interactions/{scene_id}/{character1_id}/{character2_id} # Get specific character interactions
```

```
GET /api/settings # Get system settings
POST /api/settings # Update system settings
POST /api/settings/test-connection # Test connection
GET /api/llm/status # Get LLM service status
GET /api/llm/models # Get available models
PUT /api/llm/config # Update LLM configuration
```

```
# User Profile
GET /api/users/{user_id} # Get user profile
PUT /api/users/{user_id} # Update user profile
GET /api/users/{user_id}/preferences # Get user preferences
PUT /api/users/{user_id}/preferences # Update user preferences
# User Items Management
GET /api/users/{user_id}/items # Get user items
POST /api/users/{user_id}/items # Add user item
GET /api/users/{user_id}/items/{item_id} # Get specific item
PUT /api/users/{user_id}/items/{item_id} # Update user item
DELETE /api/users/{user_id}/items/{item_id} # Delete user item
# User Skills Management
GET /api/users/{user_id}/skills # Get user skills
POST /api/users/{user_id}/skills # Add user skill
GET /api/users/{user_id}/skills/{skill_id} # Get specific skill
PUT /api/users/{user_id}/skills/{skill_id} # Update user skill
DELETE /api/users/{user_id}/skills/{skill_id} # Delete user skill
```

```
WS /ws/scene/{id} # Scene WebSocket connection
WS /ws/user/status # User status WebSocket connection
GET /api/ws/status # Get WebSocket connection status
```

```javascript
// 1. Get story data
const storyData = await fetch('/api/scenes/scene123/story');
// 2. Make a choice
const choiceResult = await fetch('/api/scenes/scene123/story/choice', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
node_id: 'node_1',
choice_id: 'choice_a'
})
});
// 3. Export story
const storyExport = await fetch('/api/scenes/scene123/export/story?format=markdown');
```

```javascript
// 1. Basic chat
const chatResponse = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
scene_id: 'scene123',
character_id: 'char456',
message: 'Hello, how are you?'
})
});
// 2. Trigger character interaction
const interaction = await fetch('/api/interactions/trigger', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
scene_id: 'scene123',
character_ids: ['char1', 'char2'],
topic: 'Discussing the mysterious artifact'
})
});
```

```javascript
// 1. Add custom item
const newItem = await fetch('/api/users/user123/items', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
name: 'Magic Sword',
description: 'A legendary sword with mystical powers',
type: 'weapon',
properties: { attack: 50, magic: 30 }
})
});
// 2. Add skill
const newSkill = await fetch('/api/users/user123/skills', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
name: 'Fireball',
description: 'Cast a powerful fireball spell',
type: 'magic',
level: 3
})
});
```

```javascript
// Connect to scene WebSocket
const sceneWs = new WebSocket(`ws://localhost:8080/ws/scene/scene123?user_id=user456`);
sceneWs.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('Scene update:', data);
};
// Send character interaction
sceneWs.send(JSON.stringify({
type: 'character_interaction',
character_id: 'char123',
message: 'Hello everyone!'
}));
// Send story choice
sceneWs.send(JSON.stringify({
type: 'story_choice',
node_id: 'story_node_1',
choice_id: 'choice_a',
user_preferences: {
creativity_level: 'balanced',
allow_plot_twists: true
}
}));
```

```javascript
// Connect to user status WebSocket
const statusWs = new WebSocket(`ws://localhost:8080/ws/user/status?user_id=user456`);
statusWs.onmessage = (event) => {
const data = JSON.parse(event.data);
switch(data.type) {
case 'heartbeat':
console.log('Connection alive');
break;
case 'user_status_update':
console.log('User status changed:', data.status);
break;
case 'error':
console.error('WebSocket error:', data.error);
break;
default:
console.log('Received:', data);
}
};
```

- `character_interaction`: Character-to-character interactions
- `story_choice`: Story decision-making events
- `user_status_update`: User presence and status updates
- `conversation:new`: New conversation events
- `heartbeat`: Connection health checks
- `pong`: Heartbeat response messages
- `error`: Error notifications
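A compact dispatcher for these message types, reusing the `sceneWs` connection from the scene WebSocket example above, might look like the sketch below; only the `type` values come from the list, the other payload fields are assumptions:

```javascript
// Route incoming scene WebSocket messages by type.
// NOTE: the payload fields accessed per case (data.error, etc.) are illustrative assumptions.
sceneWs.onmessage = (event) => {
  const data = JSON.parse(event.data);
  switch (data.type) {
    case 'character_interaction':
      console.log('Character interaction:', data);
      break;
    case 'story_choice':
      console.log('Story choice resolved:', data);
      break;
    case 'conversation:new':
      console.log('New conversation:', data);
      break;
    case 'heartbeat':
      console.log('Connection alive');
      break;
    case 'pong':
      console.log('Heartbeat acknowledged');
      break;
    case 'error':
      console.error('WebSocket error:', data.error);
      break;
    default:
      console.log('Unhandled message:', data);
  }
};
```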
The application uses the `RealtimeManager` class for handling WebSocket communications:

```javascript
// Initialize scene realtime functionality
await window.realtimeManager.initSceneRealtime('scene_123');
// Send character interaction
window.realtimeManager.sendCharacterInteraction('scene_123', 'character_456', 'Hello!');
// Subscribe to story events
window.realtimeManager.on('story:event', (data) => {
// Handle story updates
console.log('Story event:', data);
});
// Get connection status
const status = window.realtimeManager.getConnectionStatus();
console.log('WebSocket status:', status);
```

```json
{
"success": true,
"data": {
// Response data
},
"timestamp": "2024-01-01T12:00:00Z"
}
```

```json
{
"success": false,
"error": "Error message description",
"code": "ERROR_CODE",
"timestamp": "2024-01-01T12:00:00Z"
}
```

```json
{
"file_path": "/exports/story_20240101_120000.md",
"content": "# Story Export\n\n...",
"format": "markdown",
"size": 1024,
"timestamp": "2024-01-01T12:00:00Z"
}
```

Currently, the API uses session-based authentication for user management. For production deployment, consider implementing:
- JWT Authentication: Token-based authentication for API access
- Rate Limiting: API call frequency limits
- Input Validation: Strict parameter validation and sanitization
- HTTPS Only: Force HTTPS for all production traffic
For detailed API documentation, see: API Documentation
```bash
# Run all tests
go test ./...
# Run tests with coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
# Run specific package tests
go test ./internal/services/...
```

- Implement Interface: Create a new provider in `internal/llm/providers/`
- Register Provider: Register it in the `init()` function
- Add Configuration: Update the configuration file template
- Write Tests: Add corresponding unit tests
- models/: Data models defining core entities in the system
- services/: Business logic layer handling core functionality
- api/: HTTP handlers exposing RESTful APIs
- llm/: LLM abstraction layer supporting multiple AI providers
- Concurrent Processing: Support multiple simultaneous users
- Caching Mechanism: Intelligent caching of LLM responses
- Memory Optimization: Load on demand, prevent memory leaks
- File Compression: Automatic compression of historical data
- API Usage Statistics: Request count and token consumption
- Response Time: AI model response speed monitoring
- Error Rate: System and API error tracking
- Resource Usage: CPU and memory usage monitoring
- API Keys: Secure storage with environment variable support
- User Data: Local storage with complete privacy control
- Access Control: User session and permission management support
- Data Backup: Automatic backup of important data
- HTTPS Support: HTTPS recommended for production environments
- CORS Configuration: Secure cross-origin resource sharing configuration
- Input Validation: Strict user input validation and sanitization
- AES-GCM Encryption: API keys are securely encrypted with the AES-GCM algorithm before storage
- Environment Variable Priority: API keys are primarily loaded from environment variables (e.g., `OPENAI_API_KEY`)
- Encrypted Storage: When stored in configuration files, API keys are kept in encrypted form in the `EncryptedLLMConfig` field
- Runtime Decryption: API keys are decrypted only when needed for API calls
- Automatic Migration: Legacy unencrypted API keys are automatically migrated to encrypted storage
- Secure Backward Compatibility: The system handles the transition from unencrypted to encrypted API key storage
- Configuration Security: The encryption key should be set via the `CONFIG_ENCRYPTION_KEY` environment variable for optimal security
- Fallback Protection: Includes fallback mechanisms to prevent storing API keys as plain text
- Key Derivation: In the absence of an environment-provided encryption key, the system safely derives encryption keys from multiple entropy sources
We welcome all forms of contributions!
- Bug Reports: Use GitHub Issues to report problems
- Feature Suggestions: Propose ideas and suggestions for new features
- Code Contributions: Submit Pull Requests
- Documentation Improvements: Help improve documentation and examples
- Fork the project repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push the branch: `git push origin feature/amazing-feature`
- Create a Pull Request
- Follow official Go coding style
- Add necessary comments and documentation
- Write unit tests covering new features
- Ensure all tests pass
This project is licensed under the Apache 2.0 License - see the LICENSE file for details
- Go - High-performance programming language
- Gin - Lightweight web framework
- OpenAI - GPT series models
- Anthropic - Claude series models
Thanks to all developers and users who have contributed to this project!
- Project Homepage: GitHub Repository
- Issue Reports: GitHub Issues
- Feature Requests: GitHub Discussions
- Email Contact: project@sceneintruder.dev
🌟 If this project helps you, please consider giving it a Star! 🌟
Made with ❤️ by SceneIntruderMCP Team
