Feature and its Use Cases
Problem
The application currently lacks built-in visibility into token consumption during AI debates. This makes it difficult for developers to:
- Monitor API usage and cost in real-time
- Debug prompt efficiency and response overhead
- Understand token usage per debate turn without external tools
At present, token usage can only be inferred indirectly (e.g., through provider dashboards or database inspection), which is inconvenient during active development and testing.
Proposed Solution
Introduce lightweight token observability directly within existing system flows, without requiring additional tools.
1. API Response Enhancement
Include token usage metadata in the JSON response for:
- Debate message responses
- Judge responses
Fields:
- prompt_tokens
- response_tokens
- total_tokens
This allows developers to inspect token usage directly via the browser Network tab (F12) without modifying the frontend UI.
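As a sketch of the enhancement above, the backend could enrich each debate-message or judge response with the three additive fields before serialization. The payload shape below ("bot", "message") is illustrative, not the application's actual schema; only the token fields come from this proposal.

```python
# Sketch: add token usage metadata to an existing JSON response payload.
# The token counts are assumed to come from the Gemini API's usageMetadata;
# here they arrive as plain integers so the helper is self-contained.

def with_token_usage(payload: dict, prompt_tokens: int, response_tokens: int) -> dict:
    """Return a copy of the response payload with additive token fields."""
    enriched = dict(payload)  # copy, so the caller's payload is untouched
    enriched["prompt_tokens"] = prompt_tokens
    enriched["response_tokens"] = response_tokens
    enriched["total_tokens"] = prompt_tokens + response_tokens
    return enriched

response = with_token_usage({"bot": "Yoda", "message": "Do or do not."}, 420, 69)
```

Because the fields are purely additive, frontend code that ignores unknown keys keeps working unchanged, which is what makes the change non-breaking.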
2. Backend Logging
Add structured logging in the backend controller to print token usage for each AI interaction.
Example:
[TOKEN USAGE] Bot: Yoda | Prompt: 420 | Response: 69 | Total: 489
This enables real-time monitoring directly from the server terminal.
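A minimal sketch of that log line, assuming the controller already has the per-interaction token counts (e.g., read from the Gemini response's usageMetadata):

```python
# Sketch: emit the [TOKEN USAGE] line for one AI interaction.
# The bot name and token counts are passed in as plain values so the
# formatter stays independent of any particular Gemini SDK.
import logging

logger = logging.getLogger("token_usage")

def log_token_usage(bot: str, prompt_tokens: int, response_tokens: int) -> str:
    """Format and log the structured token-usage line; return it for testing."""
    line = (
        f"[TOKEN USAGE] Bot: {bot} | Prompt: {prompt_tokens} "
        f"| Response: {response_tokens} | Total: {prompt_tokens + response_tokens}"
    )
    logger.info(line)
    return line
```

Going through a named logger rather than a bare print keeps the output filterable (e.g., silenced in production, enabled in development) without touching call sites.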
Expected Outcome
- Token usage is visible instantly via:
  - Browser Network tab (API response)
  - Backend server logs
- No additional setup or tools required
- No impact on existing frontend behavior
- Improved debugging, observability, and cost awareness
Benefits
- Zero-friction developer experience
- Immediate visibility into AI token consumption
- Helps optimize prompts and model performance
- Lays groundwork for future features like token analytics dashboards
Compatibility
- No breaking changes to existing frontend
- Token fields are additive and optional
Additional Context
- Token usage data is already available from the Gemini API via usageMetadata, but it is not currently surfaced to developers at runtime.
- This proposal focuses on surfacing that existing data with minimal changes and zero impact on current user flows.
- The approach avoids introducing new dependencies or tools, keeping the implementation lightweight and maintainable.
- This can serve as a foundation for future enhancements such as token usage dashboards, cost estimation, or analytics features.