mirror of https://github.com/jlengrand/Maestro.git, synced 2026-03-10 00:21:21 +00:00
Add real-time token usage and cost tracking from Claude Code responses

- New UsageStats type with tokens, cost, and context window
- Context window tooltip shows detailed token breakdown
- Cost displayed in main panel header and session list
- Improve input area UX with $ prefix for terminal mode
- Add Cmd+L shortcut to insert markdown checkbox in Scratchpad
- Add terminalOnly flag for slash commands (/jump is terminal-only)
- Disable LLM Settings panel behind feature flag (documented in BACKBURNER.md)
- Fix AgentSessionsBrowser to use layer stack for Escape handling
- Update README with cost tracking feature and keyboard shortcut
- Update CLAUDE.md with process:usage event and usageStats session field
50 lines
1.6 KiB
Markdown
# Backburner Features

This document tracks dormant features that are implemented but disabled via feature flags. These features may be re-enabled in future releases.

## LLM Settings Panel

**Status:** Disabled

**Feature Flag:** `FEATURE_FLAGS.LLM_SETTINGS` in `src/renderer/components/SettingsModal.tsx`

**Disabled Date:** 2024-11-26

### Description

The LLM Settings panel provides configuration options for connecting to various LLM providers directly from Maestro. This feature was designed to enable a built-in AI assistant for the scratchpad or other future AI-powered features within the application.

### Supported Providers

- **OpenRouter** - API proxy supporting multiple models
- **Anthropic** - Direct Claude API access
- **Ollama** - Local LLM inference

### Configuration Options

- LLM Provider selection
- Model slug/identifier
- API key (stored locally)
- Connection test functionality

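For illustration, the options above could be modeled as a settings shape like the following. The type and function names here are assumptions made for the sake of the example, not Maestro's actual code:

```typescript
// Hypothetical model of the LLM settings described above.
// These names are illustrative assumptions, not Maestro's actual types.
type LlmProvider = 'openrouter' | 'anthropic' | 'ollama';

interface LlmSettings {
  llmProvider: LlmProvider; // which backend to talk to
  modelSlug: string;        // model identifier for the chosen provider
  apiKey: string;           // stored locally
}

// Minimal validation before attempting a connection test:
// every field must be present and the provider must be recognized.
function isValidLlmSettings(s: Partial<LlmSettings>): s is LlmSettings {
  const providers: ReadonlyArray<string> = ['openrouter', 'anthropic', 'ollama'];
  return (
    typeof s.llmProvider === 'string' && providers.includes(s.llmProvider) &&
    typeof s.modelSlug === 'string' && s.modelSlug.length > 0 &&
    typeof s.apiKey === 'string' && s.apiKey.length > 0
  );
}
```

A type guard like this would let the "Connection test" button refuse to fire until all three fields are filled in.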
### Files Involved

- `src/renderer/components/SettingsModal.tsx` - Main settings UI with LLM tab
- Settings stored in electron-store: `llmProvider`, `modelSlug`, `apiKey`

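As a sketch of how those three keys might be read back with fallbacks, an in-memory Map stands in for electron-store below so the example is self-contained; the stored values are made up:

```typescript
// In-memory stand-in for electron-store, so this sketch runs anywhere.
// The key names match the ones listed above; the values are made up.
const settingsStore = new Map<string, string>();

type SettingKey = 'llmProvider' | 'modelSlug' | 'apiKey';

// Read a setting, falling back to a default when the key was never set.
function getSetting(key: SettingKey, fallback = ''): string {
  return settingsStore.get(key) ?? fallback;
}

settingsStore.set('llmProvider', 'anthropic');
settingsStore.set('modelSlug', 'some-model-slug'); // hypothetical value

console.log(getSetting('llmProvider'));       // 'anthropic'
console.log(getSetting('apiKey', '(unset)')); // '(unset)', key never stored
```

The real code uses electron-store's own `get`/`set`, but the fallback pattern is the same.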
### Re-enabling

To re-enable this feature:

1. Open `src/renderer/components/SettingsModal.tsx`
2. Find the `FEATURE_FLAGS` constant at the top of the file
3. Set `LLM_SETTINGS: true`

```typescript
const FEATURE_FLAGS = {
  LLM_SETTINGS: true, // LLM provider configuration (OpenRouter, Anthropic, Ollama)
};
```

### Reason for Disabling

Currently not in use, as Maestro focuses on managing external AI coding agents (Claude Code, etc.) rather than providing built-in LLM functionality. The panel may be re-enabled when there is a use case for direct LLM integration within Maestro.