Pedram Amini · f87d072f96 · 2026-01-13 11:14:59 -06:00

## CHANGES

- Split monolithic CLAUDE.md into focused, indexed sub-docs for faster onboarding 📚
- Added deep Agent support guide: capabilities, flags, parsers, storage, and adding agents 🤖
- Documented full `window.maestro` IPC surface with clearer namespaces and History API 📡
- Captured core implementation patterns: processes, security, settings, modals, SSH, Auto Run 🧭
- Published performance playbook for React, IPC batching, caching, and debouncing 🚀
- Formalized Session interface docs, including multi-tab, queues, stats, and error fields 🧩
- Session-level custom agent path now overrides detected binary during spawn 🛠️
- New rendering settings: disable GPU acceleration (early startup) and disable confetti 🖥️
- Confetti effects now respect user preference across celebration overlays and wizard completion 🎊
- File explorer now distinguishes "loading" vs "no files found" empty states 🗂️

# CLAUDE-PERFORMANCE.md

Performance best practices for the Maestro codebase. For the main guide, see CLAUDE.md.

## React Component Optimization

Use `React.memo` for list item components:

```tsx
// Components rendered in arrays (tabs, sessions, list items) should be memoized
const Tab = memo(function Tab({ tab, isActive, ... }: TabProps) {
  // Memoize computed values that depend on props
  const displayName = useMemo(() => getTabDisplayName(tab), [tab.name, tab.agentSessionId]);

  // Memoize style objects to prevent new references on every render
  const tabStyle = useMemo(() => ({
    borderRadius: '6px',
    backgroundColor: isActive ? theme.colors.accent : 'transparent',
  } as React.CSSProperties), [isActive, theme.colors.accent]);

  return <div style={tabStyle}>{displayName}</div>;
});
```

Consolidate chained `useMemo` calls:

```typescript
// BAD: Multiple dependent useMemo calls create cascading re-computations
const filtered = useMemo(() => sessions.filter(...), [sessions]);
const sorted = useMemo(() => filtered.sort(...), [filtered]);
const grouped = useMemo(() => groupBy(sorted, ...), [sorted]);

// GOOD: A single useMemo with all transformations
const { filtered, sorted, grouped } = useMemo(() => {
  const filtered = sessions.filter(...);
  const sorted = filtered.sort(...);
  const grouped = groupBy(sorted, ...);
  return { filtered, sorted, grouped };
}, [sessions]);
```

Pre-compile regex patterns at module level:

```typescript
// BAD: Regex compiled on every render
const Component = () => {
  const cleaned = text.replace(/^(\p{Emoji})+\s*/u, '');
};

// GOOD: Compile once at module load
const LEADING_EMOJI_REGEX = /^(\p{Emoji})+\s*/u;
const Component = () => {
  const cleaned = text.replace(LEADING_EMOJI_REGEX, '');
};
```

Memoize helper function results used in the render body:

```typescript
// BAD: O(n) lookup on every keystroke (runs on every render)
const activeTab = activeSession ? getActiveTab(activeSession) : undefined;
// ...then used multiple times in JSX

// GOOD: Memoize once, use everywhere
const activeTab = useMemo(
  () => activeSession ? getActiveTab(activeSession) : undefined,
  [activeSession?.aiTabs, activeSession?.activeTabId]
);
// Use activeTab directly in JSX - no repeated lookups
```

## Data Structure Pre-computation

Build indices once, reuse them across renders:

```typescript
// BAD: O(n) tree traversal on every markdown render
const result = remarkFileLinks({ fileTree, cwd });

// GOOD: Build the index once when fileTree changes, pass it to renders
const indices = useMemo(() => buildFileTreeIndices(fileTree), [fileTree]);
const result = remarkFileLinks({ indices, cwd });
```
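A minimal sketch of what such an index build could look like. The `FileNode` shape and the internals of `buildFileTreeIndices` here are illustrative assumptions, not Maestro's actual implementation:

```typescript
// Hypothetical sketch: pre-compute lookup maps once so per-render
// link resolution is an O(1) map hit instead of an O(n) tree walk.
interface FileNode {
  path: string;
  name: string;
  children?: FileNode[];
}

function buildFileTreeIndices(tree: FileNode[]): {
  byPath: Map<string, FileNode>;
  byName: Map<string, FileNode[]>;
} {
  const byPath = new Map<string, FileNode>();
  const byName = new Map<string, FileNode[]>();
  const visit = (node: FileNode) => {
    byPath.set(node.path, node);
    const sameName = byName.get(node.name) ?? [];
    sameName.push(node);
    byName.set(node.name, sameName);
    node.children?.forEach(visit);
  };
  tree.forEach(visit);
  return { byPath, byName };
}
```

The key design point is that the traversal cost is paid once per `fileTree` change (inside `useMemo`), not once per markdown render.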

## Main Process (Node.js)

Cache expensive lookups:

```typescript
// BAD: Synchronous file check on every shell spawn
fs.accessSync(shellPath, fs.constants.X_OK);

// GOOD: Cache resolved paths
const shellPathCache = new Map<string, string>();
const cached = shellPathCache.get(shell);
if (cached) return cached;
// ... resolve and cache
shellPathCache.set(shell, resolved);
```
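Filled out, the cached lookup could read like the sketch below. `resolveShellPath` and the injected `resolve` callback are hypothetical names for illustration; Maestro's actual resolution logic is elided above:

```typescript
// Hypothetical sketch: memoize an expensive resolver behind a Map.
// The resolver is injected so the expensive step is easy to swap or test.
const shellPathCache = new Map<string, string>();

function resolveShellPath(
  shell: string,
  resolve: (shell: string) => string,  // e.g. PATH lookup + fs.accessSync check
): string {
  const cached = shellPathCache.get(shell);
  if (cached !== undefined) return cached;
  const resolved = resolve(shell);
  shellPathCache.set(shell, resolved);
  return resolved;
}
```

Subsequent spawns for the same shell then skip the filesystem entirely.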

Use async file operations:

```typescript
// BAD: Blocking the main process
fs.unlinkSync(tempFile);

// GOOD: Non-blocking cleanup
import * as fsPromises from 'fs/promises';
fsPromises.unlink(tempFile).catch(() => {});
```

## Debouncing and Throttling

Use debouncing for user input and persistence:

```typescript
// Session persistence uses a 2-second debounce to prevent excessive disk I/O
// See: src/renderer/hooks/utils/useDebouncedPersistence.ts
const { persist, isPending } = useDebouncedPersistence(session, 2000);

// Always flush on visibility change and beforeunload to prevent data loss
useEffect(() => {
  const handleVisibilityChange = () => {
    if (document.hidden) flushPending();
  };
  document.addEventListener('visibilitychange', handleVisibilityChange);
  window.addEventListener('beforeunload', flushPending);
  return () => {
    document.removeEventListener('visibilitychange', handleVisibilityChange);
    window.removeEventListener('beforeunload', flushPending);
  };
}, []);
```
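Outside React, the same debounce-with-flush pattern can be expressed as a small helper. This is an illustrative sketch under assumed semantics, not the actual `useDebouncedPersistence` implementation:

```typescript
// Hypothetical sketch: debounced persistence with an explicit flush.
// persist() schedules a save after delayMs; flush() saves immediately,
// which is what the visibilitychange/beforeunload handlers rely on.
function createDebouncedPersist<T>(save: (value: T) => void, delayMs: number) {
  let pending: T | undefined;
  let timer: ReturnType<typeof setTimeout> | undefined;

  const flush = () => {
    if (timer !== undefined) { clearTimeout(timer); timer = undefined; }
    if (pending !== undefined) { save(pending); pending = undefined; }
  };

  const persist = (value: T) => {
    pending = value;                              // only the latest value is kept
    if (timer !== undefined) clearTimeout(timer); // restart the debounce window
    timer = setTimeout(flush, delayMs);
  };

  return { persist, flush };
}
```

Because only the latest pending value is kept, rapid calls collapse into a single disk write.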

Debounce expensive search operations:

```typescript
// BAD: Fuzzy-matching all files on every keystroke
const suggestions = useMemo(() => {
  return getAtMentionSuggestions(atMentionFilter);  // Runs 2000+ fuzzy matches per keystroke
}, [atMentionFilter]);

// GOOD: Debounce the filter value first (100ms is imperceptible)
const debouncedFilter = useDebouncedValue(atMentionFilter, 100);
const suggestions = useMemo(() => {
  return getAtMentionSuggestions(debouncedFilter);  // Only runs after the user stops typing
}, [debouncedFilter]);
```

Use throttling for high-frequency events:

```typescript
// Scroll handlers should be throttled to ~4ms (~240fps max)
const handleScroll = useThrottledCallback(() => {
  // expensive scroll logic
}, 4);
```
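One way such a throttled callback can be built: a leading-edge throttle with an injectable clock so it is easy to test. This is a sketch of the technique, not the real `useThrottledCallback` hook:

```typescript
// Hypothetical sketch: leading-edge throttle. The wrapped function fires
// immediately, then at most once per intervalMs; calls in between are dropped.
function throttle<A extends unknown[]>(
  fn: (...args: A) => void,
  intervalMs: number,
  now: () => number = Date.now,  // injectable clock for deterministic tests
): (...args: A) => void {
  let last = -Infinity;
  return (...args: A) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}
```

Unlike debouncing (which waits for quiet), throttling guarantees a steady maximum rate, which is what scroll handlers want.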

## Update Batching

Batch rapid state updates during streaming:

```typescript
// During AI streaming, IPC triggers 100+ updates/second
// Without batching: 100+ React re-renders/second
// With batching at 150ms: ~6 renders/second
// See: src/renderer/hooks/session/useBatchedSessionUpdates.ts

// Update types that get batched:
// - appendLog (accumulated via string chunks)
// - setStatus (last wins)
// - updateUsage (accumulated)
// - updateContextUsage (high water mark - never decreases)
```
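The merge rules above can be sketched as a plain accumulator. This is illustrative only; the real hook lives in `useBatchedSessionUpdates.ts` and flushes on a timer:

```typescript
// Hypothetical sketch of the batching merge rules.
interface BatchedUpdate {
  logChunks: string[];   // appendLog: accumulated as string chunks
  status?: string;       // setStatus: last write wins
  usage: number;         // updateUsage: deltas accumulated
  contextUsage: number;  // updateContextUsage: high water mark, never decreases
}

function createUpdateBatcher(apply: (update: BatchedUpdate) => void) {
  let pending: BatchedUpdate = { logChunks: [], usage: 0, contextUsage: 0 };
  return {
    appendLog(chunk: string) { pending.logChunks.push(chunk); },
    setStatus(status: string) { pending.status = status; },
    updateUsage(delta: number) { pending.usage += delta; },
    updateContextUsage(value: number) {
      pending.contextUsage = Math.max(pending.contextUsage, value);
    },
    // In the real hook, flush runs on a ~150ms timer during streaming,
    // collapsing 100+ IPC updates/second into ~6 React renders/second.
    flush() {
      apply(pending);
      pending = { logChunks: [], usage: 0, contextUsage: 0 };
    },
  };
}
```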

## Virtual Scrolling

Use virtual scrolling for large lists (100+ items):

```typescript
// See: src/renderer/components/HistoryPanel.tsx
import { useVirtualizer } from '@tanstack/react-virtual';

const virtualizer = useVirtualizer({
  count: items.length,
  getScrollElement: () => scrollRef.current,
  estimateSize: () => 40,  // estimated row height
});
```

## IPC Parallelization

Parallelize independent async operations:

```typescript
// BAD: Sequential awaits (3 × 50ms = 150ms)
const branches = await git.branch(cwd);
const remotes = await git.remote(cwd);
const status = await git.status(cwd);

// GOOD: Parallel execution (max 50ms = 3x faster)
const [branches, remotes, status] = await Promise.all([
  git.branch(cwd),
  git.remote(cwd),
  git.status(cwd),
]);
```

## Visibility-Aware Operations

Pause background operations when the app is hidden:

```typescript
// See: src/renderer/hooks/git/useGitStatusPolling.ts
const handleVisibilityChange = () => {
  if (document.hidden) {
    stopPolling();  // Save battery/CPU when backgrounded
  } else {
    startPolling();
  }
};
document.addEventListener('visibilitychange', handleVisibilityChange);
```

## Context Provider Memoization

Always memoize context values:

```tsx
// BAD: A new object on every render triggers all consumers to re-render
return <Context.Provider value={{ sessions, updateSession }}>{children}</Context.Provider>;

// GOOD: Memoized value only changes when dependencies change
const contextValue = useMemo(() => ({
  sessions,
  updateSession,
}), [sessions, updateSession]);
return <Context.Provider value={contextValue}>{children}</Context.Provider>;
```

## Event Listener Cleanup

Always clean up event listeners:

```typescript
useEffect(() => {
  const handler = (e: Event) => { /* ... */ };
  document.addEventListener('click', handler);
  return () => document.removeEventListener('click', handler);
}, []);
```