Conversation
Four root causes of UI slowness during the primary user journey:

1. Every SSE token triggered a full React re-render (~20-50/sec during streaming). Added requestAnimationFrame-based event batching so multiple SSE events within a single frame are processed in one synchronous pass. React 18 batches all setState calls within the same callback, reducing re-renders from ~50/sec to at most ~60/sec (one per frame).
2. The sendMessage callback was recreated on every state change because it depended on state.threadId, state.runId, and state.status. Replaced those dependencies with refs so the callback stays referentially stable across state changes, preventing cascading re-renders in child components.
3. Three overlapping polling intervals fired simultaneously during session startup: useSession (500-1000ms), the workflow queue (2000ms), and the message queue (2000ms). The latter two were redundant since useSession already polls aggressively during transitional states. Removed the duplicate intervals.
4. The initial-prompt tracking effect had unnecessary deps (aguiState.messages.length, aguiState.status) that caused it to re-evaluate on every streamed message, even though it only updates a ref. Reduced the dependency list to session.spec.initialPrompt alone.

https://claude.ai/code/session_01P6sFQgvLjBHd6zuieQuzyz
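The frame-based batching described in item 1 can be sketched roughly as follows. This is a minimal, framework-free sketch, not the PR's actual code: `EventBatcher` and `SSEEvent` are illustrative names, and the injectable `schedule` parameter (defaulting to `setTimeout`) stands in for `requestAnimationFrame` so the sketch runs outside a browser.

```typescript
// Illustrative sketch of frame-based SSE event batching (names are hypothetical).
// Incoming events are buffered; one flush per "frame" applies them all in a single
// synchronous pass, so React 18's automatic batching would collapse the resulting
// setState calls into one render.
type SSEEvent = { type: string; data: string };

class EventBatcher {
  private buffer: SSEEvent[] = [];
  private scheduled = false;

  constructor(
    private apply: (events: SSEEvent[]) => void,
    // In the browser this would be requestAnimationFrame; setTimeout stands in
    // here so the sketch is runnable without a DOM.
    private schedule: (cb: () => void) => void = (cb) => setTimeout(cb, 0),
  ) {}

  push(event: SSEEvent): void {
    this.buffer.push(event);
    if (!this.scheduled) {
      this.scheduled = true; // at most one flush in flight per frame
      this.schedule(() => this.flush());
    }
  }

  private flush(): void {
    this.scheduled = false;
    const events = this.buffer;
    this.buffer = [];
    this.apply(events); // one synchronous pass over all buffered events
  }
}
```

With 20-50 tokens/sec arriving, many `push` calls share one scheduled flush, so the UI updates at most once per frame regardless of token rate.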
- Consolidate three separate state-sync refs (threadId, runId, status) into a single stateSnapshotRef with a properly typed status field
- Add a mountedRef guard to flushEventBuffer to prevent stale setState calls after unmount
- Update the React version reference in a comment (18 → 18+)

https://claude.ai/code/session_01P6sFQgvLjBHd6zuieQuzyz
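The consolidated snapshot ref boils down to the "stable callback reads fresh state" pattern. A framework-free sketch of the idea (the `RunStatus` values, `StateSnapshot` shape, and `sendMessage` body are assumptions for illustration, not the PR's actual code; the plain object holder stands in for `useRef`):

```typescript
// Illustrative sketch of a consolidated, typed state snapshot ref.
type RunStatus = "idle" | "streaming" | "complete" | "error";

interface StateSnapshot {
  threadId: string | null;
  runId: string | null;
  status: RunStatus;
}

// Stands in for useRef<StateSnapshot>(...): callbacks capture the holder once
// and read .current at call time, so their identity never needs to change.
const stateSnapshotRef: { current: StateSnapshot } = {
  current: { threadId: null, runId: null, status: "idle" },
};

// A stable callback: it closes over no state, so it is never recreated,
// yet it always observes the latest snapshot.
function sendMessage(text: string): string {
  const { threadId, status } = stateSnapshotRef.current;
  return status === "streaming"
    ? `queued (run in progress): ${text}`
    : `sent on thread ${threadId ?? "new"}: ${text}`;
}
```

Because `sendMessage` depends only on the ref holder, memoized children receiving it as a prop never see a new function identity when `threadId`, `runId`, or `status` change.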
## Claude Code Review

### Summary

This PR addresses real UI jank during agent streaming by tackling four root causes: per-token re-renders, an unstable `sendMessage` callback, redundant polling intervals, and an over-broad effect dependency list.

### Issues by Severity

🚫 **Blocker Issues**

None.

🔴 **Critical Issues**

None.

🟡 **Major Issues**

**1. Removed polling may miss terminal-state queuing edge case**

The removed intervals may have covered queuing a message while the session is in a terminal state. This is unlikely to occur in normal flow (workflow activation probably sets the annotation first), but worth explicitly verifying or documenting why terminal-state queuing is safe to drop.

🔵 **Minor Issues**

**2. One-render lag in `stateSnapshotRef`**

The snapshot is updated via an effect, so during the render in which state changes the ref still holds the previous values. In practice this is benign because the ref is read from event handlers, which fire after the effect has committed.

**3. Stale window in the `processEvent` ref**

```ts
const processEventRef = useRef(processEvent)
useEffect(() => {
  processEventRef.current = processEvent
}, [processEvent])
```

Between when `processEvent` changes and when the effect runs, events flushed through the ref use the previous closure. The window is a single commit and unlikely to matter in practice.

**4. Module-level initial state shares mutable `Map` instances**

```ts
const initialState: AGUIClientState = {
  pendingToolCalls: new Map(), // ← created once for the module
  ...
}
```

These `Map` instances are created once for the module, so multiple hook instances would share (and mutate) the same maps. Use a factory instead:

```ts
const createInitialState = (): AGUIClientState => ({
  pendingToolCalls: new Map(),
  pendingChildren: new Map(),
  messageFeedback: new Map(),
  ...
})
// useState(createInitialState) — React calls the factory once per instance
```

**5. No tests for the new batching behavior**

The rAF batching logic (`flushEventBuffer`) has no test coverage.

**6. Fire-and-forget async call swallows errors**

```ts
const processMessages = async () => { ... }
processMessages() // fire-and-forget
```

Errors inside `processMessages` surface only as unhandled promise rejections; attach a `.catch` (even just logging) so failures are visible.

### Positive Highlights
### Recommendations
Overall this is a high-quality, well-reasoned performance fix. The major concern (item 1) is worth confirming before merge but is unlikely to affect real users given normal flow assumptions.

🤖 Generated with Claude Code

**What Amber Loaded for Code Review**

Amber automatically loaded these repository standards from the memory system:
**Impact:** This review used your repository's specific code quality standards, security patterns, and best practices from the memory system (PRs #359, #360), not just generic code review guidelines.
No description provided.