refactor: decompose ChatPanel into focused modules#34

Open
SweetSophia wants to merge 12 commits into MiniMax-AI:main from SweetSophia:refactor/extract-chat-engine

Conversation

@SweetSophia

Summary

Extract the monolithic ChatPanel/index.tsx into focused, maintainable modules. This reduces the file from 1472 lines to a slim 632-line orchestrator with delegated concerns.

Changes

New modules extracted from ChatPanel/index.tsx:

  • useConversationEngine.ts — LLM tool-calling loop, message streaming, and conversation state management
  • toolDefinitions.ts — Agent tool schemas and hasUsableLLMConfig validation
  • ChatSubComponents.tsx — UI sub-components (message list, input area, etc.)
  • SettingsModal.tsx — Configuration UI with server-side key persistence

Bug fixes included:

  • Stabilize runConversation identity to prevent listener churn
  • Flush debounced saves on cleanup; fix write amplification
  • useLayoutEffect for setSessionPath to avoid stale renders
  • Only break tool loop when respond_to_user is the sole tool call
  • Redact sensitive args from warning logs
  • Extract PendingSaveSnapshot type from inline ref

Guide improvements:

  • Updated guide.md across 10+ apps (Chess, Diary, Email, MusicApp, Gomoku, CyberNews, Twitter, Album, etc.) for clearer AI instructions and standardized formatting.

Dependencies

⚠️ Note: This PR should preferably be applied after #33 (fix(security): server-side API key management and SSRF hardening), as it builds on the error boundary and config infrastructure introduced there.

Testing

  • Lint passes
  • Build passes
  • Self-reviewed

Checklist

  • pnpm run lint passes
  • pnpm build passes
  • Self-reviewed the code changes

SweetSophia and others added 12 commits April 1, 2026 15:35
Decompose the 1472-line ChatPanel god component into four focused files:

- toolDefinitions.ts (133 lines): tool defs, system prompt builder, config guard
- ChatSubComponents.tsx (178 lines): CharacterAvatar, StageIndicator, ActionsTaken
- SettingsModal.tsx (270 lines): LLM + image generation configuration UI
- useConversationEngine.ts (341 lines): conversation loop + tool dispatch hook

ChatPanel/index.tsx reduced from 1472 → 632 lines (-57%), now a thin shell
that wires state to UI. No behavior changes — pure structural refactor.

Benefits:
- Conversation engine is testable in isolation
- Settings modal can be reasoned about independently
- Each file has a single clear responsibility
- Future PRs can target specific modules without touching the whole
runConversation was recreated on every render, causing the onUserAction
subscription effect to unsubscribe/resubscribe on each render. App actions
emitted during the cleanup-rebind window could be silently dropped.

Since runConversation only reads from refs (no reactive state), wrapping
it in useCallback with an empty dependency array makes its identity stable
across renders, eliminating the churn.
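The fix can be sketched outside React as a minimal model (assumption: `makeEngine` and `historyRef` are illustrative names, not the real OpenRoom API; the actual code uses `useCallback` with an empty dependency array plus refs):

```typescript
// A handler with one identity forever that still sees the latest state
// by reading through a mutable ref at call time — so the subscription
// never needs to unsubscribe/resubscribe.
type Ref<T> = { current: T };

function makeEngine() {
  const historyRef: Ref<string[]> = { current: [] };
  // Stable function: created once, reads refs when invoked.
  const runConversation = (userAction: string): number => {
    historyRef.current = [...historyRef.current, userAction];
    return historyRef.current.length;
  };
  return { historyRef, runConversation };
}

const engine = makeEngine();
const first = engine.runConversation;
engine.runConversation("open-door");
engine.runConversation("pick-up-key");
// Identity never changes, yet the latest history is always visible.
console.log(first === engine.runConversation); // true
console.log(engine.historyRef.current.length); // 2
```

Because the function's identity is stable, the `onUserAction` effect subscribes exactly once, closing the cleanup-rebind window in which actions could be dropped.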
useConversationEngine:
- Wrap callbacks in ref to prevent stale closure in stabilized runConversation
- Break outer loop after respond_to_user to avoid extra model round-trip
- Add console.warn for unparseable tool call arguments
- Handle loadMemories rejection with fallback to empty array

ChatSubComponents:
- Reactivate existing inactive layer instead of skipping it
- Use ref for cleanup timeout to prevent stale timeout interference

index.tsx:
- Move setSessionPath call from render body to useEffect
- Reduce reload effect deps to sessionPath only, add cancellation guard

SettingsModal:
- Sync local state from parent props via useEffect

toolDefinitions:
- Replace hardcoded emotion examples with generic reference to character keys
useConversationEngine:
- Remove inner break on respond_to_user so sibling tool calls
  (e.g. finish_target) still execute before outer loop stops

index.tsx:
- Guard loadMemories .then/.catch with cancelled flag
- Snapshot session/data at debounce schedule time instead of
  reading refs at fire-time, preventing cross-session data mixing
- Remove now-unused sessionPathRef2

ChatSubComponents:
- On layer reactivation, cancel pending cleanup timeout and
  explicitly deactivate all other layers to prevent dual-active state
useConversationEngine:
- Break React import to multiline per Prettier config
- Guard loadMemories handlers against session change during async gap

index.tsx:
- Remove unused sessionPathRef_forSet ref — setSessionPath in
  useEffect alone is sufficient for module-level sync
…Session

index.tsx:
- Flush pending saveChatHistory in cleanup instead of discarding,
  preventing data loss on rapid session switches
- Remove modCollection from handleResetSession dependency array
  (uses refs and functional state updates, no direct read needed)
…e write amplification

Agent-Logs-Url: https://github.com/SweetSophia/OpenRoom/sessions/4ddaf906-91f2-4ac3-ac97-7ce3fd331545

Co-authored-by: SweetSophia <44297511+SweetSophia@users.noreply.github.com>
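The two persistence fixes above — snapshot the payload at schedule time, and flush rather than discard on cleanup — can be sketched as a small debouncer (assumption: `makeDebouncedSaver` and the `Snapshot` shape are illustrative, not the real OpenRoom code):

```typescript
type Snapshot = { sessionPath: string; messages: string[] };

function makeDebouncedSaver(save: (s: Snapshot) => void, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  let pending: Snapshot | null = null;
  return {
    schedule(snapshot: Snapshot) {
      // Captured now, not read from refs at fire time, so a later
      // session switch cannot leak its data into this pending write.
      pending = snapshot;
      if (timer) clearTimeout(timer);
      timer = setTimeout(() => {
        if (pending) save(pending);
        pending = null;
      }, delayMs);
    },
    // Cleanup path: persist the last scheduled write instead of
    // dropping it, so rapid session switches lose no data.
    flush() {
      if (timer) clearTimeout(timer);
      if (pending) save(pending);
      pending = null;
      timer = null;
    },
  };
}

const log: Snapshot[] = [];
const saver = makeDebouncedSaver((s) => log.push(s), 1000);
saver.schedule({ sessionPath: "mods/alpha", messages: ["hello"] });
saver.flush(); // pending write is persisted, not discarded
console.log(log.length); // 1
```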
…nsitive args from warn log

Agent-Logs-Url: https://github.com/SweetSophia/OpenRoom/sessions/1954e0a2-8dec-4da9-85f4-92df5e2ddf00

Co-authored-by: SweetSophia <44297511+SweetSophia@users.noreply.github.com>
Previously the loop broke after respond_to_user even when other tools
ran in the same batch. If the model planned to emit follow-up tool
calls (e.g. finish_target) in a subsequent round-trip, those were
silently skipped. Now the loop only terminates when respond_to_user
was the only tool call — batched tool calls get their follow-up
round-trip as intended.
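The termination rule can be sketched as follows (a simplified model, assuming tool-call batches arrive per round-trip; the real engine also dispatches each tool and streams results):

```typescript
type ToolCall = { name: string };

// The outer loop stops only when respond_to_user arrives with no
// sibling calls; batched calls (e.g. finish_target) still execute and
// still earn their follow-up round-trip.
function runLoop(rounds: ToolCall[][]): number {
  let executed = 0;
  for (const batch of rounds) {
    executed += batch.length; // every call in the batch runs
    const soleRespond =
      batch.length === 1 && batch[0].name === "respond_to_user";
    if (soleRespond) break; // terminate only on a lone respond_to_user
  }
  return executed;
}

// respond_to_user batched with finish_target does NOT stop the loop;
// the follow-up round with a lone respond_to_user does.
const n = runLoop([
  [{ name: "respond_to_user" }, { name: "finish_target" }],
  [{ name: "respond_to_user" }],
]);
console.log(n); // 3
```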
Copilot AI review requested due to automatic review settings April 3, 2026 21:39

Copilot AI left a comment


Pull request overview

This PR refactors the previously monolithic ChatPanel/index.tsx into smaller, focused modules (conversation/tool loop, tool definitions/system prompt, UI subcomponents, settings modal), while also adjusting session-path synchronization and debounced persistence behavior to reduce churn and avoid dropped saves.

Changes:

  • Extracted the LLM tool-call loop into useConversationEngine.ts and tool/system-prompt helpers into toolDefinitions.ts.
  • Extracted UI helpers (CharacterAvatar, StageIndicator, etc.) into ChatSubComponents.tsx and config UI into SettingsModal.tsx.
  • Updated ChatPanel/index.tsx to become a “thin shell” wiring state/effects to the extracted modules, including useLayoutEffect session-path syncing and improved debounced-save flushing.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Show a summary per file

| File | Description |
| --- | --- |
| apps/webuiapps/src/components/ChatPanel/useConversationEngine.ts | New hook encapsulating the tool-call loop and tool dispatch logic. |
| apps/webuiapps/src/components/ChatPanel/toolDefinitions.ts | New module for tool schemas, system prompt building, and config guard. |
| apps/webuiapps/src/components/ChatPanel/SettingsModal.tsx | New extracted settings modal for LLM + image-gen configuration. |
| apps/webuiapps/src/components/ChatPanel/ChatSubComponents.tsx | New extracted UI subcomponents (avatar, stage indicator, actions, content rendering). |
| apps/webuiapps/src/components/ChatPanel/index.tsx | Refactored orchestrator wiring state/effects to extracted modules and updated persistence/session syncing. |


Comment on lines +302 to +305
```typescript
// sessionPath is derived from charCollection.activeId + modCollection.activeId,
// so it already captures session changes. seedPrologue reads modCollection via
// closure but we only want to trigger on session identity change.
}, [sessionPath]);
```

Copilot AI Apr 3, 2026


This effect’s dependency array is [sessionPath], but the effect body reads modCollection (via getActiveModEntry(modCollection) when restoring opening replies). If the active mod’s config changes without changing sessionPath (e.g. editing opening suggested replies for the same mod), the closure can become stale and the UI won’t recompute the replies. Consider either adding modCollection/seedPrologue as dependencies or splitting out a separate effect that updates opening replies when modCollection changes and the conversation is still at the prologue state.
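The staleness risk described here can be reproduced with a plain-TS model of a dependency-gated effect (illustrative names, assuming a single `[sessionPath]`-style dependency; not the real component):

```typescript
// Re-executes the effect only when its single dependency changes —
// the same gating a [sessionPath] dependency array performs.
function makeEffectRunner(effect: () => void) {
  let lastDep: string | undefined;
  return (dep: string) => {
    if (dep !== lastDep) {
      lastDep = dep;
      effect();
    }
  };
}

let modCollection = { openingReplies: ["hi"] };
let rendered: string[] = [];
const runEffect = makeEffectRunner(() => {
  rendered = modCollection.openingReplies; // read via closure
});

runEffect("session-a");                               // effect runs once
modCollection = { openingReplies: ["hello", "hey"] }; // mod edited, same session
runEffect("session-a");                               // dep unchanged → skipped
console.log(rendered.length); // 1 — stale: the new replies never render
```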

Comment on lines +265 to +278
```typescript
const llmCfg: LLMConfig = {
  provider,
  apiKey,
  baseUrl,
  model,
  ...(customHeaders.trim() ? { customHeaders } : {}),
};
const igCfg: ImageGenConfig | null = igApiKey.trim()
  ? {
      provider: igProvider,
      apiKey: igApiKey,
      baseUrl: igBaseUrl,
      model: igModel,
      ...(igCustomHeaders.trim() ? { customHeaders: igCustomHeaders } : {}),
```

Copilot AI Apr 3, 2026


The Save handler persists baseUrl, model, and API keys exactly as typed. Because hasUsableLLMConfig() only checks trimmed values (it doesn’t normalize them) and downstream clients don’t consistently trim, leading/trailing whitespace can result in invalid target URLs or auth headers (especially for image-gen where config.apiKey is used verbatim). Consider normalizing user inputs on save (e.g., trim() baseUrl/model/apiKey and optionally collapse internal whitespace where appropriate) before calling onSave.

Suggested change

```diff
-const llmCfg: LLMConfig = {
-  provider,
-  apiKey,
-  baseUrl,
-  model,
-  ...(customHeaders.trim() ? { customHeaders } : {}),
-};
-const igCfg: ImageGenConfig | null = igApiKey.trim()
-  ? {
-      provider: igProvider,
-      apiKey: igApiKey,
-      baseUrl: igBaseUrl,
-      model: igModel,
-      ...(igCustomHeaders.trim() ? { customHeaders: igCustomHeaders } : {}),
+const normalizedApiKey = apiKey.trim();
+const normalizedBaseUrl = baseUrl.trim();
+const normalizedModel = model.trim();
+const normalizedCustomHeaders = customHeaders.trim();
+const normalizedIgApiKey = igApiKey.trim();
+const normalizedIgBaseUrl = igBaseUrl.trim();
+const normalizedIgModel = igModel.trim();
+const normalizedIgCustomHeaders = igCustomHeaders.trim();
+const llmCfg: LLMConfig = {
+  provider,
+  apiKey: normalizedApiKey,
+  baseUrl: normalizedBaseUrl,
+  model: normalizedModel,
+  ...(normalizedCustomHeaders ? { customHeaders: normalizedCustomHeaders } : {}),
+};
+const igCfg: ImageGenConfig | null = normalizedIgApiKey
+  ? {
+      provider: igProvider,
+      apiKey: normalizedIgApiKey,
+      baseUrl: normalizedIgBaseUrl,
+      model: normalizedIgModel,
+      ...(normalizedIgCustomHeaders
+        ? { customHeaders: normalizedIgCustomHeaders }
+        : {}),
```

```typescript
const runConversation = useCallback(async (history: ChatMessage[], cfg: LLMConfig) => {
  await seedMetaFiles();
  await loadActionsFromMeta();
  const hasImageGen = !!imageGenConfigRef.current?.apiKey;
```

Copilot AI Apr 3, 2026


hasImageGen is determined via !!imageGenConfigRef.current?.apiKey, which treats whitespace-only keys as enabled. That can cause the engine to register image tools and attempt image generation with an unusable key. Consider using a trimmed check (e.g. imageGenConfigRef.current?.apiKey.trim().length > 0) to match the UI’s intent.

Suggested change

```diff
-const hasImageGen = !!imageGenConfigRef.current?.apiKey;
+const hasImageGen = (imageGenConfigRef.current?.apiKey?.trim().length ?? 0) > 0;
```

Comment on lines +278 to +281
```typescript
// ---- File tools ----
if (isFileTool(tc.function.name)) {
  ctx.pendingToolCallsRef.current.push(
    `${tc.function.name}(${JSON.stringify(params).slice(0, 60)})`,
```

Copilot AI Apr 3, 2026


The pendingToolCallsRef summary for file tools includes JSON.stringify(params).slice(0, 60). For tools like file_write, params can contain user/app data (and potentially secrets) that will then be surfaced in the UI (“Actions taken”) and may be persisted as part of chat history. Consider redacting/omitting sensitive fields (e.g. content, apiKey, Authorization) or logging only the tool name + path to avoid accidental data exposure.

Suggested change

```diff
-// ---- File tools ----
-if (isFileTool(tc.function.name)) {
-  ctx.pendingToolCallsRef.current.push(
-    `${tc.function.name}(${JSON.stringify(params).slice(0, 60)})`,
+const summarizeFileToolCall = (
+  toolName: string,
+  toolParams: Record<string, unknown>,
+): string => {
+  const pathKeys = ['path', 'filePath', 'sourcePath', 'targetPath', 'destinationPath'] as const;
+  for (const key of pathKeys) {
+    const value = toolParams[key];
+    if (typeof value === 'string' && value.length > 0) {
+      return `${toolName}(${key}: ${value})`;
+    }
+  }
+  return toolName;
+};
+// ---- File tools ----
+if (isFileTool(tc.function.name)) {
+  ctx.pendingToolCallsRef.current.push(
+    summarizeFileToolCall(tc.function.name, params as Record<string, unknown>),
```

Comment on lines +145 to +153
```typescript
const handleMediaReady = useCallback((readyUrl: string) => {
  setLayers((prev) => {
    const staleUrls = prev.filter((l) => l.url !== readyUrl).map((l) => l.url);
    if (cleanupRef.current) clearTimeout(cleanupRef.current);
    cleanupRef.current = setTimeout(() => {
      setLayers((curr) => curr.filter((l) => !staleUrls.includes(l.url)));
    }, 300);
    return prev.map((l) => ({ ...l, active: l.url === readyUrl }));
  });
```

Copilot AI Apr 3, 2026


handleMediaReady always activates the layer that finished loading (readyUrl). If the user switches emotions quickly, an old/inactive layer can finish loading later and incorrectly become active, causing the avatar to “jump back” to the wrong emotion. Consider tracking the latest desired media.url in a ref and ignoring readyUrl events that don’t match it (and/or storing a per-layer generation token) before promoting a layer to active.
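The suggested latest-URL guard can be sketched like this (a minimal non-React model under the assumption that each emotion switch issues a new media request; `makeMediaGate` is an illustrative name):

```typescript
// Only the most recently requested URL may be promoted to active when
// its load completes; late completions of older requests are dropped.
function makeMediaGate() {
  let latestUrl: string | null = null;
  let activeUrl: string | null = null;
  return {
    request(url: string) {
      latestUrl = url; // track the latest desired media
    },
    // Called when a load finishes; stale completions are ignored.
    onReady(url: string) {
      if (url !== latestUrl) return; // old emotion finished late — drop it
      activeUrl = url;
    },
    active: () => activeUrl,
  };
}

const gate = makeMediaGate();
gate.request("happy.png");
gate.request("sad.png");   // user switched emotions quickly
gate.onReady("happy.png"); // stale load completes later — ignored
gate.onReady("sad.png");
console.log(gate.active()); // "sad.png"
```

The per-layer generation token mentioned above is the same idea with a counter instead of a URL comparison, which also handles re-requesting the same URL twice.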

Comment on lines +19 to +20
```typescript
const baseUrl = typeof config?.baseUrl === 'string' ? config.baseUrl.trim() : '';
const model = typeof config?.model === 'string' ? config.model.trim() : '';
```

Copilot AI Apr 3, 2026


hasUsableLLMConfig() trims baseUrl/model only for the boolean check, but the returned config is still the original (potentially whitespace-padded) object. Call sites then pass this through to chat(), where baseUrl with leading/trailing spaces can produce invalid URLs. Consider either normalizing the config at the source (preferred) or introducing a normalizeLLMConfig() helper that returns a sanitized copy (trimmed fields) and using that for all requests.

Suggested change

```diff
-const baseUrl = typeof config?.baseUrl === 'string' ? config.baseUrl.trim() : '';
-const model = typeof config?.model === 'string' ? config.model.trim() : '';
+if (!config) {
+  return false;
+}
+const baseUrl = typeof config.baseUrl === 'string' ? config.baseUrl.trim() : '';
+const model = typeof config.model === 'string' ? config.model.trim() : '';
+config.baseUrl = baseUrl;
+config.model = model;
```

SweetSophia added a commit to SweetSophia/OpenRoom that referenced this pull request Apr 4, 2026
PR MiniMax-AI#33 and MiniMax-AI#34 review feedback:

Security:
- Config file now written with mode 0600, directory with mode 0700
- Block hop-by-hop headers (RFC 9110) in proxy passthrough
- Validate header values are strings, not arrays, before use
- Fix misleading comment about symlink resolution in sanitizeRelativePath
- Redact sensitive data from file tool call summaries in chat history

UX / Correctness:
- Thread AbortSignal through chat() to fetch() for real request cancellation
- SettingsModal trims all inputs on save to prevent whitespace issues
- onSave returns proper error feedback when config reload fails
- hasUsableLLMConfig normalizes config in-place (trimmed baseUrl/model)
- Whitespace-only image gen API key no longer treated as enabled

Accessibility:
- ErrorBoundary: add role=alert, aria-live=assertive, type=button

Race conditions:
- handleMediaReady ignores stale loads when user switches emotions quickly

Tests: 118 passing
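The hop-by-hop header blocking noted above can be sketched as a forwardable-header filter (assumption: `filterForwardableHeaders` is illustrative, not the real proxy code; the blocked set is the classic connection-scoped list — RFC 9110 requires intermediaries not to forward Connection-scoped fields):

```typescript
// Connection-scoped headers a proxy must not forward verbatim.
const HOP_BY_HOP = new Set([
  "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
  "te", "trailer", "transfer-encoding", "upgrade",
]);

function filterForwardableHeaders(
  headers: Record<string, string | string[]>,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    if (HOP_BY_HOP.has(name.toLowerCase())) continue; // never forward
    if (typeof value !== "string") continue; // reject array values, per the fix
    out[name] = value;
  }
  return out;
}

const fwd = filterForwardableHeaders({
  "Content-Type": "application/json",
  Connection: "keep-alive",
  "X-Multi": ["a", "b"],
});
console.log(Object.keys(fwd)); // [ 'Content-Type' ]
```

A fuller implementation would also drop any header named in the incoming `Connection` field's value, which makes arbitrary headers hop-by-hop.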