add stop conv streaming and thinking text #1319

iceljc wants to merge 2 commits into SciSharp:master from

Conversation

Review Summary by Qodo: Add streaming cancellation and thinking text support

Walkthrough / Description

• Add streaming cancellation service to stop active LLM responses
• Implement stop-streaming endpoint for conversation-level cancellation
• Add MetaData field to chat responses for thinking text and metadata
• Support thinking text extraction from Google Gemini AI responses
• Integrate cancellation tokens across multiple LLM provider plugins
• Update Google GenerativeAI and OpenAI package versions

Diagram

```mermaid
flowchart LR
    A["Streaming Request"] -->|RegisterConversation| B["ConversationCancellationService"]
    B -->|GetToken| C["LLM Providers"]
    C -->|WithCancellation| D["Stream Processing"]
    D -->|OnCancel| E["StopStreaming Endpoint"]
    E -->|CancelStreaming| B
    B -->|UnregisterConversation| F["Cleanup"]
    D -->|Extract| G["Thinking Text"]
    G -->|MetaData| H["ChatResponseDto"]
```
File Changes

1. src/Infrastructure/BotSharp.Abstraction/Conversations/Dtos/ChatResponseDto.cs
Code Review by Qodo
```csharp
#if DEBUG
    _logger.LogCritical($"Content update: {text}");
```
1. Streaming content written to logs 📘 Rule violation ⛨ Security
The PR logs full or partial model output (and tool-call arguments) during streaming, which can contain sensitive user or model content. This violates the requirement to keep logs free of sensitive content and to use minimal structured metadata instead.
Agent Prompt
## Issue description
Streaming providers are logging user/model content (`text`, `allText`, tool-call args) which may contain sensitive data.
## Issue Context
Compliance requires logs to avoid user/model content; prefer minimal structured metadata (provider name, finish reason, token usage) and keep any content logging behind safe, explicit redaction/debug controls.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[310-310]
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[350-350]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[277-277]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[317-317]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[247-247]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[261-261]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[282-282]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[322-322]
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
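One way to satisfy the rule is to log only structured, non-sensitive metadata about each streaming chunk instead of its content. A minimal sketch, assuming names such as `finishReason` are illustrative and not taken from the PR:

```csharp
#if DEBUG
// Before: logs raw model output, which may contain sensitive user/model content.
// _logger.LogCritical($"Content update: {text}");

// After: log only minimal structured metadata (provider, finish reason, chunk size).
_logger.LogDebug(
    "Streaming chunk received. Provider: {Provider}, FinishReason: {FinishReason}, Chars: {CharCount}",
    Provider, finishReason, text?.Length ?? 0);
#endif
```

Keeping the call behind `#if DEBUG` and at `Debug` level (rather than `Critical`) further limits exposure in production log sinks.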
```csharp
var parsedLevel = ThinkingLevel.LOW;
level = level.ToLower();
switch (level)
```
2. `ParseThinkingLevel` uses `ToLower()` 📘 Rule violation ≡ Correctness
The PR normalizes `level` via `ToLower()` and then switches on it. `ToLower()` is culture-sensitive (e.g., the Turkish dotless-I casing rules) and can produce inconsistent matching of identifier-like strings. Compliance requires ordinal or culture-invariant comparisons for such matching.
Agent Prompt
## Issue description
`ParseThinkingLevel` uses culture-sensitive `ToLower()` before matching.
## Issue Context
Compliance requires ordinal/culture-invariant identifier matching (e.g., `StringComparison.OrdinalIgnoreCase` or `ToLowerInvariant()`).
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[725-737]
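A culture-invariant version can drop the lowercasing entirely and compare with `StringComparison.OrdinalIgnoreCase`. A sketch, assuming the `ThinkingLevel` member names beyond `LOW` (not shown in the quoted diff):

```csharp
// Culture-invariant parsing: no ToLower(), ordinal case-insensitive comparison.
private static ThinkingLevel ParseThinkingLevel(string? level)
{
    if (string.Equals(level, "high", StringComparison.OrdinalIgnoreCase))
    {
        return ThinkingLevel.HIGH;
    }
    if (string.Equals(level, "medium", StringComparison.OrdinalIgnoreCase))
    {
        return ThinkingLevel.MEDIUM;
    }
    return ThinkingLevel.LOW;
}
```

Alternatively, keep the `switch` but normalize with `ToLowerInvariant()`; direct ordinal comparison avoids the allocation altogether.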
```csharp
Response.Headers.Append(Microsoft.Net.Http.Headers.HeaderNames.Connection, "keep-alive");

await conv.SendMessage(agentId, inputMsg,
    replyMessage: input.Postback,
    // response generated
    async msg =>
    {
        response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
        response.MessageLabel = msg.MessageLabel;
        response.Function = msg.FunctionName;
        response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
        response.Instruction = msg.Instruction;
        response.Data = msg.Data;
        response.States = state.GetStates();
        await OnChunkReceived(Response, response);
    });
```
3. SSE stop-streaming broken 🐞 Bug ≡ Correctness
`SendMessageSse` never registers a cancellation token source for the conversation, so `StopStreaming` cannot cancel SSE-initiated streaming and providers run with `CancellationToken.None`.
Agent Prompt
### Issue description
`SendMessageSse` does not call `IConversationCancellationService.RegisterConversation(...)`, so `/conversation/{conversationId}/stop-streaming` cannot cancel SSE-started streams.
### Issue Context
Providers use `IConversationCancellationService.GetToken(conv.ConversationId)` and pass it into streaming enumerators; without registration this returns `CancellationToken.None`, and `CancelStreaming` will not find a CTS to cancel.
### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-498]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[500-507]
### Implementation notes
- In `SendMessageSse`, add the same `RegisterConversation(conversationId)` + `try/finally { UnregisterConversation(conversationId); }` pattern used in `SendMessage` (likely gated by `input.IsStreamingMessage`, or always for SSE if SSE always implies streaming).
- Consider catching `OperationCanceledException` to close the SSE stream cleanly (optionally emit a final event / [DONE]).
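The implementation notes above can be sketched as follows. This mirrors the register/unregister pattern the review says `SendMessage` already uses; the `_services.GetRequiredService` resolution and the exact `SendMessage` call shape are assumptions for illustration:

```csharp
// In SendMessageSse: register a CTS so /stop-streaming can find and cancel it.
var cancelService = _services.GetRequiredService<IConversationCancellationService>();
cancelService.RegisterConversation(conversationId);
try
{
    await conv.SendMessage(agentId, inputMsg,
        replyMessage: input.Postback,
        async msg => { /* write SSE chunk as before */ });
}
catch (OperationCanceledException)
{
    // Stream was cancelled via the stop-streaming endpoint;
    // close the SSE stream cleanly, optionally emitting a final [DONE] event.
}
finally
{
    // Always clean up so stale CTS entries don't accumulate per conversation.
    cancelService.UnregisterConversation(conversationId);
}
```

If SSE always implies streaming, the registration can be unconditional; otherwise gate it on `input.IsStreamingMessage` as in `SendMessage`.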
```csharp
async msg =>
{
    response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
    response.Function = msg.FunctionName;
    response.MessageLabel = msg.MessageLabel;
    response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
    response.Instruction = msg.Instruction;
    response.Data = msg.Data;
});
```
4. Metadata dropped in responses 🐞 Bug ≡ Correctness
ConversationController.SendMessage and SendMessageSse never copy RoleDialogModel.MetaData to ChatResponseModel, so newly added meta_data/thinking_text does not reach OpenAPI clients.
Agent Prompt
### Issue description
`ChatResponseDto.MetaData` is now part of the API contract, but OpenAPI endpoints don’t populate it from `RoleDialogModel.MetaData`, dropping `thinking_text` and other metadata.
### Issue Context
Providers populate `RoleDialogModel.MetaData` (e.g., GoogleAI). `GetDialogs` already maps `MetaData`, but `SendMessage`/`SendMessageSse` do not.
### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[379-438]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-491]
### Implementation notes
- In both callbacks (`SendMessage` and `SendMessageSse`), add `response.MetaData = msg.MetaData;` (and, if relevant, `response.IsStreaming = msg.IsStreaming;`).
- If you emit tool-call indication chunks via `OnReceiveToolCallIndication`, consider whether they should also include any relevant metadata.
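The fix is a one-line addition in each callback. A sketch of the `SendMessage`/`SendMessageSse` callback with the mapping restored (the `IsStreaming` line applies only if the DTO exposes that property):

```csharp
async msg =>
{
    response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
    // ... existing field mappings (MessageLabel, Function, RichContent, etc.) ...

    // Forward provider metadata so thinking_text reaches OpenAPI clients,
    // matching what GetDialogs already does.
    response.MetaData = msg.MetaData;
    response.IsStreaming = msg.IsStreaming;
});
```

Without this, `ChatResponseDto.MetaData` is part of the contract but always null on these endpoints.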
```csharp
private ThinkingLevel? ParseThinking(ReasoningSetting? settings, Agent agent)
{
    var level = _state.GetState("thyinking_level");

    if (string.IsNullOrEmpty(level) && _model == agent?.LlmConfig?.Model)
    {
        level = agent?.LlmConfig?.ReasoningEffortLevel;
    }
```
5. Thinking level key typo 🐞 Bug ≡ Correctness
GoogleAI ChatCompletionProvider reads conversation state key "thyinking_level" (misspelled), so state-based thinking level overrides never take effect.
Agent Prompt
### Issue description
`ParseThinking` reads the wrong state key (`thyinking_level`), preventing state overrides for thinking level.
### Issue Context
This affects `GenerationConfig.ThinkingConfig` selection and makes per-conversation configuration ineffective.
### Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[693-716]
### Implementation notes
- Rename the key to the intended spelling (e.g., `thinking_level`).
- Prefer a shared constant/enumeration for the key to avoid future typos.
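A shared constant makes this class of typo impossible to reintroduce silently. A sketch, where `StateKeys` is a hypothetical constants class, not something in the PR:

```csharp
// Centralize conversation-state key names so call sites cannot drift.
public static class StateKeys
{
    public const string ThinkingLevel = "thinking_level";
}

// In ParseThinking, read via the constant instead of an inline string:
var level = _state.GetState(StateKeys.ThinkingLevel);
```

Any writer of the state value should use the same constant, so reader and writer can never disagree on spelling.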
No description provided.