
add stop conv streaming and thinking text #1319

Open
iceljc wants to merge 2 commits into SciSharp:master from iceljc:features/refine-streaming-message

Conversation

Collaborator

@iceljc iceljc commented Apr 7, 2026

No description provided.

@qodo-code-review
Contributor

Review Summary by Qodo

Add streaming cancellation and thinking text support

✨ Enhancement 🐞 Bug fix


Walkthroughs

Description
• Add streaming cancellation service to stop active LLM responses
• Implement stop-streaming endpoint for conversation-level cancellation
• Add MetaData field to chat responses for thinking text and metadata
• Support thinking text extraction from Google Gemini AI responses
• Integrate cancellation tokens across multiple LLM provider plugins
• Update Google GenerativeAI and OpenAI package versions
Diagram
flowchart LR
  A["Streaming Request"] -->|RegisterConversation| B["ConversationCancellationService"]
  B -->|GetToken| C["LLM Providers"]
  C -->|WithCancellation| D["Stream Processing"]
  D -->|OnCancel| E["StopStreaming Endpoint"]
  E -->|CancelStreaming| B
  B -->|UnregisterConversation| F["Cleanup"]
  D -->|Extract| G["Thinking Text"]
  G -->|MetaData| H["ChatResponseDto"]


File Changes

1. src/Infrastructure/BotSharp.Abstraction/Conversations/Dtos/ChatResponseDto.cs ✨ Enhancement +4/-0

Add MetaData field to response DTO



2. src/Infrastructure/BotSharp.Abstraction/Conversations/IConversationCancellationService.cs ✨ Enhancement +33/-0

New interface for streaming cancellation management



3. src/Infrastructure/BotSharp.Core/Conversations/ConversationPlugin.cs ⚙️ Configuration changes +3/-0

Register cancellation service in dependency injection



4. src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs ✨ Enhancement +63/-0

Implement conversation-level streaming cancellation logic



5. src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs ✨ Enhancement +67/-36

Add stop-streaming endpoint and cancellation handling



6. src/Infrastructure/BotSharp.OpenAPI/ViewModels/Conversations/Request/NewMessageModel.cs ✨ Enhancement +9/-0

Add IsStreamingMessage flag to request model



7. src/Infrastructure/BotSharp.OpenAPI/ViewModels/Conversations/Response/ConverstionCancellationResponse.cs ✨ Enhancement +5/-0

New response model for cancellation status



8. src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs ✨ Enhancement +71/-49

Integrate cancellation token in Anthropic streaming



9. src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs ✨ Enhancement +71/-49

Integrate cancellation token in Azure OpenAI streaming



10. src/Plugins/BotSharp.Plugin.ChatHub/Hooks/ChatHubConversationHook.cs ✨ Enhancement +1/-0

Propagate MetaData in chat hub responses



11. src/Plugins/BotSharp.Plugin.ChatHub/Observers/ChatHubObserver.cs ✨ Enhancement +1/-0

Include MetaData in chat hub observer output



12. src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs ✨ Enhancement +71/-49

Integrate cancellation token in DeepSeek streaming



13. src/Plugins/BotSharp.Plugin.GoogleAI/Constants/Constants.cs ✨ Enhancement +1/-0

Add thinking text constant for metadata



14. src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs ✨ Enhancement +221/-80

Extract and stream thinking text from Gemini responses



15. src/Plugins/BotSharp.Plugin.LLamaSharp/Providers/ChatCompletionProvider.cs ✨ Enhancement +24/-14

Integrate cancellation token in LLamaSharp streaming



16. src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs ✨ Enhancement +79/-58

Integrate cancellation token in OpenAI streaming



17. src/Plugins/BotSharp.Plugin.SparkDesk/Providers/ChatCompletionProvider.cs ✨ Enhancement +35/-25

Integrate cancellation token in SparkDesk streaming



18. Directory.Packages.props Dependencies +3/-3

Update Google GenerativeAI and OpenAI package versions




@qodo-code-review
Contributor

qodo-code-review bot commented Apr 7, 2026

Code Review by Qodo

🐞 Bugs (5)   📘 Rule violations (4)   📎 Requirement gaps (0)   🎨 UX Issues (0)
🐞 Bugs: ≡ Correctness (3) · ☼ Reliability (1) · ⛨ Security (1)
📘 Rule violations: ≡ Correctness (1) · ☼ Reliability (2) · ⛨ Security (1)



Action required

1. Streaming text logged to logs 📘
Description
The PR logs full/partial model output (and tool-call args) during streaming, which can contain
sensitive user/model content. This violates the requirement to keep logs free of sensitive content
and use minimal structured metadata.
Code

src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[310]

+                    _logger.LogCritical($"Content update: {text}");
Evidence
PR Compliance ID 3 forbids logging sensitive user/model content; the changed code logs streaming
content chunks (text) and accumulated output (allText) (and similar patterns appear in other
providers in this PR).

src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[310-310]
src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[350-350]
src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[277-277]
src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[317-317]
src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[247-247]
src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[261-261]
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[282-282]
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[322-322]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Streaming providers are logging user/model content (`text`, `allText`, tool-call args) which may contain sensitive data.

## Issue Context
Compliance requires logs to avoid user/model content; prefer minimal structured metadata (provider name, finish reason, token usage) and keep any content logging behind safe, explicit redaction/debug controls.

## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[310-310]
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[350-350]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[277-277]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[317-317]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[247-247]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[261-261]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[282-282]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[322-322]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
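A minimal sketch of the safer pattern this prompt describes — structured metadata only, never the chunk text itself. The extension method name and event shape are assumptions for illustration, not BotSharp code:

```csharp
using Microsoft.Extensions.Logging;

public static class StreamingLogExtensions
{
    // Logs chunk metadata (provider name, chunk length) without ever emitting
    // the model-generated text, so no sensitive content reaches the logs.
    public static void LogStreamChunk(this ILogger logger, string provider, int chunkLength)
    {
        logger.LogDebug("Streaming chunk received. Provider={Provider} Length={Length}",
            provider, chunkLength);
    }
}
```

Under this sketch, a call like `_logger.LogCritical($"Content update: {text}")` would become `_logger.LogStreamChunk(Provider, text.Length)`, with any full-content logging kept behind an explicit, off-by-default debug switch.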


2. ParseThinkingLevel uses ToLower() 📘
Description
The PR normalizes level via ToLower() and then switches on it, which is culture-sensitive and
can lead to inconsistent identifier-like matching. Compliance requires ordinal/culture-invariant
comparisons for such matching.
Code

src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[R725-727]

+        var parsedLevel = ThinkingLevel.LOW;
+        level = level.ToLower();
+        switch (level)
Evidence
PR Compliance ID 4 requires ordinal (culture-invariant) comparisons; ToLower() uses current
culture and the subsequent switch depends on that transformation.

src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[725-727]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`ParseThinkingLevel` uses culture-sensitive `ToLower()` before matching.

## Issue Context
Compliance requires ordinal/culture-invariant identifier matching (e.g., `StringComparison.OrdinalIgnoreCase` or `ToLowerInvariant()`).

## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[725-737]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
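A sketch of the culture-invariant alternative; only `LOW` appears in the quoted diff, so the other enum members and level strings are assumptions based on common reasoning-effort levels:

```csharp
using System;

public enum ThinkingLevel { LOW, MEDIUM, HIGH }

public static class ThinkingLevelParser
{
    // Ordinal comparisons avoid culture surprises such as the Turkish dotless-i,
    // where "HIGH".ToLower() does not produce "high".
    public static ThinkingLevel Parse(string? level)
    {
        if (string.Equals(level, "high", StringComparison.OrdinalIgnoreCase)) return ThinkingLevel.HIGH;
        if (string.Equals(level, "medium", StringComparison.OrdinalIgnoreCase)) return ThinkingLevel.MEDIUM;
        return ThinkingLevel.LOW; // default, matching the diff's initial value
    }
}
```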


3. SSE stop-streaming broken 🐞
Description
SendMessageSse never registers a cancellation token source for the conversation, so StopStreaming
cannot cancel SSE-initiated streaming and providers will run with CancellationToken.None.
Code

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[R475-491]

        Response.Headers.Append(Microsoft.Net.Http.Headers.HeaderNames.Connection, "keep-alive");

        await conv.SendMessage(agentId, inputMsg,
-            replyMessage: input.Postback,
-            // responsed generated
-            async msg =>
-            {
-                response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
-                response.MessageLabel = msg.MessageLabel;
-                response.Function = msg.FunctionName;
-                response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
-                response.Instruction = msg.Instruction;
-                response.Data = msg.Data;
-                response.States = state.GetStates();
-                
-                await OnChunkReceived(Response, response);
-            });
+                replyMessage: input.Postback,
+                // responsed generated
+                async msg =>
+                {
+                    response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
+                    response.MessageLabel = msg.MessageLabel;
+                    response.Function = msg.FunctionName;
+                    response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
+                    response.Instruction = msg.Instruction;
+                    response.Data = msg.Data;
+                    response.States = state.GetStates();
+
+                    await OnChunkReceived(Response, response);
+                });
Evidence
Cancellation registration is only performed in SendMessage when IsStreamingMessage is true, but the
SSE endpoint lacks any RegisterConversation/UnregisterConversation block. Since providers fetch the
token via IConversationCancellationService.GetToken(conversationId), missing registration means
there is nothing for StopStreaming to cancel and streaming loops won’t observe cancellation.

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[379-431]
src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-498]
src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[16-62]
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[257-270]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`SendMessageSse` does not call `IConversationCancellationService.RegisterConversation(...)`, so `/conversation/{conversationId}/stop-streaming` cannot cancel SSE-started streams.

### Issue Context
Providers use `IConversationCancellationService.GetToken(conv.ConversationId)` and pass it into streaming enumerators; without registration this returns `CancellationToken.None`, and `CancelStreaming` will not find a CTS to cancel.

### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-498]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[500-507]

### Implementation notes
- In `SendMessageSse`, add the same `RegisterConversation(conversationId)` + `try/finally { UnregisterConversation(conversationId); }` pattern used in `SendMessage` (likely gated by `input.IsStreamingMessage`, or always for SSE if SSE always implies streaming).
- Consider catching `OperationCanceledException` to close the SSE stream cleanly (optionally emit a final event / [DONE]).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
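The register/unregister pattern from the implementation notes can be sketched as follows, using a stand-in interface since the full `IConversationCancellationService` signature is not shown in this review:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public interface IConversationCancellation
{
    CancellationToken RegisterConversation(string conversationId);
    void UnregisterConversation(string conversationId);
}

public static class SseStreamingHelper
{
    // Wraps a streaming body so stop-streaming can find a token to cancel,
    // and guarantees unregistration even when the body throws.
    public static async Task RunWithCancellationAsync(
        IConversationCancellation cancellation,
        string conversationId,
        Func<CancellationToken, Task> streamBody)
    {
        var token = cancellation.RegisterConversation(conversationId);
        try
        {
            await streamBody(token);
        }
        catch (OperationCanceledException)
        {
            // Stop requested: end the SSE response cleanly
            // (optionally emit a final event / [DONE] marker here).
        }
        finally
        {
            cancellation.UnregisterConversation(conversationId);
        }
    }
}
```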


4. MetaData dropped in responses 🐞
Description
ConversationController.SendMessage and SendMessageSse never copy RoleDialogModel.MetaData to
ChatResponseModel, so newly added meta_data/thinking_text does not reach OpenAPI clients.
Code

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[R414-422]

+                async msg =>
+                {
+                    response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
+                    response.Function = msg.FunctionName;
+                    response.MessageLabel = msg.MessageLabel;
+                    response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
+                    response.Instruction = msg.Instruction;
+                    response.Data = msg.Data;
+                });
Evidence
ChatResponseDto now exposes MetaData and providers populate RoleDialogModel.MetaData (e.g., GoogleAI
sets ThoughtSignature/ThinkingText), but the OpenAPI response mapping in SendMessage/SSE only maps
text/function/rich content/data and omits MetaData. Other outbound paths (ChatHub observer/hook)
explicitly forward MetaData, indicating it is intended to be surfaced.

src/Infrastructure/BotSharp.Abstraction/Conversations/Dtos/ChatResponseDto.cs[46-55]
src/Infrastructure/BotSharp.Abstraction/Conversations/Models/RoleDialogModel.cs[71-77]
src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[379-438]
src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[98-114]
src/Plugins/BotSharp.Plugin.ChatHub/Observers/ChatHubObserver.cs[72-90]
src/Plugins/BotSharp.Plugin.ChatHub/Hooks/ChatHubConversationHook.cs[106-119]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ChatResponseDto.MetaData` is now part of the API contract, but OpenAPI endpoints don’t populate it from `RoleDialogModel.MetaData`, dropping `thinking_text` and other metadata.

### Issue Context
Providers populate `RoleDialogModel.MetaData` (e.g., GoogleAI). `GetDialogs` already maps `MetaData`, but `SendMessage`/`SendMessageSse` do not.

### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[379-438]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-491]

### Implementation notes
- In both callbacks (`SendMessage` and `SendMessageSse`), add `response.MetaData = msg.MetaData;` (and, if relevant, `response.IsStreaming = msg.IsStreaming;`).
- If you emit tool-call indication chunks via `OnReceiveToolCallIndication`, consider whether they should also include any relevant metadata.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


5. Thinking level key typo 🐞
Description
GoogleAI ChatCompletionProvider reads conversation state key "thyinking_level" (misspelled), so
state-based thinking level overrides never take effect.
Code

src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[R694-701]

+    private ThinkingLevel? ParseThinking(ReasoningSetting? settings, Agent agent)
+    {
+        var level = _state.GetState("thyinking_level");
+
+        if (string.IsNullOrEmpty(level) && _model == agent?.LlmConfig?.Model)
+        {
+            level = agent?.LlmConfig?.ReasoningEffortLevel;
+        }
Evidence
ParseThinking uses _state.GetState("thyinking_level"), and this string appears nowhere else in the
codebase, making it effectively impossible to set. As a result, thinking level falls back to agent
config/provider settings even when a caller intends to override via state.

src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[693-716]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ParseThinking` reads the wrong state key (`thyinking_level`), preventing state overrides for thinking level.

### Issue Context
This affects `GenerationConfig.ThinkingConfig` selection and makes per-conversation configuration ineffective.

### Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[693-716]

### Implementation notes
- Rename the key to the intended spelling (e.g., `thinking_level`).
- Prefer a shared constant/enumeration for the key to avoid future typos.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
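As the notes suggest, a shared constant removes this class of typo entirely; the class and key names here are illustrative, not the actual BotSharp constants:

```csharp
public static class StateKeys
{
    // Single source of truth for the state key, so "thyinking_level" cannot reappear.
    public const string ThinkingLevel = "thinking_level";
}

// At the call site:
// var level = _state.GetState(StateKeys.ThinkingLevel);
```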



Remediation recommended

6. MetaData passed without copy 📘
Description
The PR forwards message.MetaData directly into outbound response models/DTOs without creating a
defensive copy. This risks shared mutable dictionary state being mutated across hooks/message
boundaries.
Code

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[134]

+                    MetaData = message.MetaData,
Evidence
PR Compliance ID 1 requires defensive copying of mutable collections across boundaries; these added
assignments pass the same dictionary reference into outbound objects rather than cloning it.

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[134-134]
src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[150-150]
src/Plugins/BotSharp.Plugin.ChatHub/Hooks/ChatHubConversationHook.cs[115-115]
src/Plugins/BotSharp.Plugin.ChatHub/Observers/ChatHubObserver.cs[83-83]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Outbound response objects are being constructed with `MetaData = message.MetaData` (shared mutable dictionary reference).

## Issue Context
Compliance requires defensive copies for mutable collections crossing hooks/message boundaries to prevent cross-request/component leakage.

## Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[134-134]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[150-150]
- src/Plugins/BotSharp.Plugin.ChatHub/Hooks/ChatHubConversationHook.cs[115-115]
- src/Plugins/BotSharp.Plugin.ChatHub/Observers/ChatHubObserver.cs[83-83]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
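A minimal defensive-copy helper for this, assuming `MetaData` is a `Dictionary<string, string>` (the actual value type may differ):

```csharp
using System.Collections.Generic;

public static class MetaDataExtensions
{
    // Returns an independent copy so outbound DTOs cannot observe later mutations
    // of the source dictionary (and mutations of the DTO cannot leak back).
    public static Dictionary<string, string>? CloneMetaData(this Dictionary<string, string>? metaData)
        => metaData == null ? null : new Dictionary<string, string>(metaData);
}
```

The flagged assignments would then read `MetaData = message.MetaData.CloneMetaData(),`.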


7. CancelStreaming keeps CTS cached 📘
Description
CancelStreaming cancels the CancellationTokenSource but does not remove it from the dictionary
or dispose it, which can retain cancelled/disposed state and leak resources. Async/concurrent
cleanup should ensure resources are released on cancellation paths.
Code

src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[R32-43]

+    public bool CancelStreaming(string conversationId)
+    {
+        if (_cancellationTokenSources.TryGetValue(conversationId, out var cts))
+        {
+            cts.Cancel();
+            _logger.LogInformation("Streaming cancelled for conversation {ConversationId}", conversationId);
+            return true;
+        }
+
+        _logger.LogWarning("No active streaming found for conversation {ConversationId}", conversationId);
+        return false;
+    }
Evidence
PR Compliance ID 5 requires correct cleanup on cancellation; this implementation cancels without
unregistering/removing/disposing the CTS, relying on separate calls to clean up.

src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[32-43]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`CancelStreaming` cancels a CTS but leaves it stored in `_cancellationTokenSources` and does not dispose it.

## Issue Context
On cancellation, CTS instances should be cleaned up deterministically to avoid memory/resource leaks and avoid reusing cancelled token sources.

## Fix Focus Areas
- src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[32-43]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


8. Cancel/dispose race 🐞
Description
ConversationCancellationService.CancelStreaming can call Cancel() on a CancellationTokenSource that
is concurrently removed and disposed by UnregisterConversation, which can throw and fail
stop-streaming requests.
Code

src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[R32-39]

+    public bool CancelStreaming(string conversationId)
+    {
+        if (_cancellationTokenSources.TryGetValue(conversationId, out var cts))
+        {
+            cts.Cancel();
+            _logger.LogInformation("Streaming cancelled for conversation {ConversationId}", conversationId);
+            return true;
+        }
Evidence
CancelStreaming uses TryGetValue to grab a CTS reference and then calls Cancel, while
UnregisterConversation can TryRemove and Dispose that CTS concurrently. There is no synchronization
between these methods beyond ConcurrentDictionary’s per-operation thread safety, so Cancel can run
after Dispose.

src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[32-52]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`CancelStreaming` and `UnregisterConversation` can race: one thread disposes the CTS while another thread cancels it.

### Issue Context
Both methods operate on the same `CancellationTokenSource` values in a `ConcurrentDictionary` but perform multi-step operations without mutual exclusion.

### Fix Focus Areas
- src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationCancellationService.cs[32-52]

### Implementation notes
Options:
- Use `TryRemove` in `CancelStreaming` (atomically remove then cancel+dispose), and adjust call sites to not require a separate unregister on cancel paths.
- Or protect cancel/remove/dispose operations with a per-conversation lock (or a single lock if contention is low).
- At minimum, guard `cts.Cancel()` with a `try/catch (ObjectDisposedException)` and treat as already-unregistered.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
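The first option (atomically remove, then cancel and dispose) could look like the sketch below; the names mirror the review excerpts, not the actual implementation, and this also addresses the cleanup gap in item 7:

```csharp
using System.Collections.Concurrent;
using System.Threading;

public class ConversationCancellationService
{
    private readonly ConcurrentDictionary<string, CancellationTokenSource> _cancellationTokenSources = new();

    public CancellationToken RegisterConversation(string conversationId)
        => _cancellationTokenSources.GetOrAdd(conversationId, _ => new CancellationTokenSource()).Token;

    public bool CancelStreaming(string conversationId)
    {
        // TryRemove makes cancellation mutually exclusive with unregistration:
        // exactly one caller ever owns, cancels, and disposes a given CTS,
        // so Cancel() can never race against Dispose().
        if (_cancellationTokenSources.TryRemove(conversationId, out var cts))
        {
            using (cts) { cts.Cancel(); }
            return true;
        }
        return false;
    }

    public void UnregisterConversation(string conversationId)
    {
        if (_cancellationTokenSources.TryRemove(conversationId, out var cts))
        {
            cts.Dispose();
        }
    }
}
```

Tokens handed out before cancellation remain valid for polling: `IsCancellationRequested` stays `true` even after the source is disposed.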


9. Stop endpoint lacks authorization 🐞
Description
StopStreaming cancels by conversationId without verifying the authenticated user is allowed to
control that conversation, enabling any authenticated user to cancel another user’s active stream if
they know the ID.
Code

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[R500-507]

+    [HttpPost("/conversation/{conversationId}/stop-streaming")]
+    public ConverstionCancellationResponse StopStreaming([FromRoute] string conversationId)
    {
-        var indicator = new ChatResponseModel
-        {
-            ConversationId = conversationId,
-            MessageId = msg.MessageId,
-            Text = msg.Indication,
-            Function = "indicating",
-            Instruction = msg.Instruction,
-            States = new Dictionary<string, string>()
-        };
-        await OnChunkReceived(Response, indicator);
+        var streamingCancellation = _services.GetRequiredService<IConversationCancellationService>();
+        var cancelled = streamingCancellation.CancelStreaming(conversationId);
+
+        return new ConverstionCancellationResponse { Success = cancelled };
    }
Evidence
StopStreaming directly calls CancelStreaming(conversationId) without loading the conversation or
applying a userId filter; in contrast, other endpoints (e.g., GetConversation) apply an admin/user
check and constrain by UserId. This makes stop-streaming less restricted than other conversation
operations.

src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[500-507]
src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[164-180]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`POST /conversation/{conversationId}/stop-streaming` cancels streaming solely by `conversationId` without verifying the caller is permitted to act on that conversation.

### Issue Context
Other endpoints in this controller consult `IUserService.IsAdminUser` and apply a `UserId` filter when retrieving conversation data.

### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[500-507]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[164-180]

### Implementation notes
- Load the conversation with the same access rules used elsewhere (admin vs non-admin) before cancelling.
- If unauthorized/not found, return 404/403 rather than exposing whether a conversation is streaming.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
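The access rule can be reduced to a small guard, sketched here with stand-in record types since the real `IUserService` and conversation models are not shown in this review:

```csharp
public record User(string Id, bool IsAdmin);
public record Conversation(string Id, string UserId);

public static class StopStreamingGuard
{
    // Admins may control any conversation; other users only their own.
    // Callers should return 404 for both "not found" and "not permitted"
    // so the endpoint does not leak whether a conversation exists.
    public static bool CanControl(User caller, Conversation? conversation)
        => conversation != null && (caller.IsAdmin || conversation.UserId == caller.Id);
}
```

`StopStreaming` would evaluate this guard before calling `CancelStreaming(conversationId)`.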




#if DEBUG
_logger.LogCritical($"Content update: {text}");
_logger.LogCritical($"Content update: {text}");
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

1. Streaming text logged to logs 📘 Rule violation ⛨ Security

The PR logs full/partial model output (and tool-call args) during streaming, which can contain
sensitive user/model content. This violates the requirement to keep logs free of sensitive content
and use minimal structured metadata.
Agent Prompt
## Issue description
Streaming providers are logging user/model content (`text`, `allText`, tool-call args) which may contain sensitive data.

## Issue Context
Compliance requires logs to avoid user/model content; prefer minimal structured metadata (provider name, finish reason, token usage) and keep any content logging behind safe, explicit redaction/debug controls.

## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[310-310]
- src/Plugins/BotSharp.Plugin.AzureOpenAI/Providers/Chat/ChatCompletionProvider.cs[350-350]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[277-277]
- src/Plugins/BotSharp.Plugin.DeepSeekAI/Providers/Chat/ChatCompletionProvider.cs[317-317]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[247-247]
- src/Plugins/BotSharp.Plugin.AnthropicAI/Providers/ChatCompletionProvider.cs[261-261]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[282-282]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs[322-322]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +725 to +727
var parsedLevel = ThinkingLevel.LOW;
level = level.ToLower();
switch (level)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

2. parsethinkinglevel uses tolower() 📘 Rule violation ≡ Correctness

The PR normalizes level via ToLower() and then switches on it, which is culture-sensitive and
can lead to inconsistent identifier-like matching. Compliance requires ordinal/culture-invariant
comparisons for such matching.
Agent Prompt
## Issue description
`ParseThinkingLevel` uses culture-sensitive `ToLower()` before matching.

## Issue Context
Compliance requires ordinal/culture-invariant identifier matching (e.g., `StringComparison.OrdinalIgnoreCase` or `ToLowerInvariant()`).

## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[725-737]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines 475 to +491
Response.Headers.Append(Microsoft.Net.Http.Headers.HeaderNames.Connection, "keep-alive");

await conv.SendMessage(agentId, inputMsg,
replyMessage: input.Postback,
// responsed generated
async msg =>
{
response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
response.MessageLabel = msg.MessageLabel;
response.Function = msg.FunctionName;
response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
response.Instruction = msg.Instruction;
response.Data = msg.Data;
response.States = state.GetStates();
await OnChunkReceived(Response, response);
});
replyMessage: input.Postback,
// responsed generated
async msg =>
{
response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
response.MessageLabel = msg.MessageLabel;
response.Function = msg.FunctionName;
response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
response.Instruction = msg.Instruction;
response.Data = msg.Data;
response.States = state.GetStates();

await OnChunkReceived(Response, response);
});
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

3. Sse stop-streaming broken 🐞 Bug ≡ Correctness

SendMessageSse never registers a cancellation token source for the conversation, so StopStreaming
cannot cancel SSE-initiated streaming and providers will run with CancellationToken.None.
Agent Prompt
### Issue description
`SendMessageSse` does not call `IConversationCancellationService.RegisterConversation(...)`, so `/conversation/{conversationId}/stop-streaming` cannot cancel SSE-started streams.

### Issue Context
Providers use `IConversationCancellationService.GetToken(conv.ConversationId)` and pass it into streaming enumerators; without registration this returns `CancellationToken.None`, and `CancelStreaming` will not find a CTS to cancel.

### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-498]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[500-507]

### Implementation notes
- In `SendMessageSse`, add the same `RegisterConversation(conversationId)` + `try/finally { UnregisterConversation(conversationId); }` pattern used in `SendMessage` (likely gated by `input.IsStreamingMessage`, or always for SSE if SSE always implies streaming).
- Consider catching `OperationCanceledException` to close the SSE stream cleanly (optionally emit a final event / [DONE]).
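
The steps above can be sketched as follows. This is a minimal sketch, assuming the `IConversationCancellationService` member names described in this review and the usual `_services` DI accessor in BotSharp controllers; it is not the exact implementation:

```csharp
var cancelService = _services.GetRequiredService<IConversationCancellationService>();
cancelService.RegisterConversation(conversationId);
try
{
    await conv.SendMessage(agentId, inputMsg,
        replyMessage: input.Postback,
        async msg => { /* ...existing SSE chunk handling... */ });
}
catch (OperationCanceledException)
{
    // Streaming was stopped via /stop-streaming; end the SSE response cleanly
    // (optionally emit a final event or [DONE] marker here).
}
finally
{
    // Always release the token source so stale entries don't linger
    // after the request completes or is cancelled.
    cancelService.UnregisterConversation(conversationId);
}
```

Mirroring the `RegisterConversation`/`UnregisterConversation` pattern from `SendMessage` keeps the two endpoints consistent, so `StopStreaming` works regardless of which transport initiated the stream.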


Comment on lines +414 to +422

```csharp
async msg =>
{
    response.Text = !string.IsNullOrEmpty(msg.SecondaryContent) ? msg.SecondaryContent : msg.Content;
    response.Function = msg.FunctionName;
    response.MessageLabel = msg.MessageLabel;
    response.RichContent = msg.SecondaryRichContent ?? msg.RichContent;
    response.Instruction = msg.Instruction;
    response.Data = msg.Data;
});
```
Action required

4. Metadata dropped in responses 🐞 Bug ≡ Correctness

ConversationController.SendMessage and SendMessageSse never copy RoleDialogModel.MetaData to
ChatResponseModel, so newly added meta_data/thinking_text does not reach OpenAPI clients.
Agent Prompt
### Issue description
`ChatResponseDto.MetaData` is now part of the API contract, but OpenAPI endpoints don’t populate it from `RoleDialogModel.MetaData`, dropping `thinking_text` and other metadata.

### Issue Context
Providers populate `RoleDialogModel.MetaData` (e.g., GoogleAI). `GetDialogs` already maps `MetaData`, but `SendMessage`/`SendMessageSse` do not.

### Fix Focus Areas
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[379-438]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Conversation/ConversationController.cs[442-491]

### Implementation notes
- In both callbacks (`SendMessage` and `SendMessageSse`), add `response.MetaData = msg.MetaData;` (and, if relevant, `response.IsStreaming = msg.IsStreaming;`).
- If you emit tool-call indication chunks via `OnReceiveToolCallIndication`, consider whether they should also include any relevant metadata.
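
A sketch of the callback change, assuming `ChatResponseModel` exposes the `MetaData` property added elsewhere in this PR:

```csharp
async msg =>
{
    // ...existing field mapping (Text, Function, RichContent, etc.)...
    response.Data = msg.Data;
    // Propagate thinking_text and any other provider metadata to the client.
    response.MetaData = msg.MetaData;
});
```

The same one-line assignment applies to both the `SendMessage` and `SendMessageSse` callbacks, matching what `GetDialogs` already does.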


Comment on lines +694 to +701

```csharp
private ThinkingLevel? ParseThinking(ReasoningSetting? settings, Agent agent)
{
    var level = _state.GetState("thyinking_level");

    if (string.IsNullOrEmpty(level) && _model == agent?.LlmConfig?.Model)
    {
        level = agent?.LlmConfig?.ReasoningEffortLevel;
    }
```
Action required

5. Thinking level key typo 🐞 Bug ≡ Correctness

GoogleAI ChatCompletionProvider reads conversation state key "thyinking_level" (misspelled), so
state-based thinking level overrides never take effect.
Agent Prompt
### Issue description
`ParseThinking` reads the wrong state key (`thyinking_level`), preventing state overrides for thinking level.

### Issue Context
This affects `GenerationConfig.ThinkingConfig` selection and makes per-conversation configuration ineffective.

### Fix Focus Areas
- src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs[693-716]

### Implementation notes
- Rename the key to the intended spelling (e.g., `thinking_level`).
- Prefer a shared constant/enumeration for the key to avoid future typos.
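
For example (the `StateKeys` class name here is hypothetical; any shared constants holder in the codebase works):

```csharp
public static class StateKeys
{
    // Single source of truth for the state key, preventing misspellings
    // like "thyinking_level" from creeping back in.
    public const string ThinkingLevel = "thinking_level";
}

// In ParseThinking:
var level = _state.GetState(StateKeys.ThinkingLevel);
```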

