Please read this first
- Have you read the docs? Agents SDK docs
- Have you searched for related issues? Others may have faced similar issues.
## Summary
When using `OpenAIConversationsSession`, hosted tool output items such as `file_search_call` arrive from the SDK/model with a valid `id`, but the session persistence path strips that `id` before calling `conversations.items.create(...)`.
That produces a 400 from the Conversations API:
```text
Missing required parameter: 'items[0].id'
```

This appears to happen in the `openai-agents` session persistence layer, not in app-side serialization.
## Environment

- `openai-agents==0.17.0`
- `openai==2.34.0`
- Windows
- Using `OpenAIConversationsSession`
- Hosted `file_search` tool enabled
## Actual behavior
A run completes successfully, but persistence into the Conversations session fails with:
```text
BadRequestError: Error code: 400 - {
  "error": {
    "message": "Missing required parameter: 'items[0].id'.",
    "type": "invalid_request_error",
    "param": "items[0].id",
    "code": "missing_required_parameter"
  }
}
```
The rejected payload reaching `add_items(...)` looks like this:
```json
[
  {
    "queries": [
      "review this file, and please let me know what its about",
      "summarize the uploaded file and describe what it is about",
      "main topic and purpose of the uploaded document"
    ],
    "status": "completed",
    "type": "file_search_call",
    "results": null
  },
  {
    "content": [...],
    "role": "assistant",
    "status": "completed",
    "type": "message",
    "phase": "final_answer"
  }
]
```
Notice that the `file_search_call` item has no `id` by the time it is persisted.
## Expected behavior
If the raw output item has an `id`, that `id` should still be present when persisting the item into `OpenAIConversationsSession`, especially for item types where the Conversations API requires it.
## Evidence: raw SDK events do contain the id
From the raw streamed SDK events, the `file_search_call` item clearly has an `id`:
`response.output_item.done`:

```json
{
  "item": {
    "id": "fs_0e9f7c7c4bcd09e60069fdc26e25d88192aeb641a50f296f6f",
    "queries": [
      "review this file, and please let me know what its about",
      "summarize the uploaded file and describe what it is about",
      "main topic and purpose of the uploaded document"
    ],
    "status": "completed",
    "type": "file_search_call",
    "results": null
  },
  "type": "response.output_item.done"
}
```
`response.completed`:

```json
"output": [
  {
    "id": "fs_0e9f7c7c4bcd09e60069fdc26e25d88192aeb641a50f296f6f",
    "queries": [...],
    "status": "completed",
    "type": "file_search_call",
    "results": null
  },
  {
    "id": "msg_0e9f7c7c4bcd09e60069fdc27031e881929b7d866725444e12",
    "type": "message",
    ...
  }
]
```
So the `id` exists in the raw SDK/model output and is lost later.
## Root cause in `openai-agents`
I traced the persistence path in `openai-agents==0.15.2`:

- Run items are converted to input items via `model_dump(exclude_unset=True)`:
  `agents/items.py`, `RunItemBase.to_input_item()`
- Session persistence then sanitizes items before saving:
  `agents/run_internal/session_persistence.py`
- In `_sanitize_openai_conversation_item(...)`, the `id` is explicitly removed:
  ```python
  def _sanitize_openai_conversation_item(item):
      if isinstance(item, dict):
          clean_item = strip_internal_input_item_metadata(item)
          clean_item.pop("id", None)
          clean_item.pop("provider_data", None)
          return clean_item
  ```
- Those sanitized items are then passed to `session.add_items(items_to_save)`, which eventually calls:

  ```python
  await self._openai_client.conversations.items.create(
      conversation_id=session_id,
      items=items,
  )
  ```
Because `id` was stripped earlier, the Conversations API rejects the hosted tool item.
## Why this looks incorrect
Stripping `id` may make sense for some replay/history normalization cases, but it appears too broad for `OpenAIConversationsSession` persistence of hosted output items like `file_search_call`, where the Conversations API expects `id` to still be present.
## Minimal repro idea

- Create an agent with hosted `file_search`
- Use `OpenAIConversationsSession`
- Upload/index a file
- Ask a question that triggers `file_search`
- Let the run complete
- Persistence fails on `conversations.items.create(...)` with missing `items[0].id`
## Workaround
I worked around this on the app side by wrapping `OpenAIConversationsSession.add_items(...)` and filtering/dropping invalid items so the rest of the turn can still persist, but that is only a mitigation.
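For reference, the app-side mitigation looks roughly like this. The wrapper class, the helper name, and the set of affected item types are my own assumptions for illustration, not SDK guarantees:

```python
# Hosted tool output types observed/assumed to require an id at persistence
# time (derived from the file_search_call failure; the full set may differ).
HOSTED_TYPES_REQUIRING_ID = {"file_search_call", "web_search_call", "code_interpreter_call"}

def filter_persistable(items):
    """Drop items the Conversations API would reject for a missing id."""
    kept = []
    for item in items:
        if item.get("type") in HOSTED_TYPES_REQUIRING_ID and "id" not in item:
            continue  # skip the invalid item instead of failing the whole batch
        kept.append(item)
    return kept

class SafeSessionWrapper:
    """Wraps a session so add_items never submits known-invalid items."""

    def __init__(self, session):
        self._session = session

    async def add_items(self, items):
        await self._session.add_items(filter_persistable(items))

    def __getattr__(self, name):
        # Delegate everything else (get_items, clear_session, ...) untouched.
        return getattr(self._session, name)
```

This keeps the rest of the turn persisting, at the cost of silently losing the hosted tool call from the conversation history.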
## Request
Please adjust the OpenAI Conversations persistence sanitization so it does not blindly remove `id` from output items that require it for `conversations.items.create(...)`.
At minimum, file_search_call should preserve its id. More broadly, _sanitize_openai_conversation_item(...) may need item-type-aware behavior instead of unconditional clean_item.pop("id", None).
The code points in the installed package that support this:

- `agents/run_internal/session_persistence.py:320`
- `agents/run_internal/session_persistence.py:341`
- `agents/run_internal/session_persistence.py:568`
- `agents/items.py:151`

Raw streamed-event evidence is captured locally in:

- [sdk_events.jsonl](D:/repos/langchain_master/debug/sdk_events.jsonl)