Conversation

@yurekami

Summary

Problem

When using the OpenAI Agents SDK with LiteLLM configured for DeepSeek's thinking mode (deepseek/deepseek-reasoner), multi-turn conversations with tool calls fail because:

  1. The DeepSeek API returns `reasoning_content` in the reasoning item's `summary` field
  2. When the conversation history is sent back for subsequent turns, the `reasoning_content` field is missing from assistant messages
  3. The DeepSeek API requires this field on assistant messages for proper conversation continuity
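A hypothetical sketch of the assistant-message shape at issue (the field names `reasoning_content` and `tool_calls` come from the description above; the content values are invented for illustration):

```python
# Sketch of an assistant message on a follow-up turn. Per the problem
# description, DeepSeek's thinking mode rejects the turn if
# "reasoning_content" is missing from an assistant message like this.
assistant_message = {
    "role": "assistant",
    "content": None,
    "reasoning_content": "The user asked about the weather, so call the tool...",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
        }
    ],
}
```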

Solution

The fix tracks `pending_reasoning_content` across message conversion:

  1. When a reasoning message is encountered, extract the reasoning text from the `summary` field (which contains `summary_text` items)
  2. Store it as `pending_reasoning_content`
  3. When an assistant message with tool calls is flushed, include the pending `reasoning_content`
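The three steps above can be sketched as a small conversion loop. This is a hedged, self-contained illustration, not the actual SDK code; `convert_messages` and the dict-based message shapes are assumptions for the sketch:

```python
def convert_messages(items):
    """Hypothetical sketch: carry reasoning text from a reasoning item
    into the next assistant message that has tool calls."""
    converted = []
    pending_reasoning_content = None

    for item in items:
        if item.get("type") == "reasoning":
            # Steps 1-2: extract the text from the summary_text parts
            # and hold it until the next assistant message is flushed.
            for part in item.get("summary", []):
                if isinstance(part, dict) and part.get("type") == "summary_text":
                    pending_reasoning_content = part.get("text", "")
                    break
        elif item.get("role") == "assistant" and item.get("tool_calls"):
            # Step 3: attach the pending reasoning, then clear it so it
            # is not reused for later assistant messages.
            msg = dict(item)
            if pending_reasoning_content is not None:
                msg["reasoning_content"] = pending_reasoning_content
                pending_reasoning_content = None
            converted.append(msg)
        else:
            converted.append(dict(item))
    return converted
```

Reasoning items are consumed rather than emitted, since they have no direct Chat Completions equivalent; only their text survives, attached to the following assistant message.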

Test plan

  • Added test for extracting `reasoning_content` from `summary` and including it in an assistant message with tool calls
  • Added test for a reasoning message without a `summary` field (should not cause errors)
  • Added test to verify `reasoning_content` is only added to the immediately following assistant message

🤖 Generated with Claude Code

(openai#2155)

When using LiteLLM with DeepSeek in thinking mode, multi-turn
conversations with tool calls fail because the API requires the
`reasoning_content` field in assistant messages.

This change:
- Tracks `pending_reasoning_content` across reasoning and assistant messages
- Extracts reasoning text from the `summary` field of reasoning messages
- Includes `reasoning_content` in assistant messages that have tool calls

Fixes openai#2155

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +583 to +587
```python
summary_items = reasoning_item.get("summary", [])
for summary_item in summary_items:
    if isinstance(summary_item, dict) and summary_item.get("type") == "summary_text":
        pending_reasoning_content = summary_item.get("text", "")
        break
```


P2: Concatenate all `summary_text` parts before setting `reasoning_content`

The reasoning summary is a list of parts, but the loop breaks on the first `summary_text`. If a reasoning item contains multiple `summary_text` entries (e.g., summaries emitted in multiple parts), only the first chunk is copied into `reasoning_content`, so the assistant message history will carry a truncated reasoning payload. This can still break DeepSeek continuity because the required reasoning content is incomplete. Consider concatenating all `summary_text` parts in order instead of stopping at the first one.
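A minimal sketch of the suggested fix, joining every `summary_text` part in order rather than stopping at the first (the helper name `extract_reasoning_text` is invented for illustration):

```python
def extract_reasoning_text(reasoning_item):
    """Join all summary_text parts in order; return None if there are none."""
    parts = [
        part.get("text", "")
        for part in reasoning_item.get("summary", [])
        if isinstance(part, dict) and part.get("type") == "summary_text"
    ]
    return "".join(parts) if parts else None
```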


