
fix(deepseek): replay reasoning content for tool calls#12360

Open
Jerry2003826 wants to merge 1 commit into continuedev:main from Jerry2003826:codex/fix-deepseek-tool-reasoning-content

Conversation


@Jerry2003826 Jerry2003826 commented May 10, 2026

Fixes #12246

Summary

  • Add reasoning_content: "" to DeepSeek assistant messages when replaying chat history through the OpenAI-compatible adapter.
  • Preserve any existing reasoning_content returned by DeepSeek instead of overwriting it.
  • Add regression coverage for the assistant tool-call replay shape used by CLI tool-result loops.

Root Cause

The CLI path uses @continuedev/openai-adapters directly. Unlike the core OpenAI converter, the DeepSeek adapter did not add reasoning_content when replaying previous assistant messages. During a tool-call loop, DeepSeek's strict API gateway can reject the follow-up request after a local tool result because the previous assistant tool-call message lacks this field.
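A minimal sketch of the normalization this PR describes, under stated assumptions: the helper name `withReasoningContent` and the `ChatMessage` shape are hypothetical illustrations, not the actual code in `packages/openai-adapters/src/apis/DeepSeek.ts`.

```typescript
// Hypothetical helper illustrating the fix: when replaying chat history,
// ensure every assistant message carries a reasoning_content field so
// DeepSeek's strict gateway accepts the follow-up request in a tool-call loop.
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  tool_calls?: unknown[];
  reasoning_content?: string;
};

function withReasoningContent(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((msg) => {
    // Only assistant messages need the field; leave others untouched.
    if (msg.role !== "assistant") return msg;
    // Preserve reasoning_content if DeepSeek already returned one;
    // otherwise default to the empty string the API accepts.
    return { ...msg, reasoning_content: msg.reasoning_content ?? "" };
  });
}
```

This mirrors the two behaviors listed in the summary: defaulting to `""` on replayed assistant messages and never overwriting a `reasoning_content` DeepSeek already produced.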

Validation

  • npm test -- src/test/main.test.ts -t "DeepSeek" from packages/openai-adapters
  • npm test -- src/test/main.test.ts from packages/openai-adapters
  • npm test from packages/openai-adapters
  • npm run build from packages/openai-adapters
  • npx prettier --check packages/openai-adapters/src/apis/DeepSeek.ts packages/openai-adapters/src/test/main.test.ts

Summary by cubic

Ensure DeepSeek assistant tool-call replays include reasoning_content so follow-up requests aren’t rejected during CLI tool-result loops. Fixes #12246.

  • Bug Fixes
    • Add reasoning_content: "" to assistant messages during replay in the OpenAI-compatible adapter.
    • Preserve existing reasoning_content when provided by DeepSeek.
    • Add regression tests for the CLI assistant tool-call replay shape.

Written for commit 5950948.


github-actions Bot commented May 10, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@Jerry2003826 Jerry2003826 marked this pull request as ready for review May 10, 2026 20:07
@Jerry2003826 Jerry2003826 requested a review from a team as a code owner May 10, 2026 20:07
@Jerry2003826 Jerry2003826 requested review from sestinj and removed request for a team May 10, 2026 20:07
@dosubot dosubot Bot added the size:S This PR changes 10-29 lines, ignoring generated files. label May 10, 2026

@cubic-dev-ai cubic-dev-ai Bot left a comment


No issues found across 2 files

@Jerry2003826

I have read the CLA Document and I hereby sign the CLA

@chatgpt-codex-connector

💡 Codex Review

setFetchedModelsList((prev) =>
  selectedProvider.provider === providerAtFetchTime ? models : prev,
);

P2: Guard model-fetch response against stale provider switches

The stale-response check in handleFetchModels is ineffective because selectedProvider is captured from the same render as providerAtFetchTime, so selectedProvider.provider === providerAtFetchTime will always be true for that in-flight request. If a user starts fetching models for provider A and switches to provider B before the request resolves, the old response can still overwrite fetchedModelsList for the new provider, showing mismatched model options. Compare against current provider state at resolve time (e.g., via a ref/request token) instead of the closed-over value.
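The request-token guard Codex suggests can be sketched as follows. This is an illustrative pattern, not the component's actual code: `ModelFetcher` and its members are hypothetical names, and the real fix would live inside `handleFetchModels` (e.g. via a `useRef` in React).

```typescript
// Hypothetical sketch of a request-token guard: each fetch takes a new
// token, and only the most recent request is allowed to commit its result.
// A stale response that resolves after the user switched providers is
// discarded instead of overwriting the newer provider's model list.
class ModelFetcher {
  private latestToken = 0;
  models: string[] = [];

  async fetchModels(load: () => Promise<string[]>): Promise<void> {
    const token = ++this.latestToken; // capture token at request time
    const result = await load();
    if (token !== this.latestToken) return; // a newer fetch superseded this one
    this.models = result;
  }
}
```

The key difference from the closed-over check in the PR's current code is that `this.latestToken` is read at resolve time, so the comparison reflects state changes that happened while the request was in flight.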


@Jerry2003826

CI note: the checks for the area touched by this PR are green, including packages-checks (openai-adapters). The remaining failure is jetbrains-tests, specifically the IntelliJ integration test Autocomplete > testAutocomplete() failing at Autocomplete.kt:42 after the Tab autocomplete assertion.

This looks unrelated to this PR's packages/openai-adapters change. I found the same Autocomplete.kt:42 failure in recent unrelated PR runs as well, for example:

I tried to rerun the failed job, but GitHub requires repository admin rights for that. Could a maintainer rerun jetbrains-tests when convenient?


Labels

size:S This PR changes 10-29 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

[Bug] CLI: 400 reasoning_content error when using @file (Tool Calls) with DeepSeek API, regardless of thinking mode
