fix(ollama): remove Ollama from isReasoningTagProvider (#2279) #16191
Merged
steipete merged 2 commits into openclaw:main, Feb 14, 2026
Conversation
Ollama's OpenAI-compatible endpoint handles reasoning natively via the `reasoning` field in streaming chunks. Treating Ollama as a reasoning-tag provider incorrectly forces <think>/<final> tag enforcement, which causes stripBlockTags() to discard all output (since Ollama models don't emit <final> tags), resulting in '(no output)' for every Ollama model. This fix removes 'ollama' from the isReasoningTagProvider() check, allowing Ollama models to work correctly through the standard content/reasoning field separation.
Force-pushed from 45de9e8 to 4f49f9e
Contributor
Landed via temp rebase onto main. Thanks @Glucksberg!
hamidzr pushed a commit to hamidzr/openclaw that referenced this pull request, Feb 14, 2026
openperf pushed a commit to openperf/moltbot that referenced this pull request, Feb 14, 2026
BigUncle pushed a commit to BigUncle/openclaw that referenced this pull request, Feb 14, 2026
mverrilli pushed a commit to mverrilli/openclaw that referenced this pull request, Feb 14, 2026
GwonHyeok pushed a commit to learners-superpumped/openclaw that referenced this pull request, Feb 15, 2026
Benkei-dev pushed a commit to Benkei-dev/openclaw that referenced this pull request, Feb 15, 2026
Summary
Fixes #2279
Problem
All Ollama models return `(no output)` when used through OpenClaw. The TUI and messaging channels show no response even though the models produce valid output via direct API calls.

Root Cause
`isReasoningTagProvider()` in `src/utils/provider-utils.ts` returns `true` for Ollama, which causes:

1. `enforceFinalTag: true`: `stripBlockTags()` enforces strict `<final>` tag extraction. Since Ollama models don't emit `<final>` tags, all text content is discarded.
2. System prompt injection: `<think>`/`<final>` format instructions are injected, which most Ollama models don't follow reliably.

Why Ollama Doesn't Need Tag Enforcement
Ollama's OpenAI-compatible endpoint already handles reasoning natively via the `reasoning` field in streaming chunks:

```json
{"choices":[{"delta":{"content":"","reasoning":"The user said hello"}}]}
{"choices":[{"delta":{"content":"Hello! How can I help?"}}]}
```

The pi-ai library correctly maps `reasoning` → thinking blocks and `content` → text blocks. Tag-based enforcement is unnecessary and actively harmful.

Fix
Remove `"ollama"` from the `isReasoningTagProvider()` check.

Tests
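A minimal sketch of the function after the change. The real implementation in `src/utils/provider-utils.ts` is not shown in this PR page, so the body below is an assumption; the provider branches are taken from the review notes (Google variants, Minimax, null/undefined/empty).

```typescript
// Illustrative sketch of isReasoningTagProvider() after the fix.
// The real function may normalize or match providers differently.
function isReasoningTagProvider(provider: string | null | undefined): boolean {
  if (!provider) return false; // null, undefined, and "" are not tag providers
  const p = provider.toLowerCase();
  // "ollama" was removed from this list: its OpenAI-compatible endpoint
  // reports reasoning via the `reasoning` field, so <think>/<final>
  // tag enforcement would discard all of its output.
  return p === "minimax" || p.startsWith("google");
}
```

With `"ollama"` out of the list, sessions no longer get `enforceFinalTag: true` or the injected tag-format instructions, and output flows through the standard content/reasoning field separation.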
Added a comprehensive test suite for `isReasoningTagProvider()` covering all providers.

Greptile Overview
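The cases such a suite exercises might look roughly like this, written framework-free for brevity (the PR's actual test file is not shown here, and the function body is a hypothetical stand-in mirroring the branches the review lists):

```typescript
// Stand-in implementation for illustration only; the real one lives in
// src/utils/provider-utils.ts.
function isReasoningTagProvider(provider: string | null | undefined): boolean {
  if (!provider) return false;
  const p = provider.toLowerCase();
  return p === "minimax" || p.startsWith("google");
}

// Case table: provider input → expected result.
const cases: Array<[string | null | undefined, boolean]> = [
  ["ollama", false],  // the fix: Ollama uses the native `reasoning` field
  ["minimax", true],
  ["google", true],   // Google variants
  [null, false],
  [undefined, false],
  ["", false],
  ["openai", false],  // standard providers need no tag enforcement
];

for (const [provider, expected] of cases) {
  const actual = isReasoningTagProvider(provider);
  if (actual !== expected) {
    throw new Error(`isReasoningTagProvider(${provider}): got ${actual}, expected ${expected}`);
  }
}
```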
Greptile Summary
Removes
"ollama"fromisReasoningTagProvider()insrc/utils/provider-utils.tsto fix #2279, where all Ollama models returned(no output). The root cause was thatisReasoningTagProvider("ollama")returnedtrue, which triggeredenforceFinalTag: trueand injected<think>/<final>format instructions. Since Ollama models don't emit<final>tags,stripBlockTags()discarded all text content. Ollama's OpenAI-compatible endpoint already handles reasoning natively via thereasoningfield in streaming chunks, making tag-based enforcement both unnecessary and harmful."ollama"from the provider list inisReasoningTagProvider()with a clear explanatory commentisReasoningTagProvider()covering all provider branches (Ollama, Google variants, Minimax, null/undefined/empty, and standard providers)get-reply-run.ts,compact.ts,attempt.ts,agent-runner-utils.ts, and indirectly via the test), and removing Ollama from the check correctly prevents bothenforceFinalTagand reasoning tag system prompt injection for Ollama sessionsConfidence Score: 5/5
Ollama's OpenAI-compatible endpoint handles reasoning natively via the `reasoning` field, making tag enforcement both unnecessary and actively harmful (causing all output to be discarded). The new test file covers all code paths in the function. I traced all 5 downstream consumers and confirmed the change correctly prevents both `enforceFinalTag: true` and `reasoningTagHint: true` for Ollama sessions without affecting other providers.

Last reviewed commit: 45de9e8