ion7-llm / chat.thinking

class

ion7.llm.chat.Thinking

Fields

_in_think boolean True while the buffer is inside a think block.
_think_text string[] Accumulator for the active think block.
_open_tail string Last 6 chars seen outside a think block.
_close_tail string Last 7 chars seen inside a think block.
_tok_count integer Tokens emitted inside the active block.
_all_think string[] Concatenation of every closed block.

Functions

Thinking.new

Thinking.new()
→ ion7.llm.chat.Thinking

Thinking:reset

Reset to the initial state; call between generations. Returns the thinking text accumulated since the last reset, useful when the upstream loop wants to clear the buffer while still keeping the trace.

Thinking:reset()
→ string
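A minimal sketch of the reset-and-keep pattern described above. `t` is a `Thinking` instance and `log` is a hypothetical sink for the trace:

```lua
local trace = t:reset()   -- clears all state, returns text since the last reset
log(trace)                -- hypothetical: persist the reasoning trace elsewhere
```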

Thinking:in_think

True when the buffer is currently inside a `<think>` block.

Thinking:in_think()
→ boolean

Thinking:active_token_count

Total tokens consumed inside the active block. Resets to 0 each time a block closes. Used by the reasoning-budget guard.

Thinking:active_token_count()
→ integer

Thinking:thinking

Concatenated text of every closed `<think>` block since the last `:reset`, plus whatever is currently buffered in the active one.

Thinking:thinking()
→ string?

?

Feed a decoded piece. Returns one of three transitions:

- `"content"` — emit `text` on the assistant channel.
- `"thinking"` — emit `text` on the reasoning channel.
- `"split"` — the piece straddles the `<think>` open tag: `prefix` is content, `suffix` is thinking. The same applies to `</think>` in the other direction.

?(piece)
piece string A decoded piece from the token stream.
→ string kind `"content" | "thinking" | "split"`
→ string text, or the prefix on a `"split"`.
→ string? suffix on a `"split"` transition.
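A consumption sketch. The feed method's real name is missing from this page, so it is called `push` here purely as a placeholder; the require path and the channel sinks (`emit_content`, `emit_thinking`) are also assumptions. On a `"split"`, which side of the pair is content and which is thinking depends on which tag was crossed; `in_think()` after the call tells you where the stream landed:

```lua
local Thinking = require("ion7.llm.chat").Thinking  -- assumed require path

local t = Thinking.new()
for piece in decoder do                      -- `decoder` yields decoded string pieces
  local kind, text, suffix = t:push(piece)   -- `push` is a placeholder name
  if kind == "content" then
    emit_content(text)
  elseif kind == "thinking" then
    emit_thinking(text)
  else  -- "split": the piece crossed a tag boundary
    if t:in_think() then                     -- crossed <think>: prefix was content
      emit_content(text)
      emit_thinking(suffix)
    else                                     -- crossed </think>: prefix was thinking
      emit_thinking(text)
      emit_content(suffix)
    end
  end
end
```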

Thinking:force_close

Force-close the active think block. Used by the reasoning-budget guard when the model exceeds its allotment without emitting `</think>`. Returns the body collected so far.

Thinking:force_close()
→ string
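A sketch of the reasoning-budget guard these two methods support, run once per emitted piece. `MAX_REASONING_TOKENS` and the `emit_thinking` sink are hypothetical names, not part of this API:

```lua
local MAX_REASONING_TOKENS = 1024  -- hypothetical per-generation budget

if t:in_think() and t:active_token_count() > MAX_REASONING_TOKENS then
  local body = t:force_close()     -- model never emitted </think>; close it ourselves
  emit_thinking(body)              -- hypothetical: flush the partial trace
  -- subsequent pieces are now routed to the assistant channel
end
```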