ion7-core / log

module log

Functions

dispatch

Dispatcher invoked from the C side: called by llama.cpp / ggml whenever they emit a log message. It receives the raw ggml level, a NUL-terminated `const char*`, and an opaque user_data pointer (we always pass NULL). Filtering happens here in Lua rather than on the C side because the level threshold is owned by `state.level` and can be changed at runtime via `set_level`.

dispatch(level, text, ud)
  level  integer  Raw `ggml_log_level` enum value.
  text   cdata    `const char*` pointing to the message text.
  ud     cdata?   `void*` user data (unused, kept for signature compatibility).
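The flow described above can be sketched as follows. This is a minimal sketch, assuming LuaJIT's FFI and a module-local `state` table holding `level`, `file`, and `timestamps`; the `severity` mapping is illustrative only, since the ggml enum values differ between versions:

```lua
local ffi = require("ffi")

-- Illustrative only: collapse a raw ggml level onto the module's
-- 0 (silent) .. 4 (debug) scale. Real enum values are version-dependent.
local function severity(ggml_level)
  return tonumber(ggml_level)
end

local function dispatch(level, text, ud)   -- ud is unused (always NULL)
  if severity(level) > state.level then return end
  local line = ffi.string(text)            -- copy the NUL-terminated C string
  if state.timestamps then
    line = os.date("!%Y-%m-%dT%H:%M:%SZ ") .. line
  end
  local out = state.file or io.stderr
  out:write(line)
end
```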

M.set_level

Set the verbosity threshold. See module header for the mapping. Takes effect immediately for subsequent log calls.

M.set_level(level)
  level  integer  0 (silent) to 4 (debug).
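For example (the `require` path is assumed from the module name):

```lua
local log = require("ion7-core.log")

log.set_level(0)  -- silence all llama.cpp / ggml output
log.set_level(4)  -- most verbose: include debug messages
```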

M.to_file

Direct log output to a file path, or restore stderr when `path` is nil / empty. The previous file handle (if any) is closed.

M.to_file(path)
  path  string|nil  File path, or nil for stderr.
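A sketch of the open/close sequence, assuming the same module-local `state` table. Append mode is an assumption here, not documented behaviour:

```lua
function M.to_file(path)
  if state.file then                 -- close the previous handle, if any
    state.file:close()
    state.file = nil                 -- back to stderr until reopened
  end
  if path ~= nil and path ~= "" then
    local f, err = io.open(path, "a")
    if not f then
      error(("log.to_file: cannot open %q: %s"):format(path, tostring(err)))
    end
    state.file = f
  end
end
```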

M.set_timestamps

Toggle ISO-8601 timestamp prefix on each emitted line.

M.set_timestamps(enable)
  enable  boolean  true to enable, false to disable.
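A sketch; the ISO-8601 prefix is assumed to be UTC, produced with `os.date`'s `!` modifier:

```lua
function M.set_timestamps(enable)
  state.timestamps = not not enable  -- coerce any truthy value to a boolean
end

-- When enabled, each emitted line is prefixed with something like
--   2025-01-01T12:00:00Z <message>
-- via os.date("!%Y-%m-%dT%H:%M:%SZ ").
```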

M.install

Register our dispatcher with llama.cpp. Idempotent: a second call is a no-op. Call once at startup, before any model load, so that even the backend init messages are routed to the configured destination.

M.install()
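A sketch of the registration, assuming an FFI `cdef` for llama.cpp's `llama_log_set(ggml_log_callback, void *)` has been declared and the shared library is loaded as `lib`. Keeping the callback cdata in `state.cb` matters: LuaJIT would otherwise be free to collect it while C still holds the pointer.

```lua
function M.install()
  if state.installed then return end  -- idempotent: second call is a no-op
  -- Anchor the callback cdata so it outlives this call.
  state.cb = ffi.cast("ggml_log_callback", dispatch)
  lib.llama_log_set(state.cb, nil)    -- user_data is always NULL
  state.installed = true
end
```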

M.uninstall

Detach our dispatcher and restore llama.cpp's default logging (which writes to stderr unconditionally).

M.uninstall()

M.snapshot

Inspect the current configuration (useful for tests / diagnostics).

M.snapshot()
  → table  { level, has_file, timestamps, installed }
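The returned table can be assembled directly from the module state (sketch, same assumed `state` table as above):

```lua
function M.snapshot()
  return {
    level      = state.level,
    has_file   = state.file ~= nil,  -- expose a boolean, not the handle
    timestamps = state.timestamps,
    installed  = state.installed,
  }
end
```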