ion7-core / model

class

ion7.core.Model

`_ptr` (cdata): Underlying `llama_model*` (auto-freed via `ffi.gc`).
`path` (string): Origin path (or `""`).

Functions

build_model_params

Build a `llama_model_params` cdata from the user-facing options table. Defaults match `llama_model_default_params`; only the four flags we expose at the Lua level are overridden.

build_model_params(opts)
`opts` (table?): User-facing options table.
→ cdata `struct llama_model_params`
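A minimal sketch of the intended use. The flag name `use_mmap` is an assumption about which of the four exposed flags exist (it is a field of llama.cpp's `llama_model_params`, but this binding may expose different ones):

```lua
-- Hypothetical usage sketch: build params with one flag overridden.
-- `use_mmap` is an assumed flag name; every other field keeps the value
-- from llama_model_default_params.
local params = build_model_params({ use_mmap = false })
-- `params` is a cdata struct llama_model_params, passed on to the loader.
```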

Model.load

Load a GGUF model from disk.

Model.load(path, opts)
`path` (string): Absolute path to the `.gguf` file.
`opts` (table?): Optional.
→ ion7.core.Model

raises — When the file cannot be opened or the model fails to load.
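A minimal usage sketch, assuming the class is reachable via `require("ion7.core.Model")` (the require path and the model file path are assumptions):

```lua
local Model = require("ion7.core.Model") -- assumed require path

-- pcall guards against the raise described above.
local ok, model = pcall(Model.load, "/models/llama-7b.gguf", {})
if not ok then
  error("model load failed: " .. tostring(model))
end
```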

Model.load_splits

Load a sharded GGUF model from an explicit list of paths. Use when the shards do not follow the standard auto-discovery naming (`model-00001-of-NNNN.gguf`). Paths must be in shard order.

Model.load_splits(paths, opts)
`paths` (string[]): Array of GGUF shard file paths.
`opts` (table?): Same shape as `Model.load`.
→ ion7.core.Model

raises — When loading fails.
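A sketch of loading non-standard shard names, assuming the same require path as above (the shard paths are hypothetical):

```lua
local Model = require("ion7.core.Model") -- assumed require path

-- Shards are listed explicitly, in shard order, because these names do
-- not match the model-00001-of-NNNN.gguf auto-discovery scheme.
local model = Model.load_splits({
  "/models/big.a.gguf", -- hypothetical shard paths
  "/models/big.b.gguf",
}, {})
```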

Model.load_from_fd

Load a model from an already-open `FILE*` handle. The caller retains ownership of the FILE — it is NOT closed by this call.

Model.load_from_fd(file, opts)
`file` (cdata): `FILE*` (from `ffi.C.fopen` or similar).
`opts` (table?): Same shape as `Model.load`.
→ ion7.core.Model

raises — When loading fails.
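A sketch of the ownership contract, assuming the same require path as above. Since the loader never closes the `FILE*`, the caller pairs `fopen` with its own `fclose`:

```lua
local ffi = require("ffi")
local Model = require("ion7.core.Model") -- assumed require path

ffi.cdef[[
typedef struct FILE FILE;
FILE *fopen(const char *path, const char *mode);
int fclose(FILE *stream);
]]

local f = ffi.C.fopen("/models/llama-7b.gguf", "rb") -- hypothetical path
assert(f ~= nil, "fopen failed")
local model = Model.load_from_fd(f, {})
ffi.C.fclose(f) -- we still own the FILE*; the loader does not close it
```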

Model.fit_params

Auto-fit a model's `n_gpu_layers` and `n_ctx` to the available VRAM. Wraps the bridge's `ion7_params_fit`, which itself wraps libcommon's `common_fit_params`. NOT thread-safe — call before any other model load in the same process.

Model.fit_params(path, opts)
`path` (string): Absolute path to the `.gguf` file.
`opts` (table?): Optional.
→ table|nil: `{ n_gpu_layers, n_ctx }` on success, nil if the fit fails.
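A sketch of fit-then-load, assuming the same require path as above and assuming the load options accept keys with the same names as the returned table's fields:

```lua
local Model = require("ion7.core.Model") -- assumed require path

-- Call before any other model load in this process (not thread-safe).
local fit = Model.fit_params("/models/llama-70b.gguf", {})
if fit then
  local model = Model.load("/models/llama-70b.gguf", {
    n_gpu_layers = fit.n_gpu_layers, -- assumed option keys, mirroring
    n_ctx        = fit.n_ctx,        -- the returned table's fields
  })
end
```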

Model:ptr

Return the raw `llama_model*` cdata pointer.

Model:ptr()
→ cdata

Model:free

Explicitly free the model and release VRAM immediately. After this the Model object is dead — calling any other method on it is undefined. The `ffi.gc` finalizer is disarmed first to avoid a double-free if the GC runs later.

Model:free()
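A sketch of the explicit-release lifecycle (paths assumed as above):

```lua
local model = Model.load("/models/llama-7b.gguf", {})
-- ... use the model ...
model:free()   -- VRAM released now; the ffi.gc finalizer is disarmed,
               -- so a later GC cycle cannot double-free
-- model:ptr() -- undefined from here on: the object is dead
```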

Model:save

Save the (possibly modified) model back to a GGUF file. Useful after applying quantization or merging LoRA adapters.

Model:save(path)
`path` (string): Output GGUF file path.
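For instance, after modifying the model in memory (the merge step is assumed to happen elsewhere in the API; the output path is hypothetical):

```lua
-- Persist the modified weights back to a GGUF file.
model:save("/models/llama-7b-merged.gguf")
```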