# Module `model.inspect`

## Functions
M.desc
Human-readable description string, e.g. `"Qwen3 32B Q4_K_M"`.
M.size
Total byte size of the model's tensors on disk. We cast the value through `double` so it survives the conversion to a Lua number even when it exceeds 2^31 (any modern 7B+ model qualifies).
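A minimal sketch of reading it; the entries above don't show signatures, so this assumes each function takes the loaded model handle as its first argument (`mdl` below is a hypothetical handle obtained from the host API):

```lua
local M = require("model.inspect")

-- `mdl` is a loaded model handle; how it is obtained is host-specific
-- and out of scope here.
local function print_size(mdl)
  local bytes = M.size(mdl)  -- safe past 2^31 thanks to the double cast
  print(string.format("model size: %.2f GiB", bytes / 2^30))
end
```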
M.n_params
Total number of trained parameters (weights + biases).
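Together with `M.size`, this gives a rough quantization density. A sketch under the same model-handle assumption as above:

```lua
local M = require("model.inspect")

-- On-disk bits per parameter: e.g. Q4_K_M models land near 4.8 bpw,
-- FP16 models at 16.
local function bits_per_weight(mdl)
  return M.size(mdl) * 8 / M.n_params(mdl)
end
```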
M.n_ctx_train
Training-time context length declared by the GGUF header.
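A common use is capping a requested context length at the training context, since quality usually degrades beyond it. A sketch (the helper name is ours, same model-handle assumption):

```lua
local M = require("model.inspect")

-- Clamp a user-requested context length to the model's training context.
local function effective_ctx(mdl, requested)
  return math.min(requested, M.n_ctx_train(mdl))
end
```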
M.n_embd
Hidden embedding dimension used internally by the transformer stack.
M.n_embd_inp
Input embedding dimension (matches `n_embd` for most models; may differ for projection-heavy designs).
M.n_embd_out
Output embedding dimension. Differs from `n_embd_inp` for classifier-style models with a projection head.
M.n_layer
Number of transformer blocks.
M.n_head
Number of attention heads.
M.n_head_kv
Number of KV heads. Lower than `n_head` for GQA / MQA models.
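The two head counts together identify the attention layout. A sketch (Lua 5.3+ for `//`, same model-handle assumption):

```lua
local M = require("model.inspect")

-- MHA: one KV head per query head. MQA: a single shared KV head.
-- Anything in between is GQA, with n_head / n_head_kv query heads
-- sharing each KV head.
local function attn_layout(mdl)
  local h, kv = M.n_head(mdl), M.n_head_kv(mdl)
  if kv == 0 then return "no attention" end  -- e.g. purely recurrent models
  if kv == h then return "MHA" end
  if kv == 1 then return "MQA" end
  return string.format("GQA (group size %d)", h // kv)
end
```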
M.n_swa
Sliding-window attention size. `0` for full-context models.
M.has_encoder
True for encoder-decoder architectures (T5-like).
M.has_decoder
True for decoder-bearing models — virtually all LLMs.
M.is_recurrent
True for state-space / RNN-like models (Mamba, RWKV).
M.is_hybrid
True for hybrid attention + SSM models (Jamba).
M.is_diffusion
True for diffusion-based models (LLaDA et al.).
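These flags fold naturally into a coarse family label; a sketch that checks the more specific flags first (same model-handle assumption as the earlier examples):

```lua
local M = require("model.inspect")

-- Coarse architecture family from the boolean flags.
local function arch_family(mdl)
  if M.is_diffusion(mdl) then return "diffusion" end
  if M.is_hybrid(mdl)    then return "hybrid attention + SSM" end
  if M.is_recurrent(mdl) then return "recurrent / SSM" end
  if M.has_encoder(mdl)  then return "encoder-decoder" end
  return "decoder-only"
end
```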
M.rope_freq_scale_train
Training-time RoPE frequency scale.
M.rope_type
Symbolic name of the RoPE scheme: `"none" | "norm" | "neox" | "mrope" | "imrope" | "vision" | "unknown"`.
M.n_cls_out
Number of classifier output classes; `0` for non-classifier models.
M.cls_label
Label string of the classifier output at index `i` (0-based), or `nil` when `i` falls outside `0 .. n_cls_out-1`.
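Collecting every label is a small loop over that range. A sketch, passing the model handle first per the same assumption:

```lua
local M = require("model.inspect")

-- Gather all classifier labels; indices are 0-based, so iterate
-- 0 .. n_cls_out-1 (out-of-range indices would return nil).
local function cls_labels(mdl)
  local labels = {}
  for i = 0, M.n_cls_out(mdl) - 1 do
    labels[#labels + 1] = M.cls_label(mdl, i)
  end
  return labels
end
```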
M.decoder_start_token
Decoder start token id for encoder-decoder models; `-1` for decoder-only models.
M.info
Returns a flat table snapshot of every Model property in one call. Convenient for logging, or for dumping via `print` or dkjson.
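For example, a one-shot log dump through dkjson (a sketch; dkjson's `encode` accepts an options table with `indent`):

```lua
local M = require("model.inspect")
local json = require("dkjson")

-- Pretty-print the whole property snapshot at once.
local function log_model(mdl)
  print(json.encode(M.info(mdl), { indent = true }))
end
```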