module model.quantize
Functions
M.quantize
Quantise a GGUF model to a new file.
M.quantize(inp_path, out_path, opts)
Parameters:
  inp_path   string   Path to source GGUF.
  out_path   string   Path to write the quantised GGUF.
  opts       table?   Optional table of quantisation settings.
Returns:
  → integer: 0 on success, non-zero llama_ftype error code otherwise.
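A minimal usage sketch, assuming the module is loaded with `require` under the name `model.quantize`; the file paths and the error handling are illustrative, not part of the API:

```lua
-- Hypothetical usage: quantise a GGUF model and check the integer return code.
local M = require("model.quantize")

-- Paths are examples only; opts is omitted (nil) to use default settings.
local rc = M.quantize("models/llama-7b-f16.gguf", "models/llama-7b-q4.gguf", nil)
if rc ~= 0 then
  error(("quantisation failed with code %d"):format(rc))
end
```

Because `opts` is optional, passing `nil` is a reasonable default call; consult the settings table's fields before relying on any specific option.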