ion7-core

module

Functions

ion7.init

Initialise the llama.cpp + ggml backend and configure logging. Must be called once at process startup, BEFORE loading any model.

ion7.init(opts)
  opts : table?
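A minimal startup sketch. The module name passed to `require` and the use of an empty options table are assumptions; this reference does not document the keys `opts` accepts, so none are shown:

```lua
-- Hypothetical usage: initialise the backend once at process startup,
-- before any model is loaded.
local ion7 = require("ion7")  -- module name is an assumption

ion7.init()      -- default configuration
-- or: ion7.init({}) -- pass an options table; valid keys are defined
--                      by the binding and are not assumed here
```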

ion7.shutdown

Release every llama.cpp / ggml resource. Safe to call multiple times (subsequent calls are no-ops). Pair with `init`: call it at process exit so VRAM is released cleanly.

ion7.shutdown()
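The init/shutdown pairing described above might look like this in practice (a sketch, assuming the module loads via `require("ion7")`):

```lua
local ion7 = require("ion7")

ion7.init()
-- ... load models, run inference ...
ion7.shutdown()
ion7.shutdown()  -- safe: repeat calls are no-ops
```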

ion7.capabilities

Snapshot of the build-time and runtime capabilities of the linked libllama / bridge. Downstream modules (`ion7-llm`, `ion7-embed`) consult this to adapt behaviour (e.g. enable mmap when supported).

ion7.capabilities()
→ table
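One way a downstream module might inspect the snapshot. The field names returned are build-dependent and not enumerated in this reference, so this sketch just dumps whatever keys the table carries:

```lua
local ion7 = require("ion7")
ion7.init()

local caps = ion7.capabilities()
-- Print every capability flag; actual key names depend on the
-- linked libllama build and are not assumed here.
for name, value in pairs(caps) do
  print(name, value)
end
```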

ion7.time_us

Microsecond timestamp from llama.cpp's internal monotonic clock. Useful for precise inner-loop timing without `os.clock`'s platform-dependent resolution.

ion7.time_us()
→ integer
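A typical inner-loop timing pattern with `time_us` (assumed to return an integer microsecond count from the monotonic clock, per the description above):

```lua
local ion7 = require("ion7")
ion7.init()

local t0 = ion7.time_us()
-- ... code under measurement ...
local elapsed = ion7.time_us() - t0  -- microseconds, monotonic
print(("elapsed: %d us"):format(elapsed))
```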

ion7.numa_init

Configure NUMA placement. Call BEFORE `ion7.init` for the policy to apply to llama.cpp's own thread spawning.

ion7.numa_init(strategy)
  strategy : integer?  Default `NUMA_DISTRIBUTE`.
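The ordering constraint matters: the policy only applies to threads llama.cpp spawns during `init`. A sketch of the required call order (omitting `strategy` relies on the documented `NUMA_DISTRIBUTE` default; how the strategy constants are exposed to Lua is not specified here):

```lua
local ion7 = require("ion7")

-- NUMA placement must be configured BEFORE init() so llama.cpp's
-- own worker threads inherit the policy.
ion7.numa_init()  -- defaults to NUMA_DISTRIBUTE
ion7.init()
```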