llama.cpp
LLM inference in C/C++
Tags: Library, ggml, Minimal
Install package
Latest: min add llama.cpp
- Source archive SHA256: fed02e5a…e180
- Spec hash: 8294a29b…2b38
- Target: amd64/linux
- Needs network: No
- Closed source: No
What is llama.cpp?
llama.cpp performs LLM inference in plain C/C++, built on the ggml tensor library.
How to use this package
Quick install
min add llama.cpp

Installs the package into the current environment for this session. Use --build or --runtime to persist it as a build-time or runtime dependency.

Declare as a task dependency in minimal.toml
Listing the package under tasks.<name>.packages makes it available inside that task’s sandbox.
[tasks.dev]
packages = ["llama.cpp"]

Build-time vs runtime
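As a sketch, a task that compiles against llama.cpp might look like the fragment below. Only `tasks.<name>.packages` is documented on this page; the `command` key and the `-lllama` link flag are illustrative assumptions, so check the minimal.toml reference for the actual schema.

```toml
# minimal.toml — hypothetical task using llama.cpp
[tasks.dev]
# Makes llama.cpp available inside this task's sandbox (documented above).
packages = ["llama.cpp"]
# Assumption: a "command" key runs the task; the library/link flag name
# may differ in your environment.
command = "cc main.c -lllama -o main"
```

With a fragment like this in place, the task's sandbox would have llama.cpp's headers and libraries on the search path, so the compile command can resolve them without a system-wide install.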
Choose build-time for tools and headers needed during compilation, and runtime for dynamic libraries loaded when the program runs.
min add --build llama.cpp
min add --runtime llama.cpp