📦 llama.cpp Benchmark — llama.cpp Model Performance Testing
v1.0.0 — Runs llama.cpp benchmarks on GGUF models, measuring prompt processing (pp) and token generation (tg) performance, to help users quantify LLM inference speed.
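For context, a typical llama-bench invocation measures both metrics in one run (the model path below is a hypothetical example, not part of the package):

```shell
# Benchmark prompt processing with a 512-token prompt (pp512) and
# generation of 128 tokens (tg128) on a local GGUF model
llama-bench -m ~/models/llama-3-8b-Q4_K_M.gguf -p 512 -n 128
```

llama-bench prints a table with throughput (tokens/second) for each pp/tg configuration.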
Last updated: 2026/4/5
Security scan: OpenClaw — Safe
High confidence: The skill's files and instructions align with its stated purpose (benchmarking llama.cpp GGUF models); it clones and builds the upstream repo and runs local benchmarks — nothing in the package indicates covert data exfiltration or unrelated privilege requests.
Evaluation Advice
This skill appears to do what it says: it will clone/update the llama.cpp GitHub repo and build llama-bench, then run local benchmarks on GGUF files. Before installing: 1) Be prepared to install and run build tools (git, cmake, make/ninja, a C/C++ compiler); the metadata doesn't list these dependencies. 2) Expect the build to use network access to GitHub and to write files under ~/Repo/llama.cpp and whatever output directory you choose. 3) The benchmark script searches your home directory and /...
ℹ Purpose & Capabilities
The skill's scripts and SKILL.md match the stated purpose: finding/building llama.cpp and running llama-bench. One minor inconsistency: the package metadata declares no required binaries, but the build/benchmark scripts assume tools like git, cmake, a C/C++ toolchain, and typical UNIX utilities (find, grep, make). These are expected for building llama.cpp but should be declared.
ℹ Instruction Scope
Runtime instructions and scripts are narrowly scoped to cloning/updating the llama.cpp repository, building it, and running llama-bench on local GGUF files. The benchmark script searches the user's home directory and /DATA to locate llama-bench (find ~ /DATA ...) — this is local-only scanning (no remote upload) but may traverse many user files. The build script runs git fetch/pull/clone (network access to GitHub) and compiles code locally; it may prompt interactively and will write under the chosen build directory.
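The local-only scan described above can be sketched roughly as follows (the search roots and binary name are assumptions based on this description, not code taken from the package):

```shell
#!/bin/sh
# Search the given directories for an existing llama-bench binary,
# suppressing permission errors, and print the first match found.
find_llama_bench() {
  find "$@" -name llama-bench -type f 2>/dev/null | head -n 1
}

# The skill scans the home directory and /DATA, e.g.:
#   find_llama_bench "$HOME" /DATA
```

Because `find` traverses every subdirectory of the roots it is given, a scan over a large home directory can take noticeable time, which matches the reviewer's note about traversing many user files.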
✓ Installation Mechanism
No remote arbitrary binary blobs or obscure download hosts are used; the build script clones from github.com/ggerganov/llama.cpp — a known upstream repository — and builds locally via cmake. No extract-from-unknown-URL operations detected.
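The clone-and-build flow described here can be sketched as below (the checkout path, build directory, and cmake flags are assumptions, not values confirmed by the package):

```shell
# Clone the known upstream repository, or update an existing checkout
git clone https://github.com/ggerganov/llama.cpp ~/Repo/llama.cpp \
  || git -C ~/Repo/llama.cpp pull

# Configure and compile the llama-bench target locally with cmake
cmake -S ~/Repo/llama.cpp -B ~/Repo/llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build ~/Repo/llama.cpp/build --target llama-bench -j
```

Building from a well-known public repository and compiling locally (rather than fetching prebuilt binaries from an unknown host) is what the scan verified here.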
✓ Credential Requirements
The skill declares no environment variables or credentials. It references an optional LLAMA_BACKEND env var in docs (expected). It does not request or use tokens/secret env vars. Git operations are against a public GitHub repo and should not require credentials.
✓ Persistence & Permissions
The skill is not always-enabled and does not alter other skills or system-wide configuration. It creates/clobbers files under the chosen build directory (default ~/Repo/llama.cpp) and output directory (default ./benchmark_results), which is expected for a build/benchmark tool.
Security is layered; review the code before running.
Runtime Dependencies
No special dependencies
Versions
latest · v1.0.0 · 2026/4/5
Initial release: benchmark GGUF models with llama-bench, auto-detect llama-bench, batch benchmarking, and build/update llama.cpp from source
● Benign
Install Command
Official: npx clawhub@latest install llamacpp-bench
Mirror: npx clawhub@latest install llamacpp-bench --registry https://cn.longxiaskill.com