📦 MetriLLM
v0.2.11
Find the best local LLM for your machine. Tests speed, quality, and RAM fit, then tells you whether a model is worth running on your hardware.
Test any local model and get a clear verdict: is it worth running on your machine?
Prerequisites
- Node.js 20+ — check with
node -v
- Ollama or LM Studio installed and running
ollama serve
- LM Studio: lmstudio.ai, load a model and start the server
- MetriLLM CLI — install globally:
npm install -g metrillm
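The checks above can be scripted. A minimal sketch in Python; the `check_prereqs` helper and its messages are this guide's invention, not part of MetriLLM, and LM Studio (a GUI app) is not detected here:

```python
import shutil
import subprocess

def parse_node_major(version: str) -> int:
    """Extract the major version from `node -v` output, e.g. "v20.11.0" -> 20."""
    return int(version.strip().lstrip("v").split(".")[0])

def check_prereqs() -> list:
    """Return a list of problems; an empty list means the prerequisites look satisfied."""
    problems = []
    if shutil.which("node") is None:
        problems.append("Node.js not found")
    else:
        out = subprocess.run(["node", "-v"], capture_output=True, text=True).stdout
        if parse_node_major(out) < 20:
            problems.append("Node.js 20+ required, found " + out.strip())
    # Either Ollama or LM Studio works; only Ollama is easy to detect from a
    # script, so LM Studio users can ignore this warning.
    if shutil.which("ollama") is None:
        problems.append("Ollama not found (ignore if you use LM Studio)")
    if shutil.which("metrillm") is None:
        problems.append("MetriLLM CLI not installed (npm install -g metrillm)")
    return problems
```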
Usage
List available models
ollama list
Run a full benchmark
metrillm bench --model $ARGUMENTS --json
This measures:
- Performance: tokens/second, time to first token, memory usage
- Quality: reasoning, math, coding, instruction following, structured output, multilingual
- Fitness verdict: EXCELLENT / GOOD / MARGINAL / NOT RECOMMENDED
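With `--json`, the result can be consumed programmatically. A sketch under assumptions: the field names `tokensPerSecond`, `ttft`, and `memoryUsedGB` appear under Interpreting Results, but the exact JSON layout may differ, and `run_bench`/`summarize` are illustrative helpers, not part of the CLI:

```python
import json
import subprocess

def run_bench(model: str) -> dict:
    """Run a full MetriLLM benchmark and parse its JSON output."""
    proc = subprocess.run(
        ["metrillm", "bench", "--model", model, "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def summarize(result: dict) -> str:
    """One-line summary; field names are assumptions based on the metrics section."""
    return "{} tok/s, TTFT {} ms, {} GB used".format(
        result.get("tokensPerSecond", "?"),
        result.get("ttft", "?"),
        result.get("memoryUsedGB", "?"),
    )
```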
Performance-only benchmark (faster)
metrillm bench --model $ARGUMENTS --perf-only --json
Skips quality evaluation — measures speed and memory only.
View previous results
ls ~/.metrillm/results/
Read any JSON file to see full benchmark details.
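To load all saved results at once, a minimal sketch that assumes each file in ~/.metrillm/results/ is a standalone JSON object (the `load_results` helper is illustrative, not part of the CLI):

```python
import json
from pathlib import Path

def load_results(results_dir: Path = Path.home() / ".metrillm" / "results") -> list:
    """Read every saved benchmark result, sorted by filename."""
    if not results_dir.is_dir():
        return []
    return [json.loads(p.read_text()) for p in sorted(results_dir.glob("*.json"))]
```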
Share to the public leaderboard
metrillm bench --model $ARGUMENTS --share
Uploads your result to the MetriLLM community leaderboard — an open, community-driven ranking of local LLM performance across real hardware. Compare your results with others and help the community find the best models for every setup. Shared data includes: model name, scores, hardware specs (CPU, RAM, GPU). No personal data is sent.
Interpreting Results
| Verdict | Score | Meaning |
|---|---|---|
| EXCELLENT | >= 80 | Fast and accurate — great fit |
| GOOD | >= 60 | Solid — suitable for most tasks |
| MARGINAL | >= 40 | Usable but with tradeoffs |
| NOT RECOMMENDED | < 40 | Too slow or inaccurate |
- tokensPerSecond > 30 = good for interactive use
- ttft < 500 ms = responsive
- memoryUsedGB vs. available RAM = will it fit?
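The verdict thresholds from the table and the rules of thumb above can be expressed directly; a sketch (the function names are this guide's, not the CLI's):

```python
def verdict(score: float) -> str:
    """Map an overall score to the verdict tiers from the table above."""
    if score >= 80:
        return "EXCELLENT"
    if score >= 60:
        return "GOOD"
    if score >= 40:
        return "MARGINAL"
    return "NOT RECOMMENDED"

def quick_check(tokens_per_second: float, ttft_ms: float,
                memory_used_gb: float, available_ram_gb: float) -> dict:
    """Apply the three rule-of-thumb checks to raw metrics."""
    return {
        "interactive": tokens_per_second > 30,       # good for interactive use
        "responsive": ttft_ms < 500,                 # time to first token
        "fits": memory_used_gb <= available_ram_gb,  # will it fit in RAM?
    }
```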
Tips
- Use --perf-only for quick tests
- Close GPU-intensive apps before benchmarking
- Benchmark duration varies with model speed and response length
Open Source
MetriLLM is free and open source (Apache 2.0). Contributions, issues, and feedback are welcome: github.com/MetriLLM/metrillm