
Model Usage — model usage statistics

v1.0.0

Track and summarize AI model usage, including call counts, token consumption, and performance metrics.


Runtime dependencies

No special dependencies

Install commands

Official: clawhub install model-usage
Mirror: clawhub install model-usage --registry https://www.longxiaskill.com

Skill documentation

Overview

Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.

TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.

Quick start

1) Fetch cost JSON via CodexBar CLI or pass a JSON file.
2) Use the bundled script to summarize by model.

python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty

Current model logic

  • Uses the most recent daily row with modelBreakdowns.
  • Picks the model with the highest cost in that row.
  • Falls back to the last entry in modelsUsed when breakdowns are missing.
  • Override with --model when you need a specific model.
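The selection rules above can be sketched as follows. This is a minimal sketch, not the bundled script: the field names `modelBreakdowns`, `modelsUsed`, `model`, and `cost` follow the CodexBar cost JSON as described in this document, but the exact shape of a daily row is an assumption.

```python
def pick_current_model(daily_rows):
    """Pick the 'current' model from CodexBar daily cost rows.

    Assumes each row is a dict that may carry `modelBreakdowns`
    (a list of {"model": ..., "cost": ...} dicts) and `modelsUsed`
    (a list of model names). Rows are assumed oldest-first.
    """
    # Use the most recent daily row that has modelBreakdowns.
    for row in reversed(daily_rows):
        breakdowns = row.get("modelBreakdowns")
        if breakdowns:
            # Pick the model with the highest cost in that row.
            return max(breakdowns, key=lambda b: b.get("cost", 0))["model"]
    # Fallback: last entry in modelsUsed when breakdowns are missing.
    for row in reversed(daily_rows):
        models = row.get("modelsUsed")
        if models:
            return models[-1]
    return None
```

Passing `--model` to the real script would simply bypass this selection.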

Inputs

  • Default: runs codexbar cost --format json --provider .
  • File or stdin:
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
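The `--input` convention above (a file path, or `-` for stdin) can be sketched like this; a hypothetical helper, not the bundled script's actual loader:

```python
import json
import sys


def load_cost_json(path):
    """Load CodexBar cost JSON from a file path, or stdin when path is '-'."""
    if path == "-":
        # Mirrors the `cat ... | script --input -` pipeline shown above.
        return json.load(sys.stdin)
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```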

Output

  • Text (default) or JSON (--format json --pretty).
  • Values are cost-only per model; tokens are not split by model in CodexBar output.
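A text-vs-JSON renderer along the lines of the `--format`/`--pretty` flags above might look like this (a hypothetical sketch; the real script's output layout may differ):

```python
import json


def render_summary(costs, fmt="text", pretty=False):
    """Render per-model cost totals as text or JSON.

    `costs` maps model name -> total cost in USD. Values are
    cost-only, matching the note above that tokens are not split
    by model in CodexBar output.
    """
    if fmt == "json":
        return json.dumps(costs, indent=2 if pretty else None, sort_keys=True)
    # Plain-text default: one "model: $cost" line per model.
    return "\n".join(
        f"{model}: ${cost:.2f}" for model, cost in sorted(costs.items())
    )
```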

References

  • Read references/codexbar-cli.md for CLI flags and cost JSON fields.
Data source: ClawHub · Chinese localization: 龙虾技能库