
⚖️ LLM Evaluator — LLM Tool

v1.0.0

[AI-assisted] LLM-as-a-Judge evaluation system using Langfuse. Score AI outputs on relevance, accuracy, hallucination, and helpfulness. Backfill scoring on historical trac...

by @aiwithabidi·MIT-0
Download skill package · Project homepage
License
MIT-0
Last updated
2026/4/12
Security scans
VirusTotal
Suspicious
View report
OpenClaw
Suspicious
medium confidence
The skill mostly fits its stated purpose (evaluating Langfuse traces via OpenRouter) but contains unexpected behaviors—hardcoded Langfuse API keys in code and an undeclared read of a user-local .env file—that don't align with the declared requirements and deserve careful review before use.
Review recommendation
Before installing or running this skill, inspect the included scripts (scripts/evaluator.py) yourself. Pay particular attention to: 1) the hardcoded LANGFUSE_SECRET_KEY/LANGFUSE_PUBLIC_KEY and LANGFUSE_HOST — verify they are not production secrets and consider removing or replacing them with environment-configured values; 2) the code path that reads ~/.openclaw/workspace/.env to obtain an OpenRouter key — ensure you are comfortable with that file being read or set OPENROUTER_API_KEY explicitly i...
Detailed analysis
Purpose & capabilities
Name/description (Langfuse + OpenRouter) matches the script's behavior: it evaluates traces and posts scores to Langfuse using an OpenRouter-backed judge model. However, the code embeds LANGFUSE_SECRET_KEY and LANGFUSE_PUBLIC_KEY values and a LANGFUSE_HOST that are not declared in requires.env or SKILL.md; shipping hardcoded service credentials is unexpected and disproportionate to the stated purpose.
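The safer alternative implied above — resolving Langfuse credentials from the environment instead of embedding them in the script — can be sketched as follows. This is an illustrative pattern, not the skill's actual code; the helper name and fail-fast behavior are assumptions:

```python
import os

def require_env(name: str) -> str:
    """Read a required credential from the environment; fail fast if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Instead of hardcoding secrets, resolve them at runtime:
# langfuse_secret = require_env("LANGFUSE_SECRET_KEY")
# langfuse_public = require_env("LANGFUSE_PUBLIC_KEY")
# langfuse_host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
```

Failing fast keeps a missing key from surfacing later as an opaque authentication error mid-run.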
Instruction scope
SKILL.md directs running the included Python script, which is expected. The script also attempts to read a user-local file (~/.openclaw/workspace/.env) to find OPENROUTER_API_KEY if the env var isn't set; this config-file access is not declared in requires.config_paths and is an additional data access surface that users should be aware of.
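The fallback read described above might look roughly like this. This is a reconstruction of the described behavior for review purposes, not the skill's actual code; the parsing details are assumptions:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_openrouter_key() -> Optional[str]:
    """Prefer the explicit env var; otherwise fall back to a user-local .env file."""
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:
        return key
    # Undeclared data-access surface: reads a file outside the skill's own directory.
    env_file = Path.home() / ".openclaw" / "workspace" / ".env"
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            if line.startswith("OPENROUTER_API_KEY="):
                return line.split("=", 1)[1].strip().strip('"')
    return None
```

Setting OPENROUTER_API_KEY explicitly in the environment sidesteps the file read entirely.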
Install mechanism
No install spec (instruction-only with an included script). That keeps install risk low — nothing is downloaded or executed automatically beyond running the bundled Python script.
Credential requirements
The registry declares only OPENROUTER_API_KEY as required (which is appropriate). But the code embeds Langfuse public/secret keys and a Langfuse host URL; these are effectively credentials baked into the skill rather than requested from the environment. Also the script will make network calls to Langfuse and OpenRouter, which is expected but worth noting.
Persistence & privileges
The skill does not request always:true and does not request system-wide persistence. It runs network operations and writes scores to Langfuse, which is consistent with its purpose and not an unusual privilege level.
Security is a matter of degree; review the code before running.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/5

- Initial release of the llm-evaluator skill.
- Provides an LLM-as-a-Judge system for evaluating AI outputs using relevance, accuracy, hallucination, and helpfulness scores.
- Integrates with Langfuse and uses GPT-5-nano for efficient automated judging.
- Enables batch backfill scoring for historical traces and real-time evaluation of outputs.
- Command-line interface for testing, scoring specific traces, and running backfills.


Install command

Official: npx clawhub@latest install llm-evaluator
Mirror (CN): npx clawhub@latest install llm-evaluator --registry https://cn.clawhub-mirror.com

Skill documentation

LLM-as-a-Judge evaluation system powered by Langfuse. Uses GPT-5-nano to score AI outputs.

When to use

  • Evaluating the quality of search results or AI responses
  • Scoring traces for relevance, accuracy, and hallucination detection
  • Batch scoring recent unscored traces
  • Quality assurance on agent outputs

Usage

# Test with sample cases
python3 {baseDir}/scripts/evaluator.py test

# Score a specific Langfuse trace
python3 {baseDir}/scripts/evaluator.py score

# Score with a specific evaluator only
python3 {baseDir}/scripts/evaluator.py score --evaluators relevance

# Backfill scores on recent unscored traces
python3 {baseDir}/scripts/evaluator.py backfill --limit 20

Evaluators

| Evaluator     | Measures                      | Scale |
|---------------|-------------------------------|-------|
| relevance     | Response relevance to query   | 0–1   |
| accuracy      | Factual correctness           | 0–1   |
| hallucination | Made-up information detection | 0–1   |
| helpfulness   | Overall usefulness            | 0–1   |
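Since every evaluator reports on the same 0–1 scale, the per-evaluator scores can be combined into a single quality number. The unweighted mean below is an illustration only, not the skill's actual aggregation formula:

```python
def overall_score(scores: dict) -> float:
    """Unweighted mean of per-evaluator scores, each expected on the 0-1 scale."""
    for name, value in scores.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} score {value} is outside the 0-1 scale")
    return sum(scores.values()) / len(scores)

example = {"relevance": 0.9, "accuracy": 0.8, "hallucination": 1.0, "helpfulness": 0.7}
overall_score(example)  # roughly 0.85
```

Note that score direction matters when aggregating: a hallucination detector may score 1 when no hallucination is found, so a naive mean only makes sense if all evaluators treat 1 as "good".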

Credits

Built by M. Abidi | agxntsix.ai | YouTube | GitHub. Part of the AgxntSix Skill Suite for OpenClaw agents.


Data source: ClawHub · Chinese localization: Lobster Skill Library