
Local Hub — local tools

v1.0.2

A local capability hub. Call the local microphone, camera, Ollama, YOLO, Stable Diffusion, TTS/transcription, notifications, clipboard, weather, and whitelisted scripts over HTTP. Use it when you need to "verify whether a sound was audible, take a photo, chat with a local model, do text-to-image/image-to-image, read text aloud, send a system notification, read the clipboard, describe an image, list models, get the weather, or run a script". You must first ensure local_...

by @damianleemoha (DamianLeeMoha) · MIT-0

License: MIT-0
Last updated: 2026/4/12
Security scan: VirusTotal: Suspicious · OpenClaw: Safe (medium confidence)
The skill's name and instructions match: it's an instruction-only wrapper that tells the agent how to call a separately-installed local HTTP 'local_hub' service to access mic/camera/LLMs/SD/clipboard/notifications and run whitelisted scripts — coherent, but it grants access to sensitive local resources so you should audit and control the backend before running.
Evaluation recommendations
This skill is essentially a client for a local service that performs sensitive actions (camera, mic, clipboard, filesystem, run scripts, call local models). It is internally coherent with its description, but before installing or enabling it you should: (1) inspect and vet the local_hub repository and release you plan to run (do not run an untrusted run.sh), (2) ensure the RUN_SCRIPT_WHITELIST only contains safe scripts and understand what TRANSCRIBE_SCRIPT does, (3) consider running the service...
Detailed analysis
Purpose and capabilities
The name/description state a local capability hub and the SKILL.md only contains curl-based instructions to call a localhost API (audio, camera, LLM, SD, clipboard, notify, run-script, etc.). Required binary is only curl and no unrelated credentials, which is proportionate to the stated purpose.
Instruction scope
The instructions explicitly direct the agent to call endpoints that access sensitive local resources (microphone, camera, clipboard, filesystem paths returned by API) and to request execution of whitelisted local scripts via POST /run/script. While expected for a local hub, this expands what an autonomous agent can do on the host (sensor capture, file reads/writes, launching model services). Confirm you trust and inspect the local_hub service implementation and its whitelist configuration before enabling.
Installation mechanism
The skill is instruction-only and does not install code itself. SKILL.md points to a GitHub repo/releases (a standard host) and documents manual steps (virtualenv, pip install, run.sh). No hidden download or extract steps in the skill itself.
Credential requirements
The skill declares no required env vars or credentials. It mentions optional local-service envs (e.g., SD_BASE_URL) and service-side configs (TRANSCRIBE_SCRIPT, RUN_SCRIPT_WHITELIST) but does not request host secrets. This is proportionate to an instruction-only connector.
Persistence and permissions
always is false and the skill is user-invocable; autonomous invocation is allowed by default. Because the skill enables access to local sensors and script execution via the backend, consider the increased blast radius if you permit autonomous (unsupervised) use — review local_hub access controls and script whitelist.
Security comes in layers: review the code before you run it.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.2 · 2026/3/5

Adds server-side download info: GitHub repository, direct v1.0.0 zip link, and post-extraction installation steps


Install command

Official: npx clawhub@latest install local-hub
Mirror (CN): npx clawhub@latest install local-hub --registry https://cn.clawhub-mirror.com

Skill documentation

⚠️ Read this before use: this skill contains only the calling instructions, not the server code. The local_hub service that actually exposes the API must be deployed separately on this machine and started first. While the service is down, every curl call will fail with a connection error.
Server code and download
- Repository:
- Direct zip download:
After extracting, enter the directory, create a virtual environment, and install dependencies (python3 -m venv .venv-hub; source .venv-hub/bin/activate; pip install -r requirements.txt), then run ./run.sh

What this is: an HTTP service running on your own machine (default http://127.0.0.1:9000) that unifies the microphone, camera, Ollama, YOLO, Stable Diffusion, and more behind a single set of APIs. Silas calls these endpoints via exec + curl; it does not take up an OpenClaw skill-retrieval slot, it simply adds one more callable backend.

Relationship to other skills: this is not on the same level as pptx, screen-capture, and the rest. Those skills are single entries in the capability catalog; local_hub is an underlying service. This skill is only the calling instructions: it tells Silas when and how to reach that service with curl.


Prerequisites

  • The service must be started first (by the user, or by you, once on this machine):
  cd ~/.openclaw/workspace/local_hub && source .venv-hub/bin/activate && ./run.sh
  
  • Health check (optional): curl -s http://127.0.0.1:9000/health returning {"status":"ok",...} means the service is online.
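The health check above can be wrapped in a small helper before any other call is made. A minimal sketch, assuming only the documented /health endpoint and its {"status":"ok",...} reply; the `hub_is_online` name is ours:

```python
import json
import urllib.error
import urllib.request


def hub_is_online(base_url: str = "http://127.0.0.1:9000", timeout: float = 1.0) -> bool:
    """Return True if local_hub's /health endpoint answers with status "ok"."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
        return payload.get("status") == "ok"
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or a malformed reply all mean "not ready".
        return False
```

If this returns False, prompt the user to start local_hub/run.sh rather than retrying blindly.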

When to use

| Need | Endpoint | Notes |
| --- | --- | --- |
| Verify that a sound just played was actually audible | POST /audio/check | Records ~1 s; returns whether sound was detected (rms, has_sound) |
| Read text aloud with local TTS | POST /audio/tts | JSON: text, voice (optional); macOS say; returns audio path |
| Transcribe audio to text | POST /audio/transcribe | Upload file or form path; requires TRANSCRIBE_SCRIPT |
| Take a camera snapshot | POST /camera/snapshot | Returns the path of the saved image |
| Object detection on an image (YOLO) | POST /vision/yolo | Upload the image (form); returns detection boxes, etc. |
| Describe an image with a vision model | POST /vision/describe | Upload the image; Ollama llava etc.; query model=llava |
| Chat with local Ollama (incl. openclaw-distill) | POST /llm/chat | JSON body: model, prompt, system (optional) |
| List local Ollama models | GET /llm/models | Returns the models list |
| Text embedding | POST /llm/embed | JSON: model, text |
| Text-to-image (Stable Diffusion) | POST /image/txt2img | Requires local A1111 on port 7860 |
| Image-to-image (Stable Diffusion) | POST /image/img2img | Requires image_path + prompt |
| System notification (macOS) | POST /notify | JSON: title, body |
| Read/write the clipboard | GET /clipboard, POST /clipboard | GET reads text; POST with JSON text writes |
| System status (volume, battery, etc.) | GET /system/status | Brief macOS status |
| Weather | GET /weather | Optional city query |
| Run a whitelisted script | POST /run/script | JSON: name, params (optional); requires RUN_SCRIPT_WHITELIST |

How to use (curl examples)

Base URL: http://127.0.0.1:9000 (if you have changed the port or host, use the actual values).

1. Check for sound (verify after playback)

curl -s -X POST "http://127.0.0.1:9000/audio/check?duration=1&threshold=0.01"

Example response: {"rms":0.02,"threshold":0.01,"has_sound":true,"duration":1.0}. Check whether has_sound is true or false.

2. Take a camera snapshot

curl -s -X POST http://127.0.0.1:9000/camera/snapshot

Example response: {"path":"/Users/.../local_hub/data/camera_1234567890.jpg"}. You can then read that file or pass it to describe-image.
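Later steps need the returned file path, so it is worth extracting it defensively. A sketch, assuming only the {"path": ...} reply shape shown in the example; the `snapshot_path` name is ours:

```python
import json


def snapshot_path(raw: str) -> str:
    """Extract the saved-image path from a /camera/snapshot JSON reply."""
    payload = json.loads(raw)
    path = payload.get("path")
    if not path:
        # An error reply or an empty path means the snapshot did not succeed.
        raise ValueError("snapshot reply has no 'path' field")
    return path
```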

3. Chat with local Ollama

curl -s -X POST http://127.0.0.1:9000/llm/chat \
  -H "Content-Type: application/json" \
  -d '{"model":"openclaw-distill","prompt":"你好","system":null}'

The response field in the reply is the model's answer. You can swap in other locally installed models such as qwen2.5:14b or llava.

4. YOLO detection (upload an image)

curl -s -X POST http://127.0.0.1:9000/vision/yolo \
  -F "file=@/path/to/image.jpg"

Returns a detections list and the save path.

5. Text-to-image (Stable Diffusion)

curl -s -X POST http://127.0.0.1:9000/image/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt":"a cute robot, high quality","negative_prompt":"","steps":20,"width":512,"height":512}'

Returns a paths array: the generated images' locations under local_hub/data/.

6. Image-to-image

curl -s -X POST http://127.0.0.1:9000/image/img2img \
  -H "Content-Type: application/json" \
  -d '{"image_path":"/path/to/input.png","prompt":"make it cyberpunk","denoising_strength":0.6,"steps":20}'

7. TTS (read aloud and save the audio)

curl -s -X POST http://127.0.0.1:9000/audio/tts \
  -H "Content-Type: application/json" \
  -d '{"text":"Hello world","voice":"Tingting"}'

8. System notification

curl -s -X POST http://127.0.0.1:9000/notify \
  -H "Content-Type: application/json" \
  -d '{"title":"Reminder","body":"Meeting starts in 5 minutes"}'

9. Clipboard

# read
curl -s http://127.0.0.1:9000/clipboard
# write
curl -s -X POST http://127.0.0.1:9000/clipboard -H "Content-Type: application/json" -d '{"text":"copy this text"}'

10. Image description (Ollama vision model)

curl -s -X POST "http://127.0.0.1:9000/vision/describe?model=llava" -F "file=@/path/to/image.jpg"

11. List Ollama models / weather / system status

curl -s http://127.0.0.1:9000/llm/models
curl -s "http://127.0.0.1:9000/weather?query=Beijing"
curl -s http://127.0.0.1:9000/system/status

12. Run a whitelisted script (requires RUN_SCRIPT_WHITELIST to be configured and executables under scripts/)

curl -s -X POST http://127.0.0.1:9000/run/script \
  -H "Content-Type: application/json" \
  -d '{"name":"my_script","params":{"key":"value"}}'

Execution

  • Call the curl commands above via exec; parse the returned JSON before replying to the user (e.g. "sound detected", "image generated: …").
  • If the service is not running, curl fails to connect; prompt the user to start local_hub/run.sh first, or explain that the local capability hub is not ready.
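For instance, the /audio/check reply from example 1 could be turned into a user-facing message like this (a sketch; the exact message wording is ours):

```python
import json


def audio_check_summary(raw: str) -> str:
    """Turn a /audio/check JSON reply into a short user-facing message."""
    payload = json.loads(raw)
    if payload.get("has_sound"):
        return f"Sound detected (rms={payload['rms']})"
    return "No sound detected"
```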

Dependencies and ports

  • local_hub: local_hub/ inside the workspace; Python + FastAPI; default port 9000.
  • Ollama: local port 11434; local_hub forwards calls to it.
  • Stable Diffusion: requires a local A1111-style service, default port 7860; configurable via the SD_BASE_URL environment variable.

Full API and parameter reference: workspace/local_hub/README.md (if present).

Data source: ClawHub · Chinese localization: 龙虾技能库