Security Scan
High confidence: The skill is internally coherent: it wraps a local llama-tts binary to synthesize speech from a local model file, and its scripts and instructions match that purpose.
Assessment Recommendations
This skill appears to do what it says: run your local 'llama-tts' binary against local model files. Before installing or running it: 1) Verify the 'llama-tts' binary you use is from a trusted source and inspect its permissions; 2) Download model/vocoder files from the official Hugging Face pages and verify checksums/licensing; 3) Prefer placing models in a user-controlled directory rather than a global /data/public/... path to avoid accidental exposure or overwrites; 4) Be cautious about running...
✓ Purpose and Capabilities
Name and description describe local TTS. The only required binary is 'llama-tts' and the included script invokes that binary with model and vocoder files — this is proportionate to the claimed purpose.
ℹ Instruction Scope
SKILL.md and the script only run the local 'llama-tts' binary and ask you to download models from Hugging Face. Minor notes: the recommended model/vocoder paths are hardcoded to /data/public/machine-learning/models/text-to-speach/, which may be a shared/global path, and the instructions link a different vocoder release (Q5_1) than the vocoder filename used in the script (Q4_0), though SKILL.md notes this as an alternative. No instructions request unrelated files, credentials, or external endpoints beyond the model download links.
✓ Installation Mechanism
No install spec — instruction-only plus a wrapper script. This is low-risk; nothing in the skill tries to fetch or execute code during install. Model downloads are documented but performed by the user (via Hugging Face links).
✓ Credential Requirements
The skill requests no environment variables or credentials. The resources referenced (local model and vocoder files, llama-tts binary) are relevant and necessary for local TTS.
✓ Persistence and Permissions
The skill does not request always:true, does not modify other skills, and does not try to persist credentials. It is user-invocable and can be invoked autonomously by the agent (platform default) — nothing here elevates privilege beyond expected behavior.
Security is layered; review the code before running it.
Runtime Dependencies
No special dependencies
Versions
latest · v1.0.0 · 2026/2/17
Initial release
● Suspicious
Install Command
Official: npx clawhub@latest install local-llama-tts
Mirror (accelerated): npx clawhub@latest install local-llama-tts --registry https://cn.clawhub-mirror.com
Skill Documentation
Synthesize speech locally using llama-tts and the OuteTTS-1.0-0.6B model.
Usage
You can use the wrapper script:
scripts/tts-local.sh [options] ""
Options
- -o, --output: Output WAV file (default: output.wav)
- -s, --speaker: Speaker reference file (optional)
- -t, --temp: Temperature (default: 0.4)
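The wrapper script itself is not reproduced on this page. Below is a minimal sketch of what scripts/tts-local.sh plausibly looks like, assuming llama-tts accepts llama.cpp-style flags (`-m` for the model, `-mv` for the vocoder, `-p` for the prompt text, `--temp` for temperature) and writes output.wav by default; confirm these against `llama-tts --help` on your build. The speaker option is parsed but not forwarded, since the corresponding llama-tts flag is not documented here; when llama-tts is not on PATH, the sketch prints the command instead of running it.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/tts-local.sh -- not the skill's actual code.
set -euo pipefail

MODEL_DIR=/data/public/machine-learning/models/text-to-speach

tts_local() {
  local output=output.wav temp=0.4 speaker=""
  # Parse the documented options; the remaining argument is the text to speak.
  while [[ $# -gt 1 ]]; do
    case "$1" in
      -o|--output)  output="$2";  shift 2 ;;
      -s|--speaker) speaker="$2"; shift 2 ;;  # parsed but not forwarded (flag name unknown)
      -t|--temp)    temp="$2";    shift 2 ;;
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
  done
  local text="${1:?usage: tts-local.sh [options] \"text\"}"

  # Assumed llama.cpp-style flags: -m model, -mv vocoder, -p prompt.
  local cmd=(llama-tts
    -m  "$MODEL_DIR/OuteTTS-1.0-0.6B-Q4_K_M.gguf"
    -mv "$MODEL_DIR/WavTokenizer-Large-75-Q4_0.gguf"
    -p  "$text" --temp "$temp")

  if command -v llama-tts >/dev/null 2>&1; then
    "${cmd[@]}"
    # llama-tts writes output.wav by default; rename if another name was requested.
    [[ "$output" == output.wav ]] || mv output.wav "$output"
  else
    echo "dry-run: ${cmd[*]}"
  fi
}
```

Without the binary installed, `tts_local -t 0.7 "hello"` prints the command it would have run, which makes the wrapper easy to inspect before trusting it.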
Scripts
- Location:
scripts/tts-local.sh(inside skill folder) - Model:
/data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf - Vocoder:
/data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf - GPU: Enabled via
llama-tts.
Setup
- Model: Download from OuteAI/OuteTTS-1.0-0.6B-GGUF
- Vocoder: Download from ggml-org/WavTokenizer (Note: Felix uses a Q4_0 version; Q5_1 is linked here as a higher-quality alternative).
Place files in /data/public/machine-learning/models/text-to-speach/ or update scripts/tts-local.sh.
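The scan report above recommends verifying checksums of downloaded model files. A small sketch of how that check might look, using sha256sum; the digest in the usage comment is a placeholder you replace with the real value shown on the Hugging Face file page:

```shell
# Verify a downloaded model file against its published SHA-256 digest.
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [[ "$actual" == "$expected" ]]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file (got $actual)" >&2
    return 1
  fi
}

# Usage (placeholder digest -- copy the real one from Hugging Face):
# verify_sha256 "$HOME/models/OuteTTS-1.0-0.6B-Q4_K_M.gguf" "<digest from HF file page>"
```

Running this against files in a user-controlled directory such as `$HOME/models` also follows the report's advice to avoid the shared /data/public/... path.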
Sampling Configuration
The model card recommends the following settings (hardcoded in the script):
- Temperature: 0.4
- Repetition Penalty: 1.1
- Repetition Range: 64
- Top-k: 40
- Top-p: 0.9
- Min-p: 0.05
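Assuming llama-tts accepts llama.cpp's common sampling flags (an assumption; check `llama-tts --help` on your build), the settings above would map to a command line like this sketch:

```shell
# Map the model card's sampling settings to llama.cpp-style flags.
# Flag names are assumed from llama.cpp's common options; confirm
# against `llama-tts --help` before relying on them.
sampling_args() {
  echo "--temp 0.4 --repeat-penalty 1.1 --repeat-last-n 64" \
       "--top-k 40 --top-p 0.9 --min-p 0.05"
}

# e.g.: llama-tts -m model.gguf -mv vocoder.gguf -p "text" $(sampling_args)
```

Keeping the values in one place makes it easy to compare the script against the model card when either is updated.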
Source: ClawHub · Chinese localization: 龙虾技能库