
🔊 Local Llama TTS — Skill Tool

v1.0.0

Local text-to-speech using llama-tts (llama.cpp) and OuteTTS-1.0-0.6B model.

by @wuxxin·MIT-0
License: MIT-0
Last updated: 2026/4/11
Security scans:
  • VirusTotal: Suspicious
  • OpenClaw: Safe (high confidence)
The skill is internally coherent: it wraps a local llama-tts binary to synthesize speech from a local model file and its scripts/instructions match that purpose.
Assessment Advice
This skill appears to do what it says: run your local 'llama-tts' binary against local model files. Before installing or running it: 1) Verify the 'llama-tts' binary you use is from a trusted source and inspect its permissions; 2) Download model/vocoder files from the official Hugging Face pages and verify checksums/licensing; 3) Prefer placing models in a user-controlled directory rather than a global /data/public/... path to avoid accidental exposure or overwrites; 4) Be cautious about running...
Detailed Analysis
Purpose and Capabilities
Name and description describe local TTS. The only required binary is 'llama-tts' and the included script invokes that binary with model and vocoder files — this is proportionate to the claimed purpose.
Instruction Scope
SKILL.md and the script only run the local 'llama-tts' binary and ask you to download models from Hugging Face. Minor notes: the recommended model/vocoder paths are hardcoded to /data/public/machine-learning/models/text-to-speach/, which may be a shared/global path; instructions include a different vocoder release (Q5_1) than the vocoder filename used in the script (Q4_0) but the SKILL.md comments this as an alternative. No instructions request unrelated files, credentials, or external endpoints beyond model download links.
Installation Mechanism
No install spec — instruction-only plus a wrapper script. This is low-risk; nothing in the skill tries to fetch or execute code during install. Model downloads are documented but performed by the user (via Hugging Face links).
Credential Requirements
The skill requests no environment variables or credentials. The resources referenced (local model and vocoder files, llama-tts binary) are relevant and necessary for local TTS.
Persistence and Permissions
The skill does not request always:true, does not modify other skills, and does not try to persist credentials. It is user-invocable and can be invoked autonomously by the agent (platform default) — nothing here elevates privilege beyond expected behavior.
Security comes in layers; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/2/17

Initial release

● Suspicious

Install Command

Official: npx clawhub@latest install local-llama-tts
Mirror (CN): npx clawhub@latest install local-llama-tts --registry https://cn.clawhub-mirror.com

技能文档

Synthesize speech locally using llama-tts and the OuteTTS-1.0-0.6B model.

Usage

You can use the wrapper script:

  • scripts/tts-local.sh [options] "<text>"

Options

  • -o, --output <file>: Output WAV file (default: output.wav)
  • -s, --speaker <file>: Speaker reference file (optional)
  • -t, --temp <value>: Temperature (default: 0.4)

Scripts

  • Location: scripts/tts-local.sh (inside skill folder)
  • Model: /data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf
  • Vocoder: /data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf
  • GPU: Enabled via llama-tts.

Setup

Place files in /data/public/machine-learning/models/text-to-speach/ or update scripts/tts-local.sh.
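The wrapper's documented option handling can be sketched as a dry run that assembles the llama-tts command without executing it. Everything below is a hypothetical reconstruction: the flag names passed to llama-tts (`-m`, `-mv`, `-p`, `--temp`) follow llama.cpp's tts example and may differ from what scripts/tts-local.sh actually does; `-s/--speaker` handling is omitted for brevity.

```shell
#!/bin/sh
# Hypothetical sketch of the wrapper's option handling (dry run only).
MODEL=/data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf
VOCODER=/data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf

build_tts_cmd() {
    out=output.wav    # -o, --output default
    temp=0.4          # -t, --temp default
    while [ $# -gt 1 ]; do
        case "$1" in
            -o|--output) out=$2;  shift 2 ;;
            -t|--temp)   temp=$2; shift 2 ;;
            *) break ;;
        esac
    done
    # Print the command instead of running it; llama-tts flag names are assumptions.
    printf 'llama-tts -m %s -mv %s --temp %s -p "%s"  # writes %s\n' \
        "$MODEL" "$VOCODER" "$temp" "$1" "$out"
}

build_tts_cmd -o demo.wav "Hello from OuteTTS"
```

Keeping the command assembly in one function makes it easy to audit what the wrapper would run before pointing it at a real binary, which matches the assessment's advice to review the code first.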

Sampling Configuration

The model card recommends the following settings (hardcoded in the script):
  • Temperature: 0.4
  • Repetition Penalty: 1.1
  • Repetition Range: 64
  • Top-k: 40
  • Top-p: 0.9
  • Min-p: 0.05
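Spelled out as command-line flags, the settings above would look roughly like the fragment below. This is a hypothetical invocation, not the script's actual contents: the flag names are taken from llama.cpp's common sampler options, and whether your llama-tts build exposes all of them should be confirmed with `llama-tts --help`.

```shell
# Recommended sampling settings as llama.cpp-style flags (hypothetical;
# verify flag names against your llama-tts build before relying on them):
llama-tts \
  -m  /data/public/machine-learning/models/text-to-speach/OuteTTS-1.0-0.6B-Q4_K_M.gguf \
  -mv /data/public/machine-learning/models/text-to-speach/WavTokenizer-Large-75-Q4_0.gguf \
  --temp 0.4 \
  --repeat-penalty 1.1 \
  --repeat-last-n 64 \
  --top-k 40 \
  --top-p 0.9 \
  --min-p 0.05 \
  -p "Text to synthesize"
```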