
Ollama Local — Skill Tool

v1.1.0

Manage and use local Ollama models. Use for model management (list/pull/remove), chat/completions, embeddings, and tool-use with local LLMs. Covers OpenClaw sub-agent integration and model selection guidance.

by @timverhoogt (Timverhoogt) · MIT-0
License
MIT-0
Last updated
2026/2/26
Security scan
VirusTotal
Harmless
OpenClaw
Security
medium confidence
The skill's code and instructions match its stated purpose (managing and using local Ollama models); no unexpected installs or secret requests were found, but the docs/scripts assume an OLLAMA_HOST and mention OpenClaw auth profiles without those being declared in metadata — and the skill will send data to whatever host you configure, so the host must be trusted.
Assessment Notes
This skill appears to do what it says: local Ollama model management and tool-enabled inference. Before installing, check these points: (1) The scripts use an OLLAMA_HOST environment variable but the metadata does not declare it — make sure OLLAMA_HOST points to a trusted local host (default) and not an untrusted remote server, because all chat/generate/embed requests (and any tool-call content) will be sent to that host. (2) The SKILL.md suggests adding an OpenClaw auth profile (a harmless plac...
Detailed Analysis
Purpose and capabilities
The skill's name and description match its instructions: managing local Ollama models for inference.
Instruction scope
The instruction scope is narrow, limited to Ollama CLI commands (pull, run, list). It does not direct reading of unrelated system files.
Installation mechanism
The skill requires a local Ollama installation; users must install Ollama separately.
Credential requirements
No API credentials are needed because Ollama runs locally, which is appropriate for a local-model tool.
Persistence and permissions
always:false, with no persistent system modifications beyond model management.
Security is layered; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.1.0 · 2026/2/3

Generalized for any Ollama setup - host now configurable via OLLAMA_HOST env var (defaults to localhost:11434)

● Harmless

Install Command

Official: npx clawhub@latest install ollama-local
Mirror: npx clawhub@latest install ollama-local --registry https://cn.clawhub-mirror.com

Skill Documentation

Work with local Ollama models for inference, embeddings, and tool use.

Configuration

Set your Ollama host (defaults to http://localhost:11434):

export OLLAMA_HOST="http://localhost:11434"
# Or for remote server:
export OLLAMA_HOST="http://192.168.1.100:11434"
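Scripts that talk to the server can resolve the host the same way; a minimal sketch (the helper name `resolve_host` is illustrative, not part of the skill's scripts):

```python
import os

def resolve_host() -> str:
    """Return the Ollama base URL from OLLAMA_HOST, defaulting to localhost."""
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    # Strip any trailing slash so endpoint paths can be appended uniformly.
    return host.rstrip("/")
```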

Quick Reference

# List models
python3 scripts/ollama.py list

# Pull a model
python3 scripts/ollama.py pull llama3.1:8b

# Remove a model
python3 scripts/ollama.py rm modelname

# Show model details
python3 scripts/ollama.py show qwen3:4b

# Chat with a model
python3 scripts/ollama.py chat qwen3:4b "What is the capital of France?"

# Chat with system prompt
python3 scripts/ollama.py chat llama3.1:8b "Review this code" -s "You are a code reviewer"

# Generate completion (non-chat)
python3 scripts/ollama.py generate qwen3:4b "Once upon a time"

# Get embeddings
python3 scripts/ollama.py embed bge-m3 "Text to embed"

Model Selection

See references/models.md for full model list and selection guide.

Quick picks:

  • Fast answers: qwen3:4b
  • Coding: qwen2.5-coder:7b
  • General: llama3.1:8b
  • Reasoning: deepseek-r1:8b
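In a script, the quick picks above can be encoded as a small lookup table; `pick_model` and its keys are illustrative, not part of the skill:

```python
# Illustrative mapping of task type to the quick-pick models above.
QUICK_PICKS = {
    "fast": "qwen3:4b",
    "coding": "qwen2.5-coder:7b",
    "general": "llama3.1:8b",
    "reasoning": "deepseek-r1:8b",
}

def pick_model(task: str) -> str:
    """Return a suggested model tag, falling back to the general model."""
    return QUICK_PICKS.get(task, QUICK_PICKS["general"])
```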

Tool Use

Some local models support function calling. Use ollama_tools.py:

# Single request with tools
python3 scripts/ollama_tools.py single qwen2.5-coder:7b "What's the weather in Amsterdam?"

# Full tool loop (model calls tools, gets results, responds)
python3 scripts/ollama_tools.py loop qwen3:4b "Search for Python tutorials and summarize"

# Show available example tools
python3 scripts/ollama_tools.py tools

Tool-capable models: qwen2.5-coder, qwen3, llama3.1, mistral
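Under the hood, tool use goes through the `tools` field of Ollama's `/api/chat` endpoint, which takes function definitions in JSON-schema form. A minimal sketch of building such a request body — the `get_weather` tool is a made-up example, and no request is actually sent here:

```python
import json

# A made-up example tool in the schema format Ollama's /api/chat accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "qwen2.5-coder:7b",
    "messages": [{"role": "user", "content": "What's the weather in Amsterdam?"}],
    "tools": [weather_tool],
    "stream": False,
}

# Serialize for an HTTP POST to $OLLAMA_HOST/api/chat.
body = json.dumps(payload)
```

A tool-capable model replies with a `tool_calls` entry in its message instead of plain text; the caller runs the named function and feeds the result back as a `tool` role message.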

OpenClaw Sub-Agents

Spawn local model sub-agents with sessions_spawn:

# Example: spawn a coding agent
sessions_spawn(
    task="Review this Python code for bugs",
    model="ollama/qwen2.5-coder:7b",
    label="code-review"
)

Model path format: ollama/modelname:tag

Parallel Agents (Think Tank Pattern)

Spawn multiple local agents for collaborative tasks:

agents = [
    {"label": "architect", "model": "ollama/gemma3:12b", "task": "Design the system architecture"},
    {"label": "coder", "model": "ollama/qwen2.5-coder:7b", "task": "Implement the core logic"},
    {"label": "reviewer", "model": "ollama/llama3.1:8b", "task": "Review for bugs and improvements"},
]

for a in agents:
    sessions_spawn(task=a["task"], model=a["model"], label=a["label"])
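sessions_spawn is OpenClaw-specific, but the fan-out/collect pattern itself can be sketched in plain Python with a thread pool; `run_agents` and the stubbed `query` callback below are illustrative stand-ins for real model calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agents(agents, query):
    """Fan tasks out to per-agent workers and collect results by label.

    `query(model, task)` is a stand-in for whatever actually calls the model.
    """
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {
            a["label"]: pool.submit(query, a["model"], a["task"]) for a in agents
        }
        return {label: f.result() for label, f in futures.items()}
```

With real inference behind `query`, each agent's answer comes back keyed by label, ready for a final aggregation pass.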

Direct API

For custom integrations, use the Ollama API directly:

# Chat
curl $OLLAMA_HOST/api/chat -d '{
  "model": "qwen3:4b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# Generate
curl $OLLAMA_HOST/api/generate -d '{
  "model": "qwen3:4b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# List models
curl $OLLAMA_HOST/api/tags

# Pull model
curl $OLLAMA_HOST/api/pull -d '{"name": "phi3:mini"}'
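The same chat call can be made from Python's standard library. A minimal sketch: `build_chat_request` is an illustrative helper that only constructs the request; sending it requires a running server at OLLAMA_HOST:

```python
import json
import os
import urllib.request

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for Ollama's /api/chat."""
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434").rstrip("/")
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    return urllib.request.Request(
        f"{host}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires a running server):
# with urllib.request.urlopen(build_chat_request("qwen3:4b", "Hello")) as r:
#     print(json.loads(r.read())["message"]["content"])
```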

Troubleshooting

Connection refused?

  • Check Ollama is running: ollama serve
  • Verify OLLAMA_HOST is correct
  • For remote servers, ensure firewall allows port 11434

Model not loading?

  • Check VRAM: larger models may need CPU offload
  • Try a smaller model first

Slow responses?

  • Model may be running on CPU
  • Use smaller quantization (e.g., :7b instead of :30b)

OpenClaw sub-agent falls back to default model?

  • Ensure ollama:default auth profile exists in OpenClaw config
  • Check model path format: ollama/modelname:tag
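The connection checks above can be automated with a quick probe of the `/api/tags` endpoint; `check_ollama` is an illustrative helper, not part of the skill's scripts:

```python
import os
import urllib.error
import urllib.request

def check_ollama(timeout: float = 2.0) -> str:
    """Probe the server's /api/tags endpoint and report reachability."""
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434").rstrip("/")
    try:
        urllib.request.urlopen(f"{host}/api/tags", timeout=timeout)
        return f"Ollama reachable at {host}"
    except (urllib.error.URLError, OSError):
        return f"Ollama not reachable at {host} (is 'ollama serve' running?)"
```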
Data source: ClawHub