
Screen Vision

v1.1.0

AI screen vision and desktop computer control skill for OpenClaw. Let your AI agent see the screen, understand UI elements, and autonomously perform mouse an...

0 · 99 · 2 current · 2 cumulative
by @guitu917·MIT-0
License: MIT-0
Last updated: 2026/4/14
Security scans:
  • VirusTotal: Suspicious
  • OpenClaw: Suspicious (high confidence)
The skill generally matches its stated purpose (screen capture + desktop automation) but contains several inconsistencies and risky choices (missing declared env requirements, system-level install that creates VNC/noVNC with a weak default password and network exposure, and safety checks that can be bypassed), so proceed with caution.
Recommendations
What to consider before installing:
- Metadata mismatch: the skill requires a vision API key (SV_VISION_* or config.json) even though registry metadata listed no env vars; expect to provide and store an API token locally.
- Network exposure: the headless setup creates VNC + noVNC, sets a default VNC password 'screen123', and may run vncserver with '-localhost no', which allows remote access. Do NOT run headless/noVNC on a public server without changing the password and restricting access (fire...
Detailed analysis
Purpose and capabilities
The SKILL.md and code clearly require vision API credentials (baseUrl, apiKey, model) and write/read config.json under ~/.openclaw/... or /etc, but the registry metadata declared 'Required env vars: none' and 'Required config paths: none' — this is an explicit metadata mismatch. The skill legitimately needs an API key and local display access for its stated purpose, but the metadata omission is misleading. The skill also requires installation of system packages and may create system services/scripts (sv-start/sv-stop) which is consistent with headless operation but elevates the system footprint beyond a simple instruction-only helper.
Instruction scope
Runtime instructions and scripts perform full-screen capture, encode and send screenshots (base64) to an external vision API, run an analyze->execute loop which can drive xdotool/cliclick/pyautogui, and save all screenshots to /tmp/screen-vision/logs/. The safety check relies on regex matching of action text/reason produced by the model; because actions are derived from an external LLM/vision model, a malicious or malformed response could bypass intent. The SKILL.md also documents starting a headless XFCE + VNC + noVNC stack which exposes a remote desktop — that expands scope to remote-access surface beyond local automation.
Installation mechanism
Although there is no remote arbitrary binary download, the included install/setup scripts run package manager installs (apt/yum/dnf), pip installs, create files under /usr/local/bin (sv-start/sv-stop), write VNC configuration (~/.vnc) and may configure noVNC/websockify. The setup script sets a default VNC password ('screen123') and runs vncserver with '-localhost no' allowing non-local connections — this is a risky default. The install requires sudo for system packages and writes system-level scripts, so it has substantial install-time impact.
Credential requirements
The skill legitimately needs a vision API key/baseUrl/model (config.json or env SV_VISION_*). That is proportionate to its purpose. However the skill stores/sources credentials from config.json (~/.openclaw/.../config.json) and environment variables; this was not reflected in registry metadata (declared none). The skill does not request unrelated cloud credentials, but it does create and store screenshots and VNC password files locally which you should consider sensitive.
Persistence and permissions
The skill does not set always:true, but its installer creates persistent system artifacts: installs packages, writes /usr/local/bin scripts, config files under the user's home and potentially /etc, and can start a VNC/noVNC server that listens on network ports. Those artifacts persist beyond a single invocation and can expose a desktop over the network with a weak default password. Autonomous invocation is allowed by default (disable-model-invocation is false) — combined with network-exposed VNC this increases blast radius.
Security is a matter of degree; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; attribution not required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.1.0 · 2026/4/5

test

● Suspicious

Install Command

Official: npx clawhub@latest install ai-screen-vision
Mirror (CN accelerated): npx clawhub@latest install ai-screen-vision --registry https://cn.clawhub-mirror.com

Skill Documentation

Control the desktop visually: screenshot → AI vision analysis → execute actions → loop until done.

Quick Start

1. Setup (one-time)

Detect platform and install dependencies:

bash scripts/setup/setup-linux.sh --headless    # Linux server (no desktop)
bash scripts/setup/setup-linux.sh --desktop     # Linux with desktop
bash scripts/setup/setup-mac.sh                 # macOS
python scripts/setup/setup-win.py               # Windows

2. Configure API

Copy config.example.json to config.json and fill in your vision API credentials. You must set baseUrl, apiKey, and model — supports any OpenAI-compatible API.

{
  "vision": {
    "baseUrl": "https://api.siliconflow.cn/v1",
    "apiKey": "sk-your-key",
    "model": "Qwen/Qwen3-VL-32B"
  }
}

Environment variables also work: SV_VISION_API_KEY, SV_VISION_BASE_URL, SV_VISION_MODEL. See references/API_CONFIG.md for all supported providers and detailed setup.
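
For illustration, here is a minimal sketch of how the two configuration sources might be reconciled, assuming (as is common) that environment variables override config.json; the function name and the precedence order are assumptions, not the skill's actual code:

```python
import json
import os
from pathlib import Path

def load_vision_config(config_path="config.json"):
    """Merge config.json with SV_VISION_* env vars (env wins).
    Illustrative sketch only; the skill's real loader may differ."""
    cfg = {}
    p = Path(config_path)
    if p.exists():
        cfg = json.loads(p.read_text()).get("vision", {})
    return {
        "baseUrl": os.environ.get("SV_VISION_BASE_URL", cfg.get("baseUrl")),
        "apiKey": os.environ.get("SV_VISION_API_KEY", cfg.get("apiKey")),
        "model": os.environ.get("SV_VISION_MODEL", cfg.get("model")),
    }
```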

3. Usage

The skill operates through a screenshot-analyze-action loop:

  • Take screenshot: bash scripts/platform/screenshot.sh [output_path] [display]
  • Analyze with AI: python3 scripts/vision/analyze.py --image <screenshot> --task "<task>"
  • Execute action: python3 scripts/platform/execute.py --action <action> [options]
  • Full task loop: python3 scripts/core/run_task.py --task "<task>"
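
The loop these commands implement can be sketched minimally, with the three steps injected as callables so the real scripts could stand behind them (the function names and the action-dict shape are assumptions, not the skill's actual API):

```python
import time

MAX_ACTIONS = 100      # mirrors SV_MAX_ACTIONS
MAX_DURATION = 5 * 60  # mirrors SV_MAX_DURATION (minutes), in seconds

def run_task(task, screenshot, analyze, execute):
    """Screenshot -> analyze -> execute until the model reports
    done/failed or a limit trips. screenshot/analyze/execute are
    stand-ins for the skill's scripts."""
    start = time.monotonic()
    for _ in range(MAX_ACTIONS):
        if time.monotonic() - start > MAX_DURATION:
            return "timeout"
        action = analyze(screenshot(), task)  # e.g. {"type": "click", "x": 1, "y": 2}
        if action["type"] in ("done", "failed"):
            return action["type"]
        execute(action)
    return "action_limit"
```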

Architecture

User task → run_task.py (orchestrator)
  ├── screenshot.sh (capture screen)
  ├── diff_check.py (detect changes, skip if unchanged → saves tokens)
  ├── analyze.py (send screenshot + task to vision API)
  ├── safety_check.py (block dangerous operations)
  ├── execute.py (xdotool/cliclick/pyautogui)
  └── loop until done or timeout
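
The diff_check step can be as simple as hashing consecutive captures; this sketch shows the idea (the real script may use a perceptual diff rather than an exact hash):

```python
import hashlib

_last_digest = None

def screen_changed(png_bytes):
    """Return True when the capture differs from the previous one,
    so unchanged frames can skip the token-costing vision API call."""
    global _last_digest
    digest = hashlib.sha256(png_bytes).hexdigest()
    changed = digest != _last_digest
    _last_digest = digest
    return changed
```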

Platform Tools

Platform | Screenshot    | Mouse/Keyboard | Notes
Linux    | scrot         | xdotool        | Headless: XFCE4 + VNC
macOS    | screencapture | cliclick       | Needs Accessibility permission
Windows  | pyautogui     | pyautogui      | No extra setup needed

See references/PLATFORM_GUIDE.md for platform-specific commands.

Vision Providers

Supports any OpenAI-compatible vision API. You choose the provider and model.

Recommended Models

Model            | Provider        | Cost/Task | Quality
Qwen3-VL-32B     | SiliconFlow     | Low       | ★★★★
GLM-4V-Plus      | Zhipu BigModel  | Low       | ★★★★
GPT-5.4-Mini     | OpenAI / relays | Medium    | ★★★★★
GPT-5.4 CUA      | OpenAI          | High      | ★★★★★
Llama 3.2 Vision | Ollama (local)  | Free      | ★★

See references/API_CONFIG.md for per-provider configuration examples.

No defaults are hardcoded — you must configure your own API credentials before use.

Action Types

  • click — Click at (x, y). Supports left/right/double-click.
  • type — Type text string.
  • key — Press a key (Return, Tab, Escape, etc.).
  • scroll — Scroll up or down.
  • drag — Drag from (x1,y1) to (x2,y2).
  • wait — Wait for screen to update.
  • done — Task complete.
  • failed — Cannot complete task.
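
On Linux these actions ultimately become xdotool invocations; a sketch of that translation follows (the action-dict conventions are assumptions about execute.py, though the xdotool subcommands and button numbers themselves are real):

```python
def to_xdotool(action):
    """Map an action dict to xdotool arguments. Illustrative only;
    the skill's execute.py may structure actions differently."""
    t = action["type"]
    if t == "click":
        button = {"left": "1", "middle": "2", "right": "3"}.get(action.get("button", "left"), "1")
        args = ["mousemove", str(action["x"]), str(action["y"]), "click", button]
        if action.get("double"):
            args += ["click", button]  # second click for a double-click
        return args
    if t == "type":
        return ["type", "--delay", "50", action["text"]]
    if t == "key":
        return ["key", action["key"]]  # e.g. Return, Tab, Escape
    if t == "scroll":
        # xdotool mouse buttons 4/5 correspond to scroll up/down
        return ["click", "4" if action.get("direction") == "up" else "5"]
    raise ValueError(f"unsupported action: {t}")
```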

Safety

  • Blocked: rm -rf, format disk, shutdown, drop database, etc.
  • Confirmation required: delete, sudo, payment-related operations
  • Limits: max 5 minutes, max 100 actions per task
  • Logging: all screenshots saved to /tmp/screen-vision/logs/
  • Auto-stop on error or API failure
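
As the security analysis notes, a blocklist over model-produced text is best-effort. A sketch of what such a regex gate looks like (the patterns here are illustrative, not the skill's actual list):

```python
import re

BLOCKED = [r"rm\s+-rf", r"mkfs", r"shutdown", r"drop\s+(database|table)"]
CONFIRM = [r"\bdelete\b", r"\bsudo\b", r"payment"]

def check_action(text):
    """Return 'block', 'confirm', or 'allow' for an action description.
    Paraphrased or encoded commands can slip past patterns like these,
    which is why pre-install code review is still recommended."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in BLOCKED):
        return "block"
    if any(re.search(p, lowered) for p in CONFIRM):
        return "confirm"
    return "allow"
```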

Examples

See references/EXAMPLES.md for usage examples.

Config

Variable               | Default | Description
SV_VISION_API_KEY      | (none)  | Vision API key
SV_VISION_BASE_URL     | (none)  | API endpoint (required)
SV_VISION_MODEL        | (none)  | Vision model name (required)
SV_DISPLAY             | :1      | X11 display (Linux)
SV_MAX_DURATION        | 5       | Max task duration (min)
SV_MAX_ACTIONS         | 100     | Max actions per task
SV_SCREENSHOT_INTERVAL | 1.0     | Seconds between screenshots
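
Resolving these variables with their documented defaults is straightforward; a minimal sketch (the helper name is hypothetical):

```python
import os

def runtime_limits():
    """Read the SV_* tuning variables, falling back to the
    defaults documented in the table above."""
    return {
        "display": os.environ.get("SV_DISPLAY", ":1"),
        "max_duration_min": int(os.environ.get("SV_MAX_DURATION", "5")),
        "max_actions": int(os.environ.get("SV_MAX_ACTIONS", "100")),
        "screenshot_interval": float(os.environ.get("SV_SCREENSHOT_INTERVAL", "1.0")),
    }
```
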
Data source: ClawHub · Chinese localization: 龙虾技能库