
Segment Anything

v1.0.0

Use SAM (Segment Anything Model) to remove image backgrounds and extract foreground subjects as transparent PNGs. Use when users want to remove backgrounds,...

by @scikkk · MIT-0

License
MIT-0
Last updated
2026/4/14
Security scans
VirusTotal: suspicious
OpenClaw: safe (high confidence)
This skill is internally consistent with its description: it runs a SAM-based segmentation script, installs the expected Python packages (and the segment_anything repo) and downloads official SAM checkpoints; it does not request unrelated credentials or access.
Assessment Advice
This skill appears to be what it claims, but it will: (1) auto-install the segment_anything package from GitHub at runtime, and (2) download large model checkpoints (~375MB–2.5GB) to ~/.cache/sam. Before installing, ensure you have sufficient disk space and bandwidth and that you trust pulling code from the segment-anything GitHub repo. If you prefer tighter control, pre-install the dependencies and provide a local checkpoint via --checkpoint to avoid runtime pip installs and downloads. Run in a...
Detailed Analysis
Purpose & Capabilities
The name/description (SAM background removal) matches the code and declared dependencies: python3, pillow, numpy, torch, torchvision, and the segment_anything package. The script implements segmentation and saving transparent PNGs as advertised.
Instruction Scope
SKILL.md simply instructs running scripts/segment.py and documents parameters. The runtime behavior (auto-installing segment_anything via pip and auto-downloading model checkpoints to ~/.cache/sam) is clearly described. The instructions do not read unrelated files, environment variables, or transmit data to unexpected endpoints.
Installation Mechanism
Install spec lists pillow, numpy, torch, torchvision (appropriate for SAM). The script may auto-run pip install git+https://github.com/facebookresearch/segment-anything.git if needed and downloads large model checkpoints from dl.fbaipublicfiles.com (Meta's public hosting). This is expected but involves dynamic code download and large network transfers (~375MB–2.5GB).
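The cache-then-download behavior described above can be sketched roughly as follows. This is a minimal illustration, not the skill's actual code: `checkpoint_path` and `ensure_checkpoint` are hypothetical names, and of the filenames only `sam_vit_h_4b8939.pth` is confirmed by the docs below — the `vit_b`/`vit_l` names are the ones published in the segment-anything repo.

```python
import urllib.request
from pathlib import Path

# Published SAM checkpoint filenames on Meta's public hosting.
# Only sam_vit_h_4b8939.pth appears in this skill's docs; the other
# two are the filenames from the segment-anything repo.
SAM_CHECKPOINTS = {
    "vit_b": "sam_vit_b_01ec64.pth",
    "vit_l": "sam_vit_l_0b3195.pth",
    "vit_h": "sam_vit_h_4b8939.pth",
}
BASE_URL = "https://dl.fbaipublicfiles.com/segment_anything/"


def checkpoint_path(model: str,
                    cache_dir: Path = Path.home() / ".cache" / "sam") -> Path:
    """Where the checkpoint for `model` lives in the local cache."""
    return cache_dir / SAM_CHECKPOINTS[model]


def ensure_checkpoint(model: str) -> Path:
    """Download the checkpoint on first use; later runs hit the cache."""
    target = checkpoint_path(model)
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        # Large transfer: ~375 MB (vit_b) up to ~2.5 GB (vit_h).
        urllib.request.urlretrieve(BASE_URL + SAM_CHECKPOINTS[model], target)
    return target
```

Passing `--checkpoint` with a pre-downloaded file, as the assessment advice suggests, bypasses this download entirely.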
Credential Requirements
No environment variables, credentials, or unrelated config paths are requested. The script writes checkpoints to ~/.cache/sam and saves outputs where the user specifies; those are proportionate to the function.
Persistence & Permissions
always is false and the skill does not modify other skills or system-wide settings. It stores model checkpoints in the user's cache directory only, which is reasonable for repeated use.
Security ratings are layered; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/14

- Initial release of "sam-segmentation" skill for background removal and image segmentation.
- Extracts foreground subjects from images using Meta's Segment Anything Model (SAM) and saves them as transparent PNGs.
- Supports multiple model sizes (`vit_b`, `vit_l`, `vit_h`) for different speed and quality needs.
- Allows foreground hint points, grid-sweep mode for extracting all distinct elements, and various mask filtering parameters.
- Automatically installs needed dependencies (`segment_anything`, Pillow, numpy, torch, torchvision) on first use.
- Model checkpoint is auto-downloaded if not provided.


Install Command

Official: npx clawhub@latest install sam
Mirror: npx clawhub@latest install sam --registry https://cn.clawhub-mirror.com

Skill Documentation

Extract foreground subjects from images using Meta's Segment Anything Model, outputting transparent PNGs.

Quick Start

python3 scripts/segment.py input.jpg output.png

Defaults to the image center as the foreground hint — works well for portraits and product shots where the subject is centered.
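That center default, and the x,y format used by --points, can be sketched as follows (illustrative only; `parse_point` and `default_point` are hypothetical helper names, not the skill's actual functions):

```python
def parse_point(spec: str) -> tuple[int, int]:
    """Parse one --points value of the form "x,y"."""
    x, y = spec.split(",")
    return (int(x), int(y))


def default_point(width: int, height: int) -> tuple[int, int]:
    """With no --points given, probe the image center."""
    return (width // 2, height // 2)
```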

Parameters

| Param | Description | Default |
|---|---|---|
| input | Input image path | required |
| output | Output PNG path (single mode) or directory (--all mode) | required |
| --model | Model size: vit_b (fast) · vit_l (medium) · vit_h (best quality) | vit_h |
| --checkpoint | Local checkpoint path; auto-downloaded if omitted | auto |
| --points | Foreground hint points as x,y; multiple allowed | center |
| --all | Grid-sweep mode: extract all distinct elements | off |
| --grid | Grid density for --all; 16 means 16×16=256 probe points | 16 |
| --iou-thresh | Minimum predicted IoU to accept a mask (--all) | 0.88 |
| --min-area | Minimum mask area as fraction of image (--all) | 0.001 |
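How --grid, --iou-thresh, and --min-area interact in grid-sweep mode can be sketched as below. This is a hypothetical illustration of the filtering logic, not the skill's code; `grid_points` and `keep_mask` are invented names.

```python
def grid_points(width: int, height: int, grid: int = 16) -> list[tuple[int, int]]:
    """Evenly spaced probe points for --all mode: grid x grid in total."""
    return [
        ((2 * i + 1) * width // (2 * grid), (2 * j + 1) * height // (2 * grid))
        for j in range(grid)
        for i in range(grid)
    ]


def keep_mask(pred_iou: float, area_px: int, image_area: int,
              iou_thresh: float = 0.88, min_area: float = 0.001) -> bool:
    """Apply the --iou-thresh and --min-area filters to one candidate mask."""
    return pred_iou >= iou_thresh and area_px / image_area >= min_area
```

A denser grid (e.g. --grid 32) quadruples the probe count, which helps catch small objects at the cost of more model passes.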

Examples

# Basic background removal (auto-downloads vit_h ~2.5GB)
python3 scripts/segment.py photo.jpg output.png

# Specify hint point when subject is off-center
python3 scripts/segment.py photo.jpg output.png --points 320,240

# Multiple hints with lightweight model
python3 scripts/segment.py photo.jpg output.png --model vit_b --points 320,240 400,300

# Extract all elements (one PNG per element)
python3 scripts/segment.py photo.jpg ./elements/ --all

# Denser grid to capture small objects
python3 scripts/segment.py photo.jpg ./elements/ --all --grid 32

# Use a local checkpoint
python3 scripts/segment.py photo.jpg output.png --checkpoint /path/to/sam_vit_h_4b8939.pth

Dependencies

segment_anything is auto-installed on first run, or install manually:

pip install git+https://github.com/facebookresearch/segment-anything.git
pip install pillow numpy torch torchvision

Workflow

  • User provides image path
  • Ask if hint points are needed (when subject is off-center)
  • Run script; checkpoint auto-downloads on first use to ~/.cache/sam/
  • Output transparent-background PNG

Model Selection

| Model | Size | Speed | Quality |
|---|---|---|---|
| vit_b | ~375 MB | fastest | good |
| vit_l | ~1.25 GB | medium | better |
| vit_h | ~2.5 GB | slower | best |
CUDA is used automatically when a GPU is available.
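Automatic device selection usually comes down to a check like the following (a sketch assuming the standard `torch.cuda.is_available()` API; `pick_device` is a hypothetical name, and the ImportError branch simply covers environments where torch is not installed yet):

```python
def pick_device() -> str:
    """Prefer CUDA when torch reports a GPU; fall back to CPU otherwise."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # torch not installed yet (it is auto-installed on first run)
    return "cpu"
```

On CPU, vit_h can be quite slow on large images, so vit_b is often the better trade-off without a GPU.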

Data source: ClawHub · Chinese localization: 龙虾技能库