
Web Fetcher — 技能工具

v1.1.0

Fetch web pages and extract readable content for AI use. Use when reading, summarizing, or crawling a specific URL or small set of URLs. Prefer low-friction...

by @aurthes (Aurthes) · MIT-0
License
MIT-0
Last updated
2026/3/13
Security scan
VirusTotal: Harmless
OpenClaw
Safe
high confidence
The skill's code, instructions, and requirements are coherent with a web-page fetching/markdown-extraction purpose and do not request unrelated credentials or installs.
Assessment notes
This skill appears to only fetch and return page text; it does not ask for credentials or install software. However, note two practical privacy points before installing: (1) The preferred fetch methods send the target URL (and the fetched page content as processed) to third-party conversion services (r.jina.ai, markdown.new, defuddle). If the page is sensitive or behind a private network, do not use those services — prefer a browser attach or local fetch. (2) The SKILL.md recommends attaching br...
Detailed analysis
Purpose and capabilities
Name and description match what is implemented: lightweight remote conversion services + browser fallback. The included Python script and SKILL.md only perform HTTP fetches and classification; no unrelated credentials, binaries, or system access are requested.
Instruction scope
Instructions stay within the purpose (try conversion services, then browser or search fallback). They advise using OpenClaw browser tools or a Chrome relay/extension and to ask users to attach tabs when necessary — this is expected for accessing authenticated or JS-heavy pages, but it implies the user may share a live tab with the agent, so the user should be aware of what context they attach.
Installation mechanism
No install spec — instruction-only with a small included Python script. Nothing is downloaded from third-party URLs during install, so there is no on-install code-fetch risk.
Credential requirements
Skill requests no environment variables, credentials, or config paths. The behavior (fetching public URLs and recommending browser attach for protected pages) is proportionate to the stated purpose.
Persistence and permissions
The always flag is false, and the skill does not request persistent system-wide privileges or modify other skills' configs. Autonomous invocation is allowed by default, which is not in itself a concern here.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.1.0 · 2026/3/13

  • Improved reliability by introducing a fallback chain: prefer fast markdown services first, then use browser tools or search/indexed sources if direct fetch fails.
  • Added stricter rules for detecting Cloudflare blocks, login walls, and incomplete content; instructs not to treat failed fetches as valid.
  • Provided step-by-step extraction strategies for both single- and multi-page tasks, with guidance for handling partial completions and blocked cases.
  • Clarified the preferred output structure, including extraction method, confidence scores, and notes for any blocked or unresolved data.
  • Updated examples to illustrate the new workflow and fallback logic for common scenarios.

● Harmless

Install command

Official: npx clawhub@latest install aurthes-web-fetcher-v2
Mirror (CN): npx clawhub@latest install aurthes-web-fetcher-v2 --registry https://cn.clawhub-mirror.com

Skill documentation

Fetch readable web content with a reliability-first fallback chain.

Core rule

Do not promise direct access to every site. Some sites use Cloudflare, login walls, bot detection, or legal restrictions. In those cases, switch to the next fallback instead of insisting the first method should work.

Preferred fetch order

1) Direct readable fetch

Try lightweight conversion services first:

  • r.jina.ai
   https://r.jina.ai/http://example.com
   
  • markdown.new
   https://markdown.new/https://example.com
   
  • defuddle
   https://defuddle.md/https://example.com
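Each service works by prefixing the target URL. A minimal sketch (the SERVICES mapping and the reader_url helper are our names, not part of the skill):

```python
# Hypothetical helper: build a reader-service URL by prefixing the target.
# The prefixes come from the list above; everything else is illustrative.
SERVICES = {
    "jina": "https://r.jina.ai/",
    "markdown_new": "https://markdown.new/",
    "defuddle": "https://defuddle.md/",
}

def reader_url(service: str, target: str) -> str:
    """Prepend the chosen conversion-service prefix to the target URL."""
    return SERVICES[service] + target

print(reader_url("jina", "https://example.com/article"))
# https://r.jina.ai/https://example.com/article
```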
   

For deterministic retries, use the bundled script:

python {baseDir}/scripts/fetch_url.py "https://example.com/article"

The script returns JSON with:

  • chosen method
  • attempt history
  • blocked/thin-content detection
  • final content when successful
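Assuming key names like "method", "attempts", "blocked", and "content" (the skill does not document the exact JSON shape, so these are guesses), the script's report could be summarized like this:

```python
import json

# Sketch: summarize the JSON report returned by fetch_url.py.
# The key names used here are assumptions, not documented fields.
def summarize(report: dict) -> str:
    if report.get("blocked"):
        return f"blocked after {len(report.get('attempts', []))} attempts"
    return f"ok via {report.get('method')} ({len(report.get('content', ''))} chars)"

sample = json.loads(
    '{"method": "r.jina.ai", "attempts": ["r.jina.ai"],'
    ' "blocked": false, "content": "Hello"}'
)
print(summarize(sample))
# ok via r.jina.ai (5 chars)
```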

Use these when the user wants article text, page summaries, or structured extraction from normal public pages.

2) Detect failure modes early

Treat the fetch as failed or unreliable if you see signs like:

  • Just a moment...
  • Performing security verification
  • Enable JavaScript and cookies
  • CAPTCHA / challenge pages
  • login wall instead of target content
  • obvious truncation / missing article body

When this happens, stop treating the result as the page content.
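These signatures can be checked mechanically. A sketch, where the marker list and the 200-character thin-content threshold are illustrative choices, not the skill's exact rules:

```python
# Sketch: classify a fetched body as blocked or thin using the signs above.
BLOCK_MARKERS = (
    "just a moment",
    "performing security verification",
    "enable javascript and cookies",
    "captcha",
)

def looks_blocked(body: str, min_chars: int = 200) -> bool:
    """Return True if the body matches a block signature or is too thin."""
    lowered = body.lower()
    if any(marker in lowered for marker in BLOCK_MARKERS):
        return True
    return len(body.strip()) < min_chars  # thin content counts as failure
```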

3) Browser fallback for protected sites

For sites blocked behind Cloudflare or requiring real browser execution:

  • Prefer a real browser session via OpenClaw browser tools when available.
  • If the user is using the Chrome relay/extension, ask them to attach the tab and then inspect the live rendered page.
  • Snapshot the page and extract only the needed fields.

Use browser fallback for:

  • JS-heavy pages
  • Cloudflare-protected pages
  • sites that render key content after load
  • pages where the direct markdown services return verification screens

4) Search / indexed fallback

If direct fetch and browser fetch are not available or still fail:

  • search for the exact page / journal / article title
  • use search snippets, publisher mirror pages, cached summaries, or secondary sources
  • prefer official publisher pages when search can surface the needed field
  • clearly label data as secondary-source derived if it was not read directly from the target page

This is often enough for metadata tasks like:

  • editor-in-chief names
  • journal impact factors
  • publication frequency
  • ISSN
  • institutional affiliations

5) Partial-completion mode

If a site is inconsistent, return a mixed result instead of stalling:

  • fill the rows that can be verified directly
  • mark blocked / unresolved rows clearly
  • explain what failed and which fallback was used

Practical extraction strategy

For one page

  • Try r.jina.ai
  • If blocked, try markdown.new
  • If blocked, try defuddle
  • If still blocked, use browser tools
  • If browser unavailable, use search/indexed fallback
  • Report confidence level

For many similar pages

  • Fetch the index/list page first
  • Extract all target URLs or codes
  • Process pages in batches
  • Record success/failure per row
  • Retry only failures with stronger fallback methods
  • Deliver the best complete table possible
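The batch loop above can be sketched as follows. fetch_page stands in for any direct-fetch method and is not part of the skill; a real retry pass would escalate to a stronger fallback (browser, then search) rather than repeat the same method:

```python
# Sketch of batched extraction with per-row success tracking.
def attempt(url, fetch_page):
    try:
        return {"ok": True, "content": fetch_page(url)}
    except Exception as exc:
        return {"ok": False, "error": str(exc)}

def crawl(urls, fetch_page, retries=1):
    results = {url: attempt(url, fetch_page) for url in urls}
    for _ in range(retries):  # retry only the failures
        for url, row in results.items():
            if not row["ok"]:
                results[url] = attempt(url, fetch_page)
    return results
```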

Output guidance

When extracting structured data, prefer columns like:

  • source URL
  • extraction method (direct, browser, search, secondary)
  • confidence (high, medium, low)
  • note for blocked/unverified rows
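As an illustration, one row of such a table as a dict; the field names mirror the bullets above, but the helper itself is hypothetical:

```python
# Hypothetical row builder matching the suggested columns.
METHODS = ("direct", "browser", "search", "secondary")
CONFIDENCES = ("high", "medium", "low")

def make_row(source_url: str, method: str, confidence: str, note: str = "") -> dict:
    if method not in METHODS or confidence not in CONFIDENCES:
        raise ValueError("method/confidence outside the documented vocabulary")
    return {
        "source_url": source_url,
        "extraction_method": method,
        "confidence": confidence,
        "note": note,
    }
```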

Examples

  • User: "Read this article" → direct fetch first
  • User: "What does this page say?" → direct fetch, then browser fallback if blocked
  • User: "Crawl this journal site" → index page first, then batched extraction with fallback chain
  • User: "Cloudflare blocked it" → switch to browser or search fallback, do not keep retrying the same failed method
Data source: ClawHub · Chinese localization: 龙虾技能库