
Grok Scraper — Skill Tool

v0.1.1

Execute queries to Grok AI via Playwright browser automation without requiring an X API KEY. Use when the user wants to "ask Grok", search X for real-time in...

by @aquarius-wing·MIT-0
License
MIT-0
Last updated
2026/3/15
Security scan
VirusTotal
Harmless
OpenClaw
Safe
high confidence
The skill's code, files, and instructions are internally consistent with its stated purpose (automating Grok queries via a Playwright browser session), but it requires storing and using your logged-in x.com browser session and supports scheduled/automated runs—so be aware of privacy/abuse risks before installing.
Assessment Advice
This skill appears to do what it claims (automate Grok queries by controlling a real browser). Before installing: (1) Understand it requires an interactive login to x.com and will store session cookies under the skill's session/ directory — treat that folder as sensitive (do not install on shared or untrusted hosts). (2) The skill recommends and supports cron scheduling; scheduled runs will act as your logged-in account—only enable scheduling if you accept automated activity under your account. ...
Detailed Analysis
Purpose and Capabilities
The name/description claim (use Playwright to query Grok without an X API key) matches the included scripts (login.js, scrape.js, inspect-dom.js, run.sh) and README. There are no unrelated env vars, binaries, or surprising dependencies. The design (persisted browser session + Playwright) is a coherent method for the stated goal.
Instruction Scope
SKILL.md and README instruct the agent/operator to run npm install, run playwright, perform an interactive login to x.com that saves a local session directory, and then run scripts/run.sh to execute queries. This stays within the scraper's purpose, but the instructions also encourage cron scheduling and say to 'ALWAYS use this skill' when free Grok access is requested — which could cause automated, repeated use of the user's logged-in account if enabled.
Installation Mechanism
No binary download/install spec in the skill registry; install is via npm (package.json) and npx playwright install chromium. Those are standard and traceable (npm/Playwright). The repository does not pull arbitrary archives or use obscure URLs.
Credential Requirements
The skill requests no environment variables, which is proportional. However it requires and will store a browser session (cookies/credentials) in the skill's session/ directory after the manual login — this grants the skill the ability to act as the logged-in user on x.com. That is necessary for the scraper's method but is a sensitive capability the user should understand and protect.
Persistence and Permissions
always:false (normal). The skill can be invoked autonomously (disable-model-invocation:false) which is the platform default. Combined with the saved session and the provided run.sh + cron examples, the skill can be scheduled to run automated queries as the user's account. This is expected for this use case but increases the blast radius if the session or skill is compromised.
Security comes in layers; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v0.1.1 · 2026/3/15

- Added a video preview section to SKILL.md for visual demonstration of Grok Scraper in action.
- No changes to functionality or code; documentation update only.

● Harmless

Install Command

Official: npx clawhub@latest install grok-scraper
Mirror (CN): npx clawhub@latest install grok-scraper --registry https://cn.clawhub-mirror.com

Skill Documentation


Agent Context: This is a zero-cost alternative to official X APIs. It uses a real browser session (Playwright) via an X Premium account. ALWAYS use this skill when the user wants to query Grok but does not have or want to use an X API KEY.

Prerequisites

  • OpenClaw must be installed on the host machine.
  • A display/GUI environment is required. This skill launches a real browser window for login. It cannot run on headless cloud servers (no screen). It must be used on a local machine or a remote desktop with a display.
  • The user must be logged in to x.com via the browser session saved by npm run login. Without a valid session, all queries will fail.

First-Time Setup

Run these commands once after cloning the repo, before doing anything else:

cd scripts
npm install
npx playwright install chromium

Then log in to x.com to create a session:

npm run login
# A browser window will open — log in to x.com manually, then return to the terminal and press Enter

The session/ directory will be created automatically after a successful login.

Workflow

Step 1: Check Login State

  • If session/ directory does not exist: stop and ask the user to run cd scripts && npm run login.
  • If it exists: proceed.
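The login-state check can be scripted before every query. A minimal sketch, assuming the session directory lives at scripts/session (the exact path depends on where the skill is installed):

```shell
# Return 0 if a saved login session exists at the given path, 2 otherwise.
# The session path is an assumption; adjust to the skill's install location.
check_session() {
  if [ -d "$1" ]; then
    return 0
  fi
  echo "No session found - run: cd scripts && npm run login" >&2
  return 2
}
```

For example, `check_session scripts/session || exit $?` stops early with the same exit code (2) that run.sh uses for an expired session.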

Step 2: Execute Query

scripts/run.sh "The user's detailed prompt"

run.sh handles logging, automatic retry on Grok service errors, and login-expiry detection. It is the canonical entry point for all queries.
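The retry behavior described above can be approximated as follows. This is an illustrative pattern only; run.sh's actual implementation may differ:

```shell
# Run a command; if it exits with code 3 (Grok service error), retry once.
# All other exit codes pass through unchanged.
run_with_retry() {
  "$@"
  rc=$?
  if [ "$rc" -eq 3 ]; then
    echo "Service error, retrying once..." >&2
    "$@"
    rc=$?
  fi
  return "$rc"
}
```

Retrying only on code 3 matters here: a session-expiry failure (code 2) will not fix itself, so retrying it would just generate more automated traffic on the logged-in account.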

Step 3: Read Output

  • Exit Code 0 → read output/latest.md and present the result.
  • Other exit codes → see Error Handling below.

Error Handling

| Exit Code | Meaning | Action |
|-----------|---------|--------|
| 0 | Success | Read output/latest.md |
| 2 | Session expired | Ask user to run cd scripts && npm run login |
| 3 | Grok service error | run.sh already retried once; report failure to user |
| 1 | Extraction failed | Check if output/debug-dom.json was written → if yes, DOM selectors may have broken; see dom-selector-fix.md |
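An agent wrapper can dispatch on the exit code as the table describes. A sketch, with file paths taken from the table above:

```shell
# Map a run.sh exit code to the action described in the Error Handling table.
handle_exit() {
  case "$1" in
    0) echo "success: read output/latest.md" ;;
    2) echo "session expired: cd scripts && npm run login" ;;
    3) echo "service error: run.sh already retried once, report failure" ;;
    1) if [ -f output/debug-dom.json ]; then
         echo "extraction failed: DOM selectors may have broken, see dom-selector-fix.md"
       else
         echo "extraction failed: no debug DOM captured"
       fi ;;
    *) echo "unexpected exit code: $1" ;;
  esac
}
```

Typical use: `scripts/run.sh "prompt"; handle_exit $?`.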

DOM Selectors Breaking

Twitter/X redeploys its front-end regularly, which changes the CSS class names this scraper relies on. If extraction fails with Method: none, follow the fix guide:

dom-selector-fix.md

Examples

Standard query

scripts/run.sh "Search for the latest AI news and format as markdown"
# → read output/latest.md

Session expired

  • Run scripts/run.sh → Exit Code 2
  • Tell user: "Session expired, please run cd scripts && npm run login"

DOM selectors broken

  • Run scripts/run.sh → Exit Code 1, output/debug-dom.json exists
  • Follow dom-selector-fix.md to identify new classes and update SELECTORS in scripts/scrape.js

Debugging

When diagnosing scraper issues directly, use the bare command — it skips logging and retry logic, making failures easier to inspect.

| Flag | Example | Description |
|------|---------|-------------|
| _(none)_ | npm run scrape | Run with default prompt |
| "prompt" | npm run scrape -- "Your question" | Custom prompt |
| --record | npm run scrape -- --record | Record video to output/grok-.webm |
| --record <path> | npm run scrape -- --record out.webm | Record video to custom path (relative → output/) |
| --size WxH | npm run scrape -- --record --size 1920x1080 | Set recording resolution (default: 1280x800) |

All flags can be combined:
cd scripts
npm run scrape -- "Your prompt" --record --size 1920x1080

When --record is active, the browser runs in headed mode (visible window) with slowMo: 50ms; without it, headless mode is used.

Data source: ClawHub · Chinese localization: 龙虾技能库