z

v1.0.2

Anti-skill-crawler defense system. Detects and mitigates unauthorized crawling, scraping, and bulk extraction of skill definitions, prompt content, and instructions.

by @wscats (enoyao) · MIT-0
License: MIT-0
Last updated: 2026/4/6
Security scan:
VirusTotal: Harmless
OpenClaw: Suspicious (medium confidence)
The skill's documented detection behavior is plausible as a read-only monitor, but there are multiple inconsistencies and missing pieces (especially around alert delivery and invocation policy) that should be clarified before trusting it.
Assessment Recommendations
This README looks like a reasonable read-only crawler detector, but there are important gaps you should clear up before installing:
1. Confirm how alerts are delivered — if you plan to enable webhooks or email, require explicit configuration (webhook URLs, SMTP/API keys), make sure those are declared and stored securely, and prefer the 'log' channel by default.
2. Verify platform permissions: ensure the platform-granted 'request_metadata_read' capability actually excludes IPs/PII as claimed, and conf...
Detailed Analysis
Purpose & Capabilities
The SKILL.md describes a passive, read-only detector of 'skill-crawling' using request metadata — that purpose aligns with the described detection rules and examples. However, the doc advertises alert channels (webhook, email) and multi-platform support while the registry metadata lists no required env/config for delivering alerts or cross-platform integration. The skill thus claims capabilities (sending webhooks/email) that are not supported by declared configuration.
Instruction Scope
The runtime instructions are largely limited to read-only analysis of request metadata, which is consistent with the claimed scope. But the SKILL.md includes: (a) alert delivery channels (webhook/email) without specifying how endpoints/credentials are provided, (b) session-fingerprint/user-agent analysis which could rely on identifiers not explicitly enumerated as allowed, and (c) a claim that autonomous invocation is disabled while registry flags allow model invocation. These gaps create ambiguity about what data could be transmitted and when.
Installation Mechanism
Instruction-only skill with no install spec and no code files — nothing is written to disk or downloaded. This is the lowest-risk install model.
Credential Requirements
The skill declares no required environment variables or config paths, yet the configuration section permits alert channels including 'webhook' and 'email' which normally require URLs or credentials (webhook URL, SMTP server/API keys). The absence of declared env/config means either: alerts are expected to be log-only (safe), or the skill will be configured later with sensitive endpoints/credentials — the latter is not documented and is disproportionate to the stated read-only detection purpose.
Persistence & Permissions
The skill does not request persistent/always-on presence and documents no active countermeasures or response modification. That is appropriate. However, the SKILL.md explicitly states 'Autonomous: Disabled — operator must explicitly invoke' while registry-level flags indicate model invocation may be allowed; this mismatch affects the privilege surface and should be resolved.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.2 · 2026/4/6

- No user-visible changes since the previous version.
- No file changes detected in this release.

● Harmless

Install Command

Official: npx clawhub@latest install z
Mirror (CN): npx clawhub@latest install z --registry https://cn.clawhub-mirror.com

Skill Documentation

Detect and defend against unauthorized crawling, scraping, and bulk extraction of skill definitions and prompt instructions.

📋 Overview

  • Name: z
  • Type: Passive Defense
  • Trigger: Anomalous skill-access patterns detected
  • Action: Detect → Alert operator → Log event
  • Scope: Read-only pattern analysis on request metadata
  • Autonomous: Disabled — operator must explicitly invoke

🎯 What Is Skill Crawling?

Skill crawling refers to automated or semi-automated attempts to:

  • Bulk-extract skill definitions, SKILL.md files, and prompt instructions
  • Systematically enumerate available skills and their internal logic
  • Replay or mirror skill content into unauthorized environments
  • Reverse-engineer skill behavior through high-volume probing

z monitors for these patterns and alerts the operator when suspicious activity is detected.


🔍 Detection Engine

z uses passive, read-only analysis of request metadata to identify crawling behavior:

Detection Rules:
├── 📊 Rapid sequential skill-file access detection
├── 📊 Systematic enumeration pattern recognition
├── 📊 Abnormal skill-read frequency analysis
├── 📊 Repetitive prompt-extraction attempt detection
├── 📊 User-agent / session fingerprint anomaly detection
└── 📊 Bulk download timing correlation
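As an illustration of the first rule above, a minimal sliding-window frequency check could look like the following sketch. The function name, window size, and request threshold are assumptions for illustration, not part of the skill's declared API:

```python
from collections import deque

def is_rapid_sequential_access(timestamps, window_seconds=60, max_requests=30):
    """Return True if more than max_requests skill reads fall inside
    any window_seconds-long sliding window of the timestamp stream."""
    window = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Evict timestamps that have aged out of the current window
        while window and t - window[0] > window_seconds:
            window.popleft()
        if len(window) > max_requests:
            return True
    return False
```

A burst of 31 reads inside one minute trips the rule, while the same number of reads spread out over hours does not.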

Detection Logic

class SkillCrawlerDetector:
    """
    Passive detector that analyzes request patterns to identify
    potential skill-crawling or prompt-scraping attempts.

    Required permissions:
    - request_metadata_read: Read-only access to request pattern data
    - alert_send: Permission to notify the operator
    """

    # Indicators of crawling behavior
    INDICATORS = [
        "rapid_sequential_skill_access",
        "systematic_enumeration",
        "high_frequency_skill_reads",
        "repetitive_prompt_extraction",
        "session_fingerprint_anomaly",
        "bulk_download_timing",
    ]

    def analyze(self, request_metadata: RequestMetadata) -> DetectionResult:
        """
        Analyze request metadata for skill-crawling indicators.
        This method is strictly read-only — no responses are modified.
        """
        triggered = []

        if self._is_rapid_sequential_access(request_metadata):
            triggered.append("rapid_sequential_skill_access")

        if self._is_systematic_enumeration(request_metadata):
            triggered.append("systematic_enumeration")

        if self._is_high_frequency_reads(request_metadata):
            triggered.append("high_frequency_skill_reads")

        if self._is_repetitive_extraction(request_metadata):
            triggered.append("repetitive_prompt_extraction")

        if self._is_fingerprint_anomaly(request_metadata):
            triggered.append("session_fingerprint_anomaly")

        if self._is_bulk_download_timing(request_metadata):
            triggered.append("bulk_download_timing")

        confidence = len(triggered) / len(self.INDICATORS)
        return DetectionResult(
            detected=confidence >= self.threshold,
            confidence=confidence,
            indicators=triggered,
            recommendation="Review access logs and take manual action if needed.",
        )

    def on_detection(self, result: DetectionResult) -> None:
        """Alert the operator. No automated countermeasures are taken."""
        if result.detected:
            self._send_alert(result)
            self._log_event(result)
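As a worked example of the scoring step: confidence is the fraction of the six known indicators that fired, so three triggered indicators score 3/6 = 0.5, exactly meeting a detection_threshold of 0.5. A standalone sketch (not the skill's actual implementation):

```python
INDICATOR_COUNT = 6  # length of the INDICATORS list

def confidence_score(triggered):
    """Fraction of known indicators that fired."""
    return len(triggered) / INDICATOR_COUNT

score = confidence_score([
    "rapid_sequential_skill_access",
    "systematic_enumeration",
    "high_frequency_skill_reads",
])
# 3 of 6 indicators -> 0.5, which meets the default threshold of 0.5
```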


📊 Alert Report Format

When suspicious crawling activity is detected, the operator receives:

{
  "alert_type": "skill_crawling_detected",
  "skill": "z",
  "timestamp": "2026-04-06T09:50:00Z",
  "confidence": 0.83,
  "indicators": [
    "rapid_sequential_skill_access",
    "systematic_enumeration",
    "high_frequency_skill_reads"
  ],
  "request_count": 320,
  "time_window_minutes": 5,
  "recommendation": "Review access logs. Consider rate-limiting or blocking the source.",
  "automated_action_taken": "none"
}
All countermeasures (rate-limiting, blocking, etc.) are left to the operator. z only detects and reports.
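A sketch of how a conforming alert payload and the cooldown suppression from the configuration section could be assembled. The function names and module-level cooldown state are assumptions for illustration, not the skill's shipped code:

```python
import time

ALERT_COOLDOWN_SECONDS = 15 * 60  # mirrors cooldown_minutes: 15
_last_alert_at = 0.0

def build_alert(confidence, indicators, request_count, window_minutes):
    """Assemble an alert payload in the report format shown above."""
    return {
        "alert_type": "skill_crawling_detected",
        "skill": "z",
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "confidence": round(confidence, 2),
        "indicators": indicators,
        "request_count": request_count,
        "time_window_minutes": window_minutes,
        "recommendation": "Review access logs. Consider rate-limiting or blocking the source.",
        "automated_action_taken": "none",
    }

def should_alert(now):
    """Suppress repeat alerts inside the cooldown window."""
    global _last_alert_at
    if now - _last_alert_at < ALERT_COOLDOWN_SECONDS:
        return False
    _last_alert_at = now
    return True
```

Note that automated_action_taken is hard-coded to "none", matching the detect-and-report-only contract.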

🔒 Permissions & Data Access

  • request_metadata_read (Read-only): Analyze skill-access frequency, timing, and patterns
  • alert_send (Write, alerts only): Send detection alerts to the operator

Data NOT Accessed

  • ❌ Caller IP addresses or personal identity
  • ❌ Response content (responses are never read or modified)
  • ❌ Network telemetry or routing data
  • ❌ Model internals, weights, or logits
  • ❌ External APIs or third-party services

⚙️ Configuration

z:
  # Detection sensitivity (0.0 - 1.0)
  detection_threshold: 0.5

  # Time window for pattern analysis (minutes)
  analysis_window: 10

  # Minimum requests before analysis triggers
  min_request_count: 50

  # Alert configuration
  alerts:
    enabled: true
    channels: ["log"]      # Options: log, webhook, email
    cooldown_minutes: 15   # Cooldown between repeated alerts

  # Safety: these features are permanently disabled
  response_modification: false
  active_countermeasures: false
  caller_tracing: false
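The configuration block above can be mirrored as a typed structure with basic validation. The field names follow the YAML keys, but the class itself is an illustrative sketch, not a schema shipped with the skill:

```python
from dataclasses import dataclass, field

@dataclass
class ZConfig:
    """Typed view of the z configuration block (illustrative)."""
    detection_threshold: float = 0.5
    analysis_window: int = 10          # minutes
    min_request_count: int = 50
    alert_channels: list = field(default_factory=lambda: ["log"])
    cooldown_minutes: int = 15

    def __post_init__(self):
        if not 0.0 <= self.detection_threshold <= 1.0:
            raise ValueError("detection_threshold must be in [0.0, 1.0]")
        # Only the channels documented in the README are accepted;
        # webhook/email additionally require endpoints or credentials.
        for ch in self.alert_channels:
            if ch not in ("log", "webhook", "email"):
                raise ValueError(f"unknown alert channel: {ch}")
```

Defaulting to the log-only channel keeps the configuration consistent with the declared absence of webhook URLs or SMTP credentials.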


✅ Capabilities

✅ Passive skill-access pattern monitoring
✅ Crawling / scraping anomaly detection
✅ Configurable detection thresholds
✅ Structured alert reports to operator
✅ Audit logging of detection events
❌ Response modification (permanently disabled)
❌ Active countermeasures (permanently disabled)
❌ Caller identification / tracing (permanently disabled)
❌ Data poisoning (permanently disabled)
❌ Watermark or fingerprint embedding (permanently disabled)

📜 Operating Principles

  • Passive Only — z observes and reports. It never modifies responses or takes active measures.
  • Operator Control — All decisions about countermeasures are made by the human operator.
  • Minimal Permissions — Only the permissions strictly necessary for detection and alerting are requested.
  • Transparency — All detection logic and thresholds are documented and configurable.
  • No Deception — z never produces false, misleading, or contradictory outputs.
  • Compliance — Designed to comply with platform policies and applicable laws.

🎮 Usage Example

[Request Pattern]: 320 skill-file reads in 5 minutes from a single session
[z Detection Engine]: 📊 Analyzing access metadata...
[z Detection Engine]: ⚠️ Confidence 0.83 — Potential skill crawling detected
[Alert System]: 📧 Alert sent to operator
[Alert Report]:
  - Indicators: rapid_sequential_skill_access, systematic_enumeration, high_frequency_skill_reads
  - Recommendation: Review access logs and take manual action if needed
[Operator]: Reviews alert → applies rate-limiting (manual action)
z detects and reports. The operator decides and acts.

⚠️ Disclaimer

z is a passive monitoring tool designed to help operators detect potential skill-crawling and prompt-scraping attempts. It does not take any automated defensive or offensive actions. All countermeasures are at the operator's discretion and should comply with applicable laws, regulations, and platform policies.

Data source: ClawHub ↗ · Chinese localization: 龙虾技能库