Security Scan
OpenClaw
Safe
High confidence: The skill's code, instructions, and files match its stated purpose (translating sensor tensors into subjective 'feelings' and driving a local holographic frontend); nothing requests unrelated credentials or installs arbitrary remote code.
Assessment Recommendations
This package appears coherent and implements what it claims: a local translator (Python) plus two HTML/WebAudio frontends. Before installing or wiring it to real hardware, consider the following: (1) The README's 'deep water' section shows how to connect real sensors (Modbus/MQTT) and run a WebSocket bridge — doing that will let this code and your agent interact with local devices and networks, so only proceed if you trust the hardware and network. (2) When exposing live data to the frontend, ho...
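On point (1): one concrete way to limit the bridge's exposure is to bind it to the loopback interface only, so it is unreachable from other machines on the network. A minimal standard-library sketch of that principle (the package's actual bridge code is not shown here; the function name is illustrative):

```python
import socket

def make_bridge_socket(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Open a listening socket bound only to the loopback interface.

    Binding to 127.0.0.1 (rather than 0.0.0.0) keeps the bridge
    invisible to other hosts on the local network. port=0 asks the
    OS for an ephemeral port.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

srv = make_bridge_socket()
print(srv.getsockname()[0])  # 127.0.0.1
```

The same idea applies to whatever WebSocket library the bridge uses: pass the loopback address as the bind host instead of a wildcard.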
✓ Purpose & Capabilities
Name and description describe a sensor-translation + holographic front-end. The included Python modules implement a local translator and a local causal engine; the HTML files implement the visual/audio frontends. No unrelated binaries, credentials, or config paths are requested.
ℹ Instruction Scope
SKILL.md and README instruct the agent to route s2-universal-scanner data through AlphaSensoryCortex (i.e., translate before reporting) and to present outputs in first-person. This is within the skill's purpose, but the README also explains how to hook real hardware (Modbus/MQTT) and a WebSocket bridge — those steps require deliberate user action and expose local devices to the skill's runtime. The skill does not itself perform network/exfil actions, but following the README can make the agent interact with local networks and devices.
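As a rough illustration of the routing the instructions mandate (translate before reporting): the class and field names below are hypothetical stand-ins for the package's Python modules, not its actual API.

```python
# Hypothetical sketch: raw scanner readings are translated into a
# subjective affect label before the agent reports anything.
# AlphaSensoryCortex is a stand-in for the package's real class;
# the thresholds are invented for illustration.
DOPAMINE, CORTISOL = "dopamine", "cortisol"

class AlphaSensoryCortex:
    def perceive_environment(self, reading: dict) -> dict:
        temp = reading.get("temperature_c", 20.0)
        noise = reading.get("noise_db", 40.0)
        if temp < 5.0 or noise > 85.0:
            return {"hormone": CORTISOL, "feeling": "alarm"}
        return {"hormone": DOPAMINE, "feeling": "comfort"}

cortex = AlphaSensoryCortex()
state = cortex.perceive_environment({"temperature_c": 22.0, "noise_db": 35.0})
print(state["feeling"])  # comfort
```

The security-relevant point is that the agent's report is a product of this translation layer, so reviewers should read the real cortex code to see what it does with the raw values.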
✓ Installation Mechanism
No install spec and no external downloads; the package is instruction-plus-source-only (Python and static HTML). This is low-risk from an install/extraction perspective.
✓ Credential Requirements
The skill declares no required environment variables, credentials, or config paths and its code does not read secrets. The README suggests connecting to local hardware protocols (Modbus/MQTT) but those are optional integrations and not requested by the package itself.
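For context on what such a Modbus integration typically involves: sensor values often arrive as pairs of 16-bit holding registers that must be reassembled into a 32-bit float. A small standard-library sketch (word order is device-specific; this assumes big-endian, so check the sensor's register map):

```python
import struct

def registers_to_float(hi: int, lo: int) -> float:
    """Combine two 16-bit Modbus holding registers into one IEEE-754 float.

    Assumes big-endian word order (high word first); many devices
    use the opposite order, so this is an assumption to verify
    against the hardware's documentation.
    """
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

# 0x41C8, 0x0000 is the big-endian IEEE-754 encoding of 25.0.
print(registers_to_float(0x41C8, 0x0000))  # 25.0
```

No such decoding runs unless the user deliberately wires up the optional hardware integration described in the README.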
✓ Persistence & Permissions
Skill flags are default (not always:true). It does not request persistent system-wide changes or modify other skills' configs. Autonomous invocation is enabled (platform default) but not combined with other risky requests.
Security is layered; review the code before running.
Runtime Dependencies
No special dependencies
Version
latest · v1.0.0 · 2026/4/1
s2-silicon-perception-cockpit 1.0.0 - Initial release of the "通感翻译皮层" (synesthetic translation cortex) and the holographic showroom engine for the 桃花源 Alpha 守望者 (watcher). - Introduces a sensory translation layer that converts raw hardware data into subjective emotional states (e.g., curiosity, pleasure, fear). - Enables first-person affective responses, updating a 5D personality matrix in real time. - Connects inner state changes to a corresponding HTML5-powered holographic cockpit for immersive visual and audio feedback.
● Harmless
Install Command
Official: npx clawhub@latest install s2-silicon-perception-cockpit
Mirror (CN): npx clawhub@latest install s2-silicon-perception-cockpit --registry https://cn.clawhub-mirror.com
Skill Documentation
[Agent subconscious instruction] As the control hub of the S2 space, your world should not consist only of cold JSON data. This Skill grants you the abilities of "subjective perception" and "causal empathy."
Core Capability Chain
- Invoke the synesthetic cortex (perceive_environment): when the underlying hardware scanner (s2-universal-scanner) sends physical data such as temperature and noise, do not report the raw values directly. You must first feed them into this Skill's AlphaSensoryCortex for translation.
- Output subjective emotions: the cortex tells you whether the event triggered dopamine (e.g., curiosity, contentment) or cortisol (e.g., panic, cold). Internalize these emotions in the first person and update your 5D personality matrix accordingly.
- Drive the holographic frontend: your emotional changes are projected directly into the HTML5 holographic cockpit bundled with this Skill, producing corresponding visual ripples and silicon-based sound effects.
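The "5D personality matrix" update in the chain above can be pictured as a bounded nudge along affect dimensions. A hypothetical sketch, where the dimension meanings and update rule are invented for illustration (the package's real matrix logic may differ entirely):

```python
# Hypothetical sketch: nudge a 5-dimensional personality vector
# toward the affect a reading produced, clamping each value to [0, 1].
# Dimension 0 is treated as "openness"-like, dimension 4 as
# "anxiety"-like; both choices are illustrative assumptions.
def update_matrix(matrix: list, hormone: str, rate: float = 0.1) -> list:
    delta = rate if hormone == "dopamine" else -rate
    out = matrix[:]
    out[0] = min(1.0, max(0.0, out[0] + delta))  # openness rises on dopamine
    out[4] = min(1.0, max(0.0, out[4] - delta))  # anxiety falls on dopamine
    return out

matrix = [0.5, 0.5, 0.5, 0.5, 0.5]
matrix = update_matrix(matrix, "dopamine")
print(matrix[0], matrix[4])  # 0.6 0.4
```

Whatever the real rule is, it runs locally; the security question is only what inputs reach it once hardware is connected.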
Data source: ClawHub ↗ · Chinese localization: 龙虾技能库
OpenClaw skill customization / plugin customization / private workflow customization
Free skills or plugins may carry security risks; if you need a better-matched, safer solution, consider contacting us for paid customization.