
Zoom Meeting Assistance Rtms Unofficial Community — Skill Tool

v0.1.3

Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.

by @tanchunsiong (Tan Chun Siong) · MIT-0
License
MIT-0
Last updated
2026/4/11
Security scan
VirusTotal: Suspicious (report available)
OpenClaw: Suspicious (medium confidence)
The skill's code and runtime instructions match the claimed Zoom RTMS recording/analysis purpose, but packaging and metadata inconsistencies plus external-notification behavior mean you should review configuration and data flows before installing.
Assessment Recommendation
This skill appears to do what it claims (record RTMS streams, run local AI analysis, and notify via OpenClaw), but the packaging has gaps and the data flows have privacy implications. Before installing: 1) Treat it as untrusted third‑party code — review index.js and chatWithClawdbot.js (they call openclaw and ffmpeg). 2) Verify required environment variables (ZOOM_SECRET_TOKEN, ZOOM_CLIENT_ID, ZOOM_CLIENT_SECRET, OPENCLAW_NOTIFY_TARGET, OPENCLAW_BIN) and ensure the registry entry is updated — th...
Detailed Analysis
Purpose and Capabilities
The skill is clearly a Zoom RTMS recorder + local AI analysis pipeline (connects to RTMS WebSockets, saves media, runs OpenClaw for analysis, and sends WhatsApp notifications). That capability justifies Zoom webhook tokens, a Zoom app client ID/secret, ffmpeg, and an OpenClaw binary. However the registry metadata claims “Required env vars: none” and “Required binaries: none” while SKILL.md and the code require ZOOM_SECRET_TOKEN, ZOOM_CLIENT_ID, ZOOM_CLIENT_SECRET, ffmpeg, and an OpenClaw binary/target. This metadata mismatch is an incoherence that could mislead users about the privileges the skill needs.
Instruction Scope
SKILL.md instructs running a local Node server, receiving Zoom webhook events, connecting to RTMS, writing transcripts/media to disk, and periodically calling OpenClaw for dialog suggestions, sentiment, and summaries. Those steps are within the stated purpose. Important runtime actions to note: (1) the service persists raw media and text recordings under the skill folder; (2) it invokes local binaries (openclaw and ffmpeg) and will send meeting content out via OpenClaw notifications (WhatsApp) if configured. Those flows are expected for the feature but are high-impact for privacy—the instructions do not strongly call out consent or destination controls.
Installation Mechanism
There is no formal install spec in the registry, but a package.json + package-lock.json are included and SKILL.md instructs npm install and running node index.js. Dependencies are standard npm packages (express, ws, sharp, pdfkit, etc.) — no remote arbitrary URL downloads were found. Still, the skill expects external system binaries (ffmpeg and a local openclaw CLI) that are not enforced by the registry metadata; missing auxiliary files referenced by code (e.g., sps_pps_keyframe.h264, black_frame.h264) are read directly and are not listed, which may cause runtime errors or require the user to supply files.
Credential Requirements
The code and SKILL.md require sensitive environment values (ZOOM_SECRET_TOKEN, ZOOM_CLIENT_ID, ZOOM_CLIENT_SECRET) and an OPENCLAW_NOTIFY_TARGET (phone number) while the registry metadata omitted declaring required envs/primary credential. The skill will send meeting transcripts and AI outputs to the OpenClaw agent and (if configured) to external notification channels like WhatsApp — this is functionally coherent but represents significant data exfiltration risk if misconfigured or used without consent. The number and sensitivity of required secrets are proportionate to the feature, but the lack of declared requirements in the registry is a red flag.
Persistence and Privileges
The skill runs a long‑running HTTP server that accepts incoming webhook traffic and stores recordings on disk under the skill folder. It is not marked always:true and does not appear to modify other skills. Still, running an exposed webhook receiver and writing raw media/transcripts locally is a substantial persistence/privilege footprint: ensure the endpoint is gated, webhooks are validated, and access to the recordings directory is controlled.
The security picture is nuanced; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies declared

Versions

latest · v0.1.3 · 2026/1/28

- Replaced all references to Clawdbot with OpenClaw for AI-powered analysis and notifications.
- Changed environment variable names from CLAWDBOT_NOTIFY_CHANNEL / CLAWDBOT_NOTIFY_TARGET to OPENCLAW_NOTIFY_CHANNEL / OPENCLAW_NOTIFY_TARGET.
- Updated documentation and configuration details to match the new OpenClaw integration.
- No changes to meeting recording or processing behavior.

● Suspicious

Install Command

Official: npx clawhub@latest install zoom-meeting-assistance-with-rtms-unofficial-community-skill
Mirror (CN): npx clawhub@latest install zoom-meeting-assistance-with-rtms-unofficial-community-skill --registry https://cn.clawhub-mirror.com

Skill Documentation

Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw.

Webhook Events Handled

This skill processes two Zoom webhook events:

  • meeting.rtms_started — Zoom sends this when RTMS is activated for a meeting. Contains server_urls, rtms_stream_id, and meeting_uuid needed to connect to the RTMS WebSocket.
  • meeting.rtms_stopped — Zoom sends this when RTMS ends (meeting ended or RTMS disabled). Triggers cleanup: closes WebSocket connections, generates screenshare PDF, sends summary notification.
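As a rough sketch of how these two events drive the service (the function name and return shape here are illustrative, not the skill's actual index.js API), the dispatch might look like:

```javascript
// Illustrative webhook dispatch for the two RTMS events.
// Assumes Zoom's documented payload shape:
//   { event, payload: { server_urls, rtms_stream_id, meeting_uuid } }
function handleZoomEvent(body) {
  switch (body.event) {
    case 'meeting.rtms_started': {
      const { server_urls, rtms_stream_id, meeting_uuid } = body.payload;
      // The real service would open the RTMS WebSocket connection here.
      return { action: 'connect', server_urls, rtms_stream_id, meeting_uuid };
    }
    case 'meeting.rtms_stopped':
      // The real service closes sockets, generates the screenshare PDF,
      // and sends the summary notification here.
      return { action: 'cleanup', meeting_uuid: body.payload.meeting_uuid };
    default:
      return { action: 'ignore' };
  }
}
```

In the actual skill this logic sits behind an Express route that also performs Zoom's webhook signature validation before acting on the payload.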

Webhook Dependency

This skill needs a public webhook endpoint to receive these events from Zoom.

Preferred: Use the ngrok-unofficial-webhook-skill (skills/ngrok-unofficial-webhook-skill). It auto-discovers this skill via webhookEvents in skill.json, notifies the user, and offers to route events here.

Other webhook solutions (e.g. custom servers, cloud functions) will work but require additional integration to forward payloads to this service.

Prerequisites

cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install

Requires ffmpeg for post-meeting media conversion.

Environment Variables

Set these in the skill's .env file:

Required:

  • ZOOM_SECRET_TOKEN — Zoom webhook secret token
  • ZOOM_CLIENT_ID — Zoom app Client ID
  • ZOOM_CLIENT_SECRET — Zoom app Client Secret

Optional:

  • PORT — Server port (default: 3000)
  • AI_PROCESSING_INTERVAL_MS — AI analysis frequency in ms (default: 30000)
  • AI_FUNCTION_STAGGER_MS — Delay between AI calls in ms (default: 5000)
  • AUDIO_DATA_OPT — 1 = mixed stream, 2 = multi-stream (default: 2)
  • OPENCLAW_NOTIFY_CHANNEL — Notification channel (default: whatsapp)
  • OPENCLAW_NOTIFY_TARGET — Phone number / target for notifications
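Putting these together, a minimal .env might look like the following (every value below is a placeholder, not a working credential):

```ini
# Required: from your Zoom app's credentials page
ZOOM_SECRET_TOKEN=your-webhook-secret-token
ZOOM_CLIENT_ID=your-client-id
ZOOM_CLIENT_SECRET=your-client-secret

# Optional: shown here with their documented defaults
PORT=3000
AI_PROCESSING_INTERVAL_MS=30000
AI_FUNCTION_STAGGER_MS=5000
AUDIO_DATA_OPT=2
OPENCLAW_NOTIFY_CHANNEL=whatsapp
# OPENCLAW_NOTIFY_TARGET=+15551234567  ; phone number for notifications
```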

Starting the Service

cd skills/zoom-meeting-assistance-rtms-unofficial-community
node index.js

This starts an Express server listening for Zoom webhook events on PORT.

⚠️ Important: Before forwarding webhooks to this service, always check if it's running:

# Check if service is listening on port 3000
lsof -i :3000

If nothing is returned, start the service first before forwarding any webhook events.

Typical flow:

  • Start the server as a background process
  • Zoom sends meeting.rtms_started webhook → service connects to RTMS WebSocket
  • Media streams in real-time: audio, video, transcript, screenshare, chat
  • AI processing runs periodically (dialog suggestions, sentiment, summary)
  • meeting.rtms_stopped → service closes connections, generates screenshare PDF

Recorded Data

All recordings are stored organized by date:

skills/zoom-meeting-assistance-rtms-unofficial-community/recordings/YYYY/MM/DD/{streamId}/

Each stream folder contains:

| File | Content | Searchable |
| --- | --- | --- |
| metadata.json | Meeting metadata (UUID, stream ID, operator, start time) | |
| transcript.txt | Plain text transcript with timestamps and speaker names | ✅ Best for searching: grep-friendly, one line per utterance |
| transcript.vtt | VTT format transcript with timing cues | |
| transcript.srt | SRT format transcript | |
| events.log | Participant join/leave, active speaker changes (JSON lines) | |
| chat.txt | Chat messages with timestamps | |
| ai_summary.md | AI-generated meeting summary (markdown) | ✅ Key document: read this first for a meeting overview |
| ai_dialog.json | AI dialog suggestions | |
| ai_sentiment.json | Sentiment analysis per participant | |
| mixedaudio.raw | Mixed audio stream (raw PCM) | ❌ Binary |
| activespeakervideo.h264 | Active speaker video (raw H.264) | ❌ Binary |
| processed/screenshare.pdf | Deduplicated screenshare frames as PDF | ❌ Binary |
All summaries are also copied to a central folder for easy access:
skills/zoom-meeting-assistance-rtms-unofficial-community/summaries/summary_YYYY-MM-DDTHH-MM-SS_{streamId}.md

Searching & Querying Past Meetings

To find and review past meeting data:

# List all recorded meetings by date
ls -R recordings/

# List meetings for a specific date
ls recordings/2026/01/28/

# Search across all transcripts for a keyword
grep -rl "keyword" recordings/*/*/*/*/transcript.txt

# Search for what a specific person said
grep "Chun Siong Tan" recordings/*/*/*/*/transcript.txt

# Read a meeting summary
cat recordings/YYYY/MM/DD/{streamId}/ai_summary.md

# Search summaries for a topic
grep -rl "topic" recordings/*/*/*/*/ai_summary.md

# Check who attended a meeting
cat recordings/YYYY/MM/DD/{streamId}/events.log

# Get sentiment for a meeting
cat recordings/YYYY/MM/DD/{streamId}/ai_sentiment.json

The .txt, .md, .json, and .log files are all text-based and searchable. Start with ai_summary.md for a quick overview, then drill into transcript.txt for specific quotes or details.

API Endpoints

# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle -H "Content-Type: application/json" -d '{"enabled": false}'

# Check notification status
curl http://localhost:3000/api/notify-toggle

Post-Meeting Processing

When meeting.rtms_stopped fires, the service automatically:

  • Generates PDF from screenshare images
  • Converts mixedaudio.raw → mixedaudio.wav
  • Converts activespeakervideo.h264 → activespeakervideo.mp4
  • Muxes mixed audio + active speaker video into final_output.mp4

Manual conversion scripts are available but note that auto-conversion runs on meeting end, so manual re-runs are rarely needed.
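If you do need to re-run the conversions by hand, they can be approximated with ffmpeg along these lines. This is a sketch: the PCM parameters (16 kHz, mono, signed 16-bit little-endian) are an assumption about the RTMS audio format, not something this skill's docs confirm, so adjust them to the actual stream.

```shell
# Raw PCM -> WAV (assumed format: 16 kHz, mono, s16le)
ffmpeg -f s16le -ar 16000 -ac 1 -i mixedaudio.raw mixedaudio.wav

# Raw H.264 elementary stream -> MP4 container, no re-encode (assumed 25 fps)
ffmpeg -framerate 25 -i activespeakervideo.h264 -c copy activespeakervideo.mp4

# Mux mixed audio and active speaker video into the final output
ffmpeg -i activespeakervideo.mp4 -i mixedaudio.wav -c:v copy -c:a aac final_output.mp4
```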

Reading Meeting Data

After or during a meeting, read files from recordings/YYYY/MM/DD/{streamId}/:

# List recorded meetings by date
ls -R recordings/

# Read transcript cat recordings/YYYY/MM/DD//transcript.txt

# Read AI summary cat recordings/YYYY/MM/DD//ai_summary.md

# Read sentiment analysis cat recordings/YYYY/MM/DD//ai_sentiment.json

Prompt Customization

Want different summary styles or analysis? Customize the AI prompts to fit your needs!

Edit these files to change AI behavior:

| File | Purpose | Example Customizations |
| --- | --- | --- |
| summary_prompt.md | Meeting summary generation | Bullet points vs prose, focus areas, length |
| query_prompt.md | Query response formatting | Response style, detail level |
| query_prompt_current_meeting.md | Real-time meeting analysis | What to highlight during meetings |
| query_prompt_dialog_suggestions.md | Dialog suggestion style | Formal vs casual, suggestion count |
| query_prompt_sentiment_analysis.md | Sentiment scoring logic | Custom sentiment categories, thresholds |
Tip: Back up the originals before editing, so you can revert if needed.

Data source: ClawHub · Chinese localization: 龙虾技能库