Automated daily memory backfill for OpenClaw sessions

v1.0.1

Scrape and analyze OpenClaw JSONL session transcripts to reconstruct and backfill agent memory files. Use when: (1) memory appears incomplete after model switches, (2) verifying memory coverage, (3) reconstructing lost memory, (4) automating daily memory sync via cron/heartbeat. Supports simple extraction and LLM-based narrative summaries with automatic secret sanitization.
Memory Sync

Tool for maintaining agent memory continuity across model switches with automatic secret sanitization.

Installation
Requires Python 3.11+ and click:

pip install click

# Optional: for direct API summarization (only if not using the OpenClaw backend)
pip install openai
Quick Start

# Run directly from the skill directory
python ~/.OpenClaw/skills/memory-sync/memory_sync.py compare

# Or create an alias for convenience
alias memory-sync="python ~/.OpenClaw/skills/memory-sync/memory_sync.py"

# Check for gaps
memory-sync compare

# Backfill today's memory (simple extraction - fast, no LLM)
memory-sync backfill --today

# Backfill with LLM narrative (uses OpenClaw's native model - no API key needed)
memory-sync backfill --today --summarize

# Backfill all missing
memory-sync backfill --all
Commands

compare                       Find gaps between session transcripts and memory files
backfill --today              Generate memory for the current day
backfill --since YYYY-MM-DD   Backfill from date to present
backfill --all                Backfill all missing dates
backfill --incremental        Backfill only dates changed since the last run
extract                       Extract conversations matching criteria
summarize --date YYYY-MM-DD   Generate an LLM summary for a single day
transitions                   List model transitions
verify                        Check memory files for consistency issues
stats                         Show coverage statistics

Simple Extraction vs LLM Summarization
The backfill command supports two modes:
Simple extraction (default, without --summarize):

- Fast, no LLM or API calls needed
- Extracts topics via keyword frequency analysis
- Identifies key user questions and assistant responses
- Detects decision markers from text patterns
- Produces structured output with Topics, Key Exchanges, and Decisions sections
- With --preserve: hand-written content is appended to the end of the new file
- Best for: quick backfills, initial setup, systems without LLM access
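The keyword-frequency approach can be sketched roughly as follows. This is a minimal illustration, not the skill's actual implementation; the stopword list, length cutoff, and decision markers are assumptions:

```python
import re
from collections import Counter

# Assumed minimal stopword list for illustration
STOPWORDS = {"this", "that", "with", "from", "have", "were", "your", "about"}

def extract_topics(messages, top_n=5):
    """Rank candidate topics by keyword frequency across a day's messages."""
    words = []
    for msg in messages:
        for w in re.findall(r"[a-z]{4,}", msg.lower()):
            if w not in STOPWORDS:
                words.append(w)
    return [word for word, _ in Counter(words).most_common(top_n)]

def find_decisions(messages):
    """Flag messages containing simple decision markers."""
    markers = re.compile(r"\b(decided|let's go with|we'll use|agreed)\b", re.I)
    return [m for m in messages if markers.search(m)]
```

For example, `extract_topics(["Refactor the backfill pipeline", "backfill pipeline tests pass"])` ranks "backfill" and "pipeline" at the top, since each appears twice.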
LLM Summarization (with --summarize) - Recommended:
- Uses an LLM to generate narrative summaries
- Produces coherent 2-4 paragraph prose
- Better context and insight extraction
- With --preserve: existing content is passed to the LLM with instructions to incorporate it into the new summary, maintaining temporal order and thematic structure
- Best for: daily automation, high-quality memory files
Recommended for regular use:
# Best quality: LLM summary that incorporates any existing notes
memory-sync backfill --today --summarize --preserve

Both modes automatically sanitize secrets before writing.
Common Workflows

Initial Setup

# Check what's missing
memory-sync compare

# Backfill everything (may take time)
memory-sync backfill --all
Nightly Automation (Recommended)

# Best: LLM summary that incorporates any existing notes
memory-sync backfill --today --summarize --preserve

# Smart: process only days changed since the last run
memory-sync backfill --incremental --summarize --preserve

# Or use a specific backend if preferred
memory-sync backfill --today --summarize --preserve --summarize-backend anthropic

Catch-Up After Gaps

# Backfill from last week to present
memory-sync backfill --since 2026-01-28 --summarize

Regenerate with Preserved Content

# Keep hand-written notes when regenerating
memory-sync backfill --date 2026-02-05 --force --preserve --summarize
Secret Sanitization
All content is automatically sanitized to prevent secret leakage:

- 30+ explicit patterns: OpenAI, Anthropic, GitHub, AWS, Stripe, Discord, Slack, Notion, Google, Brave, Tavily, SerpAPI, etc.
- Structural detection: JWT tokens, SSH keys, database connection strings, high-entropy base64
- Generic patterns: API keys, tokens, passwords, environment variables
- Defense-in-depth: secrets are redacted at every stage (extraction, LLM processing, file writes, CLI display)
Secrets are replaced with [REDACTED-TYPE] placeholders.
See SECRET_PATTERNS.md for the complete pattern list.
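In outline, the layered sanitization might combine explicit regexes with a structural entropy check, along these lines. The patterns and threshold below are illustrative only; the skill's real patterns live in SECRET_PATTERNS.md:

```python
import math
import re

# Illustrative explicit patterns; the real list has 30+ entries
EXPLICIT_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED-OPENAI-KEY]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED-GITHUB-TOKEN]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED-AWS-KEY]"),
]

def shannon_entropy(s: str) -> float:
    """Bits per character; random base64 scores near 6, English prose near 4."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def sanitize(text: str, entropy_threshold: float = 4.5) -> str:
    # Pass 1: explicit provider-specific patterns
    for pattern, placeholder in EXPLICIT_PATTERNS:
        text = pattern.sub(placeholder, text)

    # Pass 2: structural detection of long, high-entropy base64-like runs
    def maybe_redact(m: re.Match) -> str:
        token = m.group(0)
        if shannon_entropy(token) > entropy_threshold:
            return "[REDACTED-HIGH-ENTROPY]"
        return token

    return re.sub(r"[A-Za-z0-9+/=]{32,}", maybe_redact, text)
```

Running the sanitizer at every stage (extraction, LLM input, file write, display) is what gives the defense-in-depth property: a secret missed at one layer is still caught at the next.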
Summarization Backends
The --summarize flag supports multiple backends via --summarize-backend:
Backend             API Key Required    Description
openclaw (default)  No                  Uses OpenClaw's sessions spawn with your configured model
anthropic           ANTHROPIC_API_KEY   Direct Anthropic API via the openai package
openai              OPENAI_API_KEY      Direct OpenAI API via the openai package

Examples

# Default: use OpenClaw's native model (no API key needed)
memory-sync backfill --today --summarize

# Explicit backend selection
memory-sync backfill --today --summarize --summarize-backend openclaw
memory-sync backfill --today --summarize --summarize-backend anthropic
memory-sync backfill --today --summarize --summarize-backend openai

# Override the model for any backend
memory-sync backfill --today --summarize --model claude-sonnet-4-20250514
memory-sync backfill --today --summarize --summarize-backend openai --model gpt-4o
The OpenClaw backend is recommended as it:
- Uses your existing OpenClaw configuration
- Requires no separate API keys
- Leverages whatever model you have configured in OpenClaw

Automated Usage

Nightly Cron (3am)
Process today with LLM summary, preserving any existing notes:
0 3 * * * cd ~/.OpenClaw/skills/memory-sync && python memory_sync.py backfill --today --summarize --preserve