HedgehogMemory install command: openclaw skills install hedgehog-memory

Skill Documentation
HedgehogMemory
HedgehogMemory gives AI agents infinite persistent memory using a radial compression architecture. Memory is organized as Lines of Nodes — each Node stores the same content at 5 abstraction levels (L0–L4). The L0 one-liner of every node is always loaded at session start (~200 tokens total), so the agent always knows what it knows.

Key guarantee: Memory is NEVER deleted. Old context is only compressed into smaller abstractions. The verbatim original is always recoverable at L4.
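The node-and-levels structure described above can be sketched conceptually. This is an illustrative model only, not the library's actual internals; simple truncation stands in for a real summarizer:

```python
from dataclasses import dataclass, field

# Illustrative sketch (NOT HedgehogMemory's real data model):
# one Node keeps the same content at five abstraction levels, L0..L4.
LEVEL_LIMITS = {0: 80, 1: 200, 2: 600, 3: 1800}  # L4 is unlimited

@dataclass
class Node:
    levels: dict = field(default_factory=dict)  # level -> text

    @classmethod
    def from_original(cls, text: str) -> "Node":
        node = cls()
        node.levels[4] = text  # verbatim original, never deleted
        for lvl, limit in LEVEL_LIMITS.items():
            # truncation is a stand-in for a real summarizer backend
            node.levels[lvl] = text[:limit]
        return node

# A session transcript compresses into progressively smaller levels,
# but L4 always retains the full original.
node = Node.from_original("Python asyncio session: fixed a deadlock in gather() ..." * 50)
```

Loading all L0 one-liners at session start is cheap precisely because each is capped at 80 characters, while L4 preserves everything.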
Installation

pip install hedgehog-memory
pip install "hedgehog-memory[openai]"  # with OpenAI summarizer (recommended)

Abstraction Levels

Level  Max length   Use case
L0     80 chars     One-liner, always in context
L1     200 chars    Navigation preview
L2     600 chars    Detailed summary
L3     1800 chars   Full context summary
L4     unlimited    Verbatim original

Quick Start

from radial_memory import ContextWindowManager
import os

mgr = ContextWindowManager(
    base_path=os.environ.get("HEDGEHOG_MEMORY_PATH", "./memory_store")
)

# SESSION START: get the ~200-token origin overview (all L0 summaries)
overview = mgr.reset()
print(overview)  # inject this into your system prompt

# LOAD: find relevant past context by query
result = mgr.load("Python async patterns")
if result.found:
    print(result.content)            # L1 summary by default
    result = result.drill_deeper()   # go to L2
    full = result.load_full_state()  # get verbatim original (L4)

# COMMIT: save the current session to memory
mgr.commit(
    topic="Async Python debugging session",
    full_context="Complete session transcript goes here...",
    tags=["python", "async", "debugging"]
)
With OpenAI Summarizer (recommended for quality)

from radial_memory import ContextWindowManager
from radial_memory.summarizer import OpenAISummarizer
import os

summarizer = OpenAISummarizer(
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4o-mini"
)
mgr = ContextWindowManager(
    base_path=os.environ.get("HEDGEHOG_MEMORY_PATH", "./memory_store"),
    summarizer=summarizer
)
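Because the summarizer is pluggable, a custom backend can be swapped in. The sketch below assumes a minimal interface of a single summarize(text, max_length) method; the exact interface ContextWindowManager expects is an assumption here, so check the project docs before relying on it:

```python
# Hypothetical custom summarizer backend (interface assumed, not taken
# from the library docs): condense text to at most max_length characters.
class TruncatingSummarizer:
    def summarize(self, text: str, max_length: int) -> str:
        # Collapse whitespace, then hard-truncate with an ellipsis.
        text = " ".join(text.split())
        if len(text) <= max_length:
            return text
        return text[: max_length - 1].rstrip() + "…"
```

If the constructor parameter works as shown above, such a backend would be passed the same way: ContextWindowManager(..., summarizer=TruncatingSummarizer()).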
Agent Workflow Pattern

Apply this pattern every session:

# 1. SESSION START
overview = mgr.reset()
# overview = all L0 one-liners for every stored node (~200 tokens)
# Inject overview into your system prompt / context window

# 2. QUERY - find relevant past context
result = mgr.load(query=user_request)
if result.found:
    context = result.content             # L1 summary, ~200 chars
    # Need more detail?
    result = result.drill_deeper()       # L2, ~600 chars
    result = result.drill_deeper()       # L3, ~1800 chars
    full = result.load_full_state()      # L4, verbatim original

# 3. WORK - perform the task with full context available

# 4. COMMIT - persist the session to memory
mgr.commit(
    topic="Brief description of this session",
    full_context=full_session_transcript,
    tags=["topic1", "topic2"]
)
Status Report

report = mgr.status_report()
# Returns: total lines, total nodes, last commit timestamp
print(report)
Design Principles

Never deletes — only compresses. The verbatim original is always recoverable.
Origin always in context — L0 summaries of all nodes load at session start (~200 tokens).
Radial navigation — a query finds the most relevant node by keyword overlap, then drill deeper on demand.
Pluggable summarizer — swap OpenAI / LiteLLM / custom backends without changing your workflow code.
Zero mandatory dependencies — pure Python stdlib. Works out of the box with KeywordSummarizer.
Single-file storage — all memory in one origin.json. Atomic writes, no corruption.

Source & Docs

GitHub: https://github.com/vvxer/HedgehogMemory
PyPI: pip install hedgehog-memory
Issues: https://github.com/vvxer/HedgehogMemory/issues
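The "atomic writes, no corruption" claim in the design principles can be illustrated with the standard write-to-temp-then-rename pattern. This is a hypothetical helper, not HedgehogMemory's actual code; only the origin.json filename comes from the documentation:

```python
import json
import os
import tempfile

# Sketch of an atomic single-file save (illustrative; the real library's
# implementation may differ). A crash mid-write leaves the old file intact.
def atomic_save(memory: dict, path: str = "origin.json") -> None:
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(memory, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, path)     # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise
```

Writing to a temporary file in the same directory and then renaming means readers only ever see either the complete old file or the complete new one, never a half-written origin.json.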