Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
First-Use Initialisation
Before logging anything, ensure the .learnings/ directory and files exist in the project or workspace root. If any are missing, create them:
mkdir -p .learnings
[ -f .learnings/LEARNINGS.md ] || printf "# Learnings\n\nCorrections, insights, and knowledge gaps captured during development.\n\nCategories: correction | insight | knowledge_gap | best_practice\n\n---\n" > .learnings/LEARNINGS.md
[ -f .learnings/ERRORS.md ] || printf "# Errors\n\nCommand failures and integration errors.\n\n---\n" > .learnings/ERRORS.md
[ -f .learnings/FEATURE_REQUESTS.md ] || printf "# Feature Requests\n\nCapabilities requested by the user.\n\n---\n" > .learnings/FEATURE_REQUESTS.md
Never overwrite existing files. This is a no-op if .learnings/ is already initialised.
Do not log secrets, tokens, private keys, environment variables, or full source/config files unless the user explicitly asks for that level of detail. Prefer short summaries or redacted excerpts over raw command output or full transcripts.
If you want automatic reminders or setup assistance, use the opt-in hook workflow described in Hook Integration.
Quick Reference
| Situation | Action |
|---|---|
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Simplify/Harden recurring patterns | Log/update .learnings/LEARNINGS.md with Source: simplify-and-harden and a stable Pattern-Key |
| Similar to existing entry | Link with See Also, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (OpenClaw workspace) |
| Tool gotchas | Promote to TOOLS.md (OpenClaw workspace) |
| Behavioral patterns | Promote to SOUL.md (OpenClaw workspace) |
OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
Installation
Via ClawdHub (recommended):
clawdhub install self-improving-agent
Manual:
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
Remade for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills (skill source: https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement)
Workspace Structure
OpenClaw injects these files into every session:
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
Create Learning Files
mkdir -p ~/.openclaw/workspace/.learnings
Then create the log files (or copy from assets/):
LEARNINGS.md — corrections, knowledge gaps, best practices
ERRORS.md — command failures, exceptions
FEATURE_REQUESTS.md — user-requested capabilities
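The workspace files can be created with the same non-clobbering guard used for project-level setup. A minimal sketch — the `OPENCLAW_WORKSPACE` override is an assumption for testability, not a documented OpenClaw variable:

```shell
# Initialise the OpenClaw workspace learning logs without overwriting existing files.
# OPENCLAW_WORKSPACE is a hypothetical override; OpenClaw's default path is used otherwise.
WORKSPACE="${OPENCLAW_WORKSPACE:-$HOME/.openclaw/workspace}"
mkdir -p "$WORKSPACE/.learnings"
[ -f "$WORKSPACE/.learnings/LEARNINGS.md" ] || printf "# Learnings\n\n---\n" > "$WORKSPACE/.learnings/LEARNINGS.md"
[ -f "$WORKSPACE/.learnings/ERRORS.md" ] || printf "# Errors\n\n---\n" > "$WORKSPACE/.learnings/ERRORS.md"
[ -f "$WORKSPACE/.learnings/FEATURE_REQUESTS.md" ] || printf "# Feature Requests\n\n---\n" > "$WORKSPACE/.learnings/FEATURE_REQUESTS.md"
```

Re-running this is a no-op, matching the never-overwrite rule above.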
Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---|---|---|
| Behavioral patterns | SOUL.md | "Be concise, avoid disclaimers" |
| Workflow improvements | AGENTS.md | "Spawn sub-agents for long tasks" |
| Tool gotchas | TOOLS.md | "Git push needs auth configured first" |
Inter-Session Communication
OpenClaw provides tools to share learnings across sessions:
- sessions_list — View active/recent sessions
- sessions_history — Read another session's transcript
- sessions_send — Send a learning to another session
- sessions_spawn — Spawn a sub-agent for background work
Use these only in trusted environments and only when the user explicitly wants cross-session sharing. Prefer sending a short sanitized summary and relevant file paths, not raw transcripts, secrets, or full command output.
Optional: Enable Hook
For automatic reminders at session start:
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement

# Enable it
openclaw hooks enable self-improvement
See references/openclaw-integration.md for complete details.
Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create .learnings/ in the project or workspace root:
mkdir -p .learnings
Create the files inline using the headers shown above. Avoid reading templates from the current repo or workspace unless you explicitly trust that path.
Add a reference in your agent files (AGENTS.md, CLAUDE.md, or .github/copilot-instructions.md) reminding yourself to log learnings. (This is an alternative to hook-based reminders.)
Self-Improvement Workflow
When errors or corrections occur:
- Log to .learnings/ERRORS.md, LEARNINGS.md, or FEATURE_REQUESTS.md
- Review and promote broadly applicable learnings to:
  - CLAUDE.md - project facts and conventions
  - AGENTS.md - workflows and automation
  - .github/copilot-instructions.md - Copilot context
Logging Format
Learning Entry
Append to .learnings/LEARNINGS.md:
## [LRN-YYYYMMDD-XXX] category
Logged: ISO-8601 timestamp
Priority: low | medium | high | critical
Status: pending
Area: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)
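Appending an entry is a plain append to the log file. A minimal sketch with an illustrative filled-in entry — the ID, timestamp, and related file path are made up:

```shell
# Ensure the directory exists (no-op if already initialised), then append one entry.
mkdir -p .learnings
cat >> .learnings/LEARNINGS.md <<'EOF'

## [LRN-20250115-001] correction
Logged: 2025-01-15T14:30:00Z
Priority: medium
Status: pending
Area: backend

### Summary
API client must be regenerated after endpoint changes

### Metadata
- Source: user_feedback
- Related Files: src/api/client.ts
- Tags: api, codegen
EOF
```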
Error Entry
Append to .learnings/ERRORS.md:
## [ERR-YYYYMMDD-XXX] skill_or_command_name
Logged: ISO-8601 timestamp
Priority: high
Status: pending
Area: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant
- Summary or redacted excerpt of relevant output (avoid full transcripts and secret-bearing data by default)

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
Feature Request Entry
Append to .learnings/FEATURE_REQUESTS.md:
## [FEAT-YYYYMMDD-XXX] capability_name
Logged: ISO-8601 timestamp
Priority: medium
Status: pending
Area: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
ID Generation
Format: TYPE-YYYYMMDD-XXX
- TYPE: LRN (learning), ERR (error), FEAT (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., 001, A7B)

Examples: LRN-20250115-001, ERR-20250115-A3F, FEAT-20250115-002
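ID generation can be scripted. A sketch that builds the TYPE-YYYYMMDD-XXX format with a random 3-character hex suffix (the helper name `gen_id` is illustrative):

```shell
# Build an ID like LRN-20250115-A3F: type prefix, current date, random hex suffix.
gen_id() {
  # od reads 2 random bytes as 4 hex chars; keep 3 and uppercase them.
  suffix=$(od -An -N2 -tx1 /dev/urandom | tr -d ' \n' | cut -c1-3 | tr 'a-f' 'A-F')
  printf '%s-%s-%s\n' "$1" "$(date +%Y%m%d)" "$suffix"
}

gen_id LRN   # e.g. LRN-20250115-A3F
```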
Resolving Entries
When an issue is fixed, update the entry:
- Change Status: pending → Status: resolved
- Add resolution block after Metadata:

### Resolution
- Resolved: 2025-01-16T09:00:00Z
- Commit/PR: abc123 or #42
- Notes: Brief description of what was done

Other status values:
- in_progress - Actively being worked on
- wont_fix - Decided not to address (add reason in Resolution notes)
- promoted - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
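The status flip can be scripted. A sketch using GNU sed (BSD/macOS sed needs `-i ''`), demonstrated on a throwaway file so nothing real is touched — the entry IDs are illustrative:

```shell
# Mark exactly one entry resolved by restricting the substitution to the
# section that starts at the given ID and ends at the next "## [" header.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
## [ERR-20250115-A3F] deploy_script
Status: pending

## [ERR-20250116-001] lint
Status: pending
EOF

ENTRY_ID="ERR-20250115-A3F"
sed -i "/\[$ENTRY_ID\]/,/^## \[/ s/^Status: pending$/Status: resolved/" "$FILE"

grep -c "Status: resolved" "$FILE"   # → 1
```

Point FILE at .learnings/ERRORS.md for real use; the address range keeps later entries untouched.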
Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
When to Promote
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
Promotion Targets
| Target | What Belongs There |
|---|---|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
How to Promote
- Distill the learning into a concise rule or fact
- Add to appropriate section in target file (create file if needed)
- Update original entry:
  - Change Status: pending → Status: promoted
  - Add Promoted: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

Promotion Examples
Learning (verbose):
Project uses pnpm workspaces. Attempted npm install but failed.
Lock file is pnpm-lock.yaml. Must use pnpm install.
In CLAUDE.md (concise):
## Build & Dependencies
- Package manager: pnpm (not npm) - use pnpm install
Learning (verbose):
When modifying API endpoints, must regenerate TypeScript client.
Forgetting this causes type mismatches at runtime.
In AGENTS.md (actionable):
## After API Changes
- Regenerate client: pnpm run generate:api
- Check for type errors: pnpm tsc --noEmit
Recurring Pattern Detection
If logging something similar to an existing entry:
- Search first: grep -r "keyword" .learnings/
- Link entries: Add See Also: ERR-20250110-001 in Metadata
- Bump priority if issue keeps recurring
- Consider systemic fix: Recurring issues often indicate:
  - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
  - Missing automation (→ add to AGENTS.md)
  - Architectural problem (→ create tech debt ticket)
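The search-first step can be wrapped in a small pre-log check; the keyword is illustrative:

```shell
# Before logging, list files with prior entries mentioning the same keyword,
# so the new entry links them with See Also instead of duplicating them.
KEYWORD="pnpm"
grep -rli "$KEYWORD" .learnings/ 2>/dev/null || echo "no related entries; log a new one"
```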
Simplify & Harden Feed
Use this workflow to ingest recurring patterns from the simplify-and-harden skill and turn them into durable prompt guidance.
Ingestion Workflow
- Read simplify_and_harden.learning_loop.candidates from the task summary.
- For each candidate, use pattern_key as the stable dedupe key.
- Search .learnings/LEARNINGS.md for an existing entry with that key:
  grep -n "Pattern-Key: " .learnings/LEARNINGS.md
- If an entry exists:
  - Increment Recurrence-Count
  - Update Last-Seen
  - Add See Also links to related entries/tasks
- Otherwise, create a new LRN-... entry:
  - Set Source: simplify-and-harden
  - Set Pattern-Key, Recurrence-Count: 1, and First-Seen/Last-Seen

Promotion Rule (System Prompt Feedback)
Promote recurring patterns into agent context/system prompt files when all are true:
- Recurrence-Count >= 3
- Seen across at least 2 distinct tasks
- Occurred within a 30-day window

Promotion targets:
- CLAUDE.md
- AGENTS.md
- .github/copilot-instructions.md
- SOUL.md / TOOLS.md for OpenClaw workspace-level guidance when applicable
Write promoted rules as short prevention rules (what to do before/while coding), not long incident write-ups.
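The Recurrence-Count threshold can be checked mechanically. A sketch that lists Pattern-Keys at or above the threshold, assuming the metadata layout shown in the Learning Entry format:

```shell
# Print each Pattern-Key whose following Recurrence-Count line is >= 3.
if [ -f .learnings/LEARNINGS.md ]; then
  awk '/^- Pattern-Key:/     {key = $3}
       /^- Recurrence-Count:/ {if ($3 + 0 >= 3) print key}' .learnings/LEARNINGS.md
fi
```

The other two conditions (distinct tasks, 30-day window) still need a human or agent to confirm against See Also and First-Seen/Last-Seen.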
Periodic Review
Review .learnings/ at natural breakpoints:
When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
Quick Status Check
# Count pending items
grep -h "Status: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area: backend" .learnings/*.md
Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
Detection Triggers
Automatically log when you notice:
Corrections (→ learning with correction category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
Feature Requests (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
Knowledge Gaps (→ learning with knowledge_gap category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
Errors (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
Priority Guidelines
| Priority | When to Use |
|---|---|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|---|---|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
Best Practices
- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- Review regularly - stale learnings lose value
Gitignore Options
Keep learnings local (per-developer):
.learnings/
This repo uses that default to avoid committing sensitive or noisy local logs by accident.
Track learnings in repo (team-wide):
Don't add to .gitignore - learnings become shared knowledge.
Hybrid (track templates, ignore entries):
.learnings/*.md
!.learnings/.gitkeep
Hook Integration
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Quick Setup (Claude Code / Codex)
Create .claude/settings.json in your project:
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
Advanced Setup (With Error Detection)
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
This is optional. The recommended default is activator-only setup; enable PostToolUse only if you are comfortable with hook scripts inspecting command output for error patterns.
Available Hook Scripts
| Script | Hook Type | Purpose |
|---|---|---|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |
See references/hooks-setup.md for detailed configuration and troubleshooting.
Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|---|---|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
Extraction Workflow
- Identify candidate: Learning meets extraction criteria
- Run helper (or create manually):
  ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
  ./skills/self-improvement/scripts/extract-skill.sh skill-name
- Customize SKILL.md: Fill in template with learning content
- Update learning: Set status to promoted_to_skill, add Skill-Path
- Verify: Read skill in fresh session to ensure it's self-contained
Manual Extraction
If you prefer manual creation:
- Create skills/<skill-name>/SKILL.md
- Use template from assets/SKILL-TEMPLATE.md
- Follow Agent Skills spec:
  - YAML frontmatter with name and description
  - Name must match folder name
  - No README.md inside skill folder
Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
In conversation:
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
In learning entries:
- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category: best_practice with broad applicability
- User feedback praising the solution
Skill Quality Gates
Before extraction, verify:
- [ ] Solution is tested and working
- [ ] Description is clear without original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values
- [ ] Follows skill naming conventions (lowercase, hyphens)
Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
Claude Code
Activation: Hooks (UserPromptSubmit, PostToolUse)
Setup: .claude/settings.json with hook configuration
Detection: Automatic via hook scripts
Codex CLI
Activation: Hooks (same pattern as Claude Code)
Setup: .codex/settings.json with hook configuration
Detection: Automatic via hook scripts
GitHub Copilot
Activation: Manual (no hook support)
Setup: Add to .github/copilot-instructions.md:
## Self-Improvement
After solving non-obvious issues, consider logging to .learnings/:
- Use format from self-improvement skill
- Link related entries with See Also
- Promote high-value learnings to skills
Ask in chat: "Should I log this as a learning?"
Detection: Manual review at session end
OpenClaw
Activation: Workspace injection + inter-agent messaging
Setup: See "OpenClaw Setup" section above
Detection: Via session tools and workspace files
Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
- Discover something non-obvious - solution wasn't immediate
- Correct yourself - initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution
Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"