Requirements

- Python 3.11+
- A GitHub classic personal access token with repo scope
- Create one at: https://github.com/settings/tokens
- Fine-grained PATs do not support cross-user search; use a classic PAT

Configuration

Secrets (.env)

| Variable | Required | Description |
| --- | --- | --- |
| GITHUB_TOKEN | Yes | Classic PAT with repo scope |
| GITHUB_ORG | No | Scope ingestion to one org. Omit to search all accessible repos. |
Resolve a secret by name using a layered lookup:
1. cli_value: CLI override (highest priority; for debugging/testing only)
2. os.getenv: environment variable (includes values loaded from .env)
3. ~/.pullstar/credentials: central credentials file (key=value format)
4. .env: project-local .env, re-read explicitly as a final fallback
Flow
- Ingest GitHub activity
python scripts/ingest.py --login jsmith
- Score the profile
python scripts/score.py --login jsmith
- Prepare the LLM input artifact (no AI call)
python scripts/agent_prepare_1on1.py --login jsmith
- External agent reads .pullstar/llm_input_jsmith.json
- Finalize: merge agent output into the final artifact
python scripts/agent_finalize_1on1.py --login jsmith
Writes:
- .pullstar/ingest_jsmith.json
- .pullstar/score_jsmith.json
- .pullstar/llm_input_jsmith.json (prompt payload the agent reads)
- .pullstar/llm_output_jsmith.json (the agent must write this)
- .pullstar/output_jsmith.json (final: what the UI reads)
🔴 REQUIRED: JSON Contract
The external agent must strictly follow this contract.
Input (from PullStar)

.pullstar/llm_input_{login}.json

This file contains:
- system prompt
- user prompt
- metadata

Treat this as the canonical prompt payload. Do not modify its structure.
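An external agent consuming this payload might turn it into a chat-style message list before calling its model. A minimal sketch; the top-level key names ("system", "user", "metadata") are assumptions here, so check the file agent_prepare_1on1.py actually produces:

```python
import json
from pathlib import Path


def load_prompt_payload(login: str, root: Path = Path(".pullstar")) -> list[dict]:
    """Read llm_input_{login}.json and build a chat-style message list.

    Assumes hypothetical top-level keys "system" and "user"; verify
    against the real artifact before relying on this.
    """
    payload = json.loads((root / f"llm_input_{login}.json").read_text())
    return [
        {"role": "system", "content": payload["system"]},
        {"role": "user", "content": payload["user"]},
    ]
```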
Output (from agent)

.pullstar/llm_output_{login}.json

Must be valid JSON with the following shape:

```json
{
  "version": "1.0",
  "engineer_login": "steipete",
  "brief": "## Quick Summary\n..."
}
```
⚠️ Requirements
- Output must be valid JSON (no trailing commas, no markdown wrapping)
- brief must be a non-empty markdown string
- Do NOT return plain text, markdown files, or chat logs
- Do NOT change field names

Expected Artifacts
Must be valid JSON. The brief field must contain a non-empty markdown string. This file is the source of truth for the final manager brief in agent mode.
| File | Written by | Contains |
| --- | --- | --- |
| ingest_{login}.json | ingest.py | Raw GitHub activity, PR details, summary stats |
| score_{login}.json | score.py | Dimension scores (0–20 each), signals, flags |
| llm_input_{login}.json | generate_brief.py / agent_prepare_1on1.py | Canonical LLM prompt payload (system + user messages) |
| llm_output_{login}.json | External agent | Agent-produced brief (agent mode only) |
| output_{login}.json | generate_brief.py / agent_finalize_1on1.py | Final brief + scored profile (what the UI reads) |
All artifacts are written to .pullstar/ — gitignored, never committed.
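Before finalizing, an agent (or a wrapper script) could sanity-check its llm_output file against the JSON contract. A minimal sketch, not part of PullStar itself; the function name is illustrative:

```python
import json


def validate_agent_output(raw: str) -> dict:
    """Check the llm_output contract: parseable JSON, the three required
    fields present, and a non-empty markdown brief."""
    data = json.loads(raw)  # rejects trailing commas, markdown wrapping, etc.
    for field in ("version", "engineer_login", "brief"):
        if field not in data:
            raise ValueError(f"missing required field: {field}")
    if not isinstance(data["brief"], str) or not data["brief"].strip():
        raise ValueError("brief must be a non-empty markdown string")
    return data
```

Running this right before agent_finalize_1on1.py turns a silent contract violation into an immediate, explainable failure.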
PR Insights (optional enrichment)
Run ingest.py with --pr_insights to collect review and comment detail per PR. When present, this raw context is packaged into the LLM prompt so the model can reason about collaboration patterns.
python scripts/ingest.py --login jsmith --pr_insights
Adds ~3 API calls per PR (capped at 20 PRs). Safe to omit for faster ingestion.
Default Mode
- Only metadata and aggregated signals are used
- No raw PR descriptions, comments, or review text is sent to any LLM

--pr_insights mode (opt-in)
Bounded raw PR context may be included:
PR descriptions
review text
comment text (including bot messages)
This data may be sent to the configured LLM provider or external agent.
This mode is intended for richer insight and is explicitly opt-in.