Daily News Portal (Prasowka)

Generate a daily news portal as a single HTML file.
Parameters

- Output: /canvas/prasowka-{YYYYMMDD}.html
- DATA_DIR: /skills/prasowka/data
- REFS_DIR: /skills/prasowka/references

Steps

Step 1: Initialize

```bash
mkdir -p /canvas /skills/prasowka/data
DATE=$(date +%Y%m%d)
SEEN_URLS_FILE="$DATA_DIR/seen_urls.json"
```
Step 2: Load Configuration
Read $REFS_DIR/topics.md, which lists one topic per line with its article limit:

```
AI-models: 15
AI-tools: 10
AI-video: 15
...
```
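The `topic: limit` lines above can be parsed into a dict in a few lines. A minimal sketch; the helper name `parse_topics` is our own, not part of the skill:

```python
def parse_topics(text):
    """Parse 'topic: limit' lines from topics.md into {topic: limit}.

    Lines without a colon (e.g. '...') or without a numeric limit
    are skipped, so stray comments do not break the run.
    """
    topics = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" not in line or line.startswith("#"):
            continue
        name, _, limit = line.partition(":")
        limit = limit.strip()
        if limit.isdigit():
            topics[name.strip()] = int(limit)
    return topics
```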
Step 3: Fetch News per Topic
Launch a subagent for each topic:
```python
import json
import subprocess

def fetch_topic_news(topic, limit):
    result = subprocess.run(
        ["python3", "scripts/fetch_news.py",
         "--topic", topic, "--limit", str(limit)],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Fallback: plain web search when the fetch script fails
        return web_search(f"latest {topic} news {limit}")
    return json.loads(result.stdout)
```
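Since each topic is independent, the per-topic fetches can fan out concurrently. A sketch under our own assumptions: `fetch_all_topics` is a hypothetical wrapper (not part of the skill), and the fetch function is injected so the subagent call can be stubbed in tests:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all_topics(topics, fetch_fn, max_workers=4):
    """Run fetch_fn(topic, limit) for every topic concurrently.

    topics:   dict of topic name -> article limit (from topics.md)
    fetch_fn: e.g. fetch_topic_news; passed in so it can be stubbed
    Returns {topic: articles}; a topic that raises maps to [] so one
    failure never stops the whole edition.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {t: pool.submit(fetch_fn, t, n) for t, n in topics.items()}
        for topic, fut in futures.items():
            try:
                results[topic] = fut.result()
            except Exception:
                results[topic] = []  # keep going, per the error-handling policy
    return results
```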
Step 4: Filter URLs

```python
def filter_new_urls(articles, seen_urls):
    # Drop anything already published in a previous edition
    return [a for a in articles if a["url"] not in seen_urls]
```
Step 5: Fetch & Summarize per Article
Launch a subagent for each article:
```python
def summarize_article(url):
    content = web_fetch(url, extract_text=True)
    # Skip unavailable or near-empty pages
    if not content or len(content) < 100:
        return None
    return llm_summarize(content[:3000])
```
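The error-handling rules later in this document say an empty summary falls back to the article's first 3 sentences. A naive sketch of that fallback, using our own helper name and a simple punctuation split (good enough as a last resort, not a real sentence tokenizer):

```python
import re

def first_sentences(text, n=3):
    """Fallback summary: the first n sentences of the article text.

    Splits on whitespace that follows sentence-ending punctuation,
    so abbreviations like 'e.g.' will occasionally over-split.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:n])
```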
Step 6: Generate HTML
Use the template with a dark/light toggle, 18 categories, and 2-3 sentence summaries.
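The template itself is not shown in this document, so here is only a structural sketch of the single-file output: CSS variables for theming, a class toggle for dark mode, and one section per category. `render_portal` and the page markup are our own illustration, not the skill's actual template:

```python
import html

def render_portal(date, sections):
    """Render a minimal single-file portal.

    sections: {category: [{"title": ..., "url": ..., "summary": ...}, ...]}
    """
    body = []
    for category, articles in sections.items():
        items = "\n".join(
            f'<li><a href="{html.escape(a["url"])}">{html.escape(a["title"])}</a>'
            f'<p>{html.escape(a["summary"])}</p></li>'
            for a in articles
        )
        body.append(f"<section><h2>{html.escape(category)}</h2><ul>{items}</ul></section>")
    return f"""<!DOCTYPE html>
<html><head><meta charset="utf-8"><title>Prasowka {date}</title>
<style>
  :root {{ --bg: #fff; --fg: #111; }}
  .dark {{ --bg: #111; --fg: #eee; }}
  body {{ background: var(--bg); color: var(--fg); }}
</style></head>
<body>
<button onclick="document.body.classList.toggle('dark')">Toggle theme</button>
{''.join(body)}
</body></html>"""
```

Keeping the theme as CSS variables means the JS toggle only flips one class instead of restyling every element.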
Step 7: Save & Update

- Save the HTML to canvas
- Update seen_urls.json
- Run the prasowka-guardian validation

Error Handling

- fetch_news.py fails → fall back to web_search
- Article unavailable → skip it
- Summary empty → use the first 3 sentences
- Don't stop on individual failures; keep going

Requirements

- scripts/fetch_news.py
- references/topics.md
- references/format.md
If this saved you time: ☕ PayPal.me/nerudek