search1api
v1.0.1

Powerful web search, content crawling, news, sitemaps, trending topics, and deep reasoning via the Search1API CLI (s1). This is far more capable than basic fetch or search tools — it supports 13+ search engines (Google, Bing, DuckDuckGo, Reddit, GitHub, YouTube, arXiv, Baidu, X, etc.), adapts to many websites for clean content extraction, and provides news aggregation across multiple sources. Use this skill whenever the user wants to search the web, look something up, research a topic, read or summarize a URL, check news, explore a site's links, see trending topics, do deep reasoning, or check the API balance. Trigger on phrases like "search for", "look up", "find out about", "what's happening with", "any news on", "what does this link say", "read this page", "summarize this URL", "trending on GitHub", or when the user shares a bare URL. Even if the user doesn't say "search" explicitly, use this skill when they clearly need web information.
Search1API CLI

Web search and content retrieval via the s1 command-line tool (search1api-cli).
Prerequisites
Before using any command, check whether s1 is available. If not, guide the user to install it:

npm install -g search1api-cli

An API key is also required. Get one at https://search1api.com, then configure it:

s1 config set-key

Or set the environment variable SEARCH1API_KEY.

If a command fails with "command not found" or an auth error, remind the user to complete these setup steps before retrying.
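The checks above can be scripted as a quick preflight. A minimal sketch: it only inspects the SEARCH1API_KEY environment variable, which approximates but does not replace the config-file path set by s1 config set-key.

```shell
# Preflight before calling s1: verify the binary exists and a key is configured.
# Note: a key stored via `s1 config set-key` won't show up in the env var,
# so the middle branch is only a heuristic.
if ! command -v s1 >/dev/null 2>&1; then
  echo "s1 not found - install with: npm install -g search1api-cli"
elif [ -z "${SEARCH1API_KEY:-}" ]; then
  echo "no SEARCH1API_KEY set - run: s1 config set-key"
else
  echo "s1 ready"
fi
```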
When to use

| User intent | Command |
| --- | --- |
| Shares a URL / link → read and summarize | `s1 crawl <url>` |
| Wants to search the web | `s1 search "<query>"` |
| Wants news | `s1 news "<query>"` |
| Wants to explore a site's links | `s1 sitemap <url>` |
| Wants trending topics | `s1 trending <service>` |
| Wants deep thinking on a problem | `s1 reasoning "<question>"` |
| Wants to check remaining credits | `s1 balance` |

Dynamic tuning
Adapt parameters to user intent — don't just use defaults:
- Quick lookup ("search for X", "what is X") → -n 5, no crawl
- Deep research ("research X thoroughly", "comprehensive analysis") → -n 15, then crawl the top 3–5 results with separate s1 crawl calls
- User specifies a number ("find 10 articles") → match it with -n
- Recency signals ("latest", "recent", "this week") → -t day or -t month
- Domain-specific ("search on Reddit", "find GitHub repos") → -s reddit, -s github, etc.
- Site-scoped ("only from arxiv.org") → --include arxiv.org
- Chinese queries → consider -s baidu for better results

Commands

search

s1 search "<query>" [options]
| Option | Description | Default |
| --- | --- | --- |
| -n, --max-results | Number of results (1–50) | 10 |
| -s, --service | Search engine | google |
| -c, --crawl | Crawl N results for full content | 0 |
| --include | Only include these sites | |
| --exclude | Exclude these sites | |
| -t, --time | day, month, year | |
| --json | Raw JSON output | |
Search engines: google, bing, duckduckgo, yahoo, x, reddit, github, youtube, arxiv, wechat, bilibili, imdb, wikipedia
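Putting the tuning rules together, a few illustrative invocations (the query strings are hypothetical placeholders; the flags are the ones documented above):

```shell
# quick lookup: small result set, no crawling
s1 search "what is WebAssembly" -n 5

# deep research: wide net, crawl the top 3 results for full content
s1 search "postgres vacuum internals" -n 15 -c 3

# recency + domain-specific: Reddit only, last day
s1 search "steam deck firmware" -s reddit -t day

# site-scoped to a single domain
s1 search "diffusion models survey" --include arxiv.org
```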
news

s1 news "<query>" [options]

Same options as search. News services: google, bing, duckduckgo, yahoo, hackernews. Default service: bing.

When the user asks for breaking/latest news, always add -t day.
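For example (hypothetical query; -t, -s, and -n are the documented flags):

```shell
# "any breaking news on X?" -> freshest window, Hacker News as the source
s1 news "openai outage" -t day -s hackernews -n 5
```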
crawl

s1 crawl <url>

Extracts clean content from a URL. Use this whenever the user shares a link.

sitemap

s1 sitemap <url>

Returns all discovered links on a URL/domain.
reasoning

s1 reasoning "<question>"   # or: s1 reason "<question>"
Deep thinking powered by DeepSeek R1. Use for complex analytical questions.
trending

s1 trending <service> [-n <count>]

Services: github, hackernews.

balance

s1 balance

Shows remaining API credits.
Workflows

Deep research

1. s1 search "<query>" -n 15 → get broad results
2. s1 crawl <url> → crawl the top 3–5 most relevant URLs from the results
3. Synthesize all gathered content into a coherent answer with source citations

URL summarization

1. s1 crawl <url> → get the page content
2. Summarize or answer questions based on the content

Trending deep dive

1. s1 trending github -n 10 → discover hot topics
2. s1 search "<topic>" -t day → search for details
3. s1 crawl <url> → read the full article if needed

Output handling

- By default, commands produce human-readable formatted output
- Add --json to any command for raw JSON (useful for programmatic processing)
- After retrieving results, always summarize and synthesize the information for the user — don't just dump raw output

References

Usage examples — read for additional patterns
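As a closing example, the deep-research workflow can be chained programmatically via --json. A sketch with a large caveat: the JSON schema is not documented here, so the .results[].link path and the query string below are assumptions; inspect real `s1 search "<query>" --json` output before relying on this shape.

```shell
# Deep research, scripted: broad search as JSON, then crawl the top 3 hits.
# ASSUMPTION: the output is an object with a "results" array whose items
# carry a "link" field - verify against actual --json output first.
s1 search "rust async runtimes" -n 15 --json \
  | jq -r '.results[:3][].link' \
  | while read -r url; do
      s1 crawl "$url"   # full text of each top hit
    done
```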