Agent Orchestration 🦞
By Hal Labs — Part of the Hal Stack
Your agents fail because your prompts suck. This skill fixes that.
The Core Problem
You're not prompting. You're praying.
Most prompts are wishes tossed into the void:
❌ "Re搜索 the best vector databases and write a 报告"
You type something reasonable. The output is mid. You rephrase. Still mid. You add keywords. Somehow worse. You blame the model.
Here's what you don't understand: a language model is a pattern-completion engine. It generates the most statistically probable output given your input.
Vague input → generic output. Not because the model is dumb. Because generic is what's most probable when you give it nothing specific to work with.
The model honored exactly what you asked for. You just didn't realize how little you gave it.
The Core Reframe
A prompt is not a request. A prompt is a contract.
Every contract must answer four non-negotiables:
- Role: Who is the model role-playing as?
- Task: What exactly must it accomplish?
- Constraints: What rules must be followed?
- Output: What does "done" look like?
Miss one, and the model fills the gap with assumptions. Assumptions are where hallucinations are born.
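The four-part contract can be enforced mechanically before a prompt ever reaches a model. A minimal sketch, assuming you keep prompts as plain dicts (the key names and helper below are illustrative, not from any library):

```python
# The four non-negotiable contract elements.
CONTRACT_KEYS = ("role", "task", "constraints", "output")

def check_contract(prompt: dict) -> list[str]:
    """Return the contract elements this prompt fails to specify."""
    return [key for key in CONTRACT_KEYS if not prompt.get(key, "").strip()]

draft = {
    "role": "Senior product marketer, B2B SaaS positioning",
    "task": "Write a 500-word product description",
    "constraints": "",  # forgotten -> the model fills the gap with assumptions
    "output": "JSON: headline, subheadline, body, cta",
}

missing = check_contract(draft)  # -> ["constraints"]
```

Refuse to send any prompt where `check_contract` returns a non-empty list.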
The 5-Layer Architecture
Effective prompts share a specific structure. This maps to how models actually process information.
Layer 1: Identity
Who is the model in this conversation?
Not "helpful assistant" but a specific role with specific expertise:
You are a senior product marketer who specializes in B2B SaaS positioning. You have 15 years of experience converting technical features into emotional benefits. You write in short sentences. You never use jargon without explaining it.
The model doesn't "become" this identity; it accesses different clusters of training data, different stylistic patterns, different reasoning approaches.
Identity matters. Miss this and you get generic output.
Layer 2: Context
What does the model need to know to do this task exceptionally well?
Context must be:
- Ordered: most important first
- Scoped: only what's relevant
- Labeled: what's rules vs. editable vs. historical
Context
Rules (never change)
- Design system: Tailwind, shadcn components
- Voice: Professional but warm, never corporate
Current state (may evolve)
- Landing page exists at /landing
- Using Next.js 14 with App Router
Historical (for reference)
- Originally built with Create React App, migrated Jan 2025
Without labels, the model treats everything as equally optional. Then it rewrites your core logic halfway through.
Layer 3: Task
What specific action must be taken?
Not "write something about X" but precise instructions:
Task
Produce a 500-word product description that:
- Emphasizes time-saving benefits for busy executives
- Opens with the primary pain point
- Includes 3 specific use cases
- Ends with a clear call to action
The more precisely you define the task, the more precisely the model executes.
Layer 4: Process ⚡
This is where most prompts fail.
You're asking for output. You should be asking for how the output is formed.
❌ Bad:
Write me a marketing page.
✅ Good:
Process
- First, analyze the target audience and identify their primary pain points
- Then, define the positioning that addresses those pain points
- Then, write the page
- Show your reasoning at each step
- Do not skip steps
- Audit your work before reporting done
You don't want answers. You want how the answer is formed.
Think like a director. You're not asking for a scene; you're directing how the scene gets built.
Layer 5: Output
What does "done" actually look like?
If you don't specify, you get whatever format the model defaults to.
Output format
Return a JSON object with:
- headline: string (max 60 chars)
- subheadline: string (max 120 chars)
- body: string (markdown formatted)
- cta: string (action verb + benefit)
Do not include explanations, notes, or commentary. Only the JSON.
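A spec like this is also checkable on your side. A hedged sketch of validating a model's raw reply against the format above, assuming you parse it with the standard library (the validator itself is illustrative; the field names and limits come from the spec):

```python
import json

def validate_output(raw: str) -> dict:
    """Parse the model's reply and enforce the output contract."""
    data = json.loads(raw)  # fails loudly if the model added commentary
    limits = {"headline": 60, "subheadline": 120}
    for field in ("headline", "subheadline", "body", "cta"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if field in limits and len(data[field]) > limits[field]:
            raise ValueError(f"{field} exceeds {limits[field]} chars")
    return data

reply = '{"headline": "Ship faster", "subheadline": "Less busywork", "body": "...", "cta": "Start saving hours"}'
result = validate_output(reply)
```

If validation fails, you retry with the error message appended, instead of shipping whatever came back.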
Miss one layer, the structure wobbles. Miss two, it collapses.
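The five layers compose mechanically. A minimal sketch of assembling them into one prompt string, in order (the section labels and layout are illustrative choices, not a prescribed format):

```python
def build_prompt(identity: str, context: str, task: str,
                 process: str, output_format: str) -> str:
    """Assemble the five layers, in order, into a single prompt."""
    sections = [
        ("Identity", identity),
        ("Context", context),
        ("Task", task),
        ("Process", process),
        ("Output format", output_format),
    ]
    # A missing layer is a structural failure, not an option.
    missing = [name for name, body in sections if not body.strip()]
    if missing:
        raise ValueError(f"missing layers: {missing}")
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)
```

Keeping the layers as separate arguments means you can swap one (say, the Process layer per model) without touching the rest.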
Model Selection
Prompt portability is a myth.
Different models are different specialists. You wouldn't give identical instructions to your exec assistant, designer, and backend dev.
- Claude Opus: best for complex reasoning, nuanced writing, long context. Watch out: expensive, can be verbose.
- Claude Sonnet: best for balanced tasks, code, analysis. Watch out: less creative than Opus.
- GPT-4: best for broad knowledge, structured output. Watch out: can be sycophantic.
- Smaller models: best for quick tasks, simple queries. Watch out: limited reasoning depth.
Adapt your prompts per model:
- Some prefer structured natural language
- Some need explicit step sequencing
- Some collapse under verbose prompts
- Some ignore constraints unless repeated
- Some excel at analysis but suck at creativity
The person who writes model-specific prompts will outperform the person with "better ideas" every time.
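In practice that means keeping per-model prompt variants instead of one "portable" prompt. A sketch of one way to organize this (the model keys and specific tweaks are illustrative assumptions, not vendor guidance):

```python
# One base task, adapted per model rather than reused verbatim.
BASE_TASK = "Summarize the attached incident report in 5 bullets."

MODEL_TWEAKS = {
    # Repeat constraints for models that ignore them unless restated.
    "gpt-4": BASE_TASK + " Exactly 5 bullets. Again: exactly 5, no more.",
    # Add explicit step sequencing for models that need it.
    "claude-sonnet": "Step 1: read the report. Step 2: " + BASE_TASK,
    # Stay terse for smaller models that collapse under verbose prompts.
    "small-local": BASE_TASK,
}

def prompt_for(model: str) -> str:
    """Return the variant tuned for this model, or the base task."""
    return MODEL_TWEAKS.get(model, BASE_TASK)
```

The dict makes the adaptation explicit and reviewable, instead of living in someone's head.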
Constraints Are Instructions
Vagueness isn't flexibility. It's cowardice.
You hedge because being specific feels r