Best Practices for Writing Effective Skill Descriptions
Skill descriptions are the first thing an agent sees when discovering tool capabilities. A well-written description bridges the gap between what a skill does and how the agent should use it. In OpenClaw, where skills unlock powerful tool integrations, from browser automation to CLI exec, clarity is critical.
This guide outlines best practices for crafting skill descriptions that are precise, actionable, and agent-friendly.
Be Clear and Specific
Avoid vague language like "helps with tasks" or "assists in automation." Instead, state exactly what the skill does:
Good: Edit files by replacing exact text. Uses precise string matching.
Bad: Tool for modifying content.
Specificity prevents hallucination and helps the model decide when to use the skill. If an agent can't tell from the description whether a skill is relevant, it will either skip it or call it incorrectly, and both are costly mistakes.
Name Matters: Use Descriptive, Consistent Labels
The name field in your SKILL.md becomes the skill's identifier across the system. Use lowercase, hyphenated names that reflect function:
name: web-fetch
Avoid generic names like tool1 or utility. If you're building a suite of related skills, use a consistent naming convention (e.g., git-clone, git-commit, git-status). This makes discovery intuitive and keeps your skill list scannable.
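A suite following that convention might declare frontmatter like this (a sketch; the skill name and wording are illustrative):

```yaml
# SKILL.md frontmatter for one skill in a hypothetical git suite
name: git-status
description: Show the working-tree status of a git repository. Expects the current directory to be inside a repo.
```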
Structure Your Frontmatter with Purpose
Skill files use YAML frontmatter to declare metadata. Always include:
- name: the skill identifier
- description: a concise summary of what the skill does
- Optional: user-invocable, disable-model-invocation, metadata
Example:
name: exec
user-invocable: true
description: Execute shell commands with background continuation. Use pty=true for TTY-required commands.
The description field is what appears in the agent's tool list, so make it count. This is your skill's elevator pitch: one to three sentences that tell the agent everything it needs to decide whether to use it.
Describe Scope and Context
Clarify when and where the skill should be used. Include:
- Input requirements: what parameters does it expect? (file paths, URLs, specific formats)
- Preconditions: what needs to be true before invocation? (file must exist, service must be running)
- Execution context: where does it run? (host, sandbox, requires network access)
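Pulled together, a description covering all three might look like this (a sketch; the skill name and requirements are illustrative):

```yaml
name: db-backup
description: >
  Dump a PostgreSQL database to a local file. Expects a connection URL and
  an output path. Requires pg_dump on PATH and network access to the
  database host. Runs on the host, not in a sandbox.
```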
For example, OpenClaw's browser tool description specifies:
"Control the browser via OpenClaw's browser control server... When using refs from snapshot (e.g. e12), keep the same tab..."
This sets expectations about statefulness and UI references: information the agent needs to use the tool correctly across multiple calls.
Flag Security and Risk
If a skill performs high-impact actions like file deletion, system exec, or sending external messages, state it clearly:
"Runs commands on the host system. Use with caution. Elevated mode required for privileged operations."
This helps the agent reason about risk before invoking the tool. An agent that knows a skill is destructive will ask for confirmation. An agent that doesn't know won't.
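Beyond stating risk in prose, you can also encode it structurally with the optional frontmatter flags mentioned earlier, for example by restricting a destructive skill to explicit user invocation (a sketch; the skill name is illustrative):

```yaml
name: wipe-cache
user-invocable: true
disable-model-invocation: true
description: Delete all cached build artifacts under the project directory. Destructive; cannot be undone.
```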
Include Real-World Use Cases
A short example or use case dramatically improves usability. Instead of only describing the interface, show what the skill is for:
"Common uses: running build scripts, checking service status, installing packages with pty=true for interactive installers."
Use cases give the agent pattern-matching material. When a user asks "can you install this package?", the agent connects that to the use case and selects the right skill.
Declare Dependencies
If the skill requires binaries, environment variables, or configuration, note them in metadata:
{
"openclaw": {
"requires": {
"bins": ["ffmpeg"],
"env": ["API_KEY"]
}
}
}
This ensures the agent knows the skill may not be available on all systems, and can surface a helpful error message instead of failing silently.
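A runner can use that metadata to check availability before invoking the skill. A minimal sketch in Python, assuming the metadata shape from the example above (the function name is ours):

```python
import os
import shutil

def check_requirements(metadata: dict) -> list[str]:
    """Return human-readable problems; an empty list means all requirements are met."""
    requires = metadata.get("openclaw", {}).get("requires", {})
    problems = []
    # Each declared binary must be resolvable on PATH.
    for binary in requires.get("bins", []):
        if shutil.which(binary) is None:
            problems.append(f"missing binary: {binary}")
    # Each declared environment variable must be set and non-empty.
    for var in requires.get("env", []):
        if not os.environ.get(var):
            problems.append(f"missing env var: {var}")
    return problems

# The skill above, which needs ffmpeg and API_KEY:
meta = {"openclaw": {"requires": {"bins": ["ffmpeg"], "env": ["API_KEY"]}}}
issues = check_requirements(meta)
```

With the problems collected up front, the runner can surface one clear error message instead of a mid-execution failure.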
Keep It Concise, but Complete
While brevity is valuable, never sacrifice clarity. A description should be:
- Short enough to fit comfortably in context windows
- Complete enough to avoid follow-up guessing
Aim for one to three sentences in the main description field. Use the body of SKILL.md for extended documentation, examples, and edge cases. The description gets the agent in the door; the body teaches it the details.
Test Your Description
After writing, ask yourself: Could an agent use this description alone to decide whether to call the skill? If the answer is no, revise.
Try this exercise: read only the description (not the full SKILL.md) and imagine you're an agent that just received a user request. Can you tell whether this skill is relevant? Can you tell what parameters to pass? If you're uncertain, your agent will be too.
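Part of that exercise can be automated. A rough lint based on the conventions in this guide; the thresholds and the vague-word list are our own choices, not part of any spec:

```python
import re

# Phrases this guide calls out as too vague to act on.
VAGUE_PHRASES = ("helps with", "assists in", "tool for", "utility")

def lint_description(name: str, description: str) -> list[str]:
    """Flag common problems in a skill's name and description fields."""
    warnings = []
    # Names should be lowercase and hyphenated, e.g. web-fetch.
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        warnings.append("name should be lowercase and hyphenated")
    # Aim for one to three sentences in the description.
    sentences = [s for s in re.split(r"[.!?]+\s*", description) if s]
    if not 1 <= len(sentences) <= 3:
        warnings.append("aim for one to three sentences")
    lowered = description.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            warnings.append(f"vague wording: {phrase!r}")
    return warnings
```

Running it on the earlier good/bad pair, "Edit files by replacing exact text" passes cleanly while "Tool for modifying content" gets flagged for vague wording.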
Wrapping Up
Good skill descriptions aren't just documentation; they're the interface between your skill and every agent that might use it. By being specific, structured, and honest about capabilities and risks, you make your skills more discoverable, reliable, and safe to use.
The payoff is compounding: every skill with a clear description is one less failure mode, one less hallucination, and one more tool an agent can confidently reach for when it matters.