chore: restructure skills repo with new agents and skill bundles

- Add new skills: deep-dive, docs-rag, meta-creator, ppt-maker, sdlc
- Add agent configs: g-assistent, meta-creator, sdlc with prompt files
- Add reference docs for custom agents and skills specification
- Add utility scripts: install-agents.sh, orchestrate.py, puml2svg.sh
- Update README and commit-message skill config
- Remove deprecated skills: codereview, python, testing, typescript
- Add .gitignore
@@ -0,0 +1,22 @@
# Python
__pycache__/
*.pyc
*.pyo
.venv/
venv/

# Node
node_modules/
npm-debug.log*

# Eval outputs
evals-workspace/
*.eval-output.json

# OS
.DS_Store
Thumbs.db

# Editor
.vscode/
*.swp
@@ -1,10 +1,10 @@
 {
   "name": "commit-message",
-  "description": "Agent specializing in generating Conventional Commits messages.",
+  "description": "Agent specializing in generating Conventional Commits messages. Use when the user wants to commit changes, needs a commit message suggestion, or is ready to wrap up a task.",
-  "prompt": "You are an expert at generating git commit messages following the Conventional Commits 1.0.0 specification. Your goal is to analyze staged changes and provide a high-quality, professional commit message. Always use the `commit-message` skill for guidance.",
+  "prompt": "file://prompts/commit-message.md",
   "tools": ["fs_read", "execute_bash", "grep", "glob"],
-  "allowedTools": ["fs_read", "execute_bash", "grep", "glob"],
+  "allowedTools": ["fs_read", "grep", "glob"],
   "resources": [
-    "skill://skills/commit-message/SKILL.md"
+    "skill://.kiro/skills/commit-message/SKILL.md"
   ]
 }
@@ -0,0 +1,10 @@
{
  "name": "g-assistent",
  "description": "General-purpose development assistant that automatically routes user requests to the appropriate skill. Supports code analysis, code review, commit messages, system design, testing, documentation lookup, and other development tasks.",
  "prompt": "file://prompts/g-assistent.md",
  "tools": ["fs_read", "fs_write", "execute_bash", "grep", "glob", "code", "fetch"],
  "allowedTools": ["fs_read", "grep", "glob", "code", "fetch"],
  "resources": [
    "skill://.kiro/skills/**/SKILL.md"
  ]
}
@@ -1,10 +0,0 @@
{
  "name": "main",
  "description": "Eval agent for common-skills. Loads all skills for evaluation.",
  "prompt": "You are an evaluation assistant. Load the relevant skill when the user's request matches its domain, then answer based on the skill's guidance.",
  "tools": ["fs_read", "execute_bash", "grep", "glob"],
  "allowedTools": ["fs_read", "execute_bash", "grep", "glob"],
  "resources": [
    "skill://skills/**/SKILL.md"
  ]
}
@@ -0,0 +1,11 @@
{
  "name": "meta-creator",
  "description": "Creates and iteratively improves agent skills and custom agents. Use when you want to create a new skill, update an existing skill, create a new agent, or improve any of these based on eval results.",
  "prompt": "file://prompts/meta-creator.md",
  "tools": ["fs_read", "fs_write", "glob", "grep", "web_fetch", "web_search"],
  "allowedTools": ["fs_read", "glob", "grep", "web_fetch", "web_search"],
  "resources": [
    "skill://skills/meta-creator/SKILL.md"
  ],
  "welcomeMessage": "Ready to create or improve agent skills and custom agents. Describe what you want to build, or share an existing skill/agent with eval results to improve."
}
@@ -0,0 +1,24 @@
You are a git commit message expert. Generate high-quality commit messages that strictly follow the Conventional Commits 1.0.0 specification. Always use the `commit-message` skill for guidance.

When the user sends a greeting or help request (e.g., "hi", "hello", "help", "你好", "帮助", "?"), respond with:

---
👋 **Commit Message** — Conventional Commits expert

**Capabilities:**
- Analyze staged changes and automatically generate a compliant commit message
- Support all types: feat / fix / docs / refactor / chore, etc.
- Support scopes, breaking changes, and multi-line bodies
- Interactive refinement: adjust the generated message until you are satisfied

**Workflow:**
1. Run `git status` to check the staged files
2. Run `git diff --cached` to analyze the changes
3. Draft a commit message per the Conventional Commits specification
4. Show it to the user for confirmation, adjust as needed, then commit

**Example requests:**
- `commit these changes`
- `帮我生成一个 commit message`
- `给我一个提交信息,这次修复了登录 bug`
---
@@ -0,0 +1,27 @@
You are a general-purpose development assistant with a set of skills. Match the user's request to the most appropriate skill, load it, and follow its instructions exactly. If no skill matches, answer directly with your own capabilities.

When the user sends a greeting or help request (e.g., "hi", "hello", "help", "你好", "帮助", "?"), respond with:

---
👋 **G-Assistent** — general-purpose development assistant that routes to the best-matching skill

**Capabilities:**
- Code analysis and review
- Generate git commit messages (commit-message skill)
- End-to-end software development planning (sdlc skill)
- 3GPP technical document retrieval (docs-rag skill)
- PPT slide generation (ppt-maker skill)
- In-depth technical analysis reports (deep-dive skill)
- General development Q&A

**Workflow:**
1. Analyze the user's request and match the most appropriate skill
2. Load the skill and follow its instructions exactly
3. If no skill matches, answer with built-in capabilities

**Example requests:**
- `帮我生成一个 commit message`
- `帮我设计一个用户认证系统`
- `deep dive into this codebase`
- `生成一份销售汇报 PPT`
---
@@ -0,0 +1,27 @@
You are a specialist for creating and improving agent skills and custom agents. When the user asks to create or update a skill or agent, activate the `meta-creator` skill and follow its instructions exactly.

When the user sends a greeting or help request (e.g., "hi", "hello", "help", "你好", "帮助", "?"), respond with:

---
👋 **Meta Creator** — agent & skill creation and optimization expert

**Capabilities:**
- Create new skills (SKILL.md + evals)
- Update/optimize existing skills
- Create new custom agents (.kiro/agents/*.json)
- Update/optimize existing agents
- Iteratively improve skills or agents based on eval results

**Workflow:**
1. Gather requirements (goals, example tasks, environment constraints)
2. Create/update `SKILL.md` (frontmatter + instruction body)
3. Create/update `evals/evals.json` (≥3 test cases)
4. If an agent is needed: create `.kiro/agents/<name>.json` + `prompts/<name>.md`
5. Check whether `scripts/install-agents.sh` needs a matching update

**Example requests:**
- `创建一个 skill,用于生成 SQL 查询`
- `优化 commit-message skill,增加对 emoji 的支持`
- `新建一个 agent,专门处理代码审查任务`
- `根据这些 eval 结果改进 deep-dive skill`
---
@@ -0,0 +1,21 @@
You are a systematic software development lifecycle assistant. When the user asks to build, design, or plan a software project, activate the `sdlc` skill and follow its instructions exactly.

When the user sends a greeting or help request (e.g., "hi", "hello", "help", "你好", "帮助", "?"), respond with:

---
👋 **SDLC Assistant** — systematic end-to-end software development assistant

**Capabilities:**
- Requirements analysis → generate `specs/requirements.md`
- System design → generate `specs/design.md` (architecture diagrams, data models, ADRs)
- Task decomposition → generate `specs/tasks.md` (milestones, dependencies)
- Implementation planning → generate `specs/impl-plan.md` (acceptance criteria, DoD)
- Code implementation → implement task by task per the plan
- Verification and wrap-up → check every DoD, update status

**Example requests:**
- `帮我设计一个用户认证系统`
- `我要做一个任务管理 App,帮我做需求分析`
- `help me build a REST API for an e-commerce platform`
- `continue` (resume an unfinished SDLC flow)
---
||||||
@@ -0,0 +1,11 @@
|
|||||||
|
{
|
||||||
|
"name": "sdlc",
|
||||||
|
"description": "Systematic SDLC assistant. Guides through requirements analysis, system design, task decomposition, and implementation planning for any software project.",
|
||||||
|
"prompt": "file://prompts/sdlc.md",
|
||||||
|
"tools": ["fs_read", "fs_write", "execute_bash", "grep", "glob"],
|
||||||
|
"allowedTools": ["fs_read", "grep", "glob"],
|
||||||
|
"resources": [
|
||||||
|
"skill://.kiro/skills/sdlc/SKILL.md",
|
||||||
|
"file://.kiro/skills/sdlc/assets/phase-checklist.md"
|
||||||
|
]
|
||||||
|
}
|
||||||
@@ -2,26 +2,41 @@
 
 Shared Kiro agent skills for the team. All skills are evaluated before merge.
 
-## Structure
+## Skills
+
+| Skill | Description |
+|---|---|
+| [commit-message](skills/commit-message/README.md) | Generate Conventional Commits messages from staged changes |
+| [deep-dive](skills/deep-dive/README.md) | Produce structured technical reports from code, docs, or URLs |
+| [docs-rag](skills/docs-rag/README.md) | RAG over local 3GPP Release 19 specification documents |
+| [meta-creator](skills/meta-creator/README.md) | Create and iterate on agent skills and custom agents |
+| [ppt-maker](skills/ppt-maker/README.md) | Convert Markdown to professional PPTX with auto-charts |
+| [sdlc](skills/sdlc/README.md) | Guide a full software development lifecycle end-to-end |
+
+## Repository Structure
 
 ```
 skills/
-├── codereview/
-│   ├── SKILL.md
-│   └── evals/evals.json
+├── commit-message/
+│   ├── SKILL.md
+│   ├── README.md
+│   ├── assets/           ← architecture.svg, workflow.svg + .puml sources
+│   ├── evals/evals.json
+│   └── references/
+├── deep-dive/
 ├── docs-rag/
-│   ├── SKILL.md
-│   ├── data/index.json
-│   └── evals/evals.json
-├── python/
-├── testing/
-└── typescript/
+├── meta-creator/
+├── ppt-maker/
+└── sdlc/
 scripts/
 ├── run_evals.py          ← eval runner with regression protection
-└── sync.sh               ← sync skills into a project
+├── puml2svg.sh           ← convert .puml diagrams to SVG
+├── sync.sh               ← sync skills into a project
+├── install-agents.sh     ← install agent configs into a project
+└── orchestrate.py        ← multi-skill orchestration
 .githooks/
 └── pre-push              ← blocks push if changed skills regress
 baselines.json            ← recorded pass rates (committed to repo)
 ```
 
 ## Using Skills in Your Project
@@ -31,18 +46,37 @@ baselines.json ← recorded pass rates (committed to repo)
 COMMON_SKILLS_DIR=~/common-skills bash scripts/sync.sh
 
 # Sync specific skills only
-COMMON_SKILLS_DIR=~/common-skills bash scripts/sync.sh codereview python
+COMMON_SKILLS_DIR=~/common-skills bash scripts/sync.sh commit-message ppt-maker
 ```
 
 ## Contributing a New Skill
 
 1. Create `skills/<name>/SKILL.md` with YAML frontmatter (`name`, `description`)
 2. Add `skills/<name>/evals/evals.json` with at least 3 eval cases
-3. Run evals locally and update baseline:
+3. Add architecture and workflow diagrams:
+   ```bash
+   # Create skills/<name>/assets/architecture.puml and workflow.puml, then:
+   bash scripts/puml2svg.sh <name>
+   ```
+4. Run evals locally and update baseline:
    ```bash
    python scripts/run_evals.py <name> --update-baseline
    ```
-4. Push — the pre-push hook will verify no regressions on changed skills
+5. Push — the pre-push hook will verify no regressions on changed skills
+
+## Updating Diagrams
+
+Each skill stores PlantUML source files in `assets/` alongside the rendered SVGs.
+
+```bash
+# Regenerate all SVGs
+bash scripts/puml2svg.sh
+
+# Regenerate a single skill
+bash scripts/puml2svg.sh deep-dive
+```
+
+Requires Java and Graphviz (`apt install graphviz`). The PlantUML jar is resolved automatically from the VS Code extension path; override with `PLANTUML_JAR=/path/to/plantuml.jar`.
 
 ## Running Evals
 
@@ -51,13 +85,13 @@ COMMON_SKILLS_DIR=~/common-skills bash scripts/sync.sh codereview python
 python scripts/run_evals.py
 
 # Run single skill
-python scripts/run_evals.py codereview
+python scripts/run_evals.py commit-message
 
 # Check for regressions against baselines.json
 python scripts/run_evals.py --check-regression
 
 # After improving a skill, record new baseline
-python scripts/run_evals.py codereview --update-baseline
+python scripts/run_evals.py commit-message --update-baseline
 ```
 
 ## Install pre-push Hook
@@ -0,0 +1,480 @@
# Kiro CLI Custom Agents — Configuration Reference

> Source: https://kiro.dev/docs/cli/custom-agents/configuration-reference/
> Updated: 2026-04-14

---

## Quick Start

The recommended way to create an agent config is the `/agent generate` command inside a Kiro session, which generates it with AI assistance.

---

## File Locations

### Local agents (project-level)

```
<project>/.kiro/agents/<name>.json
```

Available only when running Kiro CLI in that directory or one of its subdirectories.

### Global agents (user-level)

```
~/.kiro/agents/<name>.json
```

Available from any directory.

### Precedence

When a local and a global agent share a name, **local takes precedence over global** (with a warning).

---

## Field Overview

| Field | Description |
|------|------|
| `name` | Agent name (optional; defaults to the file name) |
| `description` | Agent description |
| `prompt` | System prompt (inline text or a `file://` URI) |
| `mcpServers` | MCP servers the agent can access |
| `tools` | List of available tools |
| `toolAliases` | Tool name remapping |
| `allowedTools` | Tools usable without confirmation |
| `toolsSettings` | Per-tool configuration |
| `resources` | Local resources the agent can access |
| `hooks` | Lifecycle hook commands |
| `includeMcpJson` | Whether to include MCP servers from mcp.json |
| `model` | Model ID to use |
| `keyboardShortcut` | Shortcut for quick agent switching |
| `welcomeMessage` | Welcome message shown when switching to this agent |

---

## Field Details

### `name`

The agent's identifier, used for display and identification.

```json
{ "name": "aws-expert" }
```

---

### `description`

A human-readable description that helps distinguish agents.

```json
{ "description": "An agent specialized for AWS infrastructure tasks" }
```

---

### `prompt`

Acts like a system prompt, giving the agent high-level context. Accepts inline text or a `file://` URI.

**Inline:**
```json
{ "prompt": "You are an expert AWS infrastructure specialist" }
```

**File reference:**
```json
{ "prompt": "file://./prompts/aws-expert.md" }
```

**Path resolution rules:**
- Relative paths: resolved relative to the directory containing the agent config file
  - `"file://./prompt.md"` → same directory
  - `"file://../shared/prompt.md"` → parent directory
- Absolute paths: used as-is
  - `"file:///home/user/prompts/agent.md"`
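
The resolution rules above can be sketched in a few lines of Python. This is an illustration only; `resolve_prompt` is a hypothetical helper, not part of Kiro CLI:

```python
from pathlib import Path

def resolve_prompt(uri: str, agent_config_dir: str) -> Path:
    """Resolve a file:// prompt URI per the rules above (illustrative only)."""
    path = uri.removeprefix("file://")
    if path.startswith("/"):
        # file:///home/user/... -> absolute path, used as-is
        return Path(path)
    # relative paths resolve against the agent config file's directory
    return (Path(agent_config_dir) / path).resolve()
```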

---

### `mcpServers`

Defines the MCP servers the agent can access.

```json
{
  "mcpServers": {
    "fetch": {
      "command": "fetch3.1",
      "args": []
    },
    "git": {
      "command": "git-mcp",
      "args": [],
      "env": { "GIT_CONFIG_GLOBAL": "/dev/null" },
      "timeout": 120000
    }
  }
}
```

**Fields:**
- `command` (required): command that starts the MCP server
- `args` (optional): command arguments
- `env` (optional): environment variables
- `timeout` (optional): per-request timeout in milliseconds, default `120000`
- `oauth` (optional): OAuth configuration for HTTP MCP servers
  - `redirectUri`: custom redirect URI
  - `oauthScopes`: array of requested OAuth scopes

**OAuth example:**
```json
{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.github.com/mcp",
      "oauth": {
        "redirectUri": "127.0.0.1:8080",
        "oauthScopes": ["repo", "user"]
      }
    }
  }
}
```

---

### `tools`

The tools the agent may use.

```json
{
  "tools": ["read", "write", "shell", "@git", "@rust-analyzer/check_code"]
}
```

**Reference syntax:**
- Built-in tools: `"read"`, `"shell"`
- All tools from an MCP server: `"@server_name"`
- A specific MCP server tool: `"@server_name/tool_name"`
- All tools: `"*"`
- All built-in tools: `"@builtin"`

---

### `toolAliases`

Renames tools, either to resolve naming conflicts or to create more intuitive names.

```json
{
  "toolAliases": {
    "@github-mcp/get_issues": "github_issues",
    "@gitlab-mcp/get_issues": "gitlab_issues",
    "@aws-cloud-formation/deploy_stack_with_parameters": "deploy_cf"
  }
}
```

---

### `allowedTools`

Tools that may run without user confirmation. Supports exact matches and wildcards.

```json
{
  "allowedTools": [
    "read",
    "@git/git_status",
    "@server/read_*",
    "@fetch"
  ]
}
```

**Matching:**

| Pattern | Meaning |
|------|------|
| `"read"` | Exact match on a built-in tool |
| `"@server_name/tool_name"` | Exact match on an MCP tool |
| `"@server_name"` | All tools from that server |
| `"@server/read_*"` | Prefix wildcard |
| `"@server/*_get"` | Suffix wildcard |
| `"@git-*/*"` | Server-name wildcard |
| `"?ead"` | `?` matches a single character |

> **Note:** `allowedTools` does not support `"*"` as a match-everything wildcard.
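
As a rough illustration of the matching rules in the table, the `*` and `?` wildcards behave like shell-style patterns (Python's `fnmatch`). This is a sketch under that assumption, not Kiro's actual implementation; `tool_allowed` is a hypothetical helper:

```python
from fnmatch import fnmatchcase

def tool_allowed(tool: str, allowed_patterns: list[str]) -> bool:
    """Approximate allowedTools matching: exact names, * and ? wildcards,
    plus bare "@server" entries covering every tool on that server."""
    for pattern in allowed_patterns:
        if fnmatchcase(tool, pattern):
            return True
        # a bare "@server" pattern matches any "@server/<tool>"
        if pattern.startswith("@") and "/" not in pattern and tool.startswith(pattern + "/"):
            return True
    return False
```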

---

### `toolsSettings`

Tool-specific configuration.

```json
{
  "toolsSettings": {
    "write": {
      "allowedPaths": ["src/**", "tests/**"]
    },
    "shell": {
      "allowedCommands": ["git status", "git fetch"],
      "deniedCommands": ["git commit .*", "git push .*"],
      "autoAllowReadonly": true
    },
    "@git/git_status": {
      "git_user": "$GIT_USER"
    }
  }
}
```

---

### `resources`

Local resources the agent can access; three types are supported.

#### File resources (`file://`)

Loaded directly into context at startup.

```json
{
  "resources": [
    "file://README.md",
    "file://docs/**/*.md"
  ]
}
```

#### Skill resources (`skill://`)

Only metadata (name/description) is loaded at startup; full content is loaded on demand, keeping the context lean.

Skill files must start with YAML frontmatter:

```markdown
---
name: dynamodb-data-modeling
description: Guide for DynamoDB data modeling best practices.
---

# DynamoDB Data Modeling
...
```

```json
{
  "resources": [
    "skill://.kiro/skills/**/SKILL.md"
  ]
}
```
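
The metadata-only loading described above can be pictured with a small sketch. `skill_metadata` is a hypothetical helper for illustration; a real client would use a proper YAML parser:

```python
def skill_metadata(skill_md: str) -> dict:
    """Read only the flat name/description frontmatter of a SKILL.md,
    without loading the instruction body into context."""
    lines = skill_md.splitlines()
    meta = {}
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break  # end of frontmatter; skip the body entirely
            key, _, value = line.partition(":")
            if key.strip():
                meta[key.strip()] = value.strip()
    return meta
```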

#### Knowledge base resources (`knowledgeBase`)

Supports indexed retrieval over large document sets.

```json
{
  "resources": [
    {
      "type": "knowledgeBase",
      "source": "file://./docs",
      "name": "ProjectDocs",
      "description": "Project documentation and guides",
      "indexType": "best",
      "autoUpdate": true
    }
  ]
}
```

| Field | Required | Description |
|------|------|------|
| `type` | Yes | Always `"knowledgeBase"` |
| `source` | Yes | Path to index, with the `file://` prefix |
| `name` | Yes | Display name |
| `description` | No | Content description |
| `indexType` | No | `"best"` (default, higher quality) or `"fast"` |
| `autoUpdate` | No | Re-index on agent startup, default `false` |

---

### `hooks`

Runs commands at specific points in the agent lifecycle.

```json
{
  "hooks": {
    "agentSpawn": [
      { "command": "git status" }
    ],
    "userPromptSubmit": [
      { "command": "ls -la" }
    ],
    "preToolUse": [
      {
        "matcher": "execute_bash",
        "command": "{ echo \"$(date) - Bash:\"; cat; } >> /tmp/audit.log"
      }
    ],
    "postToolUse": [
      {
        "matcher": "fs_write",
        "command": "cargo fmt --all"
      }
    ],
    "stop": [
      { "command": "npm test" }
    ]
  }
}
```

**Trigger points:**

| Hook | When it fires |
|------|----------|
| `agentSpawn` | When the agent is initialized |
| `userPromptSubmit` | When the user submits a message |
| `preToolUse` | Before a tool runs (can block it) |
| `postToolUse` | After a tool runs |
| `stop` | When the assistant finishes responding |

Each hook entry:
- `command` (required): the command to run
- `matcher` (optional): tool-name pattern for `preToolUse`/`postToolUse`, using internal tool names (e.g., `fs_read`, `fs_write`, `execute_bash`, `use_aws`)

---

### `includeMcpJson`

Whether to include the MCP servers defined in `~/.kiro/settings/mcp.json` (global) and `<cwd>/.kiro/settings/mcp.json` (workspace).

```json
{ "includeMcpJson": true }
```

---

### `model`

The model ID this agent uses. Falls back to the default model when unset or unavailable.

```json
{ "model": "claude-sonnet-4" }
```

Use the `/model` command to list available models.

---

### `keyboardShortcut`

Keyboard shortcut for switching to this agent quickly.

```json
{ "keyboardShortcut": "ctrl+a" }
```

**Format:** `[modifier+]key`
**Modifiers:** `ctrl`, `shift`
**Keys:** `a-z`, `0-9`

- Not currently on this agent: switches to it
- Already on this agent: switches back to the previous agent
- If multiple agents conflict on a shortcut, the shortcut is disabled with a warning

---

### `welcomeMessage`

Message shown when switching to this agent.

```json
{ "welcomeMessage": "What would you like to build today?" }
```

---

## Complete Example

```json
{
  "name": "aws-rust-agent",
  "description": "Specialized agent for AWS and Rust development",
  "prompt": "file://./prompts/aws-rust-expert.md",
  "mcpServers": {
    "fetch": { "command": "fetch-server", "args": [] },
    "git": { "command": "git-mcp", "args": [] }
  },
  "tools": ["read", "write", "shell", "aws", "@git", "@fetch/fetch_url"],
  "toolAliases": {
    "@git/git_status": "status",
    "@fetch/fetch_url": "get"
  },
  "allowedTools": ["read", "@git/git_status"],
  "toolsSettings": {
    "write": { "allowedPaths": ["src/**", "tests/**", "Cargo.toml"] },
    "aws": { "allowedServices": ["s3", "lambda"], "autoAllowReadonly": true }
  },
  "resources": [
    "file://README.md",
    "file://docs/**/*.md"
  ],
  "hooks": {
    "agentSpawn": [{ "command": "git status" }],
    "postToolUse": [{ "matcher": "fs_write", "command": "cargo fmt --all" }]
  },
  "model": "claude-sonnet-4",
  "keyboardShortcut": "ctrl+shift+r",
  "welcomeMessage": "Ready to help with AWS and Rust development!"
}
```

---

## Best Practices

### Local vs. global agents

| Local agents | Global agents |
|-----------|-----------|
| Project-specific configuration | Agents shared across projects |
| Need access to project files/tools | Personal productivity tools |
| Shared with the team via version control | Frequently used tools and workflows |

### Security

- Review `allowedTools` carefully; prefer exact matches over wildcards
- Configure `toolsSettings` for sensitive operations (e.g., restrict `allowedPaths`)
- With write tools (`write`, `shell`) enabled, the agent has the same filesystem permissions as the current user, including read/write access to everything under `~/.kiro`
- Use `preToolUse` hooks to audit or block sensitive operations
- Test agents thoroughly in a safe environment before sharing

### Organization

- Use descriptive names
- State the purpose in `description`
- Maintain prompt files separately
- Version-control local agents with the project

---

## Related Documentation

- [Creating custom agents](https://kiro.dev/docs/cli/custom-agents/creating/)
- [Built-in tools reference](https://kiro.dev/docs/cli/reference/built-in-tools/)
- [Hooks documentation](https://kiro.dev/docs/cli/hooks)
- [Agent examples](https://kiro.dev/docs/cli/custom-agents/examples/)
@@ -0,0 +1,275 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://agentskills.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Specification

> The complete format specification for Agent Skills.

## Directory structure

A skill is a directory containing, at minimum, a `SKILL.md` file:

```
skill-name/
├── SKILL.md          # Required: metadata + instructions
├── scripts/          # Optional: executable code
├── references/       # Optional: documentation
├── assets/           # Optional: templates, resources
└── ...               # Any additional files or directories
```

## `SKILL.md` format

The `SKILL.md` file must contain YAML frontmatter followed by Markdown content.

### Frontmatter

| Field | Required | Constraints |
| --- | --- | --- |
| `name` | Yes | Max 64 characters. Lowercase letters, numbers, and hyphens only. Must not start or end with a hyphen. |
| `description` | Yes | Max 1024 characters. Non-empty. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 characters. Indicates environment requirements (intended product, system packages, network access, etc.). |
| `metadata` | No | Arbitrary key-value mapping for additional metadata. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools the skill may use. (Experimental) |

<Card>
**Minimal example:**

```markdown SKILL.md
---
name: skill-name
description: A description of what this skill does and when to use it.
---
```

**Example with optional fields:**

```markdown SKILL.md
---
name: pdf-processing
description: Extract PDF text, fill forms, merge files. Use when handling PDFs.
license: Apache-2.0
metadata:
  author: example-org
  version: "1.0"
---
```
</Card>

#### `name` field

The required `name` field:

* Must be 1-64 characters
* May only contain unicode lowercase alphanumeric characters (`a-z`) and hyphens (`-`)
* Must not start or end with a hyphen (`-`)
* Must not contain consecutive hyphens (`--`)
* Must match the parent directory name

<Card>
**Valid examples:**

```yaml
name: pdf-processing
```

```yaml
name: data-analysis
```

```yaml
name: code-review
```

**Invalid examples:**

```yaml
name: PDF-Processing  # uppercase not allowed
```

```yaml
name: -pdf  # cannot start with hyphen
```

```yaml
name: pdf--processing  # consecutive hyphens not allowed
```
</Card>
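
The character rules above collapse into a single pattern. A quick validator sketch (illustrative, not part of the spec; it checks the ASCII rules but not the parent-directory match):

```python
import re

# lowercase alphanumeric runs joined by single hyphens:
# no leading/trailing hyphen, no consecutive hyphens
_NAME_RE = re.compile(r"[a-z0-9]+(?:-[a-z0-9]+)*")

def valid_skill_name(name: str) -> bool:
    """Validate a skill name against the frontmatter rules above."""
    return len(name) <= 64 and bool(_NAME_RE.fullmatch(name))
```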

#### `description` field

The required `description` field:

* Must be 1-1024 characters
* Should describe both what the skill does and when to use it
* Should include specific keywords that help agents identify relevant tasks

<Card>
**Good example:**

```yaml theme={null}
description: Extracts text and tables from PDF files, fills PDF forms, and merges multiple PDFs. Use when working with PDF documents or when the user mentions PDFs, forms, or document extraction.
```

**Poor example:**

```yaml theme={null}
description: Helps with PDFs.
```
</Card>

#### `license` field

The optional `license` field:

* Specifies the license applied to the skill
* We recommend keeping it short (either the name of a license or the name of a bundled license file)

<Card>
**Example:**

```yaml theme={null}
license: Proprietary. LICENSE.txt has complete terms
```
</Card>

#### `compatibility` field

The optional `compatibility` field:

* Must be 1-500 characters if provided
* Should only be included if your skill has specific environment requirements
* Can indicate intended product, required system packages, network access needs, etc.

<Card>
**Examples:**

```yaml theme={null}
compatibility: Designed for Claude Code (or similar products)
```

```yaml theme={null}
compatibility: Requires git, docker, jq, and access to the internet
```

```yaml theme={null}
compatibility: Requires Python 3.14+ and uv
```
</Card>

<Note>
Most skills do not need the `compatibility` field.
</Note>

#### `metadata` field

The optional `metadata` field:

* A map from string keys to string values
* Clients can use this to store additional properties not defined by the Agent Skills spec
* We recommend making your key names reasonably unique to avoid accidental conflicts

<Card>
**Example:**

```yaml theme={null}
metadata:
  author: example-org
  version: "1.0"
```
</Card>

#### `allowed-tools` field

The optional `allowed-tools` field:

* A space-delimited list of tools that are pre-approved to run
* Experimental. Support for this field may vary between agent implementations

<Card>
**Example:**

```yaml theme={null}
allowed-tools: Bash(git:*) Bash(jq:*) Read
```
</Card>
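
Because the field is a plain space-delimited string, splitting on whitespace recovers the individual tool patterns. A sketch of how an agent implementation might parse it (the parsing approach is an assumption; the field only defines the delimiter):

```python
def parse_allowed_tools(value: str) -> list[str]:
    # Tool patterns such as "Bash(git:*)" contain no spaces,
    # so a plain whitespace split recovers each entry.
    return value.split()

print(parse_allowed_tools("Bash(git:*) Bash(jq:*) Read"))
```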

### Body content

The Markdown body after the frontmatter contains the skill instructions. There are no format restrictions. Write whatever helps agents perform the task effectively.

Recommended sections:

* Step-by-step instructions
* Examples of inputs and outputs
* Common edge cases

Note that the agent will load this entire file once it's decided to activate a skill. Consider splitting longer `SKILL.md` content into referenced files.

## Optional directories

### `scripts/`

Contains executable code that agents can run. Scripts should:

* Be self-contained or clearly document dependencies
* Include helpful error messages
* Handle edge cases gracefully

Supported languages depend on the agent implementation. Common options include Python, Bash, and JavaScript.

### `references/`

Contains additional documentation that agents can read when needed:

* `REFERENCE.md` - Detailed technical reference
* `FORMS.md` - Form templates or structured data formats
* Domain-specific files (`finance.md`, `legal.md`, etc.)

Keep individual [reference files](#file-references) focused. Agents load these on demand, so smaller files mean less use of context.

### `assets/`

Contains static resources:

* Templates (document templates, configuration templates)
* Images (diagrams, examples)
* Data files (lookup tables, schemas)

## Progressive disclosure

Skills should be structured for efficient use of context:

1. **Metadata** (\~100 tokens): The `name` and `description` fields are loaded at startup for all skills
2. **Instructions** (\< 5000 tokens recommended): The full `SKILL.md` body is loaded when the skill is activated
3. **Resources** (as needed): Files (e.g. those in `scripts/`, `references/`, or `assets/`) are loaded only when required

Keep your main `SKILL.md` under 500 lines. Move detailed reference material to separate files.
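
The 500-line budget is easy to enforce mechanically. A small sketch (the function name and return convention are illustrative, not part of any tooling):

```python
from pathlib import Path

def check_skill_size(skill_md: Path, max_lines: int = 500) -> bool:
    """Return True when SKILL.md stays within the recommended line budget."""
    count = len(skill_md.read_text().splitlines())
    if count > max_lines:
        print(f"{skill_md.name}: {count} lines (recommended max {max_lines})")
        return False
    return True
```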

## File references

When referencing other files in your skill, use relative paths from the skill root:

```markdown SKILL.md theme={null}
See [the reference guide](references/REFERENCE.md) for details.

Run the extraction script:
scripts/extract.py
```

Keep file references one level deep from `SKILL.md`. Avoid deeply nested reference chains.
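
Broken relative links are a common failure mode once a skill is split across files. A rough sketch that scans `SKILL.md` for Markdown links and reports targets missing from the skill directory (the regex is a simplification and will not catch every link form):

```python
import re
from pathlib import Path

# Markdown links whose target is not a pure #anchor.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#][^)]*)\)")

def check_references(skill_dir: Path) -> list[str]:
    """Return relative link targets in SKILL.md that don't exist on disk."""
    text = (skill_dir / "SKILL.md").read_text()
    missing = []
    for target in LINK_RE.findall(text):
        if target.startswith(("http://", "https://")):
            continue  # external links are not checked
        if not (skill_dir / target).exists():
            missing.append(target)
    return missing
```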

## Validation

Use the [skills-ref](https://github.com/agentskills/agentskills/tree/main/skills-ref) reference library to validate your skills:

```bash theme={null}
skills-ref validate ./my-skill
```

This checks that your `SKILL.md` frontmatter is valid and follows all naming conventions.

Built with [Mintlify](https://mintlify.com).

@@ -0,0 +1,303 @@

> ## Documentation Index
> Fetch the complete documentation index at: https://agentskills.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Evaluating skill output quality

> How to test whether your skill produces good outputs using eval-driven iteration.

You wrote a skill, tried it on a prompt, and it seemed to work. But does it work reliably — across varied prompts, in edge cases, better than no skill at all? Running structured evaluations (evals) answers these questions and gives you a feedback loop for improving the skill systematically.

## Designing test cases

A test case has three parts:

* **Prompt**: a realistic user message — the kind of thing someone would actually type.
* **Expected output**: a human-readable description of what success looks like.
* **Input files** (optional): files the skill needs to work with.

Store test cases in `evals/evals.json` inside your skill directory:

```json evals/evals.json theme={null}
{
  "skill_name": "csv-analyzer",
  "evals": [
    {
      "id": 1,
      "prompt": "I have a CSV of monthly sales data in data/sales_2025.csv. Can you find the top 3 months by revenue and make a bar chart?",
      "expected_output": "A bar chart image showing the top 3 months by revenue, with labeled axes and values.",
      "files": ["evals/files/sales_2025.csv"]
    },
    {
      "id": 2,
      "prompt": "there's a csv in my downloads called customers.csv, some rows have missing emails — can you clean it up and tell me how many were missing?",
      "expected_output": "A cleaned CSV with missing emails handled, plus a count of how many were missing.",
      "files": ["evals/files/customers.csv"]
    }
  ]
}
```

**Tips for writing good test prompts:**

* **Start with 2-3 test cases.** Don't over-invest before you've seen your first round of results. You can expand the set later.
* **Vary the prompts.** Use different phrasings, levels of detail, and formality. Some prompts should be casual ("hey can you clean up this csv"), others precise ("Parse the CSV at data/input.csv, drop rows where column B is null, and write the result to data/output.csv").
* **Cover edge cases.** Include at least one prompt that tests a boundary condition — a malformed input, an unusual request, or a case where the skill's instructions might be ambiguous.
* **Use realistic context.** Real users mention file paths, column names, and personal context. Prompts like "process this data" are too vague to test anything useful.

Don't worry about defining specific pass/fail checks yet — just the prompts and expected outputs. You'll add detailed checks (called assertions) after you see what the first run produces.
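
Before spawning any runs, it's worth sanity-checking the file's shape. A sketch that loads `evals.json` and verifies each case has the three required parts (the key names follow the example above):

```python
import json
from pathlib import Path

REQUIRED_KEYS = {"id", "prompt", "expected_output"}

def load_evals(skill_dir: Path) -> list[dict]:
    """Load evals.json and fail fast if a test case is incomplete."""
    data = json.loads((skill_dir / "evals" / "evals.json").read_text())
    for case in data["evals"]:
        missing = REQUIRED_KEYS - case.keys()
        if missing:
            raise ValueError(f"eval {case.get('id', '?')} is missing {sorted(missing)}")
    return data["evals"]
```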

## Running evals

The core pattern is to run each test case twice: once **with the skill** and once **without it** (or with a previous version). This gives you a baseline to compare against.

### Workspace structure

Organize eval results in a workspace directory alongside your skill directory. Each pass through the full eval loop gets its own `iteration-N/` directory. Within that, each test case gets an eval directory with `with_skill/` and `without_skill/` subdirectories:

```
csv-analyzer/
├── SKILL.md
└── evals/
    └── evals.json
csv-analyzer-workspace/
└── iteration-1/
    ├── eval-top-months-chart/
    │   ├── with_skill/
    │   │   ├── outputs/       # Files produced by the run
    │   │   ├── timing.json    # Tokens and duration
    │   │   └── grading.json   # Assertion results
    │   └── without_skill/
    │       ├── outputs/
    │       ├── timing.json
    │       └── grading.json
    ├── eval-clean-missing-emails/
    │   ├── with_skill/
    │   │   ├── outputs/
    │   │   ├── timing.json
    │   │   └── grading.json
    │   └── without_skill/
    │       ├── outputs/
    │       ├── timing.json
    │       └── grading.json
    └── benchmark.json         # Aggregated statistics
```

The main file you author by hand is `evals/evals.json`. The other JSON files (`grading.json`, `timing.json`, `benchmark.json`) are produced during the eval process — by the agent, by scripts, or by you.
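
Creating this layout by hand gets tedious after a couple of iterations. A sketch that scaffolds the tree for one iteration (directory names mirror the listing above):

```python
from pathlib import Path

def scaffold_iteration(workspace: Path, iteration: int, eval_names: list[str],
                       configs: tuple[str, ...] = ("with_skill", "without_skill")) -> Path:
    """Create iteration-N/<eval>/<config>/outputs/ for every eval and config."""
    root = workspace / f"iteration-{iteration}"
    for name in eval_names:
        for config in configs:
            (root / name / config / "outputs").mkdir(parents=True, exist_ok=True)
    return root
```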

### Spawning runs

Each eval run should start with a clean context — no leftover state from previous runs or from the skill development process. This ensures the agent follows only what the `SKILL.md` tells it. In environments that support subagents (Claude Code, for example), this isolation comes naturally: each child task starts fresh. Without subagents, use a separate session for each run.

For each run, provide:

* The skill path (or no skill for the baseline)
* The test prompt
* Any input files
* The output directory

Here's an example of the instructions you'd give the agent for a single with-skill run:

```
Execute this task:
- Skill path: /path/to/csv-analyzer
- Task: I have a CSV of monthly sales data in data/sales_2025.csv.
  Can you find the top 3 months by revenue and make a bar chart?
- Input files: evals/files/sales_2025.csv
- Save outputs to: csv-analyzer-workspace/iteration-1/eval-top-months-chart/with_skill/outputs/
```

For the baseline, use the same prompt but without the skill path, saving to `without_skill/outputs/`.

When improving an existing skill, use the previous version as your baseline. Snapshot it before editing (`cp -r <skill-path> <workspace>/skill-snapshot/`), point the baseline run at the snapshot, and save to `old_skill/outputs/` instead of `without_skill/`.

### Capturing timing data

Timing data lets you compare how much time and how many tokens the skill costs relative to the baseline — a skill that dramatically improves output quality but triples token usage is a different trade-off than one that's both better and cheaper. When each run completes, record the token count and duration:

```json timing.json theme={null}
{
  "total_tokens": 84852,
  "duration_ms": 23332
}
```

<Tip>
In Claude Code, when a subagent task finishes, the [task completion notification](https://platform.claude.com/docs/en/agent-sdk/typescript#sdk-task-notification-message) includes `total_tokens` and `duration_ms`. Save these values immediately — they aren't persisted anywhere else.
</Tip>
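
Since those two values vanish with the session, writing them out can be a one-line helper. A sketch matching the `timing.json` shape above:

```python
import json
from pathlib import Path

def record_timing(run_dir: Path, total_tokens: int, duration_ms: int) -> None:
    # Persist the notification values immediately; they aren't stored elsewhere.
    (run_dir / "timing.json").write_text(
        json.dumps({"total_tokens": total_tokens, "duration_ms": duration_ms}, indent=2)
    )
```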

## Writing assertions

Assertions are verifiable statements about what the output should contain or achieve. Add them after you see your first round of outputs — you often don't know what "good" looks like until the skill has run.

Good assertions:

* `"The output file is valid JSON"` — programmatically verifiable.
* `"The bar chart has labeled axes"` — specific and observable.
* `"The report includes at least 3 recommendations"` — countable.

Weak assertions:

* `"The output is good"` — too vague to grade.
* `"The output uses exactly the phrase 'Total Revenue: $X'"` — too brittle; correct output with different wording would fail.

Not everything needs an assertion. Some qualities — writing style, visual design, whether the output "feels right" — are hard to decompose into pass/fail checks. These are better caught during [human review](#reviewing-results-with-a-human). Reserve assertions for things that can be checked objectively.

Add assertions to each test case in `evals/evals.json`:

```json evals/evals.json highlight={9-14} theme={null}
{
  "skill_name": "csv-analyzer",
  "evals": [
    {
      "id": 1,
      "prompt": "I have a CSV of monthly sales data in data/sales_2025.csv. Can you find the top 3 months by revenue and make a bar chart?",
      "expected_output": "A bar chart image showing the top 3 months by revenue, with labeled axes and values.",
      "files": ["evals/files/sales_2025.csv"],
      "assertions": [
        "The output includes a bar chart image file",
        "The chart shows exactly 3 months",
        "Both axes are labeled",
        "The chart title or caption mentions revenue"
      ]
    }
  ]
}
```

## Grading outputs

Grading means evaluating each assertion against the actual outputs and recording **PASS** or **FAIL** with specific evidence. The evidence should quote or reference the output, not just state an opinion.

The simplest approach is to give the outputs and assertions to an LLM and ask it to evaluate each one. For assertions that can be checked by code (valid JSON, correct row count, file exists with expected dimensions), use a verification script — scripts are more reliable than LLM judgment for mechanical checks and reusable across iterations.

```json grading.json theme={null}
{
  "assertion_results": [
    {
      "text": "The output includes a bar chart image file",
      "passed": true,
      "evidence": "Found chart.png (45KB) in outputs directory"
    },
    {
      "text": "The chart shows exactly 3 months",
      "passed": true,
      "evidence": "Chart displays bars for March, July, and November"
    },
    {
      "text": "Both axes are labeled",
      "passed": false,
      "evidence": "Y-axis is labeled 'Revenue ($)' but X-axis has no label"
    },
    {
      "text": "The chart title or caption mentions revenue",
      "passed": true,
      "evidence": "Chart title reads 'Top 3 Months by Revenue'"
    }
  ],
  "summary": {
    "passed": 3,
    "failed": 1,
    "total": 4,
    "pass_rate": 0.75
  }
}
```
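
For the mechanical subset of assertions, a verification script can emit grading entries in this shape directly. A sketch in which the checked artifact name is passed in by the caller, so nothing about a specific eval is hard-coded:

```python
from pathlib import Path

def grade_file_exists(outputs_dir: Path, filename: str, assertion_text: str) -> dict:
    """Mechanically grade a 'file exists' assertion with concrete evidence."""
    target = outputs_dir / filename
    exists = target.exists()
    evidence = (f"Found {filename} ({target.stat().st_size} bytes)" if exists
                else f"No {filename} in outputs directory")
    return {"text": assertion_text, "passed": exists, "evidence": evidence}

def summarize(results: list[dict]) -> dict:
    """Build the summary block from a list of assertion results."""
    passed = sum(1 for r in results if r["passed"])
    total = len(results)
    return {"passed": passed, "failed": total - passed,
            "total": total, "pass_rate": passed / total if total else 0.0}
```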

### Grading principles

* **Require concrete evidence for a PASS.** Don't give the benefit of the doubt. If an assertion says "includes a summary" and the output has a section titled "Summary" with one vague sentence, that's a FAIL — the label is there but the substance isn't.
* **Review the assertions themselves, not just the results.** While grading, notice when assertions are too easy (always pass regardless of skill quality), too hard (always fail even when the output is good), or unverifiable (can't be checked from the output alone). Fix these for the next iteration.

<Tip>
For comparing two skill versions, try **blind comparison**: present both outputs to an LLM judge without revealing which came from which version. The judge scores holistic qualities — organization, formatting, usability, polish — on its own rubric, free from bias about which version "should" be better. This complements assertion grading: two outputs might both pass all assertions but differ significantly in overall quality.
</Tip>

## Aggregating results

Once every run in the iteration is graded, compute summary statistics per configuration and save them to `benchmark.json` alongside the eval directories (e.g., `csv-analyzer-workspace/iteration-1/benchmark.json`):

```json benchmark.json theme={null}
{
  "run_summary": {
    "with_skill": {
      "pass_rate": { "mean": 0.83, "stddev": 0.06 },
      "time_seconds": { "mean": 45.0, "stddev": 12.0 },
      "tokens": { "mean": 3800, "stddev": 400 }
    },
    "without_skill": {
      "pass_rate": { "mean": 0.33, "stddev": 0.10 },
      "time_seconds": { "mean": 32.0, "stddev": 8.0 },
      "tokens": { "mean": 2100, "stddev": 300 }
    },
    "delta": {
      "pass_rate": 0.50,
      "time_seconds": 13.0,
      "tokens": 1700
    }
  }
}
```

The `delta` tells you what the skill costs (more time, more tokens) and what it buys (higher pass rate). A skill that adds 13 seconds but improves pass rate by 50 percentage points is probably worth it. A skill that doubles token usage for a 2-point improvement might not be.
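
The per-configuration numbers are plain means and standard deviations over the graded runs. A sketch of the aggregation, with illustrative sample values (not real runs) chosen so the computed deltas land on round numbers:

```python
import statistics

def aggregate(values: list[float]) -> dict:
    # stdev needs at least two samples; report 0.0 for single runs.
    return {"mean": statistics.mean(values),
            "stddev": statistics.stdev(values) if len(values) > 1 else 0.0}

with_skill = {"pass_rate": aggregate([0.75, 0.9]),
              "time_seconds": aggregate([40.0, 50.0]),
              "tokens": aggregate([3500, 4100])}
without_skill = {"pass_rate": aggregate([0.25, 0.4]),
                 "time_seconds": aggregate([30.0, 34.0]),
                 "tokens": aggregate([1900, 2300])}
delta = {k: round(with_skill[k]["mean"] - without_skill[k]["mean"], 4)
         for k in with_skill}
print(delta)
```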

<Note>
Standard deviation (`stddev`) is only meaningful with multiple runs per eval. In early iterations with just 2-3 test cases and single runs, focus on the raw pass counts and the delta — the statistical measures become useful as you expand the test set and run each eval multiple times.
</Note>

## Analyzing patterns

Aggregate statistics can hide important patterns. After computing the benchmarks:

* **Remove or replace assertions that always pass in both configurations.** These don't tell you anything useful — the model handles them fine without the skill. They inflate the with-skill pass rate without reflecting actual skill value.
* **Investigate assertions that always fail in both configurations.** Either the assertion is broken (asking for something the model can't do), the test case is too hard, or the assertion is checking for the wrong thing. Fix these before the next iteration.
* **Study assertions that pass with the skill but fail without.** This is where the skill is clearly adding value. Understand *why* — which instructions or scripts made the difference?
* **Tighten instructions when results are inconsistent across runs.** If the same eval passes sometimes and fails other times (reflected as high `stddev` in the benchmark), the eval may be flaky (sensitive to model randomness), or the skill's instructions may be ambiguous enough that the model interprets them differently each time. Add examples or more specific guidance to reduce ambiguity.
* **Check time and token outliers.** If one eval takes 3x longer than the others, read its execution transcript (the full log of what the model did during the run) to find the bottleneck.

## Reviewing results with a human

Assertion grading and pattern analysis catch a lot, but they only check what you thought to write assertions for. A human reviewer brings a fresh perspective — catching issues you didn't anticipate, noticing when the output is technically correct but misses the point, or spotting problems that are hard to express as pass/fail checks. For each test case, review the actual outputs alongside the grades.

Record specific feedback for each test case and save it in the workspace (e.g., as a `feedback.json` alongside the eval directories):

```json feedback.json theme={null}
{
  "eval-top-months-chart": "The chart is missing axis labels and the months are in alphabetical order instead of chronological.",
  "eval-clean-missing-emails": ""
}
```

"The chart is missing axis labels" is actionable; "looks bad" is not. Empty feedback means the output looked fine — that test case passed your review. During the [iteration step](#iterating-on-the-skill), focus your improvements on the test cases where you had specific complaints.

## Iterating on the skill

After grading and reviewing, you have three sources of signal:

* **Failed assertions** point to specific gaps — a missing step, an unclear instruction, or a case the skill doesn't handle.
* **Human feedback** points to broader quality issues — the approach was wrong, the output was poorly structured, or the skill produced a technically correct but unhelpful result.
* **Execution transcripts** reveal *why* things went wrong. If the agent ignored an instruction, the instruction may be ambiguous. If the agent spent time on unproductive steps, those instructions may need to be simplified or removed.

The most effective way to turn these signals into skill improvements is to give all three — along with the current `SKILL.md` — to an LLM and ask it to propose changes. The LLM can synthesize patterns across failed assertions, reviewer complaints, and transcript behavior that would be tedious to connect manually. When prompting the LLM, include these guidelines:

* **Generalize from feedback.** The skill will be used across many different prompts, not just the test cases. Fixes should address underlying issues broadly rather than adding narrow patches for specific examples.
* **Keep the skill lean.** Fewer, better instructions often outperform exhaustive rules. If transcripts show wasted work (unnecessary validation, unneeded intermediate outputs), remove those instructions. If pass rates plateau despite adding more rules, the skill may be over-constrained — try removing instructions and see if results hold or improve.
* **Explain the why.** Reasoning-based instructions ("Do X because Y tends to cause Z") work better than rigid directives ("ALWAYS do X, NEVER do Y"). Models follow instructions more reliably when they understand the purpose.
* **Bundle repeated work.** If every test run independently wrote a similar helper script (a chart builder, a data parser), that's a signal to bundle the script into the skill's `scripts/` directory. See [Using scripts](/skill-creation/using-scripts) for how to do this.

### The loop

1. Give the eval signals and current `SKILL.md` to an LLM and ask it to propose improvements.
2. Review and apply the changes.
3. Rerun all test cases in a new `iteration-<N+1>/` directory.
4. Grade and aggregate the new results.
5. Review with a human. Repeat.

Stop when you're satisfied with the results, feedback is consistently empty, or you're no longer seeing meaningful improvement between iterations.

<Tip>
The [`skill-creator`](https://github.com/anthropics/skills/tree/main/skills/skill-creator) Skill automates much of this workflow — running evals, grading assertions, aggregating benchmarks, and presenting results for human review.
</Tip>
@@ -0,0 +1,98 @@

#!/usr/bin/env bash
# Install agents (and their skills) from common-skills into a target project.
#
# Usage:
#   bash scripts/install-agents.sh                       # install all agents to ./.kiro/agents
#   bash scripts/install-agents.sh sdlc commit-message   # install specific agents
#   TARGET_DIR=/path/to/project bash scripts/install-agents.sh
#   bash scripts/install-agents.sh --target /path/to/dir [agents...]
#
# Environment:
#   COMMON_SKILLS_DIR  source repo path (default: ~/common-skills)
#   TARGET_DIR         install destination (default: ./.kiro/agents)
#
# Note: uses `grep -P` (PCRE), which requires GNU grep.

set -euo pipefail

COMMON_SKILLS_DIR="${COMMON_SKILLS_DIR:-$HOME/common-skills}"
TARGET_DIR="${TARGET_DIR:-.kiro/agents}"

# Parse --target flag
if [[ "${1:-}" == "--target" ]]; then
  TARGET_DIR="$2"
  shift 2
fi

AGENTS_SRC="$COMMON_SKILLS_DIR/.kiro/agents"
SKILLS_SRC="$COMMON_SKILLS_DIR/skills"
SKILLS_DST="$(dirname "$TARGET_DIR")/skills"

if [[ ! -d "$AGENTS_SRC" ]]; then
  echo "❌ common-skills not found at $COMMON_SKILLS_DIR"
  echo "   Clone it first: git clone <repo> ~/common-skills"
  exit 1
fi

# Pull latest
git -C "$COMMON_SKILLS_DIR" pull --ff-only 2>/dev/null || true

mkdir -p "$TARGET_DIR"
mkdir -p "$SKILLS_DST"

# Determine which agents to install
if [[ $# -gt 0 ]]; then
  agents=("$@")
else
  # Glob instead of parsing `ls` output: safe with unusual filenames.
  agents=()
  for f in "$AGENTS_SRC"/*.json; do
    [[ -f "$f" ]] || continue
    agents+=("$(basename "$f" .json)")
  done
fi

installed_skills=()

for agent in "${agents[@]}"; do
  src="$AGENTS_SRC/${agent}.json"
  if [[ ! -f "$src" ]]; then
    echo "⚠️  Agent not found: $agent"
    continue
  fi

  cp "$src" "$TARGET_DIR/${agent}.json"
  echo "✅ Agent installed: $agent"

  # Copy prompt file if referenced via file://prompts/
  prompt_file=$(grep -oP '(?<=file://prompts/)[^"]+' "$src" || true)
  if [[ -n "$prompt_file" && -f "$AGENTS_SRC/prompts/$prompt_file" ]]; then
    mkdir -p "$TARGET_DIR/prompts"
    cp "$AGENTS_SRC/prompts/$prompt_file" "$TARGET_DIR/prompts/$prompt_file"
    echo "   ↳ Prompt copied: prompts/$prompt_file"
  fi

  # Extract skill names from resources: supports file://.kiro/skills/<name>/... and skill://.kiro/skills/<name>/...
  skill_refs=$(grep -oP '(?:file|skill)://\.kiro/skills/\K[^/"]+' "$src" | sort -u || true)
  for skill in $skill_refs; do
    if [[ "$skill" == "**" ]]; then
      # wildcard — install all skills
      for skill_dir in "$SKILLS_SRC"/*/; do
        skill_name=$(basename "$skill_dir")
        rm -rf "$SKILLS_DST/$skill_name"
        cp -r "$skill_dir" "$SKILLS_DST/$skill_name"
        installed_skills+=("$skill_name")
      done
    elif [[ -d "$SKILLS_SRC/$skill" ]]; then
      rm -rf "$SKILLS_DST/$skill"
      cp -r "$SKILLS_SRC/$skill" "$SKILLS_DST/$skill"
      installed_skills+=("$skill")
    fi
  done
done

# Deduplicate and report skills
if [[ ${#installed_skills[@]} -gt 0 ]]; then
  unique_skills=($(printf '%s\n' "${installed_skills[@]}" | sort -u))
  for s in "${unique_skills[@]}"; do
    echo "   ↳ Skill synced: $s"
  done
fi

echo ""
echo "Done."
echo "   Agents → $TARGET_DIR/"
echo "   Skills → $SKILLS_DST/"
@@ -0,0 +1,393 @@

#!/usr/bin/env python3
"""
orchestrate.py — Agent & Skill Orchestrator for common-skills

Dynamically discovers agents (.kiro/agents/*.json) and skills (skills/*/SKILL.md),
then routes user requests through the appropriate agent+skill pipeline with full
observability: structured logging, timing, token estimation, and trace output.

Usage:
    python scripts/orchestrate.py "review my code for bugs"
    python scripts/orchestrate.py "write a commit message" --agent commit-message
    python scripts/orchestrate.py --list
    python scripts/orchestrate.py --list-skills
    python scripts/orchestrate.py "..." --dry-run
    python scripts/orchestrate.py "..." --trace
    python scripts/orchestrate.py "..." --log-file run.jsonl
"""

import argparse
import json
import re
import subprocess
import sys
import time
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

# ─── Paths ────────────────────────────────────────────────────────────────────

REPO_ROOT = Path(__file__).parent.parent
AGENTS_DIR = REPO_ROOT / ".kiro" / "agents"
SKILLS_DIR = REPO_ROOT / "skills"

# ─── Data models ──────────────────────────────────────────────────────────────

@dataclass
class SkillMeta:
    name: str
    description: str
    path: Path
    frontmatter: dict = field(default_factory=dict)

    def summary(self) -> str:
        return f"[skill:{self.name}] {self.description}"


@dataclass
class AgentMeta:
    name: str
    description: str
    path: Path
    prompt: str
    tools: list[str]
    resources: list[str]
    raw: dict = field(default_factory=dict)

    def summary(self) -> str:
        return f"[agent:{self.name}] {self.description}"


@dataclass
class TraceEvent:
    """One structured log entry in the execution trace."""
    trace_id: str
    timestamp: str
    event: str  # e.g. "route", "invoke", "grade", "error"
    agent: Optional[str] = None
    skill: Optional[str] = None
    detail: Optional[dict] = None

    def to_json(self) -> str:
        return json.dumps(asdict(self), ensure_ascii=False)


# ─── Registry ─────────────────────────────────────────────────────────────────

class Registry:
    """Dynamically discovers all agents and skills from disk."""

    def __init__(self):
        self.agents: dict[str, AgentMeta] = {}
        self.skills: dict[str, SkillMeta] = {}
        self._load_agents()
        self._load_skills()

    def _load_agents(self):
        if not AGENTS_DIR.exists():
            return
        for f in sorted(AGENTS_DIR.glob("*.json")):
            try:
                raw = json.loads(f.read_text())
                self.agents[raw["name"]] = AgentMeta(
                    name=raw["name"],
                    description=raw.get("description", ""),
                    path=f,
                    prompt=raw.get("prompt", ""),
                    tools=raw.get("tools", raw.get("allowedTools", [])),
                    resources=raw.get("resources", []),
                    raw=raw,
                )
            except Exception as e:
                print(f"⚠️  Could not load agent {f.name}: {e}", file=sys.stderr)

    def _load_skills(self):
        if not SKILLS_DIR.exists():
            return
        for skill_dir in sorted(SKILLS_DIR.iterdir()):
            skill_md = skill_dir / "SKILL.md"
            if not skill_md.exists():
                continue
            try:
                content = skill_md.read_text()
                fm = _parse_frontmatter(content)
                name = fm.get("name", skill_dir.name)
                self.skills[skill_dir.name] = SkillMeta(
                    name=name,
                    description=fm.get("description", ""),
                    path=skill_md,
                    frontmatter=fm,
                )
            except Exception as e:
                print(f"⚠️  Could not load skill {skill_dir.name}: {e}", file=sys.stderr)

    def list_agents(self):
        for a in self.agents.values():
            print(f"  {a.summary()}")
            print(f"    tools     : {', '.join(a.tools)}")
            print(f"    resources : {', '.join(a.resources)}")

    def list_skills(self):
        for s in self.skills.values():
            print(f"  {s.summary()}")
            print(f"    path : {s.path.relative_to(REPO_ROOT)}")


def _parse_frontmatter(content: str) -> dict:
    """Extract YAML-like frontmatter between --- delimiters."""
    m = re.match(r"^---\s*\n(.*?)\n---", content, re.DOTALL)
    if not m:
        return {}
    result = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            k, _, v = line.partition(":")
            result[k.strip()] = v.strip()
    return result


# ─── Router ───────────────────────────────────────────────────────────────────

class Router:
    """
    Routes a user prompt to the best agent.
|
||||||
|
|
||||||
|
Strategy (in order):
|
||||||
|
1. Explicit --agent flag → use that agent directly
|
||||||
|
2. Keyword match against agent routing rules embedded in prompt field
|
||||||
|
3. Fallback to 'g-assistent' (general assistant) if present
|
||||||
|
4. Fallback to first available agent
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Simple keyword → skill hints extracted from g-assistent routing rules
|
||||||
|
KEYWORD_HINTS: list[tuple[list[str], str]] = [
|
||||||
|
(["commit", "git commit", "提交"], "commit-message"),
|
||||||
|
(["review", "bug", "anti-pattern", "代码审查"], "codereview"),
|
||||||
|
(["python", "py "], "python"),
|
||||||
|
(["typescript", "ts ", ".ts"], "typescript"),
|
||||||
|
(["test", "测试", "unit test"], "testing"),
|
||||||
|
(["doc", "search doc", "文档"], "docs-rag"),
|
||||||
|
(["deep dive", "分析", "解释", "how does", "understand"], "deep-dive"),
|
||||||
|
(["build", "design", "sdlc", "需求", "系统设计", "任务分解"], "sdlc"),
|
||||||
|
]
|
||||||
|
|
||||||
|
def __init__(self, registry: Registry):
|
||||||
|
self.registry = registry
|
||||||
|
|
||||||
|
def route(self, prompt: str, agent_override: Optional[str] = None) -> tuple[AgentMeta, Optional[SkillMeta]]:
|
||||||
|
"""Return (agent, skill_hint) for the given prompt."""
|
||||||
|
# 1. Explicit override
|
||||||
|
if agent_override:
|
||||||
|
agent = self.registry.agents.get(agent_override)
|
||||||
|
if not agent:
|
||||||
|
raise ValueError(f"Agent '{agent_override}' not found. Available: {list(self.registry.agents)}")
|
||||||
|
skill = self._skill_for_agent(agent)
|
||||||
|
return agent, skill
|
||||||
|
|
||||||
|
# 2. Keyword routing
|
||||||
|
prompt_lower = prompt.lower()
|
||||||
|
for keywords, skill_name in self.KEYWORD_HINTS:
|
||||||
|
if any(kw in prompt_lower for kw in keywords):
|
||||||
|
# Find an agent that references this skill
|
||||||
|
agent = self._agent_for_skill(skill_name) or self._default_agent()
|
||||||
|
skill = self.registry.skills.get(skill_name)
|
||||||
|
return agent, skill
|
||||||
|
|
||||||
|
# 3. Default agent
|
||||||
|
return self._default_agent(), None
|
||||||
|
|
||||||
|
def _agent_for_skill(self, skill_name: str) -> Optional[AgentMeta]:
|
||||||
|
for agent in self.registry.agents.values():
|
||||||
|
if any(skill_name in r for r in agent.resources):
|
||||||
|
return agent
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _skill_for_agent(self, agent: AgentMeta) -> Optional[SkillMeta]:
|
||||||
|
for resource in agent.resources:
|
||||||
|
# e.g. "file://.kiro/skills/commit-message/SKILL.md"
|
||||||
|
m = re.search(r"skills/([^/\"*]+)/", resource)
|
||||||
|
if m:
|
||||||
|
skill_name = m.group(1)
|
||||||
|
if skill_name in self.registry.skills:
|
||||||
|
return self.registry.skills[skill_name]
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _default_agent(self) -> AgentMeta:
|
||||||
|
for name in ("g-assistent", "main"):
|
||||||
|
if name in self.registry.agents:
|
||||||
|
return self.registry.agents[name]
|
||||||
|
if self.registry.agents:
|
||||||
|
return next(iter(self.registry.agents.values()))
|
||||||
|
raise RuntimeError("No agents found in .kiro/agents/")
|
||||||
|
|
||||||
|
|
||||||
|
# ─── Executor ─────────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
class Executor:
|
||||||
|
"""Invokes kiro-cli and captures structured output."""
|
||||||
|
|
||||||
|
def __init__(self, trace_id: str, trace: bool = False, log_file: Optional[Path] = None):
|
||||||
|
self.trace_id = trace_id
|
||||||
|
self.trace = trace
|
||||||
|
self.log_file = log_file
|
||||||
|
self._events: list[TraceEvent] = []
|
||||||
|
|
||||||
|
def emit(self, event: str, agent: str = None, skill: str = None, detail: dict = None):
|
||||||
|
e = TraceEvent(
|
||||||
|
trace_id=self.trace_id,
|
||||||
|
timestamp=datetime.now(timezone.utc).isoformat(),
|
||||||
|
event=event,
|
||||||
|
agent=agent,
|
||||||
|
skill=skill,
|
||||||
|
detail=detail or {},
|
||||||
|
)
|
||||||
|
self._events.append(e)
|
||||||
|
if self.trace:
|
||||||
|
print(f" TRACE {e.to_json()}", file=sys.stderr)
|
||||||
|
if self.log_file:
|
||||||
|
with open(self.log_file, "a") as f:
|
||||||
|
f.write(e.to_json() + "\n")
|
||||||
|
|
||||||
|
def run(self, prompt: str, agent: AgentMeta, skill: Optional[SkillMeta], dry_run: bool = False) -> dict:
|
||||||
|
self.emit("route", agent=agent.name, skill=skill.name if skill else None, detail={
|
||||||
|
"prompt_preview": prompt[:120],
|
||||||
|
"agent_tools": agent.tools,
|
||||||
|
"skill_path": str(skill.path.relative_to(REPO_ROOT)) if skill else None,
|
||||||
|
})
|
||||||
|
|
||||||
|
if dry_run:
|
||||||
|
self.emit("dry_run", agent=agent.name, skill=skill.name if skill else None)
|
||||||
|
return {
|
||||||
|
"dry_run": True, "trace_id": self.trace_id,
|
||||||
|
"agent": agent.name, "skill": skill.name if skill else None,
|
||||||
|
"elapsed_s": 0, "token_estimate": 0, "exit_code": None,
|
||||||
|
}
|
||||||
|
|
||||||
|
cmd = ["kiro-cli", "chat", "--agent", agent.name, "--no-interactive", prompt]
|
||||||
|
self.emit("invoke", agent=agent.name, skill=skill.name if skill else None, detail={"cmd": cmd})
|
||||||
|
|
||||||
|
start = time.perf_counter()
|
||||||
|
try:
|
||||||
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
self.emit("error", agent=agent.name, detail={"reason": "timeout"})
|
||||||
|
raise
|
||||||
|
elapsed = round(time.perf_counter() - start, 3)
|
||||||
|
|
||||||
|
response = _strip_ansi(result.stdout).strip()
|
||||||
|
stderr = _strip_ansi(result.stderr).strip()
|
||||||
|
|
||||||
|
# Estimate tokens (rough: 1 token ≈ 4 chars)
|
||||||
|
token_est = len(prompt) // 4 + len(response) // 4
|
||||||
|
|
||||||
|
self.emit("response", agent=agent.name, skill=skill.name if skill else None, detail={
|
||||||
|
"elapsed_s": elapsed,
|
||||||
|
"exit_code": result.returncode,
|
||||||
|
"response_chars": len(response),
|
||||||
|
"token_estimate": token_est,
|
||||||
|
"has_stderr": bool(stderr),
|
||||||
|
})
|
||||||
|
|
||||||
|
return {
|
||||||
|
"trace_id": self.trace_id,
|
||||||
|
"agent": agent.name,
|
||||||
|
"skill": skill.name if skill else None,
|
||||||
|
"prompt": prompt,
|
||||||
|
"response": response,
|
||||||
|
"stderr": stderr,
|
||||||
|
"elapsed_s": elapsed,
|
||||||
|
"token_estimate": token_est,
|
||||||
|
"exit_code": result.returncode,
|
||||||
|
}
|
||||||
|
|
||||||
|
def print_summary(self, result: dict):
|
||||||
|
"""Human-readable execution summary."""
|
||||||
|
print("\n" + "─" * 60)
|
||||||
|
print(f" trace_id : {result.get('trace_id')}")
|
||||||
|
print(f" agent : {result.get('agent')}")
|
||||||
|
print(f" skill : {result.get('skill') or '(none)'}")
|
||||||
|
print(f" elapsed : {result.get('elapsed_s')}s")
|
||||||
|
print(f" tokens~ : {result.get('token_estimate')}")
|
||||||
|
print(f" exit : {result.get('exit_code')}")
|
||||||
|
print("─" * 60)
|
||||||
|
if result.get("dry_run"):
|
||||||
|
print(" [dry-run] No invocation made.")
|
||||||
|
return
|
||||||
|
print("\n" + result.get("response", ""))
|
||||||
|
if result.get("stderr"):
|
||||||
|
print(f"\n[stderr]\n{result['stderr']}", file=sys.stderr)
|
||||||
|
|
||||||
|
def print_trace(self):
|
||||||
|
"""Print all trace events as a timeline."""
|
||||||
|
print("\n── Execution Trace ──────────────────────────────────────")
|
||||||
|
for e in self._events:
|
||||||
|
ts = e.timestamp[11:23] # HH:MM:SS.mmm
|
||||||
|
detail_str = json.dumps(e.detail, ensure_ascii=False) if e.detail else ""
|
||||||
|
print(f" {ts} [{e.event:<10}] agent={e.agent or '-':20} skill={e.skill or '-':20} {detail_str}")
|
||||||
|
|
||||||
|
|
||||||
|
def _strip_ansi(text: str) -> str:
|
||||||
|
return re.sub(r"\x1b\[[0-9;]*[A-Za-z]", "", text)
|
||||||
|
|
||||||
|
|
||||||
|
# ─── CLI ──────────────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
def main():
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="Orchestrate kiro agents and skills",
|
||||||
|
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||||
|
epilog=__doc__,
|
||||||
|
)
|
||||||
|
parser.add_argument("prompt", nargs="?", help="User prompt to route and execute")
|
||||||
|
parser.add_argument("--agent", help="Force a specific agent by name")
|
||||||
|
parser.add_argument("--dry-run", action="store_true", help="Show routing decision without invoking")
|
||||||
|
parser.add_argument("--trace", action="store_true", help="Print trace events to stderr in real time")
|
||||||
|
parser.add_argument("--log-file", type=Path, help="Append JSONL trace events to this file")
|
||||||
|
parser.add_argument("--list", action="store_true", help="List all discovered agents")
|
||||||
|
parser.add_argument("--list-skills", action="store_true", help="List all discovered skills")
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
registry = Registry()
|
||||||
|
|
||||||
|
if args.list:
|
||||||
|
print(f"\nAgents ({len(registry.agents)}) — from {AGENTS_DIR.relative_to(REPO_ROOT)}")
|
||||||
|
registry.list_agents()
|
||||||
|
return
|
||||||
|
|
||||||
|
if args.list_skills:
|
||||||
|
print(f"\nSkills ({len(registry.skills)}) — from {SKILLS_DIR.relative_to(REPO_ROOT)}")
|
||||||
|
registry.list_skills()
|
||||||
|
return
|
||||||
|
|
||||||
|
if not args.prompt:
|
||||||
|
parser.print_help()
|
||||||
|
return
|
||||||
|
|
||||||
|
trace_id = uuid.uuid4().hex[:12]
|
||||||
|
router = Router(registry)
|
||||||
|
executor = Executor(trace_id, trace=args.trace, log_file=args.log_file)
|
||||||
|
|
||||||
|
try:
|
||||||
|
agent, skill = router.route(args.prompt, agent_override=args.agent)
|
||||||
|
except (ValueError, RuntimeError) as e:
|
||||||
|
print(f"❌ Routing error: {e}", file=sys.stderr)
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
# Show routing decision
|
||||||
|
print(f"\n→ agent : {agent.name}")
|
||||||
|
print(f"→ skill : {skill.name if skill else '(none — general assistant)'}")
|
||||||
|
if skill:
|
||||||
|
print(f"→ skill description: {skill.description}")
|
||||||
|
|
||||||
|
result = executor.run(args.prompt, agent, skill, dry_run=args.dry_run)
|
||||||
|
executor.print_summary(result)
|
||||||
|
|
||||||
|
if args.trace or args.dry_run:
|
||||||
|
executor.print_trace()
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# puml2svg.sh — Convert all .puml files under skills/ to SVG
# Usage:
#   bash scripts/puml2svg.sh                          # convert all
#   bash scripts/puml2svg.sh commit-message deep-dive # convert specific skills

set -uo pipefail

PLANTUML_JAR="${PLANTUML_JAR:-/home/xrv/.vscode-server/extensions/jebbs.plantuml-2.18.1/plantuml.jar}"
SKILLS_DIR="$(cd "$(dirname "$0")/../skills" && pwd)"

if [[ ! -f "$PLANTUML_JAR" ]]; then
  echo "ERROR: plantuml.jar not found at $PLANTUML_JAR"
  echo "Set PLANTUML_JAR=/path/to/plantuml.jar and retry."
  exit 1
fi

# Build list of target skill dirs
if [[ $# -gt 0 ]]; then
  targets=("$@")
else
  targets=()
  for d in "$SKILLS_DIR"/*/; do
    targets+=("$(basename "$d")")
  done
fi

converted=0
for skill in "${targets[@]}"; do
  assets_dir="$SKILLS_DIR/$skill/assets"
  mapfile -t puml_files < <(find "$assets_dir" -name "*.puml" 2>/dev/null)
  [[ ${#puml_files[@]} -eq 0 ]] && continue  # guard: empty array expansion errors under set -u on older bash
  for puml in "${puml_files[@]}"; do
    echo "  → $puml"
    java -jar "$PLANTUML_JAR" -tsvg "$puml" 2>&1
    ((converted++)) || true
  done
done

echo "Done. $converted file(s) converted."
@@ -24,7 +24,7 @@ def run_prompt(prompt: str, with_skill: bool) -> tuple[str, float]:
     agent = "main" if with_skill else "default"
     start = time.time()
     result = subprocess.run(
-        ["kiro-cli", "chat", "--agent", agent, "--no-interactive", "--message", prompt],
+        ["kiro-cli", "chat", "--agent", agent, "--no-interactive", prompt],
         capture_output=True, text=True, timeout=90,
     )
     elapsed = round(time.time() - start, 2)
@@ -1,44 +0,0 @@
---
name: codereview-skill
description: Code review best practices and checklist. Use when reviewing PRs, analyzing code quality, or checking for bugs and anti-patterns.
---

# Code Review Skill

## Review Checklist

When reviewing code, check the following:

### Correctness
- Logic is correct and handles edge cases
- No off-by-one errors in loops
- Null/None checks where needed

### Readability
- Variable and function names are descriptive
- Functions do one thing (single responsibility)
- No magic numbers — use named constants

### Security
- No hardcoded secrets or credentials
- User inputs are validated/sanitized
- No SQL injection or command injection risks

## Example: Bad vs Good

```python
# Bad
def f(x):
    return x * 86400  # magic number

# Good
SECONDS_PER_DAY = 86400

def to_seconds(days: int) -> int:
    return days * SECONDS_PER_DAY
```

## Common Anti-patterns to Flag
- Functions longer than 40 lines → suggest splitting
- Deeply nested conditionals (>3 levels) → suggest early return
- Duplicate code blocks → suggest extracting to function
@@ -1,20 +0,0 @@
{
  "skill_name": "codereview",
  "evals": [
    {
      "id": 1,
      "prompt": "Review this Python function for issues:\ndef calc(x): return x*86400",
      "expected_output": "Identifies the magic number 86400 and suggests extracting it as a named constant like SECONDS_PER_DAY."
    },
    {
      "id": 2,
      "prompt": "Is this code okay?\ndef get_user(db, id):\n return db.execute('SELECT * FROM users WHERE id=' + id)",
      "expected_output": "Flags SQL injection vulnerability and recommends parameterized queries."
    },
    {
      "id": 3,
      "prompt": "Review this function:\ndef process(a,b,c,d,e,f,g): return a+b+c+d+e+f+g",
      "expected_output": "Flags too many parameters and suggests refactoring to use a data structure or fewer arguments."
    }
  ]
}
@@ -0,0 +1,65 @@
# commit-message

Generates professional git commit messages following the [Conventional Commits](references/conventional-commits.md) standard.

## Architecture

![Architecture](assets/commit-message-architecture.svg)

## Workflow

![Workflow](assets/commit-message-workflow.svg)

## When to Use

- You're ready to commit and want a well-structured message
- You want a suggestion before running `git commit`
- Trigger phrases: "commit these changes", "give me a commit message", "wrap up my work"

## How It Works

1. Runs `git status` to find staged files
2. Runs `git diff --cached` to analyze the changes
3. Drafts a Conventional Commits message (type, scope, description, body, breaking changes)
4. Presents the message and asks for confirmation or adjustments
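These steps can be walked through by hand in a throwaway repository. A minimal sketch (the repo, file, and commit message below are purely illustrative, not something the skill produces verbatim):

```shell
# Throwaway repo demonstrating the commands the skill runs.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "hello" > greeting.txt
git add greeting.txt

git status --short         # step 1: list staged files
git diff --cached --stat   # step 2: the staged diff the skill analyzes

# Steps 3-4: the skill drafts a Conventional Commits message and asks
# for confirmation; here we simply commit with one directly.
git commit -q -m "feat(greeting): add greeting file"
git log --oneline -1
```

The key detail is `--cached`: the skill only looks at what is staged, so unstaged edits never influence the suggested message.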
## Commit Format

```
<type>(<scope>): <description>

[optional body]

[optional footer / BREAKING CHANGE]
```

**Types:** `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`

## Examples

| Staged changes | Suggested message |
|---|---|
| New JWT auth in `src/auth.ts` | `feat(auth): add JWT-based session management` |
| Updated API docs | `docs: update API endpoints for user registration` |
| Breaking API change | `feat(api)!: rename /users to /accounts` |

## File Structure

```
skills/commit-message/
├── SKILL.md
├── README.md            # this file
├── assets/
│   ├── workflow.puml
│   └── commit-message-workflow.svg
├── evals/
│   └── evals.json
└── references/
    └── conventional-commits.md
```

## Evals

```bash
python scripts/run_evals.py commit-message
```
@@ -0,0 +1,24 @@
@startuml commit-message-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "commit-message Skill" {
  component "SKILL.md\n(instructions)" as SKILL
  component "references/\nconventional-commits.md" as REF
  component "evals/evals.json" as EVALS
}

package "Git Environment" {
  database "Staged Changes\n(git index)" as INDEX
  component "git commit" as GIT
}

actor Developer

Developer --> SKILL : triggers skill
SKILL --> REF : loads commit format rules
SKILL --> INDEX : reads via git diff --cached
SKILL --> Developer : proposes message
Developer --> GIT : confirms & commits
@enduml

Binary SVG assets added (7.4 KiB, 8.1 KiB).
@@ -0,0 +1,22 @@
@startuml commit-message-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor Developer
participant "commit-message\nSkill" as SKILL
participant "git" as GIT

Developer -> SKILL : "commit these changes"
SKILL -> GIT : git status
GIT --> SKILL : staged files list
SKILL -> GIT : git diff --cached
GIT --> SKILL : diff output
SKILL -> SKILL : draft Conventional Commits message
SKILL --> Developer : proposed message
alt confirmed
  Developer -> GIT : git commit -m "..."
else refine
  Developer -> SKILL : feedback
  SKILL --> Developer : revised message
end
@enduml
@@ -0,0 +1,82 @@
# deep-dive

A Kiro agent skill that analyzes codebases, documentation, APIs, or product specs and produces a structured technical report for developers.

## Architecture

![Architecture](assets/deep-dive-architecture.svg)

## Workflow

![Workflow](assets/deep-dive-workflow.svg)

## What it does

Given any technical material — source code, README, OpenAPI spec, pasted docs, or just a topic name — the agent produces a detailed Markdown report covering:

- System overview and design philosophy
- Architecture diagram (PlantUML)
- Key concepts & terminology glossary
- Data model with ER diagram
- Core flows with sequence diagrams
- API / interface reference
- Configuration & deployment notes
- Extension and integration points
- Observability (logging, metrics, tracing)
- Known limitations and trade-offs
- Actionable further reading recommendations

## When to Use

Activate this skill when a developer says things like:
- "help me understand this codebase"
- "deep dive into X"
- "onboard me to this service"
- "how does X work"
- "analyze this doc / spec"
- "详细分析 X 架构 / 部署流程" (Chinese: "analyze X's architecture / deployment flow in detail")

## Accepted Inputs

| Input type | Example |
|---|---|
| File path(s) | `src/`, `docs/api.yaml`, `main.go` |
| Pasted text | README content, architecture notes |
| Topic name | "Kafka consumer groups", "Redis internals" |
| URL | Link to documentation or spec |

When given a directory, the skill automatically scans `README*`, `docs/`, entry-point files, and package manifests.
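That scan step can be sketched in a few lines. A standalone illustration (the pattern list mirrors the sentence above; the function name and exact globs are our choices, not the skill's actual implementation):

```python
from pathlib import Path

# Glob patterns for the high-signal files the skill inspects first when
# handed a directory: READMEs, docs, and common package manifests.
SCAN_PATTERNS = [
    "README*", "ARCHITECTURE*", "docs/**/*",
    "package.json", "pyproject.toml", "go.mod", "Cargo.toml", "pom.xml",
]

def scan_entry_points(root: str) -> list[Path]:
    """Collect matching files under `root`, deduplicated and sorted."""
    base = Path(root)
    found = {p for pattern in SCAN_PATTERNS
             for p in base.glob(pattern) if p.is_file()}
    return sorted(found)
```

Starting from these files rather than the whole tree keeps the analysis focused on material that usually explains the system's intent and dependencies.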
## Example Prompts

```
Give me a deep dive on the Kafka consumer group rebalancing protocol.
```

```
Analyze this FastAPI service and explain how it works: [paste README]
```

```
Help me understand the worker pool in src/worker/pool.go
```

## File Structure

```
skills/deep-dive/
├── SKILL.md
├── README.md            # this file
├── assets/
│   ├── report-template.md
│   ├── workflow.puml
│   └── deep-dive-workflow.svg
└── evals/
    └── evals.json
```

## Evals

```bash
python scripts/run_evals.py deep-dive
```
@@ -0,0 +1,48 @@
---
name: deep-dive
description: Analyzes codebases, technical documentation, APIs, product specs, or infrastructure topics and produces a structured deep-dive report for developers. Use when a developer needs to quickly understand an unfamiliar system, library, service, codebase, or deployment architecture. Triggers on phrases like "help me understand", "explain this codebase", "analyze this doc", "how does X work", "onboard me to", "deep dive into", "详细分析", "分析架构", "分析部署", "解释一下", "帮我理解".
metadata:
  author: common-skills
  version: "1.0"
---

# Deep Dive

Produce a structured technical report that helps a developer rapidly understand an unfamiliar system. The report should be as detailed as the available material allows — depth is the goal.

## Inputs

Accept any combination of:
- Local file paths (source code, markdown docs, OpenAPI/Swagger specs, config files)
- Pasted text (README, architecture notes, API docs)
- A topic or product name (research from general knowledge)
- A URL (fetch and analyze if possible)

If the user provides a directory, scan key files: `README*`, `ARCHITECTURE*`, `docs/`, entry-point source files, config files (`package.json`, `pyproject.toml`, `go.mod`, `Cargo.toml`, `pom.xml`, etc.).

## Output

Use the report template at [assets/report-template.md](assets/report-template.md) as the structure for every report.

**Output location:**
- If the user specifies a path, write the report there
- Otherwise, write to `./deep-dive-{subject}.md` in the current working directory (replace spaces with hyphens, lowercase)
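The default filename rule can be expressed as a tiny helper. A sketch for illustration (`report_path` is our name; dropping other non-alphanumeric characters is our addition beyond the stated rule):

```python
import re

def report_path(subject: str) -> str:
    # Lowercase, spaces to hyphens, per the rule above; also collapse any
    # other filename-unsafe characters to hyphens (our assumption).
    slug = re.sub(r"[^a-z0-9-]+", "-", subject.lower()).strip("-")
    return f"./deep-dive-{slug}.md"

print(report_path("Kafka Consumer Groups"))  # ./deep-dive-kafka-consumer-groups.md
```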
Fill in all template sections that are relevant to the material. Skip sections where there is genuinely nothing to say. Always include at least: Overview, Architecture, and Further Reading.

Section-specific guidance:
- **Architecture**: label diagram arrows with protocol/data type; group by layer; include external dependencies
- **Data Model**: only include if the system has a meaningful schema or domain model
- **Core Flows**: pick the 2–4 most important user journeys; one sequence diagram each
- **API Reference**: group endpoints by resource; note auth mechanism, pagination, versioning
- **Further Reading**: 5–8 items, ordered most-to-least important, each with a concrete location (file path, URL, or search term)

---

## Quality Standards

- **Depth over breadth**: detailed analysis of the most important parts beats shallow coverage of everything
- **Concrete over abstract**: use actual class names, file paths, endpoint names from the material — not generic placeholders
- **Accurate diagrams only**: if you lack enough information to make a diagram correct, omit it and say what's missing
- **Honest gaps**: if a section cannot be filled, write one sentence explaining what additional material is needed
- **Developer-first language**: assume a competent reader; skip basics, focus on what is non-obvious
@@ -0,0 +1,27 @@
@startuml deep-dive-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "deep-dive Skill" {
  component "SKILL.md\n(instructions + triggers)" as SKILL
  component "assets/\nreport-template.md\n(11 section skeletons)" as TMPL
  component "evals/evals.json" as EVALS
}

package "Input Sources" {
  component "File path(s)\n(src/, docs/, manifests)" as FILES
  component "URL\n(fetched content)" as URL
  component "Pasted text /\nTopic name" as TEXT
}

package "Output" {
  component "deep-dive-{subject}.md\n(structured report)" as REPORT
}

SKILL --> TMPL : loads 11-section template
SKILL --> FILES : reads source files
SKILL --> URL : fetches content
SKILL --> TEXT : analyzes inline
SKILL --> REPORT : writes report
@enduml

Binary SVG assets added (8.3 KiB, 8.7 KiB).
@@ -0,0 +1,143 @@
# Deep Dive Report: {SUBJECT}

> Generated by the `deep-dive` skill.
> Date: {DATE}
> Source: {SOURCE}

---

## 1. Overview

- **What it is**:
- **Problem it solves**:
- **Target users**:
- **Design philosophy**:
- **Tech stack**:

---

## 2. Architecture

{description of high-level structure}

```plantuml
@startuml
' Replace with actual components
package "Layer A" {
  [Component 1]
}
package "Layer B" {
  [Component 2]
  [Component 3]
}
[Component 1] --> [Component 2] : protocol
[Component 2] --> [Component 3] : protocol
@enduml
```

---

## 3. Key Concepts & Terminology

**Term** — definition and why it matters in this system.

---

## 4. Data Model

{description of primary entities and relationships}

```plantuml
@startuml
entity EntityA {
  * id : UUID
  --
  field : Type
}
entity EntityB {
  * id : UUID
  --
  field : Type
}
EntityA ||--o{ EntityB : relationship
@enduml
```

---

## 5. Core Flows & Sequences

### Flow 1: {Name}

{one-paragraph description}

```plantuml
@startuml
actor User
participant "Component A" as A
participant "Component B" as B

User -> A : action
A -> B : call
B --> A : response
A --> User : result
@enduml
```

---

## 6. Public API / Interface Reference

| Method | Path / Signature | Purpose | Key Params | Returns |
|--------|-----------------|---------|------------|---------|
| GET | /resource | description | param | type |

**Auth**: {mechanism}

---

## 7. Configuration & Deployment

**Key config options:**

| Variable | Default | Description |
|----------|---------|-------------|
| `ENV_VAR` | value | what it controls |

**Run locally:**
```bash
# minimal steps
```

**Deployment topology**: {description}

---

## 8. Extension & Integration Points

- {plugin/hook/middleware description}
- {how to add a new feature}
- {external integration patterns}

---

## 9. Observability

- **Logging**:
- **Metrics**:
- **Tracing**:
- **Health check**:

---

## 10. Known Limitations & Trade-offs

- {limitation or trade-off}

---

## 11. Further Reading

1. **[Topic]** — why it matters and where to find it
2. **[Topic]** — why it matters and where to find it
3. **[Topic]** — why it matters and where to find it
@@ -0,0 +1,23 @@
@startuml deep-dive-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor Developer
participant "deep-dive\nSkill" as SKILL
participant "Input Source" as SRC
participant "report-template.md" as TMPL

Developer -> SKILL : "deep dive into X"\n(files / URL / topic / text)
SKILL -> SRC : read files / fetch URL / analyze text
SRC --> SKILL : raw material

SKILL -> TMPL : load 11-section template
SKILL -> SKILL : analyze architecture,\ndata model, flows, APIs

loop each relevant section
  SKILL -> SKILL : generate PlantUML diagram\n(component / ER / sequence)
  SKILL -> SKILL : fill section content
end

SKILL --> Developer : deep-dive-{subject}.md
@enduml
@@ -0,0 +1,60 @@
{
  "skill_name": "deep-dive",
  "evals": [
    {
      "id": 1,
      "prompt": "Help me understand the Redis codebase. I want to know its architecture, how the event loop works, and the key data structures it uses internally.",
      "expected_output": "A structured report covering: Redis overview (in-memory data store, single-threaded event loop), architecture diagram showing the ae event loop, networking layer, command dispatcher, and persistence modules, explanation of core data structures (SDS, dict, ziplist/listpack, skiplist), sequence diagram for a SET command, and further reading pointing to specific source files like ae.c, t_string.c, dict.c.",
      "assertions": [
        "Report includes an Overview section describing Redis as an in-memory data store",
        "Report includes a PlantUML architecture diagram",
        "Report explains the single-threaded event loop (ae)",
        "Report covers at least 3 internal data structures (e.g. SDS, dict, skiplist)",
        "Report includes a Further Reading section with at least 3 actionable items",
        "At least one PlantUML sequence diagram is included"
      ]
    },
    {
      "id": 2,
      "prompt": "I just joined a team working on a REST API built with FastAPI. Here's the project README:\n\n# OrderService\nA FastAPI service managing e-commerce orders. Uses PostgreSQL via SQLAlchemy, Redis for caching, and Celery for async tasks. Auth via JWT.\n\n## Endpoints\n- POST /orders — create order\n- GET /orders/{id} — get order\n- PATCH /orders/{id}/status — update status\n- GET /orders?user_id=X — list orders\n\n## Models\nOrder: id, user_id, status (pending/confirmed/shipped/delivered), items (JSON), created_at\n\nHelp me understand this service.",
      "expected_output": "Report covering: overview of OrderService purpose and stack (FastAPI, PostgreSQL, Redis, Celery, JWT), architecture diagram showing the components and their connections, data model ER diagram for the Order entity, sequence diagrams for at least POST /orders and PATCH /orders/{id}/status flows, API reference table for all 4 endpoints, notes on JWT auth, Redis caching strategy, and Celery async task usage, further reading recommendations.",
      "assertions": [
        "Report includes an Overview section mentioning FastAPI, PostgreSQL, Redis, Celery, and JWT",
        "Report includes a PlantUML architecture or component diagram",
        "Report includes a PlantUML data model diagram showing the Order entity",
        "Report includes a PlantUML sequence diagram for at least one endpoint flow",
        "Report includes an API reference section covering all 4 endpoints",
        "Report mentions JWT authentication",
        "Report includes a Further Reading section"
      ]
    },
    {
      "id": 3,
      "prompt": "Give me a deep dive on the Kafka consumer group protocol. I need to understand how rebalancing works, what the group coordinator does, and the difference between eager and cooperative rebalancing.",
      "expected_output": "Report covering: Kafka consumer group overview, architecture diagram showing brokers, group coordinator, and consumers, explanation of the group coordinator role (heartbeats, session timeout, offset commits), detailed sequence diagrams for both eager (stop-the-world) and cooperative (incremental) rebalance protocols, key concepts glossary (consumer group, partition assignment, rebalance, heartbeat, session.timeout.ms), known trade-offs between the two rebalance strategies, and further reading.",
      "assertions": [
        "Report includes an Overview section explaining consumer groups and their purpose",
        "Report includes a PlantUML diagram showing brokers, group coordinator, and consumers",
        "Report explains the group coordinator role",
        "Report covers both eager and cooperative rebalancing with their differences",
        "Report includes at least one PlantUML sequence diagram showing a rebalance flow",
        "Report includes a Key Concepts section with relevant terminology",
        "Report includes a Known Limitations or Trade-offs section comparing the two strategies",
        "Report includes a Further Reading section"
      ]
    },
    {
      "id": 4,
      "prompt": "I need to understand this Go file quickly:\n\n```go\npackage worker\n\ntype Job struct {\n ID string\n Payload []byte\n Retries int\n}\n\ntype Worker struct {\n queue chan Job\n done chan struct{}\n handler func(Job) error\n}\n\nfunc New(concurrency int, handler func(Job) error) *Worker {\n w := &Worker{\n queue: make(chan Job, 100),\n done: make(chan struct{}),\n handler: handler,\n }\n for i := 0; i < concurrency; i++ {\n go w.loop()\n }\n return w\n}\n\nfunc (w *Worker) Submit(j Job) { w.queue <- j }\n\nfunc (w *Worker) Stop() { close(w.done) }\n\nfunc (w *Worker) loop() {\n for {\n select {\n case j := <-w.queue:\n if err := w.handler(j); err != nil && j.Retries > 0 {\n j.Retries--\n w.queue <- j\n }\n case <-w.done:\n return\n }\n }\n}\n```",
      "expected_output": "Report covering: overview of the worker pool pattern implemented, architecture/component description of Job, Worker structs and their roles, sequence diagram showing Submit -> loop -> handler -> retry flow, explanation of concurrency model (goroutines, buffered channel, done channel for shutdown), key concepts (worker pool, buffered channel backpressure, retry with decrement), known limitations (no graceful drain on Stop, fixed buffer size, no dead-letter queue), and further reading suggestions.",
      "assertions": [
        "Report identifies this as a worker pool / job queue pattern",
        "Report explains the role of the queue channel and done channel",
        "Report includes a PlantUML sequence or activity diagram showing the job processing flow including retry",
        "Report explains the concurrency model (goroutines spawned in New)",
        "Report identifies at least 2 limitations (e.g. no graceful shutdown drain, fixed buffer, no DLQ)",
        "Report includes a Further Reading section"
      ]
    }
  ]
}
@@ -0,0 +1,67 @@
# docs-rag

Retrieval-augmented generation over a local `docs/` directory of 3GPP Release 19 specifications.

## Architecture

![docs-rag architecture](assets/docs-rag-architecture.svg)

## Workflow

![docs-rag workflow](assets/docs-rag-workflow.svg)

## When to Use

- Questions about 3GPP specs, Release 19 features, mission critical services, ambient IoT, ISAC, UAV/drone support, network sharing, SNPN interconnect, traffic steering/split
- Rebuilding or refreshing the document index after adding new files

## How It Works

1. Reads `data/index.json` (spec number, title, keywords, summary, file path)
2. Matches the query against `keywords` and `summary` fields
3. Answers from the summary if sufficient; reads the actual file for deeper detail
4. Always cites the spec number and version in the answer
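
The lookup in steps 1–4 can be sketched roughly as follows. The entry shape mirrors the `data/index.json` fields listed above, but the entry values and the matching logic are illustrative assumptions (plain substring matching, no semantic ranking):

```python
# One illustrative entry; field names follow data/index.json as described above.
index = [
    {
        "spec": "TS 22.369",
        "title": "Service Requirements for Ambient IoT",
        "keywords": ["ambient iot", "energy harvesting", "release 19"],
        "summary": "Service requirements for Ambient IoT devices and networks.",
        "file": "docs/22369.docx",
    },
]

def match(query: str, entries: list) -> list:
    """Return entries whose keywords or summary contain any query term."""
    terms = query.lower().split()
    hits = []
    for entry in entries:
        haystack = (" ".join(entry["keywords"]) + " " + entry["summary"]).lower()
        if any(term in haystack for term in terms):
            hits.append(entry)
    return hits

# A hit is answered from its summary first; the file path is read only for deep detail.
print([e["spec"] for e in match("ambient IoT power saving", index)])  # → ['TS 22.369']
```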

## Indexed Documents

| Spec | Title |
|------|-------|
| TS 22.280 | Mission Critical Services Common Requirements |
| TS 22.369 | Service Requirements for Ambient IoT |
| TR 22.837 | Integrated Sensing and Communication (ISAC) |
| TR 22.840 | Study on Ambient Power-enabled IoT |
| TR 22.841 | Traffic Steer/Switch/Split over Dual 3GPP Access |
| TR 22.843 | UAV Phase 3 |
| TR 22.848 | Interconnect of SNPN |
| TR 22.851 | Network Sharing Feasibility Study |

## Maintaining the Index

```bash
# Full rebuild
python scripts/build_index.py

# Incremental update (skips unchanged files)
python scripts/build_index.py --update
```

## File Structure

```
skills/docs-rag/
├── SKILL.md
├── README.md                       # this file
├── assets/
│   ├── architecture.puml
│   ├── docs-rag-architecture.svg
│   ├── workflow.puml
│   └── docs-rag-workflow.svg
├── data/
│   └── index.json                  # document index
└── evals/
    └── evals.json
```

## Evals

```bash
python scripts/run_evals.py docs-rag
```
@@ -0,0 +1,24 @@
@startuml docs-rag-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "docs-rag Skill" {
  component "SKILL.md\n(instructions)" as SKILL
  component "data/index.json\n(spec, title, keywords,\nsummary, file path)" as INDEX
  component "evals/evals.json" as EVALS
}

package "Source Documents" {
  database "docs/\n*.docx / *.doc\n(3GPP specs)" as DOCS
  component "scripts/build_index.py\n(index builder)" as BUILDER
}

actor Developer

Developer --> SKILL : 3GPP question
SKILL --> INDEX : keyword + semantic match
INDEX --> DOCS : read file (deep detail)
BUILDER --> INDEX : builds / updates
BUILDER --> DOCS : scans
@enduml

After Width: | Height: | Size: 7.6 KiB |
After Width: | Height: | Size: 9.0 KiB |
@@ -0,0 +1,24 @@
@startuml docs-rag-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor Developer
participant "docs-rag\nSkill" as SKILL
participant "data/index.json" as INDEX
participant "docs/*.docx" as DOCS

Developer -> SKILL : 3GPP question
SKILL -> INDEX : read index
INDEX --> SKILL : entries (keywords, summary, path)
SKILL -> SKILL : match query vs keywords & summary

alt summary sufficient
  SKILL --> Developer : answer from summary\n+ cite spec + version
else need deeper detail
  SKILL -> DOCS : read source file
  DOCS --> SKILL : full spec content
  SKILL --> Developer : detailed answer\n+ cite spec + version
else no match
  SKILL --> Developer : "No matching document found"
end
@enduml
@@ -0,0 +1,77 @@
# meta-creator

A Kiro agent skill for creating and iteratively improving agent skills and custom agents.

## Architecture

![meta-creator architecture](assets/meta-creator-architecture.svg)

## Workflow

![meta-creator workflow](assets/meta-creator-workflow.svg)

## What It Does

- Creates new `SKILL.md` files with proper frontmatter and instructions
- Creates `evals/evals.json` with at least 3 eval cases
- Creates or updates Kiro custom agent configs (`.kiro/agents/<name>.json`)
- Runs eval-driven iteration: analyzes failures and improves skills

## When to Use

Trigger phrases: "create a skill", "make a skill", "new skill", "update skill", "improve skill", "create an agent", "new agent", "update agent", "创建skill", "创建技能", "新建skill", "更新skill", "优化skill", "创建agent", "新建agent", "更新agent"

## Workflow Steps

1. **Gather requirements** — what the skill does, example tasks, environment needs
2. **Create `SKILL.md`** — frontmatter (`name`, `description`) + step-by-step instructions
3. **Create `evals/evals.json`** — happy path, variation, and edge case
4. **Iterate** — if eval results are provided, fix instruction gaps and update assertions
5. **Create agent** (optional) — `.kiro/agents/<name>.json` with prompt, tools, and skill references

## Outputs

| File | Description |
|------|-------------|
| `skills/<name>/SKILL.md` | Skill instructions |
| `skills/<name>/evals/evals.json` | Eval cases |
| `.kiro/agents/<name>.json` | Agent config (only if requested) |
| `.kiro/agents/prompts/<name>.md` | Agent prompt file |

## File Structure

```
skills/meta-creator/
├── SKILL.md
├── README.md                       # this file
├── assets/
│   ├── architecture.puml
│   ├── meta-creator-architecture.svg
│   ├── workflow.puml
│   └── meta-creator-workflow.svg
├── evals/
│   └── evals.json
└── references/
    ├── skills-Specification.md
    ├── skills-eval.md
    ├── custom-agents-configuration-reference.md
    └── kiro-cli-chat-configuration.md
```

## Example Prompts

```
Create a skill that generates SQL queries from natural language descriptions.
```

```
Update the commit-message skill to also support Angular commit conventions.
```

```
Create a new agent called "db-helper" that uses the sql-gen skill.
```

## Evals

```bash
python scripts/run_evals.py meta-creator
```
@@ -0,0 +1,232 @@
---
name: meta-creator
description: Creates and iteratively improves agent skills and custom agents. Use when a user wants to create a new skill, update an existing skill, create a new agent, or run eval-driven iteration. Triggers on phrases like "create a skill", "make a skill", "new skill", "update skill", "improve skill", "create an agent", "new agent", "update agent", "创建skill", "创建技能", "新建skill", "更新skill", "优化skill", "创建agent", "新建agent", "更新agent".
metadata:
  author: common-skills
  version: "1.0"
---

# Meta Creator

Create or update agent skills and custom agents. Skills conform to the [Agent Skills specification](references/skills-Specification.md). Agents conform to the [Kiro custom agent configuration](references/custom-agents-configuration-reference.md). For eval-driven iteration, follow the [eval methodology](references/skills-eval.md). For Kiro CLI configuration scopes, file paths, and conflict resolution rules, refer to the [Kiro CLI Chat configuration](references/kiro-cli-chat-configuration.md).

## References

- [skills-Specification.md](references/skills-Specification.md) — SKILL.md format, frontmatter rules, directory structure
- [skills-eval.md](references/skills-eval.md) — eval design, grading, iteration methodology
- [custom-agents-configuration-reference.md](references/custom-agents-configuration-reference.md) — Kiro agent JSON config fields
- [kiro-cli-chat-configuration.md](references/kiro-cli-chat-configuration.md) — Kiro CLI configuration scopes (global/project/agent), file paths, and conflict resolution priority

## Inputs

The user will provide one of:
- A description of what the new skill should do
- An existing skill directory to update or improve
- Eval results / feedback to incorporate into an existing skill

## Workflow

### 1. Gather Requirements

Ask the user (or infer from context):
- What does the skill do? When should it activate?
- What are 2–3 concrete example tasks it should handle?
- Any environment requirements (tools, packages, network)?

### 2. Create or Update `SKILL.md`

**Frontmatter rules:**
- `name`: lowercase, hyphens only, matches directory name, max 64 chars
- `description`: describes what it does AND when to use it; include trigger phrases; max 1024 chars
- Add `compatibility` only if the skill has real environment requirements
- Add `metadata` (author, version) for team skills

**Body content:**
- Write clear step-by-step instructions the agent will follow
- Include concrete examples of inputs and expected outputs
- Cover the 2–3 most important edge cases
- Keep under 500 lines; move detailed reference material to `references/`
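
As a concrete illustration of these rules, a minimal `SKILL.md` might look like this (the skill name, description, and instructions are invented for the example):

```markdown
---
name: changelog-writer
description: Drafts CHANGELOG entries from merged changes. Use when the user says "update the changelog", "write release notes", or is preparing a release.
---

# Changelog Writer

1. List merged changes since the last release tag.
2. Group them by type (feat / fix / chore) and write one bullet per change.
3. Edge case: if nothing changed since the last tag, say so rather than inventing entries.
```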

### 3. Create `evals/evals.json`

Write at least 3 eval cases covering:
- A typical happy-path use case
- A variation with different phrasing or context
- An edge case (unusual input, boundary condition, or ambiguous request)

Each eval case must have:
- `id`: integer
- `prompt`: realistic user message (not "process this data" — use specific context)
- `expected_output`: human-readable description of what success looks like

Add `assertions` after the first eval run reveals what "good" looks like.

Format:
```json
{
  "skill_name": "<name>",
  "evals": [
    {
      "id": 1,
      "prompt": "...",
      "expected_output": "..."
    }
  ]
}
```
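
After that first run, a case typically grows an `assertions` array. The shape below follows the deep-dive evals in this repo; the assertion texts themselves are placeholders:

```json
{
  "id": 1,
  "prompt": "...",
  "expected_output": "...",
  "assertions": [
    "Report includes an Overview section",
    "At least one PlantUML sequence diagram is included"
  ]
}
```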

### 4. Create or Update `README.md` and Diagrams

After creating or updating a skill, create (or update) `skills/<name>/README.md` and generate two PlantUML diagrams:

**Architecture diagram** (`assets/architecture.puml`) — static component view:
- Show the skill's files and their roles (SKILL.md, references/, assets/, evals/)
- Show external dependencies (tools, APIs, databases, other files the skill reads/writes)
- Use `package` blocks to group related components; use `component`, `database`, `actor`

**Workflow diagram** (`assets/workflow.puml`) — dynamic sequence view:
- Show the interaction between the user, the skill, and any external systems step by step
- Use `participant` / `actor` and sequence arrows (`->`, `-->`)
- Include branching (`alt`/`opt`) for key decision points

**Convert to SVG:**
```bash
bash scripts/puml2svg.sh <name>
```
This requires Java and Graphviz. The PlantUML jar is resolved automatically from the VS Code extension; override with `PLANTUML_JAR=/path/to/plantuml.jar`.

**README structure:**
```markdown
# <skill-name>

One-line description.

## Architecture
![architecture](assets/<name>-architecture.svg)

## Workflow
![workflow](assets/<name>-workflow.svg)

## When to Use
...

## How It Works
...

## File Structure
...

## Evals
\`\`\`bash
python scripts/run_evals.py <name>
\`\`\`
```

### 5. Iterative Improvement (if eval results are provided)

When the user provides eval results, grading output, or human feedback:

1. Identify which assertions failed and why (read execution transcripts if available)
2. Distinguish between:
   - **Instruction gaps**: the skill didn't tell the agent to do something it should
   - **Ambiguous instructions**: the agent interpreted instructions inconsistently
   - **Wrong assertions**: the assertion was too strict, too vague, or checking the wrong thing
3. Propose targeted changes to `SKILL.md`:
   - Generalize fixes — don't patch for a single test case
   - Remove instructions that caused wasted work
   - Add reasoning ("Do X because Y") rather than rigid directives
4. Update `evals/evals.json` to fix broken assertions and add new cases for uncovered scenarios

### 6. Create or Update a Custom Agent (if requested)

When the user wants a new or updated Kiro agent (`.kiro/agents/<name>.json`):

**Required fields:**
- `name`: descriptive, matches the filename (without `.json`)
- `description`: what the agent does and when to use it
- `prompt`: concise system prompt; delegate detail to skill resources where possible
- `tools`: only include tools the agent actually needs
- `allowedTools`: read-only tools are safe to auto-allow; tools that write files or run commands should require confirmation (omit from `allowedTools`)
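
Assembled, a config following these rules might look like this. The agent name, tools, and paths are illustrative (the shape mirrors this repo's commit-message agent); note that `execute_bash` appears in `tools` but not in `allowedTools`, so it requires confirmation:

```json
{
  "name": "sql-helper",
  "description": "Agent specializing in writing and reviewing SQL. Use when the user asks for a query or schema advice.",
  "prompt": "file://prompts/sql-helper.md",
  "tools": ["fs_read", "execute_bash", "grep", "glob"],
  "allowedTools": ["fs_read", "grep", "glob"],
  "resources": ["skill://.kiro/skills/sql-helper/SKILL.md"]
}
```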

**Help/greeting response:** The agent's prompt file MUST include instructions to respond to greetings and help requests (e.g., "hi", "hello", "help", "你好", "帮助", "?") with a structured introduction covering:
- What the agent does (one-line summary)
- Key capabilities (bullet list)
- How the agent works step-by-step (execution flow)
- 2–3 concrete example prompts

Example prompt section to include:
```
When the user sends a greeting or help request (e.g., "hi", "hello", "help", "你好", "帮助", "?"), respond with:

---
👋 **<Agent Name>** — <one-line description>

**功能:**
- <capability 1>
- <capability 2>

**执行步骤:**
1. <step 1>
2. <step 2>
3. <step 3>

**使用示例:**
- `<example prompt 1>`
- `<example prompt 2>`
---
```

**Resources:**
- Use `skill://` for skills (lazy-loads, saves context)
- Use `file://` only for small reference docs needed at startup

**Output location:** `.kiro/agents/<name>.json`

**Prompt file:** Extract the prompt to `file://prompts/<name>.md` (relative to `.kiro/agents/`) and reference it as `"prompt": "file://prompts/<name>.md"` to keep the JSON clean.

**Skill install path:** Skills are installed under `.kiro/skills/<name>/`. Reference them as `skill://.kiro/skills/**/SKILL.md` (or a specific path). The `skill://` protocol loads only name/description metadata at startup and fetches full content on demand.

### 7. Post-Creation: Agent Setup (after creating a new skill)

After successfully creating a new skill, ask the user:

> "Do you want a dedicated agent to invoke this skill? If not, it will be available to the `g-assistent` agent by default."

- If **yes**: proceed with Step 6 to create a `.kiro/agents/<name>.json` for the skill.
- If **no**: inform the user that `g-assistent` will route to this skill automatically based on its `description` trigger phrases.

### 8. Post-Agent Checkpoint: Update install-agents.sh

After creating or updating any agent, check whether `scripts/install-agents.sh` needs updating:

1. Read `scripts/install-agents.sh` (if it exists in the repo root).
2. Check if the script handles:
   - Any `file://prompts/<name>.md` references — the script must copy prompt files to the target `prompts/` directory
   - Any new skill references that require special handling
3. If a gap is found, update `scripts/install-agents.sh` and tell the user what changed.
4. If no changes are needed, briefly confirm: "install-agents.sh is up to date."
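
The prompt-file check in step 2 is the easiest gap to miss. A self-contained sketch of the copy behavior the script needs (throwaway demo paths, not the actual script contents):

```shell
set -eu
# Throwaway layout: one agent config plus the prompt file it references.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/.kiro/agents/prompts" "$ROOT/install/prompts"
printf '{"name":"demo","prompt":"file://prompts/demo.md"}\n' > "$ROOT/.kiro/agents/demo.json"
printf 'You are demo.\n' > "$ROOT/.kiro/agents/prompts/demo.md"

# Copying only *.json would leave the file://prompts reference dangling,
# so the prompts/ directory must be installed alongside the configs.
cp "$ROOT"/.kiro/agents/*.json "$ROOT/install/"
cp "$ROOT"/.kiro/agents/prompts/*.md "$ROOT/install/prompts/"

ls "$ROOT/install"
```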

## Output

- `skills/<name>/SKILL.md` — the skill file
- `skills/<name>/evals/evals.json` — eval cases
- `skills/<name>/README.md` — documentation with architecture and workflow diagrams
- `skills/<name>/assets/architecture.puml` + `architecture.svg` — static component diagram
- `skills/<name>/assets/workflow.puml` + `workflow.svg` — dynamic sequence diagram
- `.kiro/agents/<name>.json` — the agent config (only if user requests a dedicated agent)
- `.kiro/agents/prompts/<name>.md` — the agent prompt file (extracted from JSON)

If creating a new skill, also suggest the directory structure needed (scripts/, references/, assets/) based on the skill's requirements.

## Quality Checklist

Before finishing, verify:
- [ ] `name` matches the directory name exactly
- [ ] `description` includes both what it does and when to activate (trigger phrases)
- [ ] Body instructions are actionable, not vague
- [ ] At least 3 eval cases with varied prompts
- [ ] No eval prompt is too generic (e.g., "test this skill")
- [ ] SKILL.md is under 500 lines
- [ ] `README.md` exists with Architecture and Workflow sections
- [ ] `assets/architecture.puml` and `assets/workflow.puml` exist and SVGs are generated
- [ ] Agent prompt includes a greeting/help response with capabilities and example prompts (for new agents)
@@ -0,0 +1,32 @@
@startuml meta-creator-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "meta-creator Skill" {
  component "SKILL.md\n(instructions)" as SKILL
  component "references/\nskills-Specification.md" as SPEC
  component "references/\nskills-eval.md" as EVAL_REF
  component "references/\ncustom-agents-configuration-reference.md" as AGENT_REF
  component "references/\nkiro-cli-chat-configuration.md" as CLI_REF
  component "evals/evals.json" as EVALS
}

package "Outputs" {
  component "skills/<name>/SKILL.md" as OUT_SKILL
  component "skills/<name>/evals/evals.json" as OUT_EVALS
  component ".kiro/agents/<name>.json" as OUT_AGENT #lightblue
  component ".kiro/agents/prompts/<name>.md" as OUT_PROMPT #lightblue
}

SKILL --> SPEC : skill format rules
SKILL --> EVAL_REF : eval methodology
SKILL --> AGENT_REF : agent config schema
SKILL --> CLI_REF : config scopes & paths
SKILL --> OUT_SKILL : creates
SKILL --> OUT_EVALS : creates
SKILL --> OUT_AGENT : creates (optional)
SKILL --> OUT_PROMPT : creates (optional)

note right of OUT_AGENT : only if user\nrequests an agent
@enduml

After Width: | Height: | Size: 11 KiB |
After Width: | Height: | Size: 10 KiB |
@@ -0,0 +1,30 @@
@startuml meta-creator-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor Developer
participant "meta-creator\nSkill" as SKILL
participant "File System" as FS

Developer -> SKILL : "create a skill: <description>"
SKILL -> Developer : clarifying questions\n(purpose, examples, env)
Developer -> SKILL : answers

SKILL -> FS : write skills/<name>/SKILL.md
SKILL -> FS : write skills/<name>/evals/evals.json
SKILL --> Developer : skill created

opt eval results provided
  Developer -> SKILL : eval failures / feedback
  SKILL -> SKILL : identify gaps vs wrong assertions
  SKILL -> FS : update SKILL.md
  SKILL -> FS : update evals.json
  SKILL --> Developer : improved skill
end

opt agent requested
  SKILL -> FS : write .kiro/agents/<name>.json
  SKILL -> FS : write .kiro/agents/prompts/<name>.md
  SKILL --> Developer : agent ready
end
@enduml
@@ -0,0 +1,30 @@
{
  "skill_name": "meta-creator",
  "evals": [
    {
      "id": 1,
      "prompt": "Create a new skill called 'csv-analyzer' that helps agents analyze CSV files: find summary statistics, detect missing values, and produce a short report.",
      "expected_output": "A skills/csv-analyzer/SKILL.md with valid frontmatter (name matches directory, description explains what it does and when to use it), clear step-by-step instructions, and a skills/csv-analyzer/evals/evals.json with at least 3 eval cases covering typical use, varied phrasing, and an edge case (e.g. malformed CSV)."
    },
    {
      "id": 2,
      "prompt": "我想创建一个skill,帮助agent做代码审查,重点检查安全漏洞,比如SQL注入、XSS、硬编码密钥。",
      "expected_output": "A skills/security-review/SKILL.md with Chinese-friendly trigger phrases in the description, a security-focused review checklist in the body, and evals/evals.json with at least 3 cases including SQL injection, XSS, and hardcoded secrets scenarios."
    },
    {
      "id": 3,
      "prompt": "Here are the eval results for my 'doc-writer' skill. Assertion 'output includes a usage example' failed in 2 out of 3 cases. The agent wrote correct docs but skipped examples. How should I update the skill?",
      "expected_output": "A targeted update to the doc-writer SKILL.md adding an explicit instruction to always include a usage example with reasoning. Does NOT add unrelated instructions or over-constrain the skill."
    },
    {
      "id": 4,
      "prompt": "Create a Kiro agent called 'db-expert' that specializes in database tasks. It should use a sql-helper skill and only have read access to files by default.",
      "expected_output": "A .kiro/agents/db-expert.json with name 'db-expert', a description mentioning database tasks, tools including 'read' but not 'write' in allowedTools, and resources referencing the sql-helper skill via skill:// URI."
    },
    {
      "id": 5,
      "prompt": "帮我创建一个agent,名字叫 code-reviewer,调用 codereview 这个skill,只允许读文件,不能写。",
      "expected_output": "A .kiro/agents/code-reviewer.json with name 'code-reviewer', read in allowedTools but write absent from allowedTools, and skill://.kiro/skills/codereview/SKILL.md in resources."
    }
  ]
}
@@ -0,0 +1,480 @@
# Kiro CLI Custom Agents — Configuration Reference

> Source: https://kiro.dev/docs/cli/custom-agents/configuration-reference/
> Updated: 2026-04-14

---

## Quick Start

The recommended way to create an agent configuration is the `/agent generate` command inside a Kiro session, which builds the config with AI assistance.

---

## File Locations

### Local agents (project scope)

```
<project>/.kiro/agents/<name>.json
```

Available only when running Kiro CLI in that directory or one of its subdirectories.

### Global agents (user scope)

```
~/.kiro/agents/<name>.json
```

Available from any directory.

### Precedence

When a local and a global agent share a name, **local wins over global** (and a warning is printed).

---

## Configuration Fields at a Glance

| Field | Description |
|------|------|
| `name` | Agent name (optional; defaults to the file name) |
| `description` | Agent description |
| `prompt` | System prompt (inline text or a `file://` URI) |
| `mcpServers` | MCP servers the agent may access |
| `tools` | List of available tools |
| `toolAliases` | Tool name remapping |
| `allowedTools` | Tools usable without confirmation |
| `toolsSettings` | Per-tool configuration |
| `resources` | Local resources the agent may access |
| `hooks` | Lifecycle hook commands |
| `includeMcpJson` | Whether to include MCP servers from mcp.json |
| `model` | Model ID to use |
| `keyboardShortcut` | Shortcut for quick switching |
| `welcomeMessage` | Greeting shown when switching to this agent |

---

## Field Details

### `name`

The agent's identifier, used for display and lookup.

```json
{ "name": "aws-expert" }
```

---

### `description`

A human-readable description that helps tell agents apart.

```json
{ "description": "An agent specialized for AWS infrastructure tasks" }
```

---

### `prompt`

Acts like a system prompt, giving the agent high-level context. Accepts inline text or a `file://` URI.

**Inline:**

```json
{ "prompt": "You are an expert AWS infrastructure specialist" }
```

**File reference:**

```json
{ "prompt": "file://./prompts/aws-expert.md" }
```

**Path resolution rules:**

- Relative paths resolve against the directory containing the agent config file
  - `"file://./prompt.md"` → same directory
  - `"file://../shared/prompt.md"` → parent directory
- Absolute paths are used as-is
  - `"file:///home/user/prompts/agent.md"`
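
The resolution rules above fit in a few lines. `resolve_prompt_uri` is a hypothetical helper for illustration only; Kiro's real resolver is internal to the CLI:

```python
from pathlib import Path

def resolve_prompt_uri(uri: str, config_dir: str) -> Path:
    """Resolve a file:// prompt URI per the rules above.

    Hypothetical helper for illustration; not part of Kiro itself.
    """
    if not uri.startswith("file://"):
        raise ValueError("inline prompt, not a file reference")
    path = uri[len("file://"):]
    if path.startswith("/"):
        # file:///abs/path -> absolute path, used as-is
        return Path(path)
    # file://./x or file://../x -> relative to the agent config's directory
    return (Path(config_dir) / path).resolve()

print(resolve_prompt_uri("file://./prompts/aws-expert.md", "/proj/.kiro/agents"))
```

Note that the two leading slashes belong to the URI scheme, so a third slash (`file:///…`) is what marks an absolute path.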

---

### `mcpServers`

Defines the MCP servers the agent can access.

```json
{
  "mcpServers": {
    "fetch": {
      "command": "fetch3.1",
      "args": []
    },
    "git": {
      "command": "git-mcp",
      "args": [],
      "env": { "GIT_CONFIG_GLOBAL": "/dev/null" },
      "timeout": 120000
    }
  }
}
```

**Fields:**

- `command` (required): command used to start the MCP server
- `args` (optional): command arguments
- `env` (optional): environment variables
- `timeout` (optional): per-request timeout in milliseconds; defaults to `120000`
- `oauth` (optional): OAuth configuration for HTTP MCP servers
  - `redirectUri`: custom redirect URI
  - `oauthScopes`: array of OAuth scopes to request

**OAuth example:**

```json
{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.github.com/mcp",
      "oauth": {
        "redirectUri": "127.0.0.1:8080",
        "oauthScopes": ["repo", "user"]
      }
    }
  }
}
```

---

### `tools`

The list of tools the agent may use.

```json
{
  "tools": ["read", "write", "shell", "@git", "@rust-analyzer/check_code"]
}
```

**Reference forms:**

- Built-in tool: `"read"`, `"shell"`
- All tools from an MCP server: `"@server_name"`
- A specific MCP server tool: `"@server_name/tool_name"`
- All tools: `"*"`
- All built-in tools: `"@builtin"`

---

### `toolAliases`

Renames tools, either to resolve naming conflicts or to create more intuitive names.

```json
{
  "toolAliases": {
    "@github-mcp/get_issues": "github_issues",
    "@gitlab-mcp/get_issues": "gitlab_issues",
    "@aws-cloud-formation/deploy_stack_with_parameters": "deploy_cf"
  }
}
```

---

### `allowedTools`

Tools that run without asking the user for confirmation. Supports exact matches and wildcards.

```json
{
  "allowedTools": [
    "read",
    "@git/git_status",
    "@server/read_*",
    "@fetch"
  ]
}
```

**Matching:**

| Pattern | Description |
|------|------|
| `"read"` | Exact match on a built-in tool |
| `"@server_name/tool_name"` | Exact match on an MCP tool |
| `"@server_name"` | All tools from that server |
| `"@server/read_*"` | Prefix wildcard |
| `"@server/*_get"` | Suffix wildcard |
| `"@git-*/*"` | Wildcard in the server name |
| `"?ead"` | `?` matches a single character |

> **Note:** `allowedTools` does not support `"*"` to allow all tools.
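
The patterns behave like ordinary shell globs. A rough sketch of the matching logic using Python's `fnmatch`, purely to illustrate the table above (this is not Kiro's actual implementation); a bare `@server` entry is treated as covering every tool on that server:

```python
from fnmatch import fnmatchcase

def is_allowed(tool: str, allowed: list[str]) -> bool:
    """Glob-style check of a tool name against allowedTools patterns.

    Illustrative sketch, not Kiro's implementation.
    """
    for pattern in allowed:
        if pattern.startswith("@") and "/" not in pattern:
            # "@server" with no "/" covers the server and all its tools
            if tool == pattern or tool.startswith(pattern + "/"):
                return True
        elif fnmatchcase(tool, pattern):
            return True
    return False

allowed = ["read", "@git/git_status", "@server/read_*", "@fetch"]
print(is_allowed("@server/read_file", allowed))  # True  (prefix wildcard)
print(is_allowed("@fetch/fetch_url", allowed))   # True  (whole-server entry)
print(is_allowed("write", allowed))              # False (not listed)
```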

---

### `toolsSettings`

Per-tool configuration.

```json
{
  "toolsSettings": {
    "write": {
      "allowedPaths": ["src/**", "tests/**"]
    },
    "shell": {
      "allowedCommands": ["git status", "git fetch"],
      "deniedCommands": ["git commit .*", "git push .*"],
      "autoAllowReadonly": true
    },
    "@git/git_status": {
      "git_user": "$GIT_USER"
    }
  }
}
```

---

### `resources`

Local resources the agent can access. Three types are supported.

#### File resources (`file://`)

Loaded directly into context at startup.

```json
{
  "resources": [
    "file://README.md",
    "file://docs/**/*.md"
  ]
}
```

#### Skill resources (`skill://`)

Only the metadata (name/description) is loaded at startup; the full content is loaded on demand, keeping context lean.

A skill file must start with YAML frontmatter:

```markdown
---
name: dynamodb-data-modeling
description: Guide for DynamoDB data modeling best practices.
---

# DynamoDB Data Modeling
...
```

```json
{
  "resources": [
    "skill://.kiro/skills/**/SKILL.md"
  ]
}
```

#### Knowledge base resources (`knowledgeBase`)

Supports indexed retrieval over large document sets.

```json
{
  "resources": [
    {
      "type": "knowledgeBase",
      "source": "file://./docs",
      "name": "ProjectDocs",
      "description": "Project documentation and guides",
      "indexType": "best",
      "autoUpdate": true
    }
  ]
}
```

| Field | Required | Description |
|------|------|------|
| `type` | Yes | Always `"knowledgeBase"` |
| `source` | Yes | Path to index, with a `file://` prefix |
| `name` | Yes | Display name |
| `description` | No | Description of the contents |
| `indexType` | No | `"best"` (default, higher quality) or `"fast"` |
| `autoUpdate` | No | Re-index on agent startup; defaults to `false` |

---

### `hooks`

Runs commands at specific points in the agent lifecycle.

```json
{
  "hooks": {
    "agentSpawn": [
      { "command": "git status" }
    ],
    "userPromptSubmit": [
      { "command": "ls -la" }
    ],
    "preToolUse": [
      {
        "matcher": "execute_bash",
        "command": "{ echo \"$(date) - Bash:\"; cat; } >> /tmp/audit.log"
      }
    ],
    "postToolUse": [
      {
        "matcher": "fs_write",
        "command": "cargo fmt --all"
      }
    ],
    "stop": [
      { "command": "npm test" }
    ]
  }
}
```

**Trigger points:**

| Hook | Fires |
|------|----------|
| `agentSpawn` | When the agent initializes |
| `userPromptSubmit` | When the user submits a message |
| `preToolUse` | Before a tool runs (can block the call) |
| `postToolUse` | After a tool runs |
| `stop` | When the assistant finishes its response |

Each hook entry takes:

- `command` (required): the command to run
- `matcher` (optional): a tool-name pattern for `preToolUse`/`postToolUse`, using internal tool names (e.g. `fs_read`, `fs_write`, `execute_bash`, `use_aws`)

---

### `includeMcpJson`

Whether to include the MCP servers defined in `~/.kiro/settings/mcp.json` (global) and `<cwd>/.kiro/settings/mcp.json` (workspace).

```json
{ "includeMcpJson": true }
```

---

### `model`

The model ID this agent uses. When unset or unavailable, Kiro falls back to the default model.

```json
{ "model": "claude-sonnet-4" }
```

Use the `/model` command to list the available models.

---

### `keyboardShortcut`

A keyboard shortcut for switching to this agent quickly.

```json
{ "keyboardShortcut": "ctrl+a" }
```

**Format:** `[modifier+]key`
**Modifiers:** `ctrl`, `shift`
**Keys:** `a-z`, `0-9`

- Not currently on this agent: the shortcut switches to it
- Already on this agent: the shortcut switches back to the previous agent
- If several agents claim the same shortcut, it is disabled and a warning is printed

---

### `welcomeMessage`

Greeting shown when switching to this agent.

```json
{ "welcomeMessage": "What would you like to build today?" }
```

---

## Complete Example

```json
{
  "name": "aws-rust-agent",
  "description": "Specialized agent for AWS and Rust development",
  "prompt": "file://./prompts/aws-rust-expert.md",
  "mcpServers": {
    "fetch": { "command": "fetch-server", "args": [] },
    "git": { "command": "git-mcp", "args": [] }
  },
  "tools": ["read", "write", "shell", "aws", "@git", "@fetch/fetch_url"],
  "toolAliases": {
    "@git/git_status": "status",
    "@fetch/fetch_url": "get"
  },
  "allowedTools": ["read", "@git/git_status"],
  "toolsSettings": {
    "write": { "allowedPaths": ["src/**", "tests/**", "Cargo.toml"] },
    "aws": { "allowedServices": ["s3", "lambda"], "autoAllowReadonly": true }
  },
  "resources": [
    "file://README.md",
    "file://docs/**/*.md"
  ],
  "hooks": {
    "agentSpawn": [{ "command": "git status" }],
    "postToolUse": [{ "matcher": "fs_write", "command": "cargo fmt --all" }]
  },
  "model": "claude-sonnet-4",
  "keyboardShortcut": "ctrl+shift+r",
  "welcomeMessage": "Ready to help with AWS and Rust development!"
}
```

---

## Best Practices

### Local vs. global agents

| Local agents | Global agents |
|-----------|-----------|
| Project-specific configuration | Agents reused across projects |
| Need access to project files/tools | Personal productivity helpers |
| Shared with the team via version control | Everyday tools and workflows |

### Security

- Review `allowedTools` carefully; prefer exact matches over wildcards
- Add `toolsSettings` for sensitive operations (e.g. restrict `allowedPaths`)
- With write tools (`write`, `shell`) enabled, the agent has the same filesystem permissions as the current user, including read/write access to everything under `~/.kiro`
- Use `preToolUse` hooks to audit or block sensitive operations
- Test agents thoroughly in a safe environment before sharing them

### Organization

- Use descriptive names
- State the purpose in `description`
- Keep prompts in separate files
- Check local agents into version control with the project

---

## Related Documentation

- [Creating Custom Agents](https://kiro.dev/docs/cli/custom-agents/creating/)
- [Built-in Tools Reference](https://kiro.dev/docs/cli/reference/built-in-tools/)
- [Hooks Documentation](https://kiro.dev/docs/cli/hooks)
- [Agent Examples](https://kiro.dev/docs/cli/custom-agents/examples/)
@@ -0,0 +1,54 @@
# Kiro CLI Chat — Configuration Reference

> Source: https://kiro.dev/docs/cli/chat/configuration/
> Page updated: December 10, 2025

---

## Configuration File Paths

Kiro CLI configuration can be set at three scopes:

1. **Global** — applies across all projects: `~/.kiro/`
2. **Project** — specific to a project: `<project-root>/.kiro/`
3. **Agent** — defined in the agent config file: `<user-home | project-root>/.kiro/agents/`

| Configuration | Global Scope | Project Scope |
|---|---|---|
| MCP servers | `~/.kiro/settings/mcp.json` | `.kiro/settings/mcp.json` |
| Prompts | `~/.kiro/prompts` | `.kiro/prompts` |
| Custom agents | `~/.kiro/agents` | `.kiro/agents` |
| Steering | `~/.kiro/steering` | `.kiro/steering` |
| Settings | `~/.kiro/settings/cli.json` | *(N/A)* |

---

## What Can Be Configured at Each Scope

| Configuration | User Scope | Project Scope | Agent Scope |
|---|---|---|---|
| MCP servers | Yes | Yes | Yes |
| Prompts | Yes | Yes | No |
| Custom agents | Yes | Yes | N/A |
| Steering | Yes | Yes | Yes |
| Settings | Yes | N/A | N/A |

---

## Resolving Configuration Conflicts

Conflicts are resolved by selecting the configuration closest to where you are interacting with Kiro CLI.

- If MCP config exists in both the global and project `mcp.json`, the project-level config wins when working in that project folder.
- If a custom agent is defined at both global and project scope, the project-level definition takes precedence.

Priority order:

| Configuration | Priority |
|---|---|
| MCP servers | Agent > Project > Global |
| Prompts | Project > Global |
| Custom agents | Project > Global |
| Steering | Project > Global |

> **Note:** MCP servers can be configured in three scopes and are handled differently due to the `includeMcpJson` agent setting. See [MCP server loading priority](https://kiro.dev/docs/cli/mcp/#mcp-server-loading-priority).
@@ -0,0 +1,275 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://agentskills.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Specification

> The complete format specification for Agent Skills.

## Directory structure

A skill is a directory containing, at minimum, a `SKILL.md` file:

```
skill-name/
├── SKILL.md          # Required: metadata + instructions
├── scripts/          # Optional: executable code
├── references/       # Optional: documentation
├── assets/           # Optional: templates, resources
└── ...               # Any additional files or directories
```

## `SKILL.md` format

The `SKILL.md` file must contain YAML frontmatter followed by Markdown content.

### Frontmatter

| Field | Required | Constraints |
| --- | --- | --- |
| `name` | Yes | Max 64 characters. Lowercase letters, numbers, and hyphens only. Must not start or end with a hyphen. |
| `description` | Yes | Max 1024 characters. Non-empty. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 characters. Indicates environment requirements (intended product, system packages, network access, etc.). |
| `metadata` | No | Arbitrary key-value mapping for additional metadata. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools the skill may use. (Experimental) |

<Card>
**Minimal example:**

```markdown SKILL.md
---
name: skill-name
description: A description of what this skill does and when to use it.
---
```

**Example with optional fields:**

```markdown SKILL.md
---
name: pdf-processing
description: Extract PDF text, fill forms, merge files. Use when handling PDFs.
license: Apache-2.0
metadata:
  author: example-org
  version: "1.0"
---
```
</Card>

#### `name` field

The required `name` field:

* Must be 1-64 characters
* May only contain lowercase alphanumeric characters (`a-z`, `0-9`) and hyphens (`-`)
* Must not start or end with a hyphen (`-`)
* Must not contain consecutive hyphens (`--`)
* Must match the parent directory name

<Card>
**Valid examples:**

```yaml
name: pdf-processing
```

```yaml
name: data-analysis
```

```yaml
name: code-review
```

**Invalid examples:**

```yaml
name: PDF-Processing  # uppercase not allowed
```

```yaml
name: -pdf  # cannot start with hyphen
```

```yaml
name: pdf--processing  # consecutive hyphens not allowed
```
</Card>
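
The constraints above can be captured in one regular expression. This is a sketch of the published rules, not the official `skills-ref` validator:

```python
import re

# 1-64 chars, lowercase alphanumerics and hyphens, no leading/trailing
# hyphen, no consecutive hyphens. (Sketch of the rules listed above.)
NAME_RE = re.compile(r"^(?!.*--)[a-z0-9](?:[a-z0-9-]{0,62}[a-z0-9])?$")

def valid_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

for name in ["pdf-processing", "PDF-Processing", "-pdf", "pdf--processing"]:
    print(name, valid_name(name))
```

The directory-matching rule still has to be checked separately, since the regex only sees the string itself.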

#### `description` field

The required `description` field:

* Must be 1-1024 characters
* Should describe both what the skill does and when to use it
* Should include specific keywords that help agents identify relevant tasks

<Card>
**Good example:**

```yaml
description: Extracts text and tables from PDF files, fills PDF forms, and merges multiple PDFs. Use when working with PDF documents or when the user mentions PDFs, forms, or document extraction.
```

**Poor example:**

```yaml
description: Helps with PDFs.
```
</Card>

#### `license` field

The optional `license` field:

* Specifies the license applied to the skill
* We recommend keeping it short (either the name of a license or the name of a bundled license file)

<Card>
**Example:**

```yaml
license: Proprietary. LICENSE.txt has complete terms
```
</Card>

#### `compatibility` field

The optional `compatibility` field:

* Must be 1-500 characters if provided
* Should only be included if your skill has specific environment requirements
* Can indicate intended product, required system packages, network access needs, etc.

<Card>
**Examples:**

```yaml
compatibility: Designed for Claude Code (or similar products)
```

```yaml
compatibility: Requires git, docker, jq, and access to the internet
```

```yaml
compatibility: Requires Python 3.14+ and uv
```
</Card>

<Note>
Most skills do not need the `compatibility` field.
</Note>

#### `metadata` field

The optional `metadata` field:

* A map from string keys to string values
* Clients can use this to store additional properties not defined by the Agent Skills spec
* We recommend making your key names reasonably unique to avoid accidental conflicts

<Card>
**Example:**

```yaml
metadata:
  author: example-org
  version: "1.0"
```
</Card>

#### `allowed-tools` field

The optional `allowed-tools` field:

* A space-delimited list of tools that are pre-approved to run
* Experimental. Support for this field may vary between agent implementations

<Card>
**Example:**

```yaml
allowed-tools: Bash(git:*) Bash(jq:*) Read
```
</Card>

### Body content

The Markdown body after the frontmatter contains the skill instructions. There are no format restrictions. Write whatever helps agents perform the task effectively.

Recommended sections:

* Step-by-step instructions
* Examples of inputs and outputs
* Common edge cases

Note that the agent will load this entire file once it's decided to activate a skill. Consider splitting longer `SKILL.md` content into referenced files.

## Optional directories

### `scripts/`

Contains executable code that agents can run. Scripts should:

* Be self-contained or clearly document dependencies
* Include helpful error messages
* Handle edge cases gracefully

Supported languages depend on the agent implementation. Common options include Python, Bash, and JavaScript.

### `references/`

Contains additional documentation that agents can read when needed:

* `REFERENCE.md` - Detailed technical reference
* `FORMS.md` - Form templates or structured data formats
* Domain-specific files (`finance.md`, `legal.md`, etc.)

Keep individual [reference files](#file-references) focused. Agents load these on demand, so smaller files mean less use of context.

### `assets/`

Contains static resources:

* Templates (document templates, configuration templates)
* Images (diagrams, examples)
* Data files (lookup tables, schemas)

## Progressive disclosure

Skills should be structured for efficient use of context:

1. **Metadata** (~100 tokens): The `name` and `description` fields are loaded at startup for all skills
2. **Instructions** (< 5000 tokens recommended): The full `SKILL.md` body is loaded when the skill is activated
3. **Resources** (as needed): Files (e.g. those in `scripts/`, `references/`, or `assets/`) are loaded only when required

Keep your main `SKILL.md` under 500 lines. Move detailed reference material to separate files.
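
A quick way to sanity-check these budgets is a rough token estimate. The sketch below assumes ~4 characters per token, which is a crude approximation rather than a real tokenizer, so treat the numbers as ballpark figures:

```python
from pathlib import Path

def skill_budget(skill_md: str) -> dict:
    """Rough progressive-disclosure check for a SKILL.md file.

    Token counts use len(text) // 4, an approximation, not a tokenizer.
    """
    text = Path(skill_md).read_text(encoding="utf-8")
    lines = text.splitlines()
    # Frontmatter runs from the opening "---" to the next "---" line.
    close = lines.index("---", 1)
    meta = "\n".join(lines[: close + 1])
    body = "\n".join(lines[close + 1 :])
    return {
        "line_count": len(lines),
        "metadata_tokens_est": len(meta) // 4,
        "body_tokens_est": len(body) // 4,
        "under_500_lines": len(lines) <= 500,
        "body_under_5000_tokens": len(body) // 4 < 5000,
    }
```

Running it on each skill before publishing catches files that have quietly grown past the recommended limits.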

## File references

When referencing other files in your skill, use relative paths from the skill root:

```markdown SKILL.md
See [the reference guide](references/REFERENCE.md) for details.

Run the extraction script:
scripts/extract.py
```

Keep file references one level deep from `SKILL.md`. Avoid deeply nested reference chains.

## Validation

Use the [skills-ref](https://github.com/agentskills/agentskills/tree/main/skills-ref) reference library to validate your skills:

```bash
skills-ref validate ./my-skill
```

This checks that your `SKILL.md` frontmatter is valid and follows all naming conventions.
@@ -0,0 +1,303 @@
> ## Documentation Index
> Fetch the complete documentation index at: https://agentskills.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Evaluating skill output quality

> How to test whether your skill produces good outputs using eval-driven iteration.

You wrote a skill, tried it on a prompt, and it seemed to work. But does it work reliably — across varied prompts, in edge cases, better than no skill at all? Running structured evaluations (evals) answers these questions and gives you a feedback loop for improving the skill systematically.

## Designing test cases

A test case has three parts:

* **Prompt**: a realistic user message — the kind of thing someone would actually type.
* **Expected output**: a human-readable description of what success looks like.
* **Input files** (optional): files the skill needs to work with.

Store test cases in `evals/evals.json` inside your skill directory:

```json evals/evals.json
{
  "skill_name": "csv-analyzer",
  "evals": [
    {
      "id": 1,
      "prompt": "I have a CSV of monthly sales data in data/sales_2025.csv. Can you find the top 3 months by revenue and make a bar chart?",
      "expected_output": "A bar chart image showing the top 3 months by revenue, with labeled axes and values.",
      "files": ["evals/files/sales_2025.csv"]
    },
    {
      "id": 2,
      "prompt": "there's a csv in my downloads called customers.csv, some rows have missing emails — can you clean it up and tell me how many were missing?",
      "expected_output": "A cleaned CSV with missing emails handled, plus a count of how many were missing.",
      "files": ["evals/files/customers.csv"]
    }
  ]
}
```

**Tips for writing good test prompts:**

* **Start with 2-3 test cases.** Don't over-invest before you've seen your first round of results. You can expand the set later.
* **Vary the prompts.** Use different phrasings, levels of detail, and formality. Some prompts should be casual ("hey can you clean up this csv"), others precise ("Parse the CSV at data/input.csv, drop rows where column B is null, and write the result to data/output.csv").
* **Cover edge cases.** Include at least one prompt that tests a boundary condition — a malformed input, an unusual request, or a case where the skill's instructions might be ambiguous.
* **Use realistic context.** Real users mention file paths, column names, and personal context. Prompts like "process this data" are too vague to test anything useful.

Don't worry about defining specific pass/fail checks yet — just the prompts and expected outputs. You'll add detailed checks (called assertions) after you see what the first run produces.

## Running evals

The core pattern is to run each test case twice: once **with the skill** and once **without it** (or with a previous version). This gives you a baseline to compare against.

### Workspace structure

Organize eval results in a workspace directory alongside your skill directory. Each pass through the full eval loop gets its own `iteration-N/` directory. Within that, each test case gets an eval directory with `with_skill/` and `without_skill/` subdirectories:

```
csv-analyzer/
├── SKILL.md
└── evals/
    └── evals.json
csv-analyzer-workspace/
└── iteration-1/
    ├── eval-top-months-chart/
    │   ├── with_skill/
    │   │   ├── outputs/       # Files produced by the run
    │   │   ├── timing.json    # Tokens and duration
    │   │   └── grading.json   # Assertion results
    │   └── without_skill/
    │       ├── outputs/
    │       ├── timing.json
    │       └── grading.json
    ├── eval-clean-missing-emails/
    │   ├── with_skill/
    │   │   ├── outputs/
    │   │   ├── timing.json
    │   │   └── grading.json
    │   └── without_skill/
    │       ├── outputs/
    │       ├── timing.json
    │       └── grading.json
    └── benchmark.json         # Aggregated statistics
```

The main file you author by hand is `evals/evals.json`. The other JSON files (`grading.json`, `timing.json`, `benchmark.json`) are produced during the eval process — by the agent, by scripts, or by you.
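
Aggregating those per-run files into `benchmark.json` is mechanical enough to script. A sketch under one assumed convention — each `grading.json` holds an `assertions` list whose entries carry a `passed` boolean; the guide does not fix this shape, so adjust to whatever your grading step actually writes:

```python
import json
from pathlib import Path

def summarize(iteration_dir: str) -> dict:
    """Aggregate grading.json and timing.json across one iteration.

    Assumes grading.json looks like {"assertions": [{"passed": true}, ...]};
    that shape is a convention for this sketch, not a fixed format.
    """
    stats = {}
    for variant in ("with_skill", "without_skill"):
        passed = total = tokens = runs = 0
        for run_dir in Path(iteration_dir).glob(f"*/{variant}"):
            grading = json.loads((run_dir / "grading.json").read_text())
            for assertion in grading.get("assertions", []):
                total += 1
                passed += bool(assertion.get("passed"))
            timing = json.loads((run_dir / "timing.json").read_text())
            tokens += timing.get("total_tokens", 0)
            runs += 1
        stats[variant] = {
            "runs": runs,
            "pass_rate": passed / total if total else None,
            "avg_tokens": tokens / runs if runs else None,
        }
    return stats
```

Writing the result to `iteration-N/benchmark.json` gives you a single file to diff between iterations.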
|
||||||
|
|
||||||
|
### Spawning runs

Each eval run should start with a clean context — no leftover state from previous runs or from the skill development process. This ensures the agent follows only what the `SKILL.md` tells it. In environments that support subagents (Claude Code, for example), this isolation comes naturally: each child task starts fresh. Without subagents, use a separate session for each run.

For each run, provide:

* The skill path (or no skill for the baseline)
* The test prompt
* Any input files
* The output directory

Here's an example of the instructions you'd give the agent for a single with-skill run:

```
Execute this task:
- Skill path: /path/to/csv-analyzer
- Task: I have a CSV of monthly sales data in data/sales_2025.csv.
  Can you find the top 3 months by revenue and make a bar chart?
- Input files: evals/files/sales_2025.csv
- Save outputs to: csv-analyzer-workspace/iteration-1/eval-top-months-chart/with_skill/outputs/
```

For the baseline, use the same prompt but without the skill path, saving to `without_skill/outputs/`.

When improving an existing skill, use the previous version as your baseline. Snapshot it before editing (`cp -r <skill-path> <workspace>/skill-snapshot/`), point the baseline run at the snapshot, and save to `old_skill/outputs/` instead of `without_skill/`.
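Instructions like these can be templated from `evals/evals.json` so every run uses the same wording. A minimal sketch, assuming the JSON layout shown earlier (the helper name and workspace layout are illustrative, not part of any official tooling):

```python
def build_run_instructions(eval_case, workspace, iteration, config, skill_path=None):
    """Render the per-run instructions for one eval case and one configuration."""
    out_dir = f"{workspace}/iteration-{iteration}/eval-{eval_case['id']}/{config}/outputs/"
    lines = ["Execute this task:"]
    if skill_path:
        lines.append(f"- Skill path: {skill_path}")
    lines.append(f"- Task: {eval_case['prompt']}")
    if eval_case.get("files"):
        lines.append(f"- Input files: {', '.join(eval_case['files'])}")
    lines.append(f"- Save outputs to: {out_dir}")
    return "\n".join(lines)

case = {"id": 1, "prompt": "Find the top 3 months by revenue.",
        "files": ["evals/files/sales_2025.csv"]}
print(build_run_instructions(case, "csv-analyzer-workspace", 1,
                             "with_skill", "/path/to/csv-analyzer"))
```

Calling it again with `config="without_skill"` and no `skill_path` produces the matching baseline instructions.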
### Capturing timing data

Timing data lets you compare how much time and tokens the skill costs relative to the baseline — a skill that dramatically improves output quality but triples token usage is a different trade-off than one that's both better and cheaper. When each run completes, record the token count and duration:

```json timing.json theme={null}
{
  "total_tokens": 84852,
  "duration_ms": 23332
}
```

<Tip>
  In Claude Code, when a subagent task finishes, the [task completion notification](https://platform.claude.com/docs/en/agent-sdk/typescript#sdk-task-notification-message) includes `total_tokens` and `duration_ms`. Save these values immediately — they aren't persisted anywhere else.
</Tip>
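Persisting the two values as soon as they arrive can be a one-liner per run. A sketch, assuming the workspace layout above (the helper name is ours):

```python
import json
from pathlib import Path

def record_timing(run_dir: str, total_tokens: int, duration_ms: int) -> None:
    """Write timing.json inside the run's directory, creating it if needed."""
    path = Path(run_dir) / "timing.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(
        {"total_tokens": total_tokens, "duration_ms": duration_ms}, indent=2))

record_timing("csv-analyzer-workspace/iteration-1/eval-1/with_skill", 84852, 23332)
```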
## Writing assertions

Assertions are verifiable statements about what the output should contain or achieve. Add them after you see your first round of outputs — you often don't know what "good" looks like until the skill has run.

Good assertions:

* `"The output file is valid JSON"` — programmatically verifiable.
* `"The bar chart has labeled axes"` — specific and observable.
* `"The report includes at least 3 recommendations"` — countable.

Weak assertions:

* `"The output is good"` — too vague to grade.
* `"The output uses exactly the phrase 'Total Revenue: $X'"` — too brittle; correct output with different wording would fail.

Not everything needs an assertion. Some qualities — writing style, visual design, whether the output "feels right" — are hard to decompose into pass/fail checks. These are better caught during [human review](#reviewing-results-with-a-human). Reserve assertions for things that can be checked objectively.

Add assertions to each test case in `evals/evals.json`:

```json evals/evals.json highlight={9-14} theme={null}
{
  "skill_name": "csv-analyzer",
  "evals": [
    {
      "id": 1,
      "prompt": "I have a CSV of monthly sales data in data/sales_2025.csv. Can you find the top 3 months by revenue and make a bar chart?",
      "expected_output": "A bar chart image showing the top 3 months by revenue, with labeled axes and values.",
      "files": ["evals/files/sales_2025.csv"],
      "assertions": [
        "The output includes a bar chart image file",
        "The chart shows exactly 3 months",
        "Both axes are labeled",
        "The chart title or caption mentions revenue"
      ]
    }
  ]
}
```
## Grading outputs

Grading means evaluating each assertion against the actual outputs and recording **PASS** or **FAIL** with specific evidence. The evidence should quote or reference the output, not just state an opinion.

The simplest approach is to give the outputs and assertions to an LLM and ask it to evaluate each one. For assertions that can be checked by code (valid JSON, correct row count, file exists with expected dimensions), use a verification script — scripts are more reliable than LLM judgment for mechanical checks and reusable across iterations.
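For instance, two of those mechanical checks might look like this. A sketch, not a fixed schema; the function names and file paths are illustrative:

```python
import json
from pathlib import Path

def check_valid_json(path: str) -> bool:
    """PASS if the file exists and parses as JSON."""
    try:
        json.loads(Path(path).read_text())
        return True
    except (OSError, ValueError):
        return False

def check_file_exists(path: str) -> bool:
    """PASS if the expected output artifact was produced at all."""
    return Path(path).is_file()

# Exercise the checks against a throwaway output file
Path("outputs").mkdir(exist_ok=True)
Path("outputs/result.json").write_text('{"rows": 12}')
print(check_valid_json("outputs/result.json"))   # True
print(check_valid_json("outputs/missing.json"))  # False
```

Because these checks are deterministic, the same script can grade every iteration without drift.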
```json grading.json theme={null}
{
  "assertion_results": [
    {
      "text": "The output includes a bar chart image file",
      "passed": true,
      "evidence": "Found chart.png (45KB) in outputs directory"
    },
    {
      "text": "The chart shows exactly 3 months",
      "passed": true,
      "evidence": "Chart displays bars for March, July, and November"
    },
    {
      "text": "Both axes are labeled",
      "passed": false,
      "evidence": "Y-axis is labeled 'Revenue ($)' but X-axis has no label"
    },
    {
      "text": "The chart title or caption mentions revenue",
      "passed": true,
      "evidence": "Chart title reads 'Top 3 Months by Revenue'"
    }
  ],
  "summary": {
    "passed": 3,
    "failed": 1,
    "total": 4,
    "pass_rate": 0.75
  }
}
```
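The `summary` block can be derived mechanically from `assertion_results`, so it never drifts out of sync with the per-assertion verdicts. A minimal sketch:

```python
def summarize(assertion_results):
    """Compute the summary block from graded assertions."""
    passed = sum(1 for r in assertion_results if r["passed"])
    total = len(assertion_results)
    return {
        "passed": passed,
        "failed": total - passed,
        "total": total,
        "pass_rate": round(passed / total, 2) if total else 0.0,
    }

results = [{"passed": True}, {"passed": True}, {"passed": False}, {"passed": True}]
print(summarize(results))  # {'passed': 3, 'failed': 1, 'total': 4, 'pass_rate': 0.75}
```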
### Grading principles

* **Require concrete evidence for a PASS.** Don't give the benefit of the doubt. If an assertion says "includes a summary" and the output has a section titled "Summary" with one vague sentence, that's a FAIL — the label is there but the substance isn't.
* **Review the assertions themselves, not just the results.** While grading, notice when assertions are too easy (always pass regardless of skill quality), too hard (always fail even when the output is good), or unverifiable (can't be checked from the output alone). Fix these for the next iteration.

<Tip>
  For comparing two skill versions, try **blind comparison**: present both outputs to an LLM judge without revealing which came from which version. The judge scores holistic qualities — organization, formatting, usability, polish — on its own rubric, free from bias about which version "should" be better. This complements assertion grading: two outputs might both pass all assertions but differ significantly in overall quality.
</Tip>
## Aggregating results

Once every run in the iteration is graded, compute summary statistics per configuration and save them to `benchmark.json` alongside the eval directories (e.g., `csv-analyzer-workspace/iteration-1/benchmark.json`):

```json benchmark.json theme={null}
{
  "run_summary": {
    "with_skill": {
      "pass_rate": { "mean": 0.83, "stddev": 0.06 },
      "time_seconds": { "mean": 45.0, "stddev": 12.0 },
      "tokens": { "mean": 3800, "stddev": 400 }
    },
    "without_skill": {
      "pass_rate": { "mean": 0.33, "stddev": 0.10 },
      "time_seconds": { "mean": 32.0, "stddev": 8.0 },
      "tokens": { "mean": 2100, "stddev": 300 }
    },
    "delta": {
      "pass_rate": 0.50,
      "time_seconds": 13.0,
      "tokens": 1700
    }
  }
}
```
The `delta` tells you what the skill costs (more time, more tokens) and what it buys (higher pass rate). A skill that adds 13 seconds but improves pass rate by 50 percentage points is probably worth it. A skill that doubles token usage for a 2-point improvement might not be.

<Note>
  Standard deviation (`stddev`) is only meaningful with multiple runs per eval. In early iterations with just 2-3 test cases and single runs, focus on the raw pass counts and the delta — the statistical measures become useful as you expand the test set and run each eval multiple times.
</Note>
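The per-configuration statistics fall out of the standard library. A sketch, assuming one pass rate and one token count per run (the sample numbers are made up; population stddev is used for simplicity):

```python
from statistics import mean, pstdev

def aggregate(pass_rates, tokens):
    """Summarize one configuration across runs."""
    return {
        "pass_rate": {"mean": round(mean(pass_rates), 2),
                      "stddev": round(pstdev(pass_rates), 2)},
        "tokens": {"mean": round(mean(tokens)),
                   "stddev": round(pstdev(tokens))},
    }

with_skill = aggregate([0.75, 0.83, 0.90], [3400, 3800, 4200])
without_skill = aggregate([0.25, 0.33, 0.41], [1800, 2100, 2400])
delta = round(with_skill["pass_rate"]["mean"] - without_skill["pass_rate"]["mean"], 2)
print(delta)  # 0.5
```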
## Analyzing patterns

Aggregate statistics can hide important patterns. After computing the benchmarks:

* **Remove or replace assertions that always pass in both configurations.** These don't tell you anything useful — the model handles them fine without the skill. They inflate the with-skill pass rate without reflecting actual skill value.
* **Investigate assertions that always fail in both configurations.** Either the assertion is broken (asking for something the model can't do), the test case is too hard, or the assertion is checking for the wrong thing. Fix these before the next iteration.
* **Study assertions that pass with the skill but fail without.** This is where the skill is clearly adding value. Understand *why* — which instructions or scripts made the difference?
* **Tighten instructions when results are inconsistent across runs.** If the same eval passes sometimes and fails others (reflected as high `stddev` in the benchmark), the eval may be flaky (sensitive to model randomness), or the skill's instructions may be ambiguous enough that the model interprets them differently each time. Add examples or more specific guidance to reduce ambiguity.
* **Check time and token outliers.** If one eval takes 3x longer than the others, read its execution transcript (the full log of what the model did during the run) to find the bottleneck.
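Sorting assertions into these buckets can be automated from the graded results of both configurations. A sketch keyed by assertion text (the bucket names are ours):

```python
def classify_assertions(with_skill, without_skill):
    """Bucket each assertion by its pass/fail pattern across configurations."""
    buckets = {"always_pass": [], "always_fail": [], "skill_wins": [], "skill_hurts": []}
    for text, w in with_skill.items():
        wo = without_skill[text]
        if w and wo:
            buckets["always_pass"].append(text)   # candidate for removal
        elif not w and not wo:
            buckets["always_fail"].append(text)   # investigate the assertion
        elif w and not wo:
            buckets["skill_wins"].append(text)    # where the skill adds value
        else:
            buckets["skill_hurts"].append(text)   # regression: worse with the skill
    return buckets

w = {"axes labeled": True, "valid JSON": True, "3 months shown": True}
wo = {"axes labeled": False, "valid JSON": True, "3 months shown": False}
print(classify_assertions(w, wo)["skill_wins"])  # ['axes labeled', '3 months shown']
```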
## Reviewing results with a human

Assertion grading and pattern analysis catch a lot, but they only check what you thought to write assertions for. A human reviewer brings a fresh perspective — catching issues you didn't anticipate, noticing when the output is technically correct but misses the point, or spotting problems that are hard to express as pass/fail checks. For each test case, review the actual outputs alongside the grades.

Record specific feedback for each test case and save it in the workspace (e.g., as a `feedback.json` alongside the eval directories):

```json feedback.json theme={null}
{
  "eval-top-months-chart": "The chart is missing axis labels and the months are in alphabetical order instead of chronological.",
  "eval-clean-missing-emails": ""
}
```

"The chart is missing axis labels" is actionable; "looks bad" is not. Empty feedback means the output looked fine — that test case passed your review. During the [iteration step](#iterating-on-the-skill), focus your improvements on the test cases where you had specific complaints.
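Since an empty string means "no complaints", pulling out the cases that still need work is a short filter. A sketch:

```python
import json

feedback = json.loads("""
{
  "eval-top-months-chart": "The chart is missing axis labels and the months are in alphabetical order instead of chronological.",
  "eval-clean-missing-emails": ""
}
""")

# Keep only the test cases with a non-empty reviewer note
needs_work = {case: note for case, note in feedback.items() if note.strip()}
print(list(needs_work))  # ['eval-top-months-chart']
```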
## Iterating on the skill

After grading and reviewing, you have three sources of signal:

* **Failed assertions** point to specific gaps — a missing step, an unclear instruction, or a case the skill doesn't handle.
* **Human feedback** points to broader quality issues — the approach was wrong, the output was poorly structured, or the skill produced a technically correct but unhelpful result.
* **Execution transcripts** reveal *why* things went wrong. If the agent ignored an instruction, the instruction may be ambiguous. If the agent spent time on unproductive steps, those instructions may need to be simplified or removed.

The most effective way to turn these signals into skill improvements is to give all three — along with the current `SKILL.md` — to an LLM and ask it to propose changes. The LLM can synthesize patterns across failed assertions, reviewer complaints, and transcript behavior that would be tedious to connect manually. When prompting the LLM, include these guidelines:

* **Generalize from feedback.** The skill will be used across many different prompts, not just the test cases. Fixes should address underlying issues broadly rather than adding narrow patches for specific examples.
* **Keep the skill lean.** Fewer, better instructions often outperform exhaustive rules. If transcripts show wasted work (unnecessary validation, unneeded intermediate outputs), remove those instructions. If pass rates plateau despite adding more rules, the skill may be over-constrained — try removing instructions and see if results hold or improve.
* **Explain the why.** Reasoning-based instructions ("Do X because Y tends to cause Z") work better than rigid directives ("ALWAYS do X, NEVER do Y"). Models follow instructions more reliably when they understand the purpose.
* **Bundle repeated work.** If every test run independently wrote a similar helper script (a chart builder, a data parser), that's a signal to bundle the script into the skill's `scripts/` directory. See [Using scripts](/skill-creation/using-scripts) for how to do this.

### The loop

1. Give the eval signals and current `SKILL.md` to an LLM and ask it to propose improvements.
2. Review and apply the changes.
3. Rerun all test cases in a new `iteration-<N+1>/` directory.
4. Grade and aggregate the new results.
5. Review with a human. Repeat.

Stop when you're satisfied with the results, feedback is consistently empty, or you're no longer seeing meaningful improvement between iterations.
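Step 1 of the loop can be mechanized by assembling the three signals into a single prompt string; how you then call the model depends on your environment. A sketch (the helper name and section layout are ours):

```python
def build_improvement_prompt(skill_md, failed_assertions, feedback, transcript_notes):
    """Bundle the three signals with the current SKILL.md for an LLM reviewer."""
    return "\n\n".join([
        "Propose improvements to this skill. Generalize from feedback, keep the "
        "skill lean, explain the why, and bundle repeated work into scripts.",
        "## Current SKILL.md\n" + skill_md,
        "## Failed assertions\n" + "\n".join(f"- {a}" for a in failed_assertions),
        "## Reviewer feedback\n" + "\n".join(f"- {f}" for f in feedback),
        "## Transcript observations\n" + "\n".join(f"- {t}" for t in transcript_notes),
    ])

prompt = build_improvement_prompt(
    "# csv-analyzer\n...",
    ["Both axes are labeled"],
    ["Months sorted alphabetically, not chronologically"],
    ["Agent rewrote the chart helper from scratch on every run"],
)
```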
<Tip>
  The [`skill-creator`](https://github.com/anthropics/skills/tree/main/skills/skill-creator) Skill automates much of this workflow — running evals, grading assertions, aggregating benchmarks, and presenting results for human review.
</Tip>
@@ -0,0 +1,75 @@
# ppt-maker

Convert Markdown into professional PPT decks in one step, with automatic chart generation and multiple themes.

## Architecture

![Architecture diagram](assets/ppt-maker-architecture.svg)

## Workflow

![Workflow diagram](assets/ppt-maker-workflow.svg)

## Features

- Markdown syntax drives slide layout (cover, content, and closing slides)
- Tables with numeric data are **automatically converted to charts** (pie / bar / line)
- 6 built-in themes: ocean, sunset, purple, luxury, midnight, classic
- Supports ordered/unordered lists, blockquotes, code blocks, and tables

## Quick start

```bash
node skills/ppt-maker/scripts/ppt-maker.js -i input.md -o output.pptx -t ocean
```

## Command-line options

| Option | Description | Required |
|--------|-------------|----------|
| `-i` | Input Markdown file | ✅ |
| `-o` | Output PPTX file | ✅ |
| `-t` | Theme name (default: ocean) | ❌ |
| `-l` | List all available themes | ❌ |

## Markdown page structure

```markdown
# Cover title           → cover slide
Subtitle text

## Section title        → content slide
## 感谢聆听              → closing slide (auto-detected)
```

## Automatic chart generation

Include a trigger keyword in a heading, and the table below it is automatically converted to the matching chart:

| Chart type | Example trigger keywords |
|------------|--------------------------|
| 🥧 Pie | 占比 (share), 比例 (ratio), 饼图 (pie chart), pie |
| 📊 Bar | 对比 (comparison), 排名 (ranking), 销售额 (sales), bar |
| 📈 Line | 趋势 (trend), 增长 (growth), 月度 (monthly), line |

## File Structure

```
skills/ppt-maker/
├── SKILL.md
├── README.md                  # this file
├── _meta.json
├── assets/
│   ├── workflow.puml
│   └── ppt-maker-workflow.svg
└── scripts/
    ├── ppt-maker.js           # main script
    └── package.json           # depends on pptxgenjs
```

## Example prompt

```
请使用 ppt-maker 技能,为我生成一份"2026年度销售总结"的汇报型幻灯片,
包含销售占比饼图、各产品销售额柱状图、月度趋势折线图,主题使用 midnight。
```
@@ -0,0 +1,286 @@
---
name: ppt-maker
description: "Professional one-step PPT generation. Create slides from Markdown, with automatic charts (pie/bar/line), multiple themes, ordered/unordered lists, blockquotes, code blocks, tables, and automatic thank-you slide detection."
---

# PPT Maker - Professional PPT Generator

Create polished PPT decks from Markdown automatically. **Tables with numeric data are converted to charts**, with multiple themes and smart layout.

## Quick start

```bash
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -i input.md -o output.pptx -t ocean
```
## Command-line options

| Option | Description | Required |
|--------|-------------|----------|
| `-i, --input` | Input Markdown file path | ✅ |
| `-o, --output` | Output PPTX file path (`.pptx` suffix added automatically) | ✅ |
| `-t, --theme` | Theme name, default ocean | ❌ |
| `-l, --list` | List all available themes | ❌ |
| `-h, --help` | Show help | ❌ |

## Supported themes

| Theme | Style | Best for |
|-------|-------|----------|
| ocean | Ocean blue | Tech / professional |
| sunset | Orange-red sunset | Warm / creative |
| purple | Violet | Creative / design |
| luxury | Black and gold | High-end / luxury |
| midnight | Dark night mode | Presentations / impact |
| classic | Classic green | Business / formal |
## Markdown syntax reference

### Page types

```markdown
# Big title             → cover slide (first slide)
Subtitle text           → cover subtitle

## Section title        → content slide
## 感谢聆听              → closing slide (auto-centered large-type layout)
```

**Closing-slide trigger keywords:** 感谢, 谢谢, thank, thanks, Q&A, 问答, 结束, The End, 再见, 联系方式

### Content elements

```markdown
### Subheading          → bold in-page subheading

- Bullet item 1         → bulleted list
- Bullet item 2

1. Numbered item 1      → numbered list with circled numbers
2. Numbered item 2

> Quoted text           → blockquote with a left accent bar

Plain text              → body paragraph
```

### Tables

```markdown
| Col 1 | Col 2 | Col 3 |
|-------|-------|-------|
| cell  | cell  | cell  |
```

When a table contains numeric columns, the tool **automatically checks** whether to convert it to a chart (see the chart rules below). Tables without numeric data, or that match no chart rule, are rendered as plain tables (with alternating row shading).

### Code blocks

````markdown
```python
print("Hello World")
```
````

Rendered with a dark background, monospace font, and rounded border.
## ⭐ Automatic chart generation

**Core feature:** include a trigger keyword in a `##` heading, a `###` subheading, or the body text just before a table, and the table below is automatically converted to the matching chart.

### Chart types and trigger keywords

| Chart type | Trigger keywords |
|------------|------------------|
| 🥧 Pie | 饼图, 饼状图, 占比, 比例, 份额, 构成, 组成, 百分比, 比重, pie |
| 📊 Bar | 柱状, 柱状图, 柱形, 排名, top, 对比, 比较, 分布, 销售额, 金额, 数量, 业绩, 产量, 营收, bar |
| 📈 Line | 折线, 折线图, 趋势, 增长, 变化, 走势, 曲线, 时间, 月度, 季度, 年度, line, trend |

### Smart inference (when no keyword matches)

- Numeric column sums to between 80 and 120 → treated as a **pie chart** (share data)
- At least 2 numeric points → defaults to a **bar chart**
- Multiple numeric columns → generates a **multi-series** chart

### Numeric parsing

Distracting characters in cells are cleaned automatically; all of the following parse correctly:

- `100万` `¥250` `$1,200` `30%` `85元` `1200亿`
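The engine itself is Node.js; as a rough illustration only, the parsing and inference rules above can be restated in Python (function names are ours, not the tool's, and the keyword lists are abbreviated):

```python
import re

def parse_number(cell: str):
    """Strip currency symbols, units, commas, and % around the numeric part."""
    m = re.search(r"-?\d+(?:,\d{3})*(?:\.\d+)?", cell)
    return float(m.group().replace(",", "")) if m else None

def infer_chart_type(heading: str, values: list) -> str:
    """Keywords win first; otherwise fall back to the documented heuristics."""
    pie_kw = ["饼图", "占比", "比例", "份额", "百分比", "pie"]
    bar_kw = ["柱状", "排名", "对比", "分布", "销售额", "bar"]
    line_kw = ["折线", "趋势", "增长", "月度", "季度", "line", "trend"]
    h = heading.lower()
    if any(k in h for k in pie_kw):
        return "pie"
    if any(k in h for k in bar_kw):
        return "bar"
    if any(k in h for k in line_kw):
        return "line"
    if 80 <= sum(values) <= 120:   # sums near 100 look like percentage shares
        return "pie"
    return "bar" if len(values) >= 2 else "table"

print(infer_chart_type("各产品销售占比", [30, 50, 20]))  # pie (keyword 占比)
```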
### Chart examples

#### Pie chart example

```markdown
## 销售占比分析
### 各产品销售占比饼图
| 产品 | 占比(%) |
|------|---------|
| 大米 | 30 |
| 高粱 | 50 |
| 小麦 | 20 |
```

#### Bar chart example

```markdown
## 各产品销售额对比
### 年度销售额柱状图
| 产品 | 销售额(万元) |
|------|-------------|
| 大米 | 100 |
| 高粱 | 250 |
| 小麦 | 130 |
```

#### Line chart example

```markdown
## 月度销售趋势
### 销售额变化趋势折线图
| 月份 | 销售额(万元) |
|------|-------------|
| 1月 | 35 |
| 2月 | 42 |
| 3月 | 58 |
| 4月 | 72 |
```

#### Tables that stay tables (no numeric column, or no chart context)

```markdown
## 工作计划
| 季度 | 目标 | 负责人 |
|------|------|--------|
| Q1 | 完成招聘 | 张经理 |
| Q2 | 市场拓展 | 李经理 |
```
## Usage examples

Provide a Markdown file (e.g. `input.md`), then generate the deck with:

```bash
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -i input.md -o output.pptx -t ocean
```

### Example Markdown input

```markdown
# 2026年度总结报告
北灵聊AI · 年度工作汇报

## 销售占比分析
### 各产品占比饼图
| 产品 | 占比(%) |
|------|---------|
| 大米 | 30 |
| 高粱 | 40 |
| 小麦 | 20 |
| 玉米 | 10 |

## 销售额对比
### 各产品销售额柱状图
| 产品 | 销售额(万元) |
|------|-------------|
| 大米 | 100 |
| 高粱 | 250 |
| 小麦 | 130 |

## 月度趋势
### 全年销售额变化趋势折线图
| 月份 | 销售额(万元) |
|------|-------------|
| 1月 | 35 |
| 6月 | 72 |
| 12月 | 102 |

## 核心成果
### 业务拓展
- 新增客户 126 家,同比增长 35%
- 开拓西南市场,覆盖 4 个新省份

### 团队建设
1. 团队扩充至 28 人
2. 组织培训 12 场
3. 员工满意度达 92%

> 全年目标超额完成,总销售额突破 560 万元

## 感谢聆听
北灵聊AI
期待2027再创佳绩!
```

### Example natural-language prompt

```
请使用 ppt-maker 技能,为我生成一份“2026生成式AI行业发展与企业落地趋势”的汇报型幻灯片,整体风格专业、简洁、科技感强,讲解人是北灵聊AI。

封面是2026生成式AI行业发展与企业落地趋势,副标题写“模型能力升级、企业应用加速与商业化观察”,并显示讲解人“北灵聊AI”。

第一页是企业采用生成式AI的主要应用场景分布,用饼状图展示,其中知识助手占比32,智能客服占比24,内容生成占比18,研发提效占比16,数据分析占比10。

第二页是企业AI项目预算投入对比,用柱状图展示,其中大模型平台建设预算380万,AI代码助手预算300万,AI办公助手预算260万,AI智能客服预算220万,AI营销内容生成预算180万。

第三页是2026年企业生成式AI项目推进热度趋势,用折线图展示,其中Q1热度指数48,Q2热度指数63,Q3热度指数78,Q4热度指数92。

最后一页是感谢,标题写“感谢聆听”,副标题写“欢迎交流生成式AI与企业应用实践”。
```
## Supported commands

```bash
# Show help
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -h

# List all themes
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -l

# Ocean blue theme (default)
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -i slides.md -o demo.pptx

# Midnight dark theme
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -i slides.md -o demo.pptx -t midnight

# Black-and-gold luxury theme
node ~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js -i slides.md -o demo.pptx -t luxury
```

## Layout features

### Page types

- **Cover slide** — large title + subtitle + decorative vertical bar on the left
- **Content slide** — title bar background + body area + page number
- **Closing slide** — centered large type + accent block + top and bottom decorative rules

### Decorative elements

- Theme-colored accent bar across the top
- Decorative line along the left edge
- Light background behind the title bar
- Vertical accent on the cover + bottom rule

### Content rendering

- Unordered lists: theme-colored bullets
- Ordered lists: theme-colored numbered circles
- Blockquotes: left accent bar + light background + italics
- Code blocks: dark background + monospace font + rounded corners
- Tables: shaded header row + alternating row colors
- Charts render beside the remaining content (pie charts on the right; bar and line charts narrowed, with content to the right)

## Notes

1. **Markdown format:** the file must start with a `#` cover heading; each `##` starts a new slide
2. **Chart triggers:** keywords are most reliable when placed in a `##` heading or `###` subheading
3. **Table format:** first row is the header, second row is the separator `|---|---|`, data starts on the third row
4. **Numeric columns:** a chart is only triggered when columns from the second onward contain parseable numbers
5. **Output format:** the `.pptx` suffix is added automatically
6. **Inline formatting:** `**bold**` `*italic*` `~~strikethrough~~` and similar are stripped to plain text

## File locations

- Script: `~/.openclaw/workspace/skills/ppt-maker/scripts/ppt-maker.js`
- Dependency: `pptxgenjs` (already installed)
@@ -0,0 +1,6 @@
{
  "ownerId": "kn75v9attg6retx4mdm14beva983desk",
  "slug": "ppt-maker",
  "version": "1.0.3",
  "publishedAt": 1774166054605
}
@@ -0,0 +1,30 @@
@startuml ppt-maker-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "ppt-maker Skill" {
  component "SKILL.md\n(instructions)" as SKILL
  component "scripts/ppt-maker.js\n(core engine)" as ENGINE
  component "scripts/package.json\n(deps: pptxgenjs)" as PKG
}

package "ppt-maker.js Internals" {
  component "Markdown Parser\n(slides, headings, lists)" as PARSER
  component "Chart Detector\n(keyword → pie/bar/line)" as CHART
  component "Theme Engine\n(ocean/sunset/purple/...)" as THEME
  component "PPTX Renderer\n(pptxgenjs)" as RENDERER
}

actor User

User --> SKILL : natural language request
SKILL --> ENGINE : node ppt-maker.js -i ... -o ... -t ...
ENGINE --> PARSER
ENGINE --> CHART
ENGINE --> THEME
PARSER --> RENDERER
CHART --> RENDERER
THEME --> RENDERER
RENDERER --> User : output.pptx
@enduml
(two binary SVG assets added: 9.5 KiB and 8.8 KiB)
@@ -0,0 +1,25 @@
@startuml ppt-maker-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor User
participant "ppt-maker\nSkill" as SKILL
participant "ppt-maker.js" as ENGINE

User -> SKILL : "生成一份PPT: <描述>"
SKILL -> SKILL : generate Markdown content
SKILL -> ENGINE : node ppt-maker.js -i input.md -o out.pptx -t <theme>

ENGINE -> ENGINE : parse slides\n(# → cover, ## → content, ## 感谢 → end)

loop each ## slide
  ENGINE -> ENGINE : scan headings for chart keywords
  alt chart keyword found & table present
    ENGINE -> ENGINE : render pie / bar / line chart
  else
    ENGINE -> ENGINE : render text / list / table
  end
end

ENGINE --> User : output.pptx
@enduml
@@ -0,0 +1,172 @@
{
  "name": "scripts",
  "version": "1.0.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "scripts",
      "version": "1.0.0",
      "license": "ISC",
      "dependencies": {
        "pptxgenjs": "^4.0.1"
      }
    },
    "node_modules/@types/node": {
      "version": "22.19.15",
      "resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.15.tgz",
      "integrity": "sha512-F0R/h2+dsy5wJAUe3tAU6oqa2qbWY5TpNfL/RGmo1y38hiyO1w3x2jPtt76wmuaJI4DQnOBu21cNXQ2STIUUWg==",
      "license": "MIT",
      "dependencies": {
        "undici-types": "~6.21.0"
      }
    },
    "node_modules/core-util-is": {
      "version": "1.0.3",
      "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
      "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
      "license": "MIT"
    },
    "node_modules/https": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/https/-/https-1.0.0.tgz",
      "integrity": "sha512-4EC57ddXrkaF0x83Oj8sM6SLQHAWXw90Skqu2M4AEWENZ3F02dFJE/GARA8igO79tcgYqGrD7ae4f5L3um2lgg==",
      "license": "ISC"
    },
    "node_modules/image-size": {
      "version": "1.2.1",
      "resolved": "https://registry.npmjs.org/image-size/-/image-size-1.2.1.tgz",
      "integrity": "sha512-rH+46sQJ2dlwfjfhCyNx5thzrv+dtmBIhPHk0zgRUukHzZ/kRueTJXoYYsclBaKcSMBWuGbOFXtioLpzTb5euw==",
      "license": "MIT",
      "dependencies": {
        "queue": "6.0.2"
      },
      "bin": {
        "image-size": "bin/image-size.js"
      },
      "engines": {
        "node": ">=16.x"
      }
    },
    "node_modules/immediate": {
      "version": "3.0.6",
      "resolved": "https://registry.npmjs.org/immediate/-/immediate-3.0.6.tgz",
      "integrity": "sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==",
      "license": "MIT"
    },
    "node_modules/inherits": {
      "version": "2.0.4",
      "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
      "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
      "license": "ISC"
    },
    "node_modules/isarray": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
      "integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
      "license": "MIT"
    },
    "node_modules/jszip": {
      "version": "3.10.1",
      "resolved": "https://registry.npmjs.org/jszip/-/jszip-3.10.1.tgz",
      "integrity": "sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==",
      "license": "(MIT OR GPL-3.0-or-later)",
      "dependencies": {
        "lie": "~3.3.0",
        "pako": "~1.0.2",
        "readable-stream": "~2.3.6",
        "setimmediate": "^1.0.5"
      }
    },
    "node_modules/lie": {
      "version": "3.3.0",
      "resolved": "https://registry.npmjs.org/lie/-/lie-3.3.0.tgz",
      "integrity": "sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==",
      "license": "MIT",
      "dependencies": {
        "immediate": "~3.0.5"
      }
    },
    "node_modules/pako": {
      "version": "1.0.11",
      "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz",
      "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==",
      "license": "(MIT AND Zlib)"
    },
    "node_modules/pptxgenjs": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/pptxgenjs/-/pptxgenjs-4.0.1.tgz",
      "integrity": "sha512-TeJISr8wouAuXw4C1F/mC33xbZs/FuEG6nH9FG1Zj+nuPcGMP5YRHl6X+j3HSUnS1f3at6k75ZZXPMZlA5Lj9A==",
      "license": "MIT",
      "dependencies": {
        "@types/node": "^22.8.1",
        "https": "^1.0.0",
        "image-size": "^1.2.1",
        "jszip": "^3.10.1"
      }
    },
    "node_modules/process-nextick-args": {
      "version": "2.0.1",
      "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
      "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
      "license": "MIT"
    },
    "node_modules/queue": {
      "version": "6.0.2",
      "resolved": "https://registry.npmjs.org/queue/-/queue-6.0.2.tgz",
      "integrity": "sha512-iHZWu+q3IdFZFX36ro/lKBkSvfkztY5Y7HMiPlOUjhupPcG2JMfst2KKEpu5XndviX/3UhFbRngUPNKtgvtZiA==",
      "license": "MIT",
      "dependencies": {
        "inherits": "~2.0.3"
      }
    },
    "node_modules/readable-stream": {
      "version": "2.3.8",
      "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz",
      "integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==",
      "license": "MIT",
      "dependencies": {
        "core-util-is": "~1.0.0",
        "inherits": "~2.0.3",
        "isarray": "~1.0.0",
        "process-nextick-args": "~2.0.0",
        "safe-buffer": "~5.1.1",
        "string_decoder": "~1.1.1",
        "util-deprecate": "~1.0.1"
      }
    },
    "node_modules/safe-buffer": {
      "version": "5.1.2",
      "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
      "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
      "license": "MIT"
    },
    "node_modules/setimmediate": {
      "version": "1.0.5",
      "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz",
      "integrity": "sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==",
      "license": "MIT"
    },
    "node_modules/string_decoder": {
      "version": "1.1.1",
      "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
      "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
      "license": "MIT",
      "dependencies": {
        "safe-buffer": "~5.1.0"
      }
    },
    "node_modules/undici-types": {
      "version": "6.21.0",
      "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
      "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
      "license": "MIT"
    },
    "node_modules/util-deprecate": {
      "version": "1.0.2",
      "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
      "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==",
      "license": "MIT"
    }
  }
}
@@ -0,0 +1,15 @@
{
  "name": "ppt-maker",
  "version": "1.0.0",
  "description": "Generate PPT slides automatically from user input",
  "main": "ppt-maker.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": ["ppt", "pptx", "slides", "presentation", "ai", "generator", "automatic"],
  "author": "北灵聊AI",
  "license": "MIT",
  "dependencies": {
    "pptxgenjs": "^4.0.1"
  }
}
@@ -0,0 +1,654 @@
/**
 * PPT Maker - Markdown to PPTX Generator
 * Supports auto charts (pie/bar/line), multiple themes, ending page detection.
 */

var PptxGenJS = require("pptxgenjs");

// ══════════════════════════════════════════════════════
// 1. Themes & Colors (no # prefix)
// ══════════════════════════════════════════════════════

var THEMES = {
  sunset: { bg: 'FFF8F3', title: 'E85D04', text: '3D405B', accent: 'F48C06', secondary: 'FAA307', light: 'FFECD2', lighter: 'FFF5EB' },
  ocean: { bg: 'F0F8FF', title: '0077B6', text: '2D3748', accent: '00B4D8', secondary: '90E0EF', light: 'CAF0F8', lighter: 'E8F8FF' },
  purple: { bg: 'FAF5FF', title: '7C3AED', text: '4C1D95', accent: 'A78BFA', secondary: 'C4B5FD', light: 'EDE9FE', lighter: 'F5F3FF' },
  luxury: { bg: '1C1917', title: 'F5F5F4', text: 'A8A29E', accent: 'D4AF37', secondary: 'F59E0B', light: '292524', lighter: '1C1917' },
  midnight: { bg: '0F172A', title: 'F8FAFC', text: 'CBD5E1', accent: '38BDF8', secondary: '60A5FA', light: '1E293B', lighter: '0F172A' },
  classic: { bg: 'FFFFFF', title: '1F2937', text: '4B5563', accent: '059669', secondary: '10B981', light: 'ECFDF5', lighter: 'F0FDF4' }
};

var CHART_COLORS = {
  ocean: ['0077B6', '00B4D8', '90E0EF', '48CAE4', '023E8A', '0096C7', 'ADE8F4'],
  sunset: ['E85D04', 'F48C06', 'FAA307', 'FFBA08', 'DC2F02', 'E36414', 'F77F00'],
  purple: ['7C3AED', 'A78BFA', 'C4B5FD', '8B5CF6', '6D28D9', '5B21B6', 'DDD6FE'],
  luxury: ['D4AF37', 'F59E0B', 'FBBF24', 'FFD700', 'B8860B', 'D97706', 'FCD34D'],
  midnight: ['38BDF8', '60A5FA', '93C5FD', '2563EB', '1D4ED8', '3B82F6', 'BFDBFE'],
  classic: ['059669', '10B981', '34D399', '6EE7B7', '047857', '065F46', 'A7F3D0']
};

var CHART_RULES = [
  { type: 'pie', keys: ['饼图', '饼状图', '占比', '比例', '份额', '构成', '组成', '百分比', '比重', 'pie', 'proportion', 'share'] },
  { type: 'line', keys: ['折线', '折线图', '趋势', '增长', '变化', '走势', '曲线', '时间', '月度', '季度', '年度', 'line', 'trend'] },
  { type: 'bar', keys: ['柱状', '柱状图', '柱形', '柱形图', '排名', 'top', '对比', '比较', '分布', '销售额', '金额', '数量', '业绩', '产量', '营收', '收入', 'bar', 'column', 'chart'] }
];

var ENDING_KEYWORDS = ['感谢', '谢谢', 'thank', 'thanks', 'q&a', 'q & a', '问答', '结束', 'the end', '再见', '联系方式', '联系我们'];

// ══════════════════════════════════════════════════════
// 2. Utility Functions
// ══════════════════════════════════════════════════════

function stripInlineMarkdown(text) {
  if (!text) return '';
  return text
    .replace(/\*\*(.+?)\*\*/g, '$1')
    .replace(/__(.+?)__/g, '$1')
    .replace(/\*(.+?)\*/g, '$1')
    .replace(/_(.+?)_/g, '$1')
    .replace(/~~(.+?)~~/g, '$1')
    .replace(/`(.+?)`/g, '$1')
    .replace(/\[(.+?)\]\(.+?\)/g, '$1')
    .trim();
}

function isTableSeparator(line) {
  var inner = line.replace(/^\|/, '').replace(/\|$/, '');
  var cells = inner.split('|');
  return cells.length > 0 && cells.every(function(c) {
    return /^\s*:?-{2,}:?\s*$/.test(c);
  });
}

function isEndingSlide(title) {
  if (!title) return false;
  var lower = title.toLowerCase().trim();
  return ENDING_KEYWORDS.some(function(k) { return lower.indexOf(k) !== -1; });
}

function parseNumericCell(raw) {
  if (!raw) return NaN;
  var cleaned = raw
    .replace(/[,，]/g, '')
    .replace(/[%％]/g, '')
    .replace(/[¥￥$€£₩]/g, '')
    .replace(/[元万亿千百十个份人次件台套组年月日号]/g, '')
    .replace(/[（()）\s]/g, '')
    .trim();
  return parseFloat(cleaned);
}

function ensureColors(colors, needed) {
  var result = [];
  for (var i = 0; i < needed; i++) {
    result.push(colors[i % colors.length]);
  }
  return result;
}

// ══════════════════════════════════════════════════════
// 3. Markdown Parser
// ══════════════════════════════════════════════════════

function parse(text) {
  var slides = [];
  var lines = text.split('\n');
  var current = null;
  var inCode = false;
  var codeContent = [];

  for (var li = 0; li < lines.length; li++) {
    var line = lines[li];
    var trimmed = line.trim();

    if (trimmed.indexOf('```') === 0) {
      if (inCode) {
        if (current) {
          current.content.push({ type: 'code', code: codeContent.join('\n') });
        }
        codeContent = [];
      }
      inCode = !inCode;
      continue;
    }
    if (inCode) {
      codeContent.push(line);
      continue;
    }

    if (!trimmed) continue;

    if (/^# (?!#)/.test(trimmed)) {
      if (current) slides.push(current);
      current = {
        type: 'cover',
        title: stripInlineMarkdown(trimmed.slice(2)),
        subtitle: '',
        content: []
      };
    }
    else if (/^## (?!#)/.test(trimmed)) {
      if (current) slides.push(current);
      var title = stripInlineMarkdown(trimmed.slice(3));
      current = {
        type: isEndingSlide(title) ? 'ending' : 'content',
        title: title,
        content: []
      };
    }
    else if (trimmed.indexOf('### ') === 0) {
      if (!current) current = { type: 'content', title: '', content: [] };
      current.content.push({ type: 'h3', text: stripInlineMarkdown(trimmed.slice(4)) });
    }
    else if (/^[-*]\s/.test(trimmed)) {
      if (!current) current = { type: 'content', title: '', content: [] };
      var itemText = stripInlineMarkdown(trimmed.replace(/^[-*]\s+/, ''));
      var last = current.content[current.content.length - 1];
      if (last && last.type === 'list') {
        last.items.push(itemText);
      } else {
        current.content.push({ type: 'list', items: [itemText] });
      }
    }
    else if (/^\d+\.\s/.test(trimmed)) {
      if (!current) current = { type: 'content', title: '', content: [] };
      var oItemText = stripInlineMarkdown(trimmed.replace(/^\d+\.\s+/, ''));
      var oLast = current.content[current.content.length - 1];
      if (oLast && oLast.type === 'olist') {
        oLast.items.push(oItemText);
      } else {
        current.content.push({ type: 'olist', items: [oItemText] });
      }
    }
    else if (trimmed.charAt(0) === '|') {
      if (isTableSeparator(trimmed)) continue;
      if (!current) current = { type: 'content', title: '', content: [] };
      var inner = trimmed.replace(/^\|/, '').replace(/\|$/, '');
      var cells = inner.split('|').map(function(c) { return c.trim(); });
      if (cells.length === 0) continue;
      var tLast = current.content[current.content.length - 1];
      if (tLast && tLast.type === 'table') {
        tLast.rows.push(cells);
      } else {
        current.content.push({ type: 'table', rows: [cells] });
      }
    }
    else if (trimmed.charAt(0) === '>') {
      if (!current) current = { type: 'content', title: '', content: [] };
      var quoteText = stripInlineMarkdown(trimmed.replace(/^>\s*/, ''));
      var qLast = current.content[current.content.length - 1];
      if (qLast && qLast.type === 'quote') {
        qLast.lines.push(quoteText);
      } else {
        current.content.push({ type: 'quote', lines: [quoteText] });
      }
    }
    else {
      if (!current) current = { type: 'content', title: '', content: [] };
      var cleaned = stripInlineMarkdown(trimmed);
      if (current.type === 'cover' && !current.subtitle) {
        current.subtitle = cleaned;
      } else {
        current.content.push({ type: 'text', text: cleaned });
      }
    }
  }

  if (inCode && codeContent.length > 0 && current) {
    current.content.push({ type: 'code', code: codeContent.join('\n') });
  }
  if (current) slides.push(current);
  return slides;
}

// ══════════════════════════════════════════════════════
// 4. Chart Detection
// ══════════════════════════════════════════════════════

function extractSeries(table) {
  if (!table.rows || table.rows.length < 2) return null;

  var headers = table.rows[0];
  var dataRows = table.rows.slice(1);
  if (headers.length < 2 || dataRows.length === 0) return null;

  var labels = dataRows.map(function(r) { return (r[0] || '').trim(); });
  var series = [];

  for (var col = 1; col < headers.length; col++) {
    var values = dataRows.map(function(r) { return parseNumericCell(r[col]); });
    if (values.every(function(v) { return !isNaN(v) && isFinite(v); })) {
      series.push({
        name: (headers[col] || '').trim() || ('Series' + col),
        labels: labels,
        values: values
      });
    }
  }
  return series.length > 0 ? series : null;
}

function detectChart(table, slide, tableIndex) {
  var series = extractSeries(table);
  if (!series) return null;

  var labels = series[0].labels;
  var hints = [];

  if (slide.title) hints.push(slide.title.toLowerCase());

  var hintIndex = -1;
  for (var j = tableIndex - 1; j >= 0; j--) {
    var item = slide.content[j];
    if (item.type === 'h3') {
      hints.push(item.text.toLowerCase());
      hintIndex = j;
      break;
    }
    if (item.type === 'text') {
      hints.push(item.text.toLowerCase());
    }
  }

  hints.push(table.rows[0].map(function(h) { return (h || '').toLowerCase(); }).join(' '));

  var combined = hints.join(' ');

  for (var ri = 0; ri < CHART_RULES.length; ri++) {
    var rule = CHART_RULES[ri];
    if (rule.keys.some(function(k) { return combined.indexOf(k) !== -1; })) {
      return { type: rule.type, series: series, labels: labels, hintIndex: hintIndex };
    }
  }

  var vals = series[0].values;
  var sum = vals.reduce(function(a, b) { return a + b; }, 0);
  var count = vals.length;

  if (count >= 2 && count <= 12 && sum >= 80 && sum <= 120) {
    return { type: 'pie', series: series, labels: labels, hintIndex: hintIndex };
  }
  if (count >= 2) {
    return { type: 'bar', series: series, labels: labels, hintIndex: hintIndex };
  }

  return null;
}

// ══════════════════════════════════════════════════════
// 5. Chart Type Resolution (multi-version compat)
// ══════════════════════════════════════════════════════

function getChartType(pres, name) {
  var MAP = { pie: 'PIE', line: 'LINE', bar: 'BAR' };
  var key = MAP[name];
  if (!key) return name;
  if (pres.charts && pres.charts[key] !== undefined) return pres.charts[key];
  if (pres.ChartType && pres.ChartType[key] !== undefined) return pres.ChartType[key];
  if (pres.ChartType && pres.ChartType[name] !== undefined) return pres.ChartType[name];
  return name;
}

// ══════════════════════════════════════════════════════
// 6. Chart Rendering
// ══════════════════════════════════════════════════════

function addChartToSlide(s, pres, chartData, colors, t, layout) {
  var lx = (layout && layout.x) || 0.5;
  var ly = (layout && layout.y) || 1.4;
  var lw = (layout && layout.w) || 9;
  var lh = (layout && layout.h) || 3.8;

  var chartType = getChartType(pres, chartData.type);
  var isPie = chartData.type === 'pie';
  var isLine = chartData.type === 'line';

  var data = isPie ? [chartData.series[0]] : chartData.series;
  var needed = isPie ? data[0].values.length : Math.max(data[0].values.length, data.length);
  var clrs = ensureColors(colors, needed);

  var opts = {
    x: lx, y: ly, w: lw, h: lh,
    chartColors: clrs,
    showLegend: true,
    legendPos: isPie ? 'r' : 'b',
    legendFontSize: 9,
    legendColor: t.text,
    showTitle: false
  };

  if (isPie) {
    opts.showPercent = true;
    opts.showValue = false;
    opts.dataLabelColor = t.text;
    opts.dataLabelFontSize = 10;
  } else if (isLine) {
    opts.lineSize = 2;
    opts.showMarker = true;
    opts.markerSize = 6;
    opts.catAxisLabelColor = t.text;
    opts.catAxisLabelFontSize = 9;
    opts.valAxisLabelColor = t.text;
    opts.valAxisLabelFontSize = 9;
    opts.showValue = true;
    opts.dataLabelColor = t.text;
    opts.dataLabelFontSize = 8;
    opts.dataLabelPosition = 'outEnd';
  } else {
    opts.barDir = 'col';
    opts.barGapWidthPct = 80;
    opts.catAxisLabelColor = t.text;
    opts.catAxisLabelFontSize = 10;
    opts.valAxisLabelColor = t.text;
    opts.valAxisLabelFontSize = 9;
    opts.showValue = true;
    opts.dataLabelColor = t.text;
    opts.dataLabelFontSize = 9;
    opts.dataLabelPosition = 'outEnd';
  }

  s.addChart(chartType, data, opts);
}

// ══════════════════════════════════════════════════════
// 7. Content Rendering
// ══════════════════════════════════════════════════════

function renderTable(s, tableItem, t, startY, maxY, startX, totalW) {
  if (!tableItem.rows || tableItem.rows.length === 0) return startY;

  var colCount = Math.max.apply(null, tableItem.rows.map(function(r) { return r.length; }));
  var cw = totalW / colCount;
  var rowH = 0.35;

  for (var r = 0; r < tableItem.rows.length; r++) {
    var ry = startY + r * rowH;
    if (ry + rowH > maxY) break;

    if (r === 0) {
      s.addShape('rect', {
        x: startX - 0.05, y: ry, w: colCount * cw + 0.1, h: rowH,
        fill: { color: t.light }
      });
    } else if (r % 2 === 0) {
      s.addShape('rect', {
        x: startX - 0.05, y: ry, w: colCount * cw + 0.1, h: rowH,
        fill: { color: t.lighter || t.bg }
      });
    }

    for (var c = 0; c < tableItem.rows[r].length; c++) {
      s.addText(tableItem.rows[r][c], {
        x: startX + c * cw, y: ry, w: cw - 0.05, h: rowH,
        fontSize: 10, color: r === 0 ? t.title : t.text,
        fontFace: 'Arial', bold: r === 0, valign: 'middle'
      });
    }
  }

  var renderedRows = Math.min(tableItem.rows.length, Math.floor((maxY - startY) / rowH));
  return startY + renderedRows * rowH + 0.2;
}

function renderContent(s, content, t, opts) {
  var startY = (opts && opts.startY) || 1.4;
  var maxY = (opts && opts.maxY) || 5.0;
  var x = (opts && opts.x) || 0.4;
  var w = (opts && opts.w) || 8.5;
  var y = startY;

  for (var idx = 0; idx < content.length; idx++) {
    var item = content[idx];
    if (y > maxY) break;

    if (item.type === 'h3') {
      s.addText(item.text, {
        x: x, y: y, w: w, h: 0.4,
        fontSize: 16, color: t.title, fontFace: 'Arial', bold: true
      });
      y += 0.5;
    }
    else if (item.type === 'list') {
      for (var li = 0; li < item.items.length; li++) {
        if (y > maxY) break;
        s.addShape('ellipse', {
          x: x + 0.02, y: y + 0.13, w: 0.09, h: 0.09,
          fill: { color: t.accent }
        });
        s.addText(item.items[li], {
          x: x + 0.22, y: y, w: w - 0.3, h: 0.35,
          fontSize: 13, color: t.text, fontFace: 'Arial'
        });
        y += 0.42;
      }
    }
    else if (item.type === 'olist') {
      for (var oi = 0; oi < item.items.length; oi++) {
        if (y > maxY) break;
        s.addShape('ellipse', {
          x: x, y: y + 0.05, w: 0.22, h: 0.22,
          fill: { color: t.accent }
        });
        s.addText(String(oi + 1), {
          x: x, y: y + 0.05, w: 0.22, h: 0.22,
          fontSize: 9, color: 'FFFFFF', fontFace: 'Arial', bold: true,
          align: 'center', valign: 'middle'
        });
        s.addText(item.items[oi], {
          x: x + 0.3, y: y, w: w - 0.4, h: 0.35,
          fontSize: 13, color: t.text, fontFace: 'Arial'
        });
        y += 0.42;
      }
    }
    else if (item.type === 'code') {
      var lineCount = item.code.split('\n').length;
      var ch = Math.min(2.5, lineCount * 0.22 + 0.3);
      s.addShape('roundRect', {
        x: x - 0.1, y: y, w: w + 0.2, h: ch,
        fill: { color: '1E1E1E' }, rectRadius: 0.05
      });
      s.addText(item.code, {
        x: x + 0.05, y: y + 0.1, w: w - 0.1, h: ch - 0.2,
        fontSize: 10, color: 'D4D4D4', fontFace: 'Consolas', valign: 'top'
      });
      y += ch + 0.2;
    }
    else if (item.type === 'quote') {
      var quoteText = item.lines.join('\n');
      var qLines = item.lines.length;
      var qh = Math.min(2.0, qLines * 0.25 + 0.2);
      s.addShape('rect', {
        x: x, y: y, w: 0.05, h: qh,
        fill: { color: t.accent }
      });
      s.addShape('rect', {
        x: x + 0.05, y: y, w: w - 0.05, h: qh,
        fill: { color: t.light }
      });
      s.addText(quoteText, {
        x: x + 0.2, y: y + 0.05, w: w - 0.3, h: qh - 0.1,
        fontSize: 12, color: t.text, fontFace: 'Arial', italic: true, valign: 'top'
      });
      y += qh + 0.15;
    }
    else if (item.type === 'table') {
      y = renderTable(s, item, t, y, maxY, x, w);
    }
    else if (item.type === 'text') {
      s.addText(item.text, {
        x: x, y: y, w: w, h: 0.35,
        fontSize: 12, color: t.text, fontFace: 'Arial'
      });
      y += 0.4;
    }
  }
  return y;
}

// ══════════════════════════════════════════════════════
// 8. Slide Renderers
// ══════════════════════════════════════════════════════

function addDecorations(s, t) {
  s.addShape('rect', { x: 0, y: 0, w: 10, h: 0.12, fill: { color: t.accent } });
  s.addShape('rect', { x: 0, y: 0, w: 0.10, h: 5.625, fill: { color: t.accent } });
}

function renderCoverSlide(s, slide, t) {
  s.addShape('rect', { x: 0.4, y: 1.3, w: 0.06, h: 1.6, fill: { color: t.accent } });
  s.addText(slide.title, {
    x: 0.7, y: 1.3, w: 8.5, h: 1.4,
    fontSize: 40, color: t.title, fontFace: 'Arial', bold: true, valign: 'middle'
  });
  if (slide.subtitle) {
    s.addText(slide.subtitle, {
      x: 0.7, y: 3.0, w: 8, h: 0.8,
      fontSize: 18, color: t.text, fontFace: 'Arial'
    });
  }
  s.addShape('rect', { x: 0.7, y: 4.2, w: 2.5, h: 0.03, fill: { color: t.secondary } });
  if (slide.content && slide.content.length > 0) {
    renderContent(s, slide.content, t, { startY: 4.5, maxY: 5.3 });
  }
}

function renderEndingSlide(s, slide, t) {
  s.addShape('rect', { x: 1.5, y: 1.0, w: 7, h: 3.5, fill: { color: t.light } });
  s.addShape('rect', { x: 2.5, y: 1.3, w: 5, h: 0.04, fill: { color: t.accent } });
  s.addShape('rect', { x: 2.5, y: 4.2, w: 5, h: 0.04, fill: { color: t.accent } });
  s.addText(slide.title, {
    x: 1, y: 1.5, w: 8, h: 1.5,
    fontSize: 44, color: t.title, fontFace: 'Arial', bold: true,
    align: 'center', valign: 'middle'
  });
  if (slide.content && slide.content.length > 0) {
    var texts = [];
    for (var i = 0; i < slide.content.length; i++) {
      var ci = slide.content[i];
      if (ci.type === 'text') texts.push(ci.text);
      if (ci.type === 'list') texts = texts.concat(ci.items);
      if (ci.type === 'olist') texts = texts.concat(ci.items);
    }
    if (texts.length > 0) {
      s.addText(texts.join('\n'), {
        x: 2, y: 3.0, w: 6, h: 1.2,
        fontSize: 14, color: t.text, fontFace: 'Arial',
        align: 'center', valign: 'top'
      });
    }
  }
}

function renderContentSlide(s, pres, slide, slideIndex, totalSlides, colors, t) {
  s.addShape('rect', { x: 0, y: 0.15, w: 10, h: 1.0, fill: { color: t.light } });
  s.addText(slide.title, {
    x: 0.5, y: 0.25, w: 9, h: 0.8,
    fontSize: 26, color: t.title, fontFace: 'Arial', bold: true, valign: 'middle'
  });

  var chartData = null;
  var chartTableIdx = -1;

  for (var ci = 0; ci < slide.content.length; ci++) {
    if (slide.content[ci].type === 'table') {
      var detected = detectChart(slide.content[ci], slide, ci);
      if (detected) {
        chartData = detected;
        chartTableIdx = ci;
        break;
      }
    }
  }

  if (chartData) {
    var remaining = slide.content.filter(function(_, idx) {
      return idx !== chartTableIdx && idx !== chartData.hintIndex;
    });
    var hasExtra = remaining.length > 0;

    try {
      var isPie = chartData.type === 'pie';
      var chartW = hasExtra ? (isPie ? 5.5 : 6.0) : (isPie ? 6.5 : 9.0);

      addChartToSlide(s, pres, chartData, colors, t, {
        x: 0.5, y: 1.4, w: chartW, h: 3.8
      });

      if (hasExtra) {
        var sideX = chartW + 0.8;
        var sideW = 10 - sideX - 0.3;
        if (sideW > 1.5) {
          renderContent(s, remaining, t, { startY: 1.5, maxY: 5.0, x: sideX, w: sideW });
        }
      }
    } catch (err) {
      renderContent(s, slide.content, t);
    }
  } else {
    renderContent(s, slide.content, t);
  }

  s.addText((slideIndex + 1) + ' / ' + totalSlides, {
    x: 8.5, y: 5.3, w: 1.3, h: 0.25,
    fontSize: 9, color: t.secondary, fontFace: 'Arial', align: 'right'
  });
}

// ══════════════════════════════════════════════════════
// 9. Main Generator
// ══════════════════════════════════════════════════════

function createPPTX(markdownText, options) {
  options = options || {};
  var themeName = options.theme || 'ocean';
  var t = THEMES[themeName] || THEMES.ocean;
  var colors = (CHART_COLORS[themeName] || CHART_COLORS.ocean).slice();

  var pres = new PptxGenJS();
  pres.layout = 'LAYOUT_16x9';

  var slides = parse(markdownText);

  if (slides.length === 0) {
    var emptySlide = pres.addSlide();
    emptySlide.background = { color: t.bg };
    emptySlide.addText('(empty content)', {
      x: 1, y: 2, w: 8, h: 1.5,
      fontSize: 24, color: t.text, fontFace: 'Arial', align: 'center', valign: 'middle'
    });
    return pres;
  }

  for (var si = 0; si < slides.length; si++) {
    var slide = slides[si];
    var s = pres.addSlide();
    s.background = { color: t.bg };
    addDecorations(s, t);

    if (slide.type === 'cover') {
      renderCoverSlide(s, slide, t);
    } else if (slide.type === 'ending') {
      renderEndingSlide(s, slide, t);
      s.addText((si + 1) + ' / ' + slides.length, {
        x: 8.5, y: 5.3, w: 1.3, h: 0.25,
        fontSize: 9, color: t.secondary, fontFace: 'Arial', align: 'right'
      });
    } else {
      renderContentSlide(s, pres, slide, si, slides.length, colors, t);
    }
  }

  return pres;
}

// ══════════════════════════════════════════════════════
// 10. Module Export
// ══════════════════════════════════════════════════════

module.exports = {
  createPPTX: createPPTX,
  parse: parse,
  THEMES: THEMES,
  CHART_COLORS: CHART_COLORS
};
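The keyword lookup that drives `detectChart` in ppt-maker.js can be sketched in isolation. `pickChartType` below is a hypothetical stand-alone helper, not part of the module, using a trimmed English-only subset of `CHART_RULES`; it shows the first-match-wins scan, with `null` signalling a fall-through to the numeric heuristics (values summing to roughly 100 suggest a pie chart):

```javascript
// Trimmed, English-only subset of CHART_RULES (illustration only).
var RULES = [
  { type: 'pie', keys: ['pie', 'proportion', 'share'] },
  { type: 'line', keys: ['line', 'trend'] },
  { type: 'bar', keys: ['bar', 'column', 'chart'] }
];

// First matching rule wins; null means "defer to numeric heuristics".
function pickChartType(hintText) {
  var lower = hintText.toLowerCase();
  for (var i = 0; i < RULES.length; i++) {
    if (RULES[i].keys.some(function (k) { return lower.indexOf(k) !== -1; })) {
      return RULES[i].type;
    }
  }
  return null;
}

console.log(pickChartType('Monthly revenue trend'));  // line
console.log(pickChartType('Market share by region')); // pie
console.log(pickChartType('Team introduction'));      // null
```

The rule order matters: the broad keyword `chart` lives in the last (bar) rule, so the more specific pie and line hints win first.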
@@ -1,68 +0,0 @@
---
name: python-skill
description: Python coding best practices and patterns. Use when writing, reviewing, or debugging Python code.
---

# Python Skill

## Type Hints (Python 3.10+)

```python
# Bad
def process(data, callback):
    return callback(data)

# Good
from typing import Callable

def process(data: dict, callback: Callable[[dict], str]) -> str:
    return callback(data)
```

## Dataclasses Over Plain Dicts

```python
from dataclasses import dataclass

@dataclass
class GameFrame:
    timestamp: float
    objects: list[str]
    confidence: float = 1.0
```

## Context Managers for Resources

```python
# Bad
f = open("log.txt")
data = f.read()
f.close()

# Good
with open("log.txt") as f:
    data = f.read()
```

## List Comprehensions vs Loops

```python
# Prefer comprehension for simple transforms
enemies = [obj for obj in objects if obj.type == "enemy"]

# Use loop when logic is complex (>2 conditions)
results = []
for obj in objects:
    if obj.type == "enemy" and obj.visible and obj.distance < 100:
        results.append(obj.position)
```

## Error Handling

```python
# Be specific — never catch bare Exception silently
try:
    frame = capture_screen()
except ScreenCaptureError as e:
    logger.error("Screen capture failed: %s", e)
    raise
```
@@ -1,20 +0,0 @@
{
  "skill_name": "python",
  "evals": [
    {
      "id": 1,
      "prompt": "What's the best way to handle file reading in Python?",
      "expected_output": "Recommends 'with' statement (context manager) for automatic resource cleanup, shows open() usage with proper mode."
    },
    {
      "id": 2,
      "prompt": "How should I add type hints to a Python function?",
      "expected_output": "Shows parameter type annotations, return type with ->, use of dataclasses or TypedDict for complex types."
    },
    {
      "id": 3,
      "prompt": "When should I use a list comprehension vs a for loop in Python?",
      "expected_output": "Recommends comprehensions for simple transforms, for loops when logic is complex (multiple conditions/side effects), with examples of each."
    }
  ]
}
@@ -0,0 +1,83 @@
# SDLC Skill

Guides you through a complete software development lifecycle: requirements → design → tasks → implementation plan → code → verification.

## Architecture

![SDLC Architecture](assets/sdlc-architecture.svg)

## Workflow

![SDLC Workflow](assets/sdlc-workflow.svg)

## Starting a New Project

Just describe what you want to build:

> "帮我设计一个订单管理系统" (help me design an order management system)
> "help me build a URL shortener"
> "构建一个 RAG 系统" (build a RAG system)

The agent will ask clarifying questions if needed, then walk you through each phase one at a time.

## Resuming After Interruption

If a session was interrupted, start a new session with the sdlc agent and say:

> "continue"
> "继续" (continue)
> "resume"

The agent will read `specs/STATUS.md` automatically and pick up where it left off.

## Confirming Each Phase

After each phase the agent writes an artifact and waits. Reply with any of:

> "done" / "ok" / "looks good" / "继续" / "确认" / "approve"

If the artifact has a `## ❓ Questions` section, fill in the answers in the file first, then reply "done".

## Phases

| Phase | Output | Trigger to advance |
|---|---|---|
| 1. Requirements | `specs/requirements.md` | "done" |
| 2. Design | `specs/design.md` | "done" |
| 3. Tasks | `specs/tasks.md` | "done" |
| 4. Implementation Plan | `specs/impl-plan.md` | "done" |
| 5. Implementation | source code | automatic |
| 6. Verification | updated checkboxes + test results | automatic |

## Checking Status

At any time:

> "what phase are we on?"
> "show status"

The agent will read `specs/STATUS.md` and report current progress.

## Change Requests (CR)

After the project is implemented, to add a new feature or significant change:

> "add semantic search support"
> "新增一个 REST API 接口" (add a new REST API endpoint)
> "CR: support multi-language"

The agent will create `specs/crs/CR-<N>-<title>.md` with a mini SDLC (requirements delta → design delta → tasks → impl plan), walk through confirmation phase by phase, then implement and verify.

## File Structure

```
skills/sdlc/
├── SKILL.md
├── README.md                # this file
├── assets/
│   ├── phase-checklist.md
│   ├── workflow.puml
│   └── sdlc-workflow.svg
└── evals/
    └── evals.json
```
@@ -0,0 +1,275 @@
---
name: sdlc
description: Systematic software development lifecycle assistant. Guides through requirements analysis, system design, task decomposition, and implementation planning. Use when a user needs to build a new feature, system, or product from scratch, or when they say "help me build", "design a system", "break down this feature", "plan this project", "需求分析", "系统设计", "任务分解", "实现计划", "帮我做", "帮我设计".
metadata:
  author: common-skills
  version: "1.0"
---

# SDLC Skill

Guide the user through a complete, systematic software development lifecycle: requirements → design → task breakdown → implementation plan. Produce concrete, actionable artifacts at each phase.

## Phase Overview

Run phases sequentially. Each phase produces a written artifact. **After completing each phase, present the artifact to the user and wait for explicit confirmation (e.g. "ok", "looks good", "continue", "确认") before proceeding to the next phase.** Do not advance automatically. Do not skip phases unless the user explicitly says a phase is already done.

```
Phase 1: Requirements Analysis → specs/requirements.md
Phase 2: System Design         → specs/design.md
Phase 3: Task Decomposition    → specs/tasks.md
Phase 4: Implementation Plan   → specs/impl-plan.md
Phase 5: Implementation        → source code
Phase 6: Verification          → specs/impl-plan.md (DoD checkboxes updated)
```

**State tracking:** Maintain `specs/STATUS.md` throughout the process. Update it after each phase completes. Format:

```markdown
# SDLC Status

- [x] Phase 1: Requirements — completed
- [x] Phase 2: Design — completed
- [ ] Phase 3: Tasks — in progress
- [ ] Phase 4: Implementation Plan
- [ ] Phase 5: Implementation
- [ ] Phase 6: Verification
```

**Output location:** Write all artifacts to `./specs/` in the current working directory. If the user specifies a path, use that instead.

---
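The STATUS.md format above is deliberately machine-checkable; a minimal sketch of how a resume point could be recovered from it (this helper is illustrative, not part of the skill):

```python
import re


def resume_phase(status_md: str) -> int:
    """Return the number of the first unchecked phase, or 0 if all done.

    Expects STATUS.md lines like '- [x] Phase 1: Requirements — completed'.
    """
    for line in status_md.splitlines():
        m = re.match(r"- \[( |x)\] Phase (\d+)", line.strip())
        if m and m.group(1) == " ":
            return int(m.group(2))
    return 0  # every checkbox is ticked: nothing to resume
```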
## Phase 1 — Requirements Analysis

**Goal:** Clarify what to build and why, before any design decisions.

Steps:
1. Identify the problem statement and user goals.
2. List functional requirements (what the system must do).
3. List non-functional requirements (performance, security, scalability, availability).
4. Identify constraints (tech stack, timeline, team size, budget).
5. Define out-of-scope items explicitly.
6. List open questions that need answers before design can proceed.

Output format — `requirements.md`:

```markdown
# Requirements: {Project Name}

## Problem Statement
{1–3 sentences: what problem, for whom, why it matters}

## Functional Requirements
- FR-1: ...
- FR-2: ...

## Non-Functional Requirements
- NFR-1: ...

## Constraints
- ...

## Out of Scope
- ...

## Open Questions
- [ ] ...
```

---
## Phase 2 — System Design

**Goal:** Define the architecture and key technical decisions.

Steps:
1. Choose an architecture style (monolith, microservices, event-driven, etc.) and justify it.
2. Draw a component diagram (PlantUML) showing major components and their interactions.
3. Define the data model (entities, relationships, key fields).
4. Identify external dependencies (third-party APIs, databases, queues, auth providers).
5. Document key technical decisions as ADRs (Architecture Decision Records) — one sentence each: decision + rationale.
6. Identify risks and mitigations.

Output format — `design.md`:

````markdown
# Design: {Project Name}

## Architecture
{Style chosen and rationale}

### Component Diagram
```plantuml
@startuml
...
@enduml
```

## Data Model
```plantuml
@startuml
entity ...
@enduml
```

## External Dependencies
| Dependency | Purpose | Notes |
|---|---|---|

## Architecture Decision Records
- ADR-1: {Decision} — {Rationale}
- ADR-2: ...

## Risks
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
````

---
## Phase 3 — Task Decomposition

**Goal:** Break the system into concrete, independently deliverable tasks.

Steps:
1. Group work into milestones (e.g., M1: Foundation, M2: Core Features, M3: Polish).
2. Within each milestone, list tasks. Each task must be:
   - Independently completable by one developer
   - Estimable (provide story points or hours)
   - Linked to at least one functional requirement
3. Identify dependencies between tasks (task B requires task A).
4. Flag tasks that are on the critical path.
5. Identify tasks that can be parallelized.

Output format — `tasks.md`:

````markdown
# Tasks: {Project Name}

## Milestone 1 — {Name} (Target: {date or sprint})

| ID | Task | Est. | Depends On | FR | Critical |
|---|---|---|---|---|---|
| T-1 | ... | 2h | — | FR-1 | ✓ |
| T-2 | ... | 4h | T-1 | FR-2 | |

## Milestone 2 — {Name}
...

## Dependency Graph
```plantuml
@startuml
[T-1] --> [T-2]
...
@enduml
```

## Parallelizable Work
- T-3, T-4, T-5 can run in parallel after T-1
````

---
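The Phase 3 dependency bookkeeping (steps 3–5) is mechanical once the dependency map exists; a minimal sketch, assuming a simple task → prerequisites dict (the helper and task IDs are illustrative, mirroring the tasks.md template above):

```python
def ready_after(done: set[str], deps: dict[str, list[str]]) -> set[str]:
    """Tasks whose prerequisites are all complete — these can run in parallel."""
    return {
        task for task, prereqs in deps.items()
        if task not in done and all(p in done for p in prereqs)
    }


# Dependency map matching the template: T-2..T-5 all depend only on T-1.
deps = {"T-1": [], "T-2": ["T-1"], "T-3": ["T-1"], "T-4": ["T-1"], "T-5": ["T-1"]}
```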
## Phase 4 — Implementation Plan

**Goal:** Produce a concrete, ordered execution plan a developer can follow immediately.

Steps:
1. Order tasks into a sprint/week plan based on dependencies and priorities.
2. For each task, provide:
   - Acceptance criteria (how to know it's done)
   - Key implementation notes (approach, gotchas, patterns to use)
   - A Definition of Done checklist
3. Identify the first task to start (the "day 1" task).
4. List any setup/scaffolding needed before coding begins.

Output format — `impl-plan.md`:

```markdown
# Implementation Plan: {Project Name}

## Setup Checklist
- [ ] ...

## Sprint 1

### T-1: {Task Name}
**Acceptance Criteria:**
- ...

**Implementation Notes:**
- ...

**Definition of Done:**
- [ ] Code written and reviewed
- [ ] Tests passing
- [ ] Deployed to staging

...
```

---
## Phase 5 — Implementation

**Goal:** Write the actual code, following the implementation plan.

- Implement tasks in the order defined in `specs/impl-plan.md`, respecting dependencies.
- After completing each task, tick its Definition of Done checkboxes in `specs/impl-plan.md`.
- Update `specs/STATUS.md` to reflect current progress.
- Do not proceed to Phase 6 until all tasks in the implementation plan are implemented.

---
## Phase 6 — Verification

**Goal:** Confirm that every task's acceptance criteria are met. Incomplete tasks must be completed before being marked done.

Steps:
1. Re-read `specs/impl-plan.md` in full.
2. For **every** task and **every** DoD checkbox:
   - Check whether the corresponding artifact/code actually exists.
   - If it exists and passes: mark `[x]`.
   - If it is missing or failing: **complete the work first**, then mark `[x]`.
   - Never mark `[x]` for work that has not been done.
3. Update `specs/tasks.md` — add a `Done` column and mark each row ✓ when its DoD is fully checked.
4. Update `specs/STATUS.md` — mark Phase 5 and Phase 6 complete only after all DoD items are `[x]`.
5. Report a summary: total criteria, passed, completed during verification, and any remaining failures.

---
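Step 2's honesty rule ("never mark `[x]` for work that has not been done") is easy to audit mechanically; a minimal sketch that lists the open DoD boxes in an impl-plan, assuming the checkbox format prescribed by the Phase 4 template (the helper name is illustrative):

```python
def unchecked_items(impl_plan_md: str) -> list[str]:
    """Return the label of every '- [ ]' checkbox that is still open."""
    items = []
    for line in impl_plan_md.splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):
            items.append(stripped[len("- [ ]"):].strip())
    return items
```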
## Behavior Guidelines

- **Context budget.** The model context window is finite (~200K tokens). To avoid overflow:
  - Never read source code files in bulk during the planning phases (1–4). Only read `specs/` artifacts and targeted config/metadata files.
  - During Phase 5 (implementation), implement and verify one task at a time. Do not load all source files simultaneously — read only the files relevant to the current task.
  - During Phase 6 (verification), run tests and check artifacts one task at a time rather than loading everything at once.
  - If context is running low (many files already read), summarize completed work, write state to `specs/STATUS.md`, and tell the user: "Context limit approaching — please start a new session and say 'continue' to resume from Phase N."
- **Track state.** After each phase completes, update `specs/STATUS.md`.
- **Resume on session start.** At the beginning of every session, before doing anything else:
  1. Check if `specs/STATUS.md` exists.
  2. If yes: read it, identify the last completed phase, and tell the user: "Resuming from Phase N — [phase name]. Last completed: [summary]." Then continue from where it left off.
  3. If no: this is a fresh start — proceed with Phase 1.
- **Clarify ambiguity.** If the user's input is ambiguous (e.g., "build a chat app"), ask 2–3 targeted clarifying questions before starting Phase 1. Do not invent requirements.
- **Inline Q&A for unknowns.** When there are unclear decisions within a phase, embed them directly in the artifact file under a `## ❓ Questions` section at the bottom. Each question must include a suggested answer. Example:

  ```markdown
  ## ❓ Questions
  - Q1: Should the API be REST or GraphQL?
    Suggested: REST — simpler for this use case.
  - Q2: Do we need multi-tenancy from day one?
    Suggested: No — single-tenant first, add later if needed.
  ```

  After writing the file, tell the user: "Please review the questions at the bottom of the file, fill in or adjust the answers, then reply 'done' to continue." Wait for the user to confirm (e.g. "done", "approve", "已回答", "继续") before proceeding. Once confirmed, re-read the file, incorporate the answers, and remove the `## ❓ Questions` section before moving to the next phase.
- **Be concrete.** Use actual names, technologies, and numbers from the user's context — not generic placeholders.
- **Stay lean.** Prefer simple designs over complex ones. Flag complexity only when requirements demand it.
- **Surface trade-offs.** When multiple approaches exist, briefly state the trade-off and recommend one.
- **No code until design is complete.** Do NOT generate any implementation code until Phases 1–3 are fully confirmed by the user. Phase 4 produces a plan with notes, not code — actual code is written only after the user explicitly requests it post-planning.
- **Iterate on request.** If the user says "change X" or "add Y" to an existing artifact, update only the affected artifact and note what changed.
- **Change Requests (CR).** When the user requests a new feature or significant change after Phase 4 is complete, treat it as a CR:
  1. Create `specs/crs/CR-<N>-<short-title>.md` (e.g. `specs/crs/CR-1-add-semantic-search.md`) with the same structure as a mini SDLC: requirements delta, design delta, new/updated tasks, impl notes.
  2. Update `specs/STATUS.md` to list the CR and its phase status.
  3. Follow the same phase-by-phase confirmation flow for the CR before implementing.
  4. On completion, update the original `specs/tasks.md` and `specs/impl-plan.md` to reflect the additions.
- **Language.** Respond in the same language the user used.

See [assets/phase-checklist.md](assets/phase-checklist.md) for a quick-reference checklist for each phase.
@@ -0,0 +1,38 @@
@startuml sdlc-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

package "sdlc Skill" {
  component "SKILL.md\n(instructions)" as SKILL
  component "assets/\nphase-checklist.md" as CHECKLIST
  component "evals/evals.json" as EVALS
}

package "Project Artifacts (specs/)" {
  component "requirements.md" as R
  component "design.md" as D
  component "tasks.md" as T
  component "impl-plan.md" as I
  component "STATUS.md\n(resume point)" as STATUS
  component "crs/CR-N-*.md\n(change requests)" as CR
}

package "Codebase" {
  component "source code" as CODE
  component "tests" as TESTS
}

actor Developer

Developer --> SKILL : describe project / "continue"
SKILL --> R : phase 1
SKILL --> D : phase 2
SKILL --> T : phase 3
SKILL --> I : phase 4
SKILL --> CODE : phase 5
SKILL --> TESTS : phase 6
SKILL --> STATUS : tracks progress
SKILL --> CR : change requests
CHECKLIST ..> SKILL : phase gate rules
@enduml
@@ -0,0 +1,32 @@
# Phase Quick-Reference Checklist

## Phase 1 — Requirements
- [ ] Problem statement written (1–3 sentences)
- [ ] Functional requirements listed (FR-N format)
- [ ] Non-functional requirements listed
- [ ] Constraints documented
- [ ] Out-of-scope items explicit
- [ ] Open questions listed

## Phase 2 — Design
- [ ] Architecture style chosen with rationale
- [ ] Component diagram (PlantUML)
- [ ] Data model diagram (PlantUML)
- [ ] External dependencies table
- [ ] ADRs written (decision + rationale)
- [ ] Risks table with mitigations

## Phase 3 — Tasks
- [ ] Work grouped into milestones
- [ ] Each task: estimable, independent, linked to an FR
- [ ] Dependencies identified
- [ ] Critical path flagged
- [ ] Parallelizable tasks noted

## Phase 4 — Implementation Plan
- [ ] Setup checklist written
- [ ] Tasks ordered by dependency + priority
- [ ] Each task has acceptance criteria
- [ ] Each task has implementation notes
- [ ] Each task has a Definition of Done
- [ ] "Day 1" first task identified
(binary image diffs: two SVG diagrams added, 12 KiB and 9.6 KiB)
@@ -0,0 +1,34 @@
@startuml sdlc-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA

actor Developer
participant "sdlc Skill" as SKILL
participant "specs/" as FS

Developer -> SKILL : describe project
SKILL -> FS : write requirements.md
SKILL --> Developer : phase 1 complete ✓
Developer -> SKILL : "done"

SKILL -> FS : write design.md
SKILL --> Developer : phase 2 complete ✓
Developer -> SKILL : "done"

SKILL -> FS : write tasks.md
SKILL --> Developer : phase 3 complete ✓
Developer -> SKILL : "done"

SKILL -> FS : write impl-plan.md
SKILL --> Developer : phase 4 complete ✓
Developer -> SKILL : "done"

SKILL -> FS : write source code
SKILL -> FS : run tests → update STATUS.md
SKILL --> Developer : phase 5 & 6 complete ✓

note over Developer, SKILL
  At any point: "continue" resumes from STATUS.md
  After completion: describe a CR to add features
end note
@enduml
@@ -0,0 +1,69 @@
{
  "skill_name": "sdlc",
  "evals": [
    {
      "id": 1,
      "prompt": "I want to build a URL shortener service. Users paste a long URL and get a short one back. Help me go through the full SDLC.",
      "expected_output": "Four artifacts: requirements.md with functional requirements (shorten URL, redirect, track clicks), NFRs (latency, availability), and out-of-scope items; design.md with architecture diagram, data model (URL entity with short_code, original_url, created_at, click_count), ADRs; tasks.md with milestones and estimated tasks linked to FRs; impl-plan.md with ordered sprint plan and acceptance criteria per task.",
      "assertions": [
        "requirements.md is created with at least 3 functional requirements",
        "requirements.md includes non-functional requirements (e.g. latency or availability)",
        "requirements.md includes an out-of-scope section",
        "design.md includes a PlantUML component or architecture diagram",
        "design.md includes a data model with at least a URL/link entity",
        "design.md includes at least 2 ADRs",
        "tasks.md groups work into at least 2 milestones",
        "tasks.md includes effort estimates for each task",
        "tasks.md identifies task dependencies",
        "impl-plan.md includes acceptance criteria for at least 2 tasks",
        "impl-plan.md includes a setup checklist"
      ]
    },
    {
      "id": 2,
      "prompt": "帮我设计一个团队任务管理系统,支持创建任务、分配给成员、设置截止日期、评论。技术栈用 React + Node.js + PostgreSQL。",
      "expected_output": "Four artifacts in Chinese or bilingual: requirements.md capturing task CRUD, assignment, deadlines, comments as FRs; design.md with React/Node.js/PostgreSQL architecture, data model (Task, User, Comment entities), ADRs for tech choices; tasks.md with milestones (backend API, frontend, integration), estimates, dependencies; impl-plan.md with sprint plan, acceptance criteria, and implementation notes referencing the chosen stack.",
      "assertions": [
        "requirements.md lists functional requirements including task creation, assignment, deadlines, and comments",
        "design.md references React, Node.js, and PostgreSQL in the architecture",
        "design.md includes a PlantUML diagram",
        "design.md includes a data model with Task, User, and Comment entities",
        "tasks.md separates backend and frontend work into distinct milestones or groups",
        "tasks.md includes effort estimates",
        "impl-plan.md includes acceptance criteria per task",
        "Response language matches the user's input language (Chinese or bilingual)"
      ]
    },
    {
      "id": 3,
      "prompt": "I need to add a notification system to our existing e-commerce platform. Users should get email and in-app notifications for order status changes. We use Python/Django backend and React frontend. Keep it simple — we're a small team of 3.",
      "expected_output": "Four artifacts scoped to the notification feature (not a full platform rebuild): requirements.md with FRs for email/in-app notifications on order events, NFRs for delivery reliability, constraints noting small team and existing Django/React stack; design.md showing notification service integration with existing platform, data model for Notification entity, ADRs (e.g. use Celery for async, use existing email provider); tasks.md with realistic estimates for a 3-person team, parallelizable tasks identified; impl-plan.md with concrete Django/React implementation notes.",
      "assertions": [
        "requirements.md scopes the work to notifications only (not a full platform rewrite)",
        "requirements.md captures the small team size as a constraint",
        "design.md references Django and React in the architecture",
        "design.md includes a Notification data model",
        "design.md addresses async delivery (e.g. Celery, queue, or similar)",
        "tasks.md estimates are realistic for a small team (no single task > 2 days without breakdown)",
        "tasks.md identifies at least one set of parallelizable tasks",
        "impl-plan.md includes Django-specific implementation notes (e.g. signals, Celery tasks, or similar)"
      ]
    },
    {
      "id": 4,
      "prompt": "Design a real-time collaborative document editor like Google Docs. Multiple users edit the same document simultaneously.",
      "expected_output": "Four artifacts acknowledging the high complexity: requirements.md with FRs (real-time sync, conflict resolution, presence indicators, history), NFRs (latency < 100ms, consistency), and explicit out-of-scope items to keep scope manageable; design.md with architecture covering WebSocket/CRDT/OT choice as a key ADR, component diagram showing client, server, and sync layer, risks table noting operational complexity; tasks.md with phased milestones (basic editor → real-time sync → conflict resolution → polish), critical path flagged; impl-plan.md with concrete first steps and setup checklist.",
      "assertions": [
        "requirements.md includes real-time sync and conflict resolution as functional requirements",
        "requirements.md includes latency as a non-functional requirement",
        "requirements.md includes explicit out-of-scope items to bound the problem",
        "design.md includes an ADR addressing conflict resolution strategy (CRDT, OT, or similar)",
        "design.md includes a PlantUML architecture diagram showing WebSocket or real-time communication",
        "design.md includes a risks table",
        "tasks.md phases work so real-time sync is not in the first milestone",
        "tasks.md flags the critical path",
        "impl-plan.md identifies the day-1 starting task"
      ]
    }
  ]
}
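Assertions such as "requirements.md is created with at least 3 functional requirements" are phrased so a harness can check them mechanically against the FR-N format Phase 1 prescribes. A minimal sketch of such a check (the real eval harness may work differently; the helper is illustrative):

```python
import re


def count_frs(requirements_md: str) -> int:
    """Count '- FR-N:' entries, the list format the Phase 1 template uses."""
    return len(re.findall(r"^- FR-\d+:", requirements_md, flags=re.MULTILINE))


sample = (
    "# Requirements: URL Shortener\n\n"
    "## Functional Requirements\n"
    "- FR-1: shorten a long URL to a short code\n"
    "- FR-2: redirect short code to original URL\n"
    "- FR-3: track click counts\n"
)
```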
@@ -1,52 +0,0 @@
---
name: testing-skill
description: Testing best practices for Python and general projects. Use when writing unit tests, debugging test failures, or improving test coverage.
---

# Testing Skill

## Test Structure (AAA Pattern)

```python
def test_decision_layer_returns_action():
    # Arrange
    state = {"health": 80, "enemy_visible": True}

    # Act
    action = decision_layer.decide(state)

    # Assert
    assert action == "attack"
```

## Pytest Tips

### Parametrize to Avoid Duplication
```python
import pytest

@pytest.mark.parametrize("health,expected", [
    (100, "idle"),
    (30, "flee"),
    (0, "dead"),
])
def test_state_by_health(health, expected):
    assert get_state(health) == expected
```

### Use Fixtures for Shared Setup
```python
@pytest.fixture
def mock_vision():
    return {"objects": ["enemy", "wall"], "confidence": 0.95}

def test_understanding_layer(mock_vision):
    result = understanding_layer.parse(mock_vision)
    assert "enemy" in result["threats"]
```

## What to Test
- ✅ Happy path (normal input)
- ✅ Edge cases (empty, None, boundary values)
- ✅ Error paths (invalid input raises expected exception)
- ❌ Don't test implementation details — test behavior
```
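The removed skill's "error paths" bullet had no accompanying example; a minimal sketch using `pytest.raises`, reusing the health thresholds from the parametrize example above (the `get_state` implementation here is illustrative, assumed from those test cases):

```python
import pytest


def get_state(health: int) -> str:
    """Toy implementation matching the parametrized cases: 100→idle, 30→flee, 0→dead."""
    if health < 0:
        raise ValueError("health cannot be negative")
    if health == 0:
        return "dead"
    return "flee" if health < 50 else "idle"


def test_negative_health_raises():
    # Error path: invalid input raises the expected exception type.
    with pytest.raises(ValueError):
        get_state(-1)
```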
@@ -1,20 +0,0 @@
{
  "skill_name": "testing",
  "evals": [
    {
      "id": 1,
      "prompt": "How should I write a pytest test for a function that returns game state?",
      "expected_output": "Shows AAA pattern (Arrange/Act/Assert), uses pytest assertions, and demonstrates clear test structure."
    },
    {
      "id": 2,
      "prompt": "Show me how to use pytest fixtures for shared test setup",
      "expected_output": "Demonstrates @pytest.fixture decorator, fixture injection into test functions, and explains reuse across multiple tests."
    },
    {
      "id": 3,
      "prompt": "What should I test in a unit test? What should I avoid testing?",
      "expected_output": "Recommends testing behavior/outputs not implementation details, covering happy path and edge cases, avoiding testing private internals."
    }
  ]
}
@@ -1,47 +0,0 @@
---
name: typescript-skill
description: TypeScript coding patterns and type safety guide. Use when writing, reviewing, or debugging TypeScript code.
---

# TypeScript Skill

## Key Rules

### Always Prefer Explicit Types
```typescript
// Bad
const process = (data: any) => data.value;

// Good
interface GameState { value: number; }
const process = (data: GameState): number => data.value;
```

### Use `unknown` Instead of `any`
```typescript
// Bad
function parse(input: any) { return input.name; }

// Good
function parse(input: unknown): string {
  if (typeof input === 'object' && input !== null && 'name' in input) {
    return String((input as { name: unknown }).name);
  }
  throw new Error('Invalid input');
}
```

### Prefer `const` Assertions for Literals
```typescript
const DIRECTIONS = ['up', 'down', 'left', 'right'] as const;
type Direction = typeof DIRECTIONS[number]; // 'up' | 'down' | 'left' | 'right'
```

## Common Patterns

| Pattern | Use Case |
|---------|----------|
| `type` | Unions, intersections, primitives |
| `interface` | Object shapes (extendable) |
| `enum` | Named constants (prefer `as const` for simple cases) |
| `generic <T>` | Reusable, type-safe utilities |
@@ -1,20 +0,0 @@
{
  "skill_name": "typescript",
  "evals": [
    {
      "id": 1,
      "prompt": "How do I avoid using 'any' type in TypeScript?",
      "expected_output": "Explains using 'unknown' instead of 'any', with type guards, and shows interface/type definitions as alternatives."
    },
    {
      "id": 2,
      "prompt": "Show me how to create a type-safe function in TypeScript that processes a list of items",
      "expected_output": "Demonstrates a generic function with <T> type parameter, proper return type annotation, and no use of 'any'."
    },
    {
      "id": 3,
      "prompt": "What's the difference between type and interface in TypeScript?",
      "expected_output": "Explains that interfaces are extendable and better for object shapes, types support unions/intersections, with concrete examples of each."
    }
  ]
}