chore: restructure skills repo with new agents and skill bundles

- Add new skills: deep-dive, docs-rag, meta-creator, ppt-maker, sdlc
- Add agent configs: g-assistent, meta-creator, sdlc with prompt files
- Add reference docs for custom agents and skills specification
- Add utility scripts: install-agents.sh, orchestrate.py, puml2svg.sh
- Update README and commit-message skill config
- Remove deprecated skills: codereview, python, testing, typescript
- Add .gitignore
Author: Team
Date: 2026-04-18 13:07:46 +08:00
parent 72f16d26b8
commit c0d14c6ac1
74 changed files with 5726 additions and 324 deletions
@@ -0,0 +1,82 @@
# deep-dive
A Kiro agent skill that analyzes codebases, documentation, APIs, or product specs and produces a structured technical report for developers.
## Architecture
![Architecture](assets/deep-dive-architecture.svg)
## Workflow
![Workflow](assets/deep-dive-workflow.svg)
## What it does
Given any technical material — source code, README, OpenAPI spec, pasted docs, or just a topic name — the agent produces a detailed Markdown report covering:
- System overview and design philosophy
- Architecture diagram (PlantUML)
- Key concepts & terminology glossary
- Data model with ER diagram
- Core flows with sequence diagrams
- API / interface reference
- Configuration & deployment notes
- Extension and integration points
- Observability (logging, metrics, tracing)
- Known limitations and trade-offs
- Actionable further reading recommendations
## When to Use
Activate this skill when a developer says things like:
- "help me understand this codebase"
- "deep dive into X"
- "onboard me to this service"
- "how does X work"
- "analyze this doc / spec"
- "详细分析 X 架构 / 部署流程" ("analyze X's architecture / deployment flow in detail")
## Accepted Inputs
| Input type | Example |
|---|---|
| File path(s) | `src/`, `docs/api.yaml`, `main.go` |
| Pasted text | README content, architecture notes |
| Topic name | "Kafka consumer groups", "Redis internals" |
| URL | Link to documentation or spec |
When given a directory, the skill automatically scans `README*`, `docs/`, entry-point files, and package manifests.
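That directory scan can be sketched in a few lines of Python. This is an illustrative sketch of the behavior described above, not the skill's actual implementation; the file names come from the list in the preceding paragraph and from the manifest list in SKILL.md:

```python
from pathlib import Path

# Package manifests the skill looks for (per SKILL.md); illustrative only.
MANIFESTS = {"package.json", "pyproject.toml", "go.mod", "Cargo.toml", "pom.xml"}

def scan_directory(root: str) -> list[Path]:
    """Collect the high-signal files a deep dive should read first."""
    root_path = Path(root)
    hits: list[Path] = []
    # README* at the top level
    hits += sorted(root_path.glob("README*"))
    # Everything under docs/
    docs = root_path / "docs"
    if docs.is_dir():
        hits += sorted(p for p in docs.rglob("*") if p.is_file())
    # Package manifests anywhere in the tree
    hits += sorted(p for p in root_path.rglob("*") if p.name in MANIFESTS)
    return hits
```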
## Example Prompts
```
Give me a deep dive on the Kafka consumer group rebalancing protocol.
```
```
Analyze this FastAPI service and explain how it works: [paste README]
```
```
Help me understand the worker pool in src/worker/pool.go
```
## File Structure
```
skills/deep-dive/
├── SKILL.md
├── README.md                      # this file
├── assets/
│   ├── report-template.md
│   ├── deep-dive-architecture.puml
│   ├── deep-dive-architecture.svg
│   ├── workflow.puml
│   └── deep-dive-workflow.svg
└── evals/
    └── evals.json
```
## Evals
```bash
python scripts/run_evals.py deep-dive
```
@@ -0,0 +1,48 @@
---
name: deep-dive
description: Analyzes codebases, technical documentation, APIs, product specs, or infrastructure topics and produces a structured deep-dive report for developers. Use when a developer needs to quickly understand an unfamiliar system, library, service, codebase, or deployment architecture. Triggers on phrases like "help me understand", "explain this codebase", "analyze this doc", "how does X work", "onboard me to", "deep dive into", "详细分析", "分析架构", "分析部署", "解释一下", "帮我理解".
metadata:
author: common-skills
version: "1.0"
---
# Deep Dive
Produce a structured technical report that helps a developer rapidly understand an unfamiliar system. The report should be as detailed as the available material allows — depth is the goal.
## Inputs
Accept any combination of:
- Local file paths (source code, markdown docs, OpenAPI/Swagger specs, config files)
- Pasted text (README, architecture notes, API docs)
- A topic or product name (research from general knowledge)
- A URL (fetch and analyze if possible)
If the user provides a directory, scan key files: `README*`, `ARCHITECTURE*`, `docs/`, entry-point source files, config files (`package.json`, `pyproject.toml`, `go.mod`, `Cargo.toml`, `pom.xml`, etc.).
## Output
Use the report template at [assets/report-template.md](assets/report-template.md) as the structure for every report.
**Output location:**
- If the user specifies a path, write the report there
- Otherwise, write to `./deep-dive-{subject}.md` in the current working directory (replace spaces with hyphens, lowercase)
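The default filename rule above (lowercase, spaces to hyphens) amounts to a small slug function. A minimal sketch under exactly those two stated rules; the function name is illustrative and the skill may normalize other characters as well:

```python
def subject_to_filename(subject: str) -> str:
    """Default report filename: lowercase the subject, spaces -> hyphens."""
    slug = subject.lower().replace(" ", "-")
    return f"deep-dive-{slug}.md"

# e.g. "Kafka Consumer Groups" -> "deep-dive-kafka-consumer-groups.md"
```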
Fill in all template sections that are relevant to the material. Skip sections where there is genuinely nothing to say. Always include at least: Overview, Architecture, and Further Reading.
Section-specific guidance:
- **Architecture**: label diagram arrows with protocol/data type; group by layer; include external dependencies
- **Data Model**: only include if the system has a meaningful schema or domain model
- **Core Flows**: pick the 2–4 most important user journeys; one sequence diagram each
- **API Reference**: group endpoints by resource; note auth mechanism, pagination, versioning
- **Further Reading**: 5–8 items, ordered most-to-least important, each with a concrete location (file path, URL, or search term)
---
## Quality Standards
- **Depth over breadth**: detailed analysis of the most important parts beats shallow coverage of everything
- **Concrete over abstract**: use actual class names, file paths, endpoint names from the material — not generic placeholders
- **Accurate diagrams only**: if you lack enough information to make a diagram correct, omit it and say what's missing
- **Honest gaps**: if a section cannot be filled, write one sentence explaining what additional material is needed
- **Developer-first language**: assume a competent reader; skip basics, focus on what is non-obvious
@@ -0,0 +1,27 @@
@startuml deep-dive-architecture
skinparam componentStyle rectangle
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA
package "deep-dive Skill" {
component "SKILL.md\n(instructions + triggers)" as SKILL
component "assets/\nreport-template.md\n(11 section skeletons)" as TMPL
component "evals/evals.json" as EVALS
}
package "Input Sources" {
component "File path(s)\n(src/, docs/, manifests)" as FILES
component "URL\n(fetched content)" as URL
component "Pasted text /\nTopic name" as TEXT
}
package "Output" {
component "deep-dive-{subject}.md\n(structured report)" as REPORT
}
SKILL --> TMPL : loads 11-section template
SKILL --> FILES : reads source files
SKILL --> URL : fetches content
SKILL --> TEXT : analyzes inline
SKILL --> REPORT : writes report
@enduml
File diff suppressed because one or more lines are too long (image, 8.3 KiB)
File diff suppressed because one or more lines are too long (image, 8.7 KiB)
@@ -0,0 +1,143 @@
# Deep Dive Report: {SUBJECT}
> Generated by the `deep-dive` skill.
> Date: {DATE}
> Source: {SOURCE}
---
## 1. Overview
- **What it is**:
- **Problem it solves**:
- **Target users**:
- **Design philosophy**:
- **Tech stack**:
---
## 2. Architecture
{description of high-level structure}
```plantuml
@startuml
' Replace with actual components
package "Layer A" {
[Component 1]
}
package "Layer B" {
[Component 2]
[Component 3]
}
[Component 1] --> [Component 2] : protocol
[Component 2] --> [Component 3] : protocol
@enduml
```
---
## 3. Key Concepts & Terminology
**Term** — definition and why it matters in this system.
---
## 4. Data Model
{description of primary entities and relationships}
```plantuml
@startuml
entity EntityA {
* id : UUID
--
field : Type
}
entity EntityB {
* id : UUID
--
field : Type
}
EntityA ||--o{ EntityB : relationship
@enduml
```
---
## 5. Core Flows & Sequences
### Flow 1: {Name}
{one-paragraph description}
```plantuml
@startuml
actor User
participant "Component A" as A
participant "Component B" as B
User -> A : action
A -> B : call
B --> A : response
A --> User : result
@enduml
```
---
## 6. Public API / Interface Reference
| Method | Path / Signature | Purpose | Key Params | Returns |
|--------|-----------------|---------|------------|---------|
| GET | /resource | description | param | type |
**Auth**: {mechanism}
---
## 7. Configuration & Deployment
**Key config options:**
| Variable | Default | Description |
|----------|---------|-------------|
| `ENV_VAR` | value | what it controls |
**Run locally:**
```bash
# minimal steps
```
**Deployment topology**: {description}
---
## 8. Extension & Integration Points
- {plugin/hook/middleware description}
- {how to add a new feature}
- {external integration patterns}
---
## 9. Observability
- **Logging**:
- **Metrics**:
- **Tracing**:
- **Health check**:
---
## 10. Known Limitations & Trade-offs
- {limitation or trade-off}
---
## 11. Further Reading
1. **[Topic]** — why it matters and where to find it
2. **[Topic]** — why it matters and where to find it
3. **[Topic]** — why it matters and where to find it
@@ -0,0 +1,23 @@
@startuml deep-dive-workflow
skinparam defaultFontName Arial
skinparam backgroundColor #FAFAFA
actor Developer
participant "deep-dive\nSkill" as SKILL
participant "Input Source" as SRC
participant "report-template.md" as TMPL
Developer -> SKILL : "deep dive into X"\n(files / URL / topic / text)
SKILL -> SRC : read files / fetch URL / analyze text
SRC --> SKILL : raw material
SKILL -> TMPL : load 11-section template
SKILL -> SKILL : analyze architecture,\ndata model, flows, APIs
loop each relevant section
SKILL -> SKILL : generate PlantUML diagram\n(component / ER / sequence)
SKILL -> SKILL : fill section content
end
SKILL --> Developer : deep-dive-{subject}.md
@enduml
@@ -0,0 +1,60 @@
{
"skill_name": "deep-dive",
"evals": [
{
"id": 1,
"prompt": "Help me understand the Redis codebase. I want to know its architecture, how the event loop works, and the key data structures it uses internally.",
"expected_output": "A structured report covering: Redis overview (in-memory data store, single-threaded event loop), architecture diagram showing the ae event loop, networking layer, command dispatcher, and persistence modules, explanation of core data structures (SDS, dict, ziplist/listpack, skiplist), sequence diagram for a SET command, and further reading pointing to specific source files like ae.c, t_string.c, dict.c.",
"assertions": [
"Report includes an Overview section describing Redis as an in-memory data store",
"Report includes a PlantUML architecture diagram",
"Report explains the single-threaded event loop (ae)",
"Report covers at least 3 internal data structures (e.g. SDS, dict, skiplist)",
"Report includes a Further Reading section with at least 3 actionable items",
"At least one PlantUML sequence diagram is included"
]
},
{
"id": 2,
"prompt": "I just joined a team working on a REST API built with FastAPI. Here's the project README:\n\n# OrderService\nA FastAPI service managing e-commerce orders. Uses PostgreSQL via SQLAlchemy, Redis for caching, and Celery for async tasks. Auth via JWT.\n\n## Endpoints\n- POST /orders — create order\n- GET /orders/{id} — get order\n- PATCH /orders/{id}/status — update status\n- GET /orders?user_id=X — list orders\n\n## Models\nOrder: id, user_id, status (pending/confirmed/shipped/delivered), items (JSON), created_at\n\nHelp me understand this service.",
"expected_output": "Report covering: overview of OrderService purpose and stack (FastAPI, PostgreSQL, Redis, Celery, JWT), architecture diagram showing the components and their connections, data model ER diagram for the Order entity, sequence diagrams for at least POST /orders and PATCH /orders/{id}/status flows, API reference table for all 4 endpoints, notes on JWT auth, Redis caching strategy, and Celery async task usage, further reading recommendations.",
"assertions": [
"Report includes an Overview section mentioning FastAPI, PostgreSQL, Redis, Celery, and JWT",
"Report includes a PlantUML architecture or component diagram",
"Report includes a PlantUML data model diagram showing the Order entity",
"Report includes a PlantUML sequence diagram for at least one endpoint flow",
"Report includes an API reference section covering all 4 endpoints",
"Report mentions JWT authentication",
"Report includes a Further Reading section"
]
},
{
"id": 3,
"prompt": "Give me a deep dive on the Kafka consumer group protocol. I need to understand how rebalancing works, what the group coordinator does, and the difference between eager and cooperative rebalancing.",
"expected_output": "Report covering: Kafka consumer group overview, architecture diagram showing brokers, group coordinator, and consumers, explanation of the group coordinator role (heartbeats, session timeout, offset commits), detailed sequence diagrams for both eager (stop-the-world) and cooperative (incremental) rebalance protocols, key concepts glossary (consumer group, partition assignment, rebalance, heartbeat, session.timeout.ms), known trade-offs between the two rebalance strategies, and further reading.",
"assertions": [
"Report includes an Overview section explaining consumer groups and their purpose",
"Report includes a PlantUML diagram showing brokers, group coordinator, and consumers",
"Report explains the group coordinator role",
"Report covers both eager and cooperative rebalancing with their differences",
"Report includes at least one PlantUML sequence diagram showing a rebalance flow",
"Report includes a Key Concepts section with relevant terminology",
"Report includes a Known Limitations or Trade-offs section comparing the two strategies",
"Report includes a Further Reading section"
]
},
{
"id": 4,
"prompt": "I need to understand this Go file quickly:\n\n```go\npackage worker\n\ntype Job struct {\n ID string\n Payload []byte\n Retries int\n}\n\ntype Worker struct {\n queue chan Job\n done chan struct{}\n handler func(Job) error\n}\n\nfunc New(concurrency int, handler func(Job) error) *Worker {\n w := &Worker{\n queue: make(chan Job, 100),\n done: make(chan struct{}),\n handler: handler,\n }\n for i := 0; i < concurrency; i++ {\n go w.loop()\n }\n return w\n}\n\nfunc (w *Worker) Submit(j Job) { w.queue <- j }\n\nfunc (w *Worker) Stop() { close(w.done) }\n\nfunc (w *Worker) loop() {\n for {\n select {\n case j := <-w.queue:\n if err := w.handler(j); err != nil && j.Retries > 0 {\n j.Retries--\n w.queue <- j\n }\n case <-w.done:\n return\n }\n }\n}\n```",
"expected_output": "Report covering: overview of the worker pool pattern implemented, architecture/component description of Job, Worker structs and their roles, sequence diagram showing Submit -> loop -> handler -> retry flow, explanation of concurrency model (goroutines, buffered channel, done channel for shutdown), key concepts (worker pool, buffered channel backpressure, retry with decrement), known limitations (no graceful drain on Stop, fixed buffer size, no dead-letter queue), and further reading suggestions.",
"assertions": [
"Report identifies this as a worker pool / job queue pattern",
"Report explains the role of the queue channel and done channel",
"Report includes a PlantUML sequence or activity diagram showing the job processing flow including retry",
"Report explains the concurrency model (goroutines spawned in New)",
"Report identifies at least 2 limitations (e.g. no graceful shutdown drain, fixed buffer, no DLQ)",
"Report includes a Further Reading section"
]
}
]
}
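Each eval entry pairs a prompt with human-checkable assertions. A minimal loader that validates this shape could look like the following; this is a hypothetical sketch mirroring the JSON structure above, not the actual `scripts/run_evals.py` (which is not shown in this commit view):

```python
import json
from pathlib import Path

# Keys every eval entry above carries.
REQUIRED_KEYS = {"id", "prompt", "expected_output", "assertions"}

def load_evals(path: str) -> list[dict]:
    """Load an evals.json file and sanity-check each entry's shape."""
    data = json.loads(Path(path).read_text())
    evals = data["evals"]
    for entry in evals:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"eval {entry.get('id')}: missing keys {sorted(missing)}")
        if not entry["assertions"]:
            raise ValueError(f"eval {entry['id']}: needs at least one assertion")
    return evals
```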