# start

Install the skill:

```bash
npx skills add https://github.com/mindfold-ai/trellis --skill start
```
## Start Session

Initialize your AI development session and begin working on tasks.
## Operation Types

| Marker | Meaning | Executor |
|---|---|---|
| `[AI]` | Bash scripts or Task calls executed by AI | You (AI) |
| `[USER]` | Slash commands executed by user | User |
## Initialization [AI]

### Step 1: Understand the Development Workflow

First, read the workflow guide to understand the development process:

```bash
cat .trellis/workflow.md
```
Follow the instructions in workflow.md – it contains:
- Core principles (Read Before Write, Follow Standards, etc.)
- File system structure
- Development process
- Best practices
### Step 2: Get the Current Context

```bash
python3 ./.trellis/scripts/get_context.py
```
This shows: developer identity, git status, current task (if any), active tasks.
### Step 3: Read the Guidelines Index

```bash
cat .trellis/spec/frontend/index.md   # Frontend guidelines
cat .trellis/spec/backend/index.md    # Backend guidelines
cat .trellis/spec/guides/index.md     # Thinking guides
```
### Step 4: Report and Ask
Report what you learned and ask: “What would you like to work on?”
## Task Classification
When user describes a task, classify it:
| Type | Criteria | Workflow |
|---|---|---|
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change | Direct edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Implement |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |
### Classification Signals
Trivial/Simple indicators:
- User specifies exact file and change
- “Fix the typo in X”
- “Add field Y to component Z”
- Clear acceptance criteria already stated
Complex indicators:
- “I want to add a feature for…”
- “Can you help me improve…”
- Mentions multiple areas or systems
- No clear implementation path
- User seems unsure about approach
### Decision Rule
If in doubt, use Brainstorm + Task Workflow.
The Task Workflow ensures specs are injected into agents' context, resulting in higher-quality code. The overhead is minimal, but the benefit is significant.
## Question / Trivial Fix

For questions or trivial fixes, work directly:

- Answer the question or make the fix
- If code was changed, remind the user to run `/trellis:finish-work`
## Simple Task
For simple, well-defined tasks:
- Quick confirm: “I understand you want to [goal]. Ready to proceed?”
- If yes, skip to Task Workflow Step 2 (Research)
- If no, clarify and confirm again
## Complex Task – Brainstorm First
For complex or vague tasks, use the brainstorm process to clarify requirements.
See `/trellis:brainstorm` for the full process. Summary:
1. Acknowledge and classify – state your understanding
2. Create a task directory – track evolving requirements in `prd.md`
3. Ask questions one at a time – update the PRD after each answer
4. Propose approaches – for architectural decisions
5. Confirm final requirements – get explicit approval
6. Proceed to the Task Workflow – with clear requirements in the PRD
### Key Brainstorm Principles
| Principle | Description |
|---|---|
| One question at a time | Never overwhelm with multiple questions |
| Update PRD immediately | After each answer, update the document |
| Prefer multiple choice | Easier for users to answer |
| YAGNI | Challenge unnecessary complexity |
## Task Workflow (Development Tasks)
Why this workflow?
- Research Agent analyzes what specs are needed
- Specs are configured in jsonl files
- Implement Agent receives specs via Hook injection
- Check Agent verifies against specs
- Result: Code that follows project conventions automatically
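The context files are plain JSONL (one JSON object per line). As an illustration, an `implement.jsonl` entry might look like the sketch below; the field names are assumptions, not the actual Trellis schema:

```python
import json

# Hypothetical entries for .trellis/<task>/implement.jsonl -- one JSON
# object per line, each pointing the hook at a file to inject.
entries = [
    {"path": ".trellis/spec/backend/index.md", "reason": "backend task"},
    {"path": "src/services/user_service.py", "reason": "pattern to follow"},
]

jsonl = "\n".join(json.dumps(e) for e in entries)

# Reading it back is one json.loads per line.
parsed = [json.loads(line) for line in jsonl.splitlines()]
print(parsed[0]["path"])
```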
### Step 1: Understand the Task [AI]
If coming from Brainstorm: Skip this step – requirements are already in PRD.
If Simple Task: Quick confirm understanding:
- What is the goal?
- What type of development? (frontend / backend / fullstack)
- Any specific requirements or constraints?
### Step 2: Research the Codebase [AI]

Call the Research Agent to analyze the task:
```
Task(
  subagent_type: "research",
  prompt: "Analyze the codebase for this task:

    Task: <user's task description>
    Type: <frontend/backend/fullstack>

    Please find:
    1. Relevant spec files in .trellis/spec/
    2. Existing code patterns to follow (find 2-3 examples)
    3. Files that will likely need modification

    Output:
    ## Relevant Specs
    - <path>: <why it's relevant>
    ## Code Patterns Found
    - <pattern>: <example file path>
    ## Files to Modify
    - <path>: <what change>
    ## Suggested Task Name
    - <short-slug-name>",
  model: "opus"
)
```
### Step 3: Create Task Directory [AI]

Based on the research results:

```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title from research>" --slug <suggested-slug>)
```
### Step 4: Configure Context [AI]

Initialize the default context:

```bash
python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>
# type: backend | frontend | fullstack
```

Add the specs found by the Research Agent:

```bash
# For each relevant spec and code pattern:
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"
```
### Step 5: Write Requirements [AI]

Create `prd.md` in the task directory with:

```markdown
# <Task Title>

## Goal
<What we're trying to achieve>

## Requirements
- <Requirement 1>
- <Requirement 2>

## Acceptance Criteria
- [ ] <Criterion 1>
- [ ] <Criterion 2>

## Technical Notes
<Any technical decisions or constraints>
```
### Step 6: Activate Task [AI]

```bash
python3 ./.trellis/scripts/task.py start "$TASK_DIR"
```

This sets `.current-task` so hooks can inject context.
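The exact mechanics are internal to Trellis, but the pointer-file pattern itself is simple; a minimal sketch, where the paths and behavior are assumptions rather than the real implementation:

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# `task.py start` records which task directory is active...
(root / ".current-task").write_text(".trellis/tasks/add-user-auth\n")

# ...and a hook resolves the pointer before injecting context.
current = (root / ".current-task").read_text().strip()
print(current)
```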
### Step 7: Implement [AI]

Call the Implement Agent (specs are auto-injected by hook):

```
Task(
  subagent_type: "implement",
  prompt: "Implement the task described in prd.md.
    Follow all specs that have been injected into your context.
    Run lint and typecheck before finishing.",
  model: "opus"
)
```
### Step 8: Check Quality [AI]

Call the Check Agent (specs are auto-injected by hook):

```
Task(
  subagent_type: "check",
  prompt: "Review all code changes against the specs.
    Fix any issues you find directly.
    Ensure lint and typecheck pass.",
  model: "opus"
)
```
### Step 9: Complete [AI]

- Verify lint and typecheck pass
- Report what was implemented
- Remind the user to:
  - Test the changes
  - Commit when ready
  - Run `/trellis:record-session` to record this session
## Continuing Existing Task

If `get_context.py` shows a current task:

1. Read the task's `prd.md` to understand the goal
2. Check `task.json` for the current status and phase
3. Ask the user: “Continue working on <task title>?”

If yes, resume from the appropriate step (usually Step 7 or 8).
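Resuming can be mechanical once the phase is known; a sketch of the dispatch, assuming hypothetical `task.json` fields (`status`, `phase`) that may not match the real schema:

```python
import json

# Hypothetical task.json contents; the real Trellis schema may differ.
task = json.loads(
    '{"title": "add-user-auth", "status": "in_progress", "phase": "implement"}'
)

# Map the recorded phase (assumed values) back to a workflow step.
resume_step = {"implement": "Step 7", "check": "Step 8"}.get(task["phase"], "Step 1")
print(resume_step)  # -> Step 7
```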
## Commands Reference

### User Commands [USER]

| Command | When to Use |
|---|---|
| `/trellis:start` | Begin a session (this command) |
| `/trellis:brainstorm` | Clarify vague requirements (called from start) |
| `/trellis:parallel` | Complex tasks needing isolated worktree |
| `/trellis:finish-work` | Before committing changes |
| `/trellis:record-session` | After completing a task |
### AI Scripts [AI]

| Script | Purpose |
|---|---|
| `python3 ./.trellis/scripts/get_context.py` | Get session context |
| `python3 ./.trellis/scripts/task.py create` | Create task directory |
| `python3 ./.trellis/scripts/task.py init-context` | Initialize jsonl files |
| `python3 ./.trellis/scripts/task.py add-context` | Add spec to jsonl |
| `python3 ./.trellis/scripts/task.py start` | Set current task |
| `python3 ./.trellis/scripts/task.py finish` | Clear current task |
| `python3 ./.trellis/scripts/task.py archive` | Archive completed task |
### Sub Agents [AI]
| Agent | Purpose | Hook Injection |
|---|---|---|
| research | Analyze codebase | No (reads directly) |
| implement | Write code | Yes (implement.jsonl) |
| check | Review & fix | Yes (check.jsonl) |
| debug | Fix specific issues | Yes (debug.jsonl) |
## Key Principle

**Specs are injected, not remembered.**
The Task Workflow ensures agents receive relevant specs automatically. This is more reliable than hoping the AI “remembers” conventions.