agentstack-wrapper
npx skills add https://github.com/i-am-bee/agentstack --skill agentstack-wrapper
AgentStack Wrapper Skill
Table of Contents
- Overview
- When to Use
- Prerequisites
- Constraints (must follow)
- Integration Workflow Checklist
- Step 1: Classify the Agent
- Step 2: Add and Install Dependencies
- Step 3: Create the Server Wrapper
- Step 4: Wire LLM / Services via Extensions
- Step 5: Error Handling
- Step 6: Forms (Single-Turn Structured Input)
- Step 7: Entrypoint
- Step 8: Use Platform Extensions
- Step 9: Update README
- Anti-Patterns
- Failure Conditions
- Finalization Report (Required)
- Verification Checklist
Overview
This SKILL.md is an instructional integration guide for wrapping Python agents to run on AgentStack. It is documentation, not executable code. It describes dependency management and runtime extension wiring. Primary security considerations are dependency supply-chain integrity and safe handling of sensitive runtime values provided through platform extensions.
- Do not add instructions that execute remote scripts or untrusted code.
- Verify package versions from trusted PyPI metadata, pin versions, and audit installed `agentstack-sdk`/`a2a-sdk` packages before use.
- Handle sensitive values only through declared AgentStack extensions.
- Never log, print, persist, or expose secret values.
- Do not send secrets to untrusted intermediaries or endpoints not required by the wrapped agent contract.
The wrapper exposes the agent via the A2A protocol so it can be discovered, called, and composed with other agents on the platform.
When to Use
- You have a working Python agent (CLI tool, library function, framework-based agent) and need to deploy it as an AgentStack service.
- You want to expose an agent over A2A without rewriting its business logic.
Prerequisites
- Python 3.12+
- The agent’s source code is available locally
- AgentStack server is running locally and is properly configured
- `agentstack-sdk` version selected from a trusted source (project lockfile/constraints, active environment, or vetted PyPI release metadata) and pinned in project dependencies using `~=`
- `a2a-sdk` only if the project manages it directly, pinned to a version compatible with the selected `agentstack-sdk` (do not independently chase the latest `a2a-sdk` if resolver constraints differ)
Constraints (must follow)
| ID | Rule |
|---|---|
| C1 | No business-logic changes. Only modify code for AgentStack compatibility. |
| C2 | Strict minimal changes. Do not add auth, Dockerfile (containerization is optional and separate), telemetry, or platform middleware unless explicitly requested. If an agent works with simple text, don’t force a Form. If it works with env vars, refactor minimally. |
| C3 | Cleanup temp files. If the agent downloads or creates helper files at runtime, add a cleanup step before the function returns. |
| C4 | Prioritize Public Access (No redundant tokens). Only use the Secrets extension if the secret is strictly mandatory for the agent’s core functionality and no public/anonymous access is viable. Do not add secrets or tokens that increase configuration burden if they were optional in the original agent (e.g., optional GitHub token). Preserve existing optional auth behavior unless removal is explicitly approved and documented as a behavior change. API keys must be passed explicitly, never read from env vars. |
| C5 | Detect existing tooling. If the project uses requirements.txt, add agentstack-sdk~=<VERSION> there. If it uses pyproject.toml, add it there. Add a2a-sdk only when the project manages it directly, and keep it compatible with the chosen agentstack-sdk version. Never force uv or create duplicate manifests. |
| C6 | Import Truth and Validation. All imports must match modules that exist in the active virtual environment (agentstack_sdk, a2a). If official docs conflict with installed package layout, follow installed package reality and note the mismatch. After wrapping, run import validation and fail the task if any import is unresolved. |
| C7 | Analyze installed SDK packages in active virtual environment. Inspect the installed agentstack_sdk and a2a modules in the active environment and revisit all imports to ensure they match actual installed files, avoiding hallucinations. See also source structure. |
| C8 | Structured Parameters to Forms. For single-turn agents with named parameters, map them to an initial_form using FormServiceExtensionSpec.demand(initial_form=...). |
| C9 | Remove CLI arguments. Remove all argparse or sys.argv logic. Replace mandatory CLI inputs with initial_form items or AgentStack Environment Variables. |
| C10 | Approval gate for business-logic changes. If compatibility requires business-logic changes, stop and request explicit approval with justification before proceeding. |
| C11 | Keep adaptation reversible. Isolate wrapper and integration changes, avoid destructive refactors, and preserve a rollback path. |
| C12 | Preserve original helpers. Do not delete original business-logic helpers unless strictly required. If removal is necessary, document why. |
| C13 | Optional extension safety. Service/UI extensions are optional. Check presence/data before use (e.g., if llm and llm.data ...). |
| C14 | No secret exposure. Never log, print, persist, or echo secret values (API keys, tokens, passwords). Redact sensitive values in logs and errors. |
| C15 | No remote script execution. Never run untrusted remote code during wrapping. Use project manifests and trusted package metadata only. |
| C16 | Constrained outbound targets. Do not introduce arbitrary outbound network targets. Limit external calls to trusted dependency sources and runtime endpoints explicitly required by the wrapped agent contract. |
| C17 | No dynamic command execution from input. Do not introduce wrapper patterns that execute shell commands from user/model input (for example, eval, exec, os.system, or unsanitized subprocess calls). |
| C18 | Read Wrapper Documentation First. Before starting any implementation, you must read the official guide: Wrap Your Existing Agents. |
Integration Workflow Checklist
Copy this checklist into your context and check off items as you complete them:
Task Progress:
- [ ] Step 1: Classify the Agent
- [ ] Step 2: Add and Install Dependencies
- [ ] Step 3: Create the Server Wrapper
- [ ] Step 4: Wire LLM / Services via Extensions
- [ ] Step 5: Implement Error Handling
- [ ] Step 6: Map Forms (if applicable)
- [ ] Step 7: Create Entrypoint
- [ ] Step 8: Use Platform Extensions
- [ ] Step 9: Update README
- [ ] Finalization: Run Verification Checklist and Finalization Report
Step 1: Classify the Agent
If there is a README.md or AGENTS.md file, read it first to better understand the structure and purpose of the agent.
Read the agent’s code and classify it:
| Pattern | Classification | Indicators |
|---|---|---|
| Single-turn | One request → one response | CLI entrypoint, argparse (must be removed), primarily stateless business logic, context persistence still recommended |
| Multi-turn | Conversation with memory | Chat loop, message history, session state, memory object |
This classification determines:
- How to use `context.store()` and `context.load_history()`: persist input/response by default for all agents; `context.load_history()` is required for multi-turn and optional for single-turn (use only when prior context is intentionally part of behavior)
- Whether to define an `initial_form` for structured inputs (single-turn with named parameters)
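As a first pass, the indicator scan can be automated. The sketch below is illustrative only — the marker strings are assumptions, and the heuristic never replaces actually reading the agent's code:

```python
def classify_agent(source: str) -> str:
    """Rough first-pass classification from the indicator table above.
    The marker strings are illustrative assumptions; always confirm the
    result by reading the agent's code directly."""
    multi_turn_markers = (
        "message_history", "load_history", "session_state", "chat_loop", "memory",
    )
    if any(marker in source for marker in multi_turn_markers):
        return "multi-turn"
    return "single-turn"
```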
Step 2: Add and Install Dependencies
- Find the existing dependency file:
  - `requirements.txt` → append `agentstack-sdk~=<VERSION>`
  - `pyproject.toml` → add to `[project.dependencies]` or `[tool.poetry.dependencies]`
  - add `a2a-sdk` only when direct pinning is required by the project dependency policy
- Select and pin a trusted version (required). If the project already pins `agentstack-sdk` in its lockfile/constraints or active environment, use that compatible version and keep consistency with the project. If no version is present, use the latest compatible stable `agentstack-sdk` release from trusted PyPI metadata, then pin with `~=`. If the project requires direct `a2a-sdk` pinning, use a version compatible with the selected `agentstack-sdk` dependency constraints.
- Install the dependencies. Once added to the manifest, install them in your virtual environment (e.g., `pip install -r requirements.txt`).
- Do not create a new manifest type the project doesn't already use.
- Do not force `uv` if the project uses `pip`.
Source-of-truth rule: Use current official docs and installed package inspection as the authority. If they conflict, follow installed package behavior and report the mismatch.
Security rule: Do not execute remote installation scripts. Use only the repository’s existing dependency workflow and trusted package sources.
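The append rule for `requirements.txt` can be sketched as a pure text transform (an illustrative helper, not part of any SDK — version selection still follows the trusted-source rule above):

```python
def pin_agentstack_sdk(manifest_text: str, version: str) -> str:
    """Return requirements.txt content with a compatible-release (~=) pin
    for agentstack-sdk appended, unless one is already present (idempotent,
    so an existing project pin is never overwritten)."""
    lines = [line for line in manifest_text.splitlines() if line.strip()]
    if any(line.strip().startswith("agentstack-sdk") for line in lines):
        return manifest_text  # already managed by the project; leave it alone
    lines.append(f"agentstack-sdk~={version}")
    return "\n".join(lines) + "\n"
```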
Import Recovery Sequence (required)
If import validation fails, follow this exact order:
- Run import validation to identify missing modules.
- If a missing import is caused by absent dependencies, install or repair dependencies in the existing manifest workflow.
- Re-run import validation after dependency repair.
- If imports still fail, stop and report unresolved imports with module names and file paths.
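Step 1 of the recovery sequence can be a small stdlib helper (illustrative, not an SDK utility):

```python
import importlib

def validate_imports(module_names: list[str]) -> list[str]:
    """Try importing each module and return the names that fail to resolve.
    An empty result means the wrapper's import surface is intact (C6);
    a non-empty result is what steps 2-4 above act on."""
    unresolved = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError:
            unresolved.append(name)
    return unresolved
```

Run it against the wrapper's real import surface (for example `["agentstack_sdk", "a2a"]`) before declaring the task complete.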
Exploring Unknown Packages Without Test Files (Zero-File Discovery)
If you need to figure out exact imports from installed libraries (agentstack_sdk, a2a) but docs are unavailable, do not create temporary test scripts. Instead, use inline Python execution (python -c) or your native search tools. This is the cleanest and fastest way to map imports without polluting the project repository.
The Most Reliable Method (Inline Package Search):
Execute this single inline Python command to crawl the installed SDK and locate the exact module exporting your target class (e.g., AgentDetail). This reliably finds the correct import path in a single attempt:
```shell
python -c '
import pkgutil, importlib

def find_class(pkg_name, target):
    pkg = importlib.import_module(pkg_name)
    for _, modname, _ in pkgutil.walk_packages(pkg.__path__, pkg.__name__ + "."):
        try:
            if hasattr(importlib.import_module(modname), target):
                print(f"Found {target} in: {modname}")
        except Exception:
            pass

find_class("agentstack_sdk", "AgentDetail")
'
```
Once the module is located (e.g., agentstack_sdk.server.agent), you can inspect its signature or docstring directly via another short inline command:
```shell
python -c "from agentstack_sdk.server.agent import AgentDetail; help(AgentDetail)"
```
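The same crawler is reusable as a function. Since `agentstack_sdk` may not be installed while you read this, the example below demonstrates the identical technique against a stdlib package:

```python
import importlib
import pkgutil

def find_class(pkg_name: str, target: str) -> list[str]:
    """Walk an installed package and return every submodule that exports
    an attribute named `target` (same technique as the inline command above)."""
    pkg = importlib.import_module(pkg_name)
    hits = []
    for _, modname, _ in pkgutil.walk_packages(pkg.__path__, pkg.__name__ + "."):
        try:
            if hasattr(importlib.import_module(modname), target):
                hits.append(modname)
        except Exception:
            pass  # some submodules fail to import in isolation; skip them
    return hits
```

For instance, `find_class("email", "Message")` locates `email.message`; against the SDK you would call `find_class("agentstack_sdk", "AgentDetail")`.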
Step 3: Create the Server Wrapper
Create a new file (e.g. agent.py or server.py) with the wrapping code, or modify the original agent files directly. The original code can be changed for AgentStack compatibility (e.g. accepting config as parameters instead of reading env vars), but the agent’s business logic must not be altered.
Prefer additive wrapper files and minimal adapters over invasive refactors to keep migration reversible.
If the original repository exposes legacy HTTP endpoints that are asserted by tests or explicit contracts, preserve those endpoints or provide compatibility shim routes.
Follow the wrapping pattern from the official guide: Wrap Your Existing Agents
For building agents from scratch or understanding the full server pattern: Build New Agents
Real-world examples of wrapped agents are available at: agents/ on GitHub
Metadata Extraction
Before writing the code, analyze the original source (docstrings, CLI help, README) to populate the @server.agent() parameters:
- Identity: Set `name` and `version`.
- Documentation: Use `documentation_url` pointing to the source.
- Detail: Populate `AgentDetail` with `interaction_mode` (Step 1), `tools`, `author` (must be a dictionary, e.g., `{"name": "agentstack"}`), and `programming_language`.
- Skills: Define `AgentSkill` entries with `id`, `name`, `description`, `tags`, and `examples`.
- Function Docstring: The wrapper function's docstring should be a concise summary shown in registries.
- Extensions: Identify if the agent needs optional platform capabilities (Step 8) like Citations, Secrets, or Trajectory.
Key elements
| Element | Purpose |
|---|---|
| `Server()` | Creates the AgentStack server instance |
| `@server.agent()` | Registers the function as an agent; function name becomes agent ID, docstring becomes description |
| `input: Message` | A2A message from the caller; use `get_message_text(input)` to extract the text |
| `context: RunContext` | Execution context (task_id, context_id, session store, history) |
| `yield AgentMessage(text=...)` | Stream one or more response chunks back to the caller |
| `yield AgentArtifact(...)` / `ArtifactChunk` | Return files, documents, or chunks of structured content back to the caller |
| `yield AuthRequired(...)` | Pause execution to request an OAuth or platform authentication token |
| `Metadata(...)` | Attach extension metadata (e.g., Citations, Canvas references) to an `AgentMessage` |
| emit trajectory output | Surface meaningful intermediate logs/progress separately from the final user-facing response |
| `server.run(host, port)` | Starts the HTTP server |
Implementation: Conditional Workflows
Based on the classification in Step 1, follow exactly ONE of these workflows:
If the agent is Single-turn:
Follow this checklist for single-turn agents:
Single-turn Implementation:
- [ ] Extract user message with `get_message_text(input)`
- [ ] Only call `context.load_history()` if continuity is intentionally required
- [ ] Pass necessary inputs (from forms or text) to original agent logic
- [ ] Route intermediate progress steps to Trajectory output (Optional)
- [ ] Yield the final response via `AgentMessage(text=result)`
- [ ] Persist both input and response via `context.store()`
If the agent is Multi-turn:
Follow this checklist for agents requiring memory:
Multi-turn Implementation:
- [ ] Store input: Save incoming user message immediately with `await context.store(input)`
- [ ] Load history: Retrieve past conversation via `[msg async for msg in context.load_history() if isinstance(msg, Message)]`
- [ ] Execute agent: Pass the filtered history to the original agent logic
- [ ] Route traces: Emit intermediate multi-step reasoning to trajectory extension (Optional)
- [ ] Yield response: Return final answering chunks with `yield AgentMessage(text=...)`
- [ ] Store response: Save the final response with `await context.store(response)`
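The multi-turn checklist above can be exercised end to end with stand-in types. The classes below are illustrative stand-ins, not the real `agentstack_sdk` types; a real wrapper awaits `context.store()` and filters `context.load_history()` in exactly the same shape:

```python
import asyncio
from dataclasses import dataclass, field

# Illustrative stand-ins for the SDK's Message/RunContext, used only to
# exercise the store -> load_history -> filter -> store flow.
@dataclass
class Message:
    text: str

@dataclass
class Artifact:
    name: str

@dataclass
class FakeContext:
    items: list = field(default_factory=list)

    async def store(self, item):
        self.items.append(item)

    async def load_history(self):
        for item in self.items:
            yield item

async def handle_turn(ctx: FakeContext, incoming: Message) -> Message:
    await ctx.store(incoming)                        # 1. store input immediately
    history = [m async for m in ctx.load_history()   # 2. load history and filter
               if isinstance(m, Message)]            #    out non-Message items
    reply = Message(text=f"seen {len(history)} message(s)")  # 3. agent logic here
    await ctx.store(reply)                           # 4. store the final response
    return reply
```

Note the `isinstance` filter: history contains Artifacts as well as Messages, exactly as warned in the Anti-Patterns section.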
Step 4: Wire LLM / Services via Extensions
OpenAI-compatible interface required. The agent must be designed to work with an OpenAI-compatible interface. If the original agent uses a different LLM provider (e.g., Anthropic, Google), you must install the necessary library (e.g., langchain-openai) and use that provider class, passing the configuration received from the LLM extension.
Do not read API keys from environment variables. Use AgentStack’s platform extensions to receive LLM configuration at runtime. (Note: Sometimes the exact structure of the credentials provided by the extension can only be fully explored and validated by running the agent and inspecting the injected objects).
Add llm: Annotated[LLMServiceExtensionServer, LLMServiceExtensionSpec.single_demand()] as an agent function parameter. Extract the config from llm.data.llm_fulfillments["default"] and pass api_key, api_base, api_model explicitly to the original agent.
If the default fulfillment is missing, declare a secrets parameter (for example secrets: Annotated[SecretsExtensionServer, SecretsExtensionSpec.single_demand(...)]), request required secrets through that declared parameter, then construct fulfillment-compatible values and pass api_key, api_base, and api_model explicitly.
Do not reference secrets.request_secrets() unless a secrets extension parameter is declared on the agent function.
If the original agent reads env vars for API keys internally, refactor it so keys are passed as explicit parameters instead. Always pass runtime LLM config explicitly, avoid provider/default fallback chains, and fail fast with a clear error if required values are missing.
See the chat agent and competitive-research agent on GitHub for real examples of LLM extension wiring.
Step 5: Error Handling
Use the Error extension for user-visible failures. Do not report errors via a normal AgentMessage.
Implementation
- Standard Reporting: Simply `raise` an exception (e.g., `ValueError`, `RuntimeError`) inside the agent. The platform automatically catches and formats it.
- Advanced Configuration: Add `error_ext: Annotated[ErrorExtensionServer, ErrorExtensionSpec(params=ErrorExtensionParams(include_stacktrace=True))]` as an agent function parameter to enable stack traces in the UI.
- Adding Context: You can attach diagnostic data to `error_ext.context` (a dictionary) before raising an error. This context is serialized to JSON and shown in the UI.
- Multiple Errors: Use `ExceptionGroup` (Python 3.11+) to report multiple failures simultaneously. The extension will render them as a group in the UI.
Example
See the official error guide and chat agent example for practical implementation examples.
Step 6: Forms (Single-Turn Structured Input)
If the original agent accepts named parameters (not just free text), map them to an initial_form using the Forms extension.
- Define a `FormRender` with appropriate field types (`TextField`, `DateField`, `CheckboxField`, etc.). Always use `fields=[...]` (not `items=[...]`) and `label="..."` (not `title="..."`).
- Create a Pydantic `BaseModel` matching the form fields
- Add `form: Annotated[FormServiceExtensionServer, FormServiceExtensionSpec.demand(initial_form=form_render)]` as an agent parameter
- Parse input via `form.parse_initial_form(model=MyParams)`
Only use forms when the agent has clearly defined, structured parameters. For free-text agents, the plain message input is sufficient.
For mid-conversation input:
- Single free-form question: use the A2A `input-required` event.
- Structured multi-field input: use the dynamic form request extension (`FormRequestExtensionServer` / `FormRequestExtensionSpec`).
See the form agent example on GitHub for a complete implementation.
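The field-ID/model mismatch pitfall (see Anti-Patterns) can be guarded with a small check. `FormField` here is an illustrative stand-in for the SDK's field types; real code would compare the `FormRender` field ids against the Pydantic model's field names:

```python
from dataclasses import dataclass

@dataclass
class FormField:
    """Illustrative stand-in for a form field (real code uses TextField etc.)."""
    id: str
    label: str

def check_form_matches_model(fields: list[FormField], model_fields: set[str]) -> None:
    """Raise if form field ids and model fields diverge, so a mismatch fails
    loudly at startup instead of silently dropping submitted values."""
    field_ids = {f.id for f in fields}
    missing = model_fields - field_ids
    extra = field_ids - model_fields
    if missing or extra:
        raise ValueError(
            f"form/model mismatch: missing={sorted(missing)}, extra={sorted(extra)}"
        )
```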
Step 7: Entrypoint
Create a `run()` / `serve()` function that calls `server.run(host=os.getenv("HOST", "127.0.0.1"), port=int(os.getenv("PORT", 8000)), context_store=PlatformContextStore())` behind an `if __name__ == "__main__"` guard.
The server defaults to an in-memory context store when `context_store` is omitted; any wrapper that persists or reads context history via `context.store()` or `context.load_history()` must therefore pass `context_store=PlatformContextStore()` explicitly.
Remove all CLI argument parsing (argparse.ArgumentParser, etc.). If the agent previously relied on CLI arguments for input (e.g. --repo-url), refactor the input to come from its wrapper function parameters (mapped from a Form or environment variable).
Only add configure_telemetry or auth_backend if the user explicitly requests platform integration.
Step 8: Use Platform Extensions
Enhance the agent with platform-level capabilities by injecting extensions via Annotated function parameters. Use them if the original agent’s behavior warrants it.
| Extension | When to Use | Documentation |
|---|---|---|
| Forms | Agent requires structured, named parameter inputs (not just free text) | Forms |
| LLM Service | Agent needs platform-provided language model access and credentials | LLM Proxy |
| Error | Agent needs to report structured, user-visible failures and stack traces | Error Handling |
| Files | Agent expects to read image or document files uploaded by the user | Files |
| Citations | Agent references documents or external URLs | Citations |
| Trajectory | Multi-step reasoning, tool calls, long-running progress, or explicit debugging traces | Trajectory |
| Secrets | Agent needs user-provided API keys or tokens at runtime | Secrets (Note: Check secrets.data and use request_secrets only through a declared secrets extension parameter if missing) |
| Settings | Agent has configurable behavior (e.g., “Thinking Mode”) | Settings |
| Env Variables | Agent requires custom environment-level deployment configuration variables | Environment Variables |
| Canvas | Agent needs to edit artifacts or code selected by user | Canvas |
| Approval | Agent performs sensitive tool calls requiring user consent | Tool Call Approval |
| MCP | Agent uses Model Context Protocol tools/servers | MCP Integration |
| Embedding | Agent performs vector search or uses RAG strategies | RAG / Embeddings |
| Platform API | Agent calls AgentStack internal platform APIs securely via an injected client | Platform API |
For a complete overview of all available extensions: Agent Integration Overview
Trajectory Output Rule
Trajectory is optional for simple single-step responders.
Trajectory is required whenever the agent emits meaningful intermediate logs, execution steps, tool activity, or progress updates. Those intermediate signals must be surfaced as trajectory output, and the final user answer should remain focused on the final result.
Trajectory entries are metadata for transparency and observability. They are not a substitute for the agent’s user-facing response message.
User-facing text should be emitted as normal AgentMessage output. Trajectory should contain the intermediate context behind that answer.
For third-party framework callbacks (for example sync-only step callbacks), capture callback data and emit it later from the main agent handler so trajectory output remains consistent.
Step 9: Update README
Update the project’s README.md (or create one if missing) with instructions on how to run the wrapped agent server. Include:
- Install dependencies using the project's existing tooling (e.g. `uv pip install -r requirements.txt` or `pip install -r requirements.txt`).
- Environment Configuration: Document required `.env` patterns if `python-dotenv` is used. However, ensure the agent still receives configuration explicitly instead of reading env vars internally.
- Run the server with the appropriate command (e.g. `uv run server.py` or `python server.py`).
- Default address: mention that the server starts at `http://127.0.0.1:8000` by default and can be configured via the `HOST` and `PORT` environment variables.
Remove or replace any outdated CLI usage examples (e.g. argparse-based commands) that no longer apply after wrapping.
Anti-Patterns
When building and testing the wrapper, ensure you avoid these common pitfalls:
- Never hardcode API keys or LLM endpoints. Use the LLM proxy extension explicitly.
- Never log or print secrets. API keys/tokens must not appear in logs, responses, exceptions, or telemetry.
- Never assume history is auto-saved. If you need context continuity, explicitly call `await context.store(input)` and `await context.store(response)`.
- Never assume persistent history without `PlatformContextStore`. Without it, context storage is in-memory and lost on process restart.
- Never forget to filter history. `context.load_history()` returns all items in the conversation (Messages, Artifacts). Always filter them using `isinstance(message, Message)`.
- Never store individual streaming chunks. Accumulate the full response and store it once using `context.store()`.
- Never hallucinate import paths. You must never guess imports. `a2a` and `agentstack_sdk` are two separate packages. Always find the exact import name by inspecting the installed packages, and explicitly verify it by running an import check.
- Never assume extension availability. Check extension objects and payloads before using them.
- Never access `.text` directly on a `Message` object. Message content is multipart. Always use `get_message_text(input)`.
- Never use synchronous functions for the agent handler. Agent functions must be `async def` generators using `yield`.
- Never hide platform integration behind wrapper classes. Keep decorators, imports, and config visible in the main agent entrypoint file. Enterprise developers must be able to inspect exactly what the agent does.
- Never force trajectory on trivial wrappers. For simple single-step text responders, trajectory is optional.
- Never skip trajectory when meaningful intermediate logs or tool traces are emitted. Those signals must be surfaced as trajectory output.
- Never treat trajectory as the final answer channel. Trajectory is primarily metadata. User-visible answers must still be emitted as normal `AgentMessage` text.
- Never bury meaningful intermediate logs in the final answer text. Keep progress/execution visibility separate from the final user-facing response.
- Never silently remove existing optional auth inputs. If the original agent supported optional tokens/keys for higher limits or private resources, preserve that optional path or document an approved behavior change.
- Never use forms for a single free-form question. Use the A2A `input-required` event instead if a simple free-text answer is needed.
- Never mismatch form field IDs and model fields. When using Forms, mismatched IDs mean values will fail to parse or silently drop.
- Never guess platform object attributes. For example: `FormRender` uses `fields` (not `items`), `TextField` uses `label` (not `title`), and `AgentDetail.author` must be a dictionary.
- Never assume all extension specs have `.demand()`. For instance, `TrajectoryExtensionSpec()` can be instantiated directly, and others may use `.single_demand()`. Always verify the specific extension spec class.
- Never skip null-path handling for forms. Handle `None` for cancelled or unsubmitted forms.
- Never treat extension data as dictionaries. Data attached to extensions (e.g., `llm.data.llm_fulfillments["default"]`) are Pydantic objects, not dicts. Always access properties using dot notation (e.g., `config.api_key`, not `config.get("api_key")`).
- Never use `llm_config.identifier` as the model name. `identifier` points to the provider binding (for example `llm_proxy`), not to the deployable model. Use `llm_config.api_model` for model selection.
- Never apply silent fallback when `llm.data.llm_fulfillments["default"]` is missing. Either request secrets through a declared `secrets` extension and construct explicit `api_key`/`api_base`/`api_model` values, or raise a clear error.
- Never rely on framework default LLM fallback chains. If the wrapped runtime tries alternate providers automatically, disable that path by passing explicit provider/client config from the extension contract.
- Never rewrite agent business logic. Only wrap the existing entry point. Never attempt to "fix" the original agent's internal workings.
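A minimal redaction helper supports Constraint C14 and the secret-related rules above (illustrative; adapt to however the wrapper collects its secret values before anything reaches logs or error messages):

```python
def redact(text: str, secrets: list[str]) -> str:
    """Replace every known secret value in `text` before it is logged,
    raised, or returned in a response (Constraint C14). Empty strings are
    skipped so a missing secret cannot corrupt the text."""
    for secret in secrets:
        if secret:
            text = text.replace(secret, "[REDACTED]")
    return text
```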
Failure Conditions
- If the project’s primary language is not Python, stop and report unsupported runtime.
- If fresh docs cannot be fetched, stop and report that execution cannot continue without current docs.
Finalization Report (Required)
Before completion, provide all of the following:
- Mapping summary: inbound mapping (A2A Message to agent input), outbound mapping (agent output to `AgentMessage`), and the selected streaming path.
- Behavior changes list: if behavior changed, list each change with reason and impact.
- Business-logic statement: state whether business logic changed, and if it did, include approval and justification.
- Legacy endpoint compatibility result: state preserved, shimmed, or not applicable.
- Dockerfile prompt: Ask the user if they also want to add a `Dockerfile`. If the user says yes, review the example at `https://github.com/i-am-bee/agentstack-starter/blob/main/Dockerfile` and assemble a `Dockerfile` for the project. Do not force the use of `uv` if the project does not use it.
- Testing prompt: Ask the user if they want to test the agent functionality. If they say yes, start the agent first in one terminal, and then use a separate terminal to run `agentstack run AGENT_NAME`. Do not attempt to interrupt the `run` command, as it may take a long time to complete. If the execution fails and an error is encountered, attempt to fix the error and run the test again. Critically, do not create any new files or scripts (e.g., Python test scripts using pexpect) to perform this test. You must interact with the terminals directly. Note that `agentstack run` triggers an interactive form; when testing programmatically via stdin, ensure you send precise literal newline characters to advance prompts.
Verification Checklist
After wrapping, confirm:
- Every `import` resolves to a real, installed module
- The agent function has a meaningful docstring (used as the description in the UI)
- `yield AgentMessage(text=...)` is used for all responses
- No env vars are used for API keys or model config (extensions used instead)
- Agent uses an OpenAI-compatible interface or has the necessary provider libraries installed for other LLMs
- Wrapper passes explicit runtime LLM config from extensions and does not rely on framework/provider fallback defaults
- Single-turn vs multi-turn classification matches the actual agent behavior
- If single-turn with structured params → `initial_form` is defined
- `input` and `response` are stored via `context.store()` unless explicit stateless behavior is justified
- `context.load_history()` is required for multi-turn; for single-turn, it is used only when continuity is intentionally required
- No business-logic changes were made to the original agent code unless explicitly approved per Constraint C10
- If a business-logic change was required, explicit approval and justification are recorded
- No Dockerfile was added unless explicitly requested
- Temp files created at runtime are cleaned up
- `agentstack-sdk` (pinned with `~=`) was added to the project's existing dependency file
- If `a2a-sdk` is pinned directly, its version is explicitly compatible with the selected `agentstack-sdk`
- Errors raise exceptions (handled by the Error extension), not yielded as `AgentMessage`
- Optional extensions are checked for presence/data before use
- If the agent references sources → Citations extension is used
- If the agent has meaningful multi-step execution/tool traces, trajectory output is emitted for those steps
- Final user-facing answer is emitted as normal `AgentMessage` output, not only as trajectory data
- If the agent already used secrets → Secrets extension is used (safe access pattern with `request_secrets` through a declared secrets extension parameter). No new secrets added.
- No secrets were logged, printed, persisted, or returned in responses/errors.
- No extra middleware, auth, or containerization added unless explicitly requested (Constraint C2)
- Imports follow the import truth and validation rule (Constraint C6)
- No command-line arguments (`argparse`) remain in the code (Constraint C9)
- You have provided a Mapping summary showing inbound mapping (A2A Message to agent input), outbound mapping (agent output to `AgentMessage`), and the streaming path selected.
- `context_store=PlatformContextStore()` is present whenever the wrapper persists or reads context history.
- If legacy HTTP endpoints were contract-tested, compatibility is preserved or shimmed.
- If behavior changed, the Finalization Report includes an explicit change list and impact.
- Agent responds at `/.well-known/agent-card.json` with HTTP 200 and valid, parseable JSON.
- Agent card includes the required identity fields used for discovery.
- Validate the Agent Card by fetching `/.well-known/agent-card.json` from the agent's server (make sure it is running, and pass the correct `host:port` if it is not on the default `127.0.0.1:8000`). Ensure it returns HTTP 200 and the JSON is valid. Show the full JSON output to the user.
- The user was asked if they want to add a `Dockerfile` (and if requested, it was generated based on the agentstack-starter example without forcing `uv`).
- The user was asked if they want to test the agent's functionality. If they said yes, the agent was started first, and then in a separate terminal, the `agentstack run AGENT_NAME` command was executed (do not activate the virtual environment before running this command). Valid inputs with appropriate literal newlines were sent to the interactive terminal. The `run` command was allowed to run without interruption, and any errors encountered were investigated, fixed, and the test was rerun. No additional files or test scripts were created during testing.
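The agent-card check can be partially scripted. This sketch validates a response body only — fetching `/.well-known/agent-card.json` is left to `urllib`/`httpx` against the running server — and the required field names are assumptions to verify against the A2A agent-card schema:

```python
import json

def validate_agent_card(payload: bytes,
                        required: tuple[str, ...] = ("name", "version")) -> dict:
    """Parse an agent-card response body and confirm that the identity
    fields used for discovery are present. The default `required` names
    are assumptions; check them against the A2A agent-card schema."""
    card = json.loads(payload)
    missing = [key for key in required if key not in card]
    if missing:
        raise ValueError(f"agent card missing fields: {missing}")
    return card
```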