platform-infrastructure
npx skills add https://github.com/liqiongyu/lenny_skills_plus --skill platform-infrastructure
Agent install distribution
Skill documentation
Platform & Infrastructure
Scope
Covers
- Platform engineering / "paved roads": shared capabilities that multiple product teams reuse
- Infrastructure quality attributes: reliability, performance, privacy/safety, operability, cost
- Scalability planning: capacity limits, leading indicators, "doomsday clock" triggers, sequencing
- Instrumentation strategy: server-side event tracking, data quality, observability gaps
- Discoverability architecture for web platforms (optional): sitemap + internal linking
When to use
- "Create a platform infrastructure plan to increase feature velocity without repeating work."
- "Turn reliability/performance/privacy goals into concrete SLOs and an execution roadmap."
- "We're approaching scaling limits; define triggers and the next infra projects."
- "Our analytics is messy; design a server-side tracking plan and event contract."
- "For a large web property, define sitemap + internal-linking requirements for crawlability."
When NOT to use
- You are handling an active incident or outage (use incident response/runbooks first).
- You only need a single localized perf fix or refactor (just do the work).
- You need product strategy/positioning for a platform-as-product (use platform-strategy).
- You need a full feature spec or UX flows (use writing-specs-designs/writing-prds).
- SEO/content strategy is the primary workstream (use content-marketing).
Inputs
Minimum required
- System boundary (services/apps) + primary users/customers
- Current pains (pick 1–3): reliability, performance, cost, privacy/security/compliance, developer velocity, data quality/analytics, SEO/discoverability
- Current architecture constraints (data stores, runtime, deployment model, key dependencies)
- Scale + trajectory (rough): current usage + expected growth + known upcoming spikes
- Constraints: deadlines, staffing/capacity, risk tolerance, compliance/privacy requirements
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md (3–5 at a time).
- If details remain missing, proceed with explicit assumptions and provide 2–3 options.
- If asked to change production systems or run commands, require explicit confirmation and include rollback guidance.
Outputs (deliverables)
Produce a Platform & Infrastructure Improvement Pack in Markdown (in-chat; or as files if requested), in this order:
- Context snapshot (scope, constraints, assumptions, stakeholders, success definition)
- Shared capabilities inventory + platformization plan (what to standardize, why, and how)
- Quality attributes spec (reliability/perf/privacy/safety targets; proposed SLOs/SLIs)
- Scaling "doomsday clock" + capacity plan (limits, triggers, lead time, projects)
- Instrumentation plan (observability gaps + server-side analytics event contract)
- Discoverability plan (optional) for web platforms (sitemap + internal linking requirements)
- Execution roadmap (sequencing, milestones, owners, dependencies, comms)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
Workflow (8 steps)
1) Intake + define "what decision will this enable?"
- Inputs: Context; references/INTAKE.md.
- Actions: Confirm scope boundaries, top pains, and time horizon. Write a 1–2 sentence decision statement (e.g., "We will standardize X and commit to SLO Y by date Z.").
- Outputs: Context snapshot (draft).
- Checks: A stakeholder can answer: "What will we do differently after reading this?"
2) Find repeatable product capabilities worth platformizing
- Inputs: Recent roadmap/initiatives; architecture overview; pain points.
- Actions: Inventory repeated "feature components" (e.g., export, filtering, permissions, audit logs, notifications). Identify 3–7 candidates for shared infrastructure. Define what becomes the platform contract vs what remains product-specific.
- Outputs: Shared capabilities inventory + platformization plan (draft).
- Checks: Each candidate has: (a) at least 2 consumers, (b) a clear API/contract idea, (c) a migration/rollout approach.
3) Define quality attributes and targets (make "invisible work" explicit)
- Inputs: Reliability/perf/privacy needs; customer expectations; compliance constraints.
- Actions: Write the quality attributes spec. Propose SLOs/SLIs for reliability and performance; document privacy/safety requirements (data residency, encryption, access controls, retention).
- Outputs: Quality attributes spec (draft).
- Checks: Targets are measurable and owned (even if initial numbers are estimates + confidence).
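To make "measurable and owned" concrete, here is a minimal sketch of turning an SLO target into an SLI computation with a remaining error budget. The service counts, target, and window are hypothetical, not from any real system:

```python
# Hypothetical SLO check: availability SLI against a 99.9% monthly target.
# All numbers are illustrative placeholders.

def availability_sli(good_events: int, total_events: int) -> float:
    """SLI = fraction of successful requests over the measurement window."""
    return good_events / total_events if total_events else 1.0

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Fraction of the error budget left; negative means the SLO is blown."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return 1.0 - (actual_failure / allowed_failure) if allowed_failure else 0.0

sli = availability_sli(good_events=999_412, total_events=1_000_000)
budget = error_budget_remaining(sli, slo_target=0.999)
```

Even when initial numbers are estimates, expressing the target this way forces an owner to name the good/total event definitions, which is where most SLO disagreements hide.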
4) Build the scaling "doomsday clock"
- Inputs: Current bottlenecks/limits; growth expectations; lead times for major changes.
- Actions: Identify top 3–10 capacity limits (DB size/IOPS, queue depth, cache hit rate, deploy throughput, rate limits). Define thresholds that trigger scaling projects early enough (lead time-aware).
- Outputs: Doomsday clock table + capacity plan (draft).
- Checks: Each limit has a metric, an alert threshold, a lead time estimate, and a named mitigation project.
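The lead-time-aware trigger logic above can be sketched as follows. The limits, utilization numbers, growth rates, and mitigation names are all hypothetical examples of the table's shape, not real capacity data:

```python
# Hypothetical "doomsday clock": each limit carries a metric, a trigger
# threshold, and a lead-time estimate so mitigation starts early enough.
import math
from dataclasses import dataclass

@dataclass
class CapacityLimit:
    name: str
    current: float         # current utilization as a fraction of the hard limit
    monthly_growth: float  # fractional growth per month (0.10 = 10%)
    trigger_at: float      # utilization that should kick off the project
    lead_time_months: int  # months the mitigation project needs
    mitigation: str

    def months_until_trigger(self) -> float:
        """Months until utilization crosses the trigger, assuming compound growth."""
        if self.current >= self.trigger_at:
            return 0.0
        return math.log(self.trigger_at / self.current) / math.log(1 + self.monthly_growth)

    def must_start_now(self) -> bool:
        """True when the remaining runway is no longer than the project's lead time."""
        return self.months_until_trigger() <= self.lead_time_months

limits = [
    CapacityLimit("pg_disk", 0.55, 0.10, 0.80, 4, "partition largest tables"),
    CapacityLimit("queue_depth", 0.30, 0.05, 0.85, 2, "shard consumers"),
]
urgent = [l.name for l in limits if l.must_start_now()]
```

The point of the calculation is the comparison in `must_start_now`: a project is urgent not when the threshold is hit, but when the time to the threshold falls inside the project's lead time.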
5) Decide instrumentation: observability + server-side analytics
- Inputs: Current logging/metrics/tracing; current analytics tracking approach.
- Actions: Specify observability gaps (must-have dashboards/alerts) and define an event contract for server-side analytics (names, properties, identity strategy, delivery guarantees, QA checks).
- Outputs: Instrumentation plan (draft).
- Checks: Event definitions are consistent across clients; key events are captured server-side; data-quality checks exist.
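An event contract becomes enforceable once it is a schema plus a validation step every emitter runs. This is a minimal sketch; the required properties, naming rule, and allowed sources are illustrative assumptions, not a real tracking plan:

```python
# Hypothetical server-side analytics event contract with data-quality checks.
REQUIRED_PROPS = {"event_name", "occurred_at", "user_id", "source"}

def validate_event(event: dict) -> list[str]:
    """Return data-quality violations; an empty list means the event passes."""
    errors = []
    missing = REQUIRED_PROPS - event.keys()
    if missing:
        errors.append(f"missing properties: {sorted(missing)}")
    name = event.get("event_name", "")
    if name and not name.islower():
        errors.append("event_name must be lowercase snake_case")
    if event.get("source") not in {"server", "backfill"}:
        errors.append("source must be 'server' or 'backfill'")
    return errors

ok = validate_event({
    "event_name": "export_completed",
    "occurred_at": "2024-01-01T00:00:00Z",
    "user_id": "u_123",
    "source": "server",
})
```

Running the same validator in CI against every service's emitted events is one way to keep definitions consistent across clients, as the step's checks require.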
6) (Optional) Discoverability architecture for web platforms
- Inputs: If applicable: site/app information architecture; SEO importance; crawl constraints.
- Actions: Define sitemap requirements (categorization, pagination, freshness) and internal-linking rules ("related content", indexability controls, canonicalization).
- Outputs: Discoverability plan (draft) or âNot applicableâ decision.
- Checks: A crawler can reach all indexable pages via links/sitemaps; "noindex"/canonicals are intentional.
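The reachability check can be automated as a breadth-first search over the internal-link graph, flagging indexable pages a crawler cannot reach from the homepage or a sitemap. The link graph and URL set below are hypothetical:

```python
# Hypothetical crawlability check: every indexable page must be reachable
# via internal links from "/" or listed in a sitemap.
from collections import deque

def reachable(start: str, links: dict[str, list[str]]) -> set[str]:
    """BFS over the internal-link graph from a start page."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

links = {
    "/": ["/products", "/blog"],
    "/products": ["/products/a", "/products/b"],
    "/blog": [],
}
indexable = {"/", "/products", "/products/a", "/products/b", "/blog", "/blog/post-1"}
sitemap_urls = {"/blog/post-1"}  # pages discoverable only via the sitemap
orphans = indexable - reachable("/", links) - sitemap_urls
```

A non-empty `orphans` set means either the internal-linking rules or the sitemap requirements are incomplete; an intentional "noindex" page should simply be excluded from `indexable`.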
7) Turn decisions into a sequenced execution roadmap
- Inputs: Draft deliverables; constraints; dependencies; capacity.
- Actions: Prioritize initiatives using impact × risk × effort × lead time. Create milestones, owners, and rollout plans (including deprecation/decommission for old paths).
- Outputs: Execution roadmap (draft).
- Checks: Roadmap has a first executable milestone, explicit dependencies, and measurable acceptance criteria.
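One way to operationalize the impact/risk/effort/lead-time prioritization is a simple scoring function. The weighting below (lead time raises priority so slow projects start earlier; effort lowers it) is an illustrative judgment call, and the initiative names and 1–5 scores are hypothetical:

```python
# Hypothetical roadmap scoring across impact, risk reduction, effort, lead time.
def priority_score(impact: int, risk_reduced: int, effort: int, lead_time: int) -> float:
    """Higher impact/risk-reduction and longer lead times raise priority;
    higher effort lowers it. All inputs on a 1-5 scale."""
    return (impact + risk_reduced + lead_time) / effort

initiatives = {
    "shared_export_service": priority_score(impact=5, risk_reduced=2, effort=3, lead_time=2),
    "pg_partitioning": priority_score(impact=4, risk_reduced=5, effort=4, lead_time=4),
    "event_contract_rollout": priority_score(impact=3, risk_reduced=3, effort=2, lead_time=1),
}
roadmap = sorted(initiatives, key=initiatives.get, reverse=True)
```

The scores matter less than the conversation they force: each factor must be estimated by a named owner, which surfaces the dependencies and acceptance criteria the roadmap checks call for.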
8) Quality gate + finalize
- Inputs: Full draft pack.
- Actions: Run references/CHECKLISTS.md and score with references/RUBRIC.md. Tighten unclear contracts, add missing measures, and always include Risks / Open questions / Next steps.
- Outputs: Final Platform & Infrastructure Improvement Pack.
- Checks: A team can execute without extra meetings; unknowns are explicit and owned.
Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
Examples
Example 1 (shared capabilities): "Use platform-infrastructure for a B2B analytics app where every team keeps rebuilding export, filtering, and permissions. Output a platformization plan + roadmap + SLO targets."
Example 2 (scaling readiness): "We expect 5× traffic in 6 months. Define a doomsday clock for Postgres limits, propose scaling projects, and set reliability/performance SLOs. Also standardize server-side analytics."
Boundary example: "We're mid-incident and pages are down; tell us what to do right now."
Response: out of scope; recommend incident response first, then use this skill post-incident to create the scaling plan and reliability roadmap.