trace-audit
npx skills add https://github.com/joyco-studio/skills --skill trace-audit
Chrome DevTools Trace Audit
Analyze a Chrome DevTools Performance trace and produce a comprehensive performance audit report.
Usage
/trace-audit <path-to-trace.json>
The argument is the absolute path to a Chrome DevTools trace JSON file (exported from the Performance panel).
Workflow
Follow these steps in order. Use parallel tool calls wherever noted.
Step 1: Validate the trace file
Read the first 100 lines of the file using the Read tool. Confirm it is a valid Chrome DevTools trace by checking for:
- A top-level `traceEvents` array, or a bare JSON array starting with `[`
- Event objects with `name`, `cat`, `ph`, and `ts` fields
- Presence of `__metadata` or `TracingStartedInBrowser` events
If validation fails, tell the user this doesn’t appear to be a Chrome DevTools trace and stop.
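The validation step can be sketched in code. This is a minimal illustration rather than part of the skill itself: `validate_trace` is a hypothetical helper name, and it parses the whole file instead of sampling the first 100 lines as the workflow does.

```python
import json

def validate_trace(path: str) -> bool:
    """Heuristically check that a file looks like a Chrome DevTools trace."""
    try:
        with open(path) as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError):
        return False

    # Traces are either {"traceEvents": [...]} or a bare event array.
    events = data.get("traceEvents") if isinstance(data, dict) else data
    if not isinstance(events, list) or not events:
        return False

    # Spot-check the core event fields on the first few events.
    has_fields = any(
        all(key in ev for key in ("name", "cat", "ph", "ts"))
        for ev in events[:10]
    )
    # Look for the metadata markers Chrome traces typically contain.
    has_marker = any(
        ev.get("name") == "TracingStartedInBrowser" or ev.get("cat") == "__metadata"
        for ev in events
    )
    return has_fields and has_marker
```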
Step 2: Extract metadata
Use Grep on the trace file to extract (run these in parallel):
- Site URL: grep for `TracingStartedInBrowser`, `navigationStart`, or `FrameCommittedInBrowser` and look for a URL in the `args`
- Process names: grep for `process_name` or `thread_name` to identify the renderer, browser, and GPU processes
- Trace time range: grep for the first and last `"ts":` values to compute the trace duration
Step 3: Run detection passes
Refer to detection-heuristics.md for the full set of patterns and thresholds. Run all detection categories in parallel using Grep. For each category:
- Use the specified grep pattern on the trace file
- Collect matching lines with surrounding context where helpful (`-C 1` or `-C 2`)
- Count matches and extract durations/values from the matched JSON
The detection categories are:
- Long Tasks (`RunTask` with dur > 50000)
- Layout Thrashing (`InvalidateLayout` → `Layout` pairs)
- Forced Reflows (`Layout` events with `stackTrace`)
- rAF Ticker Loops (`RequestAnimationFrame` frequency)
- Style Recalc Storms (`UpdateLayoutTree` with dur > 5000)
- Paint Storms (`Paint` events with dur > 3000)
- GC Pressure (`MajorGC` / `V8.GC_MARK_COMPACTOR`)
- CLS (`LayoutShift` cumulative score)
- INP (`EventTiming` max duration)
- Network Errors (`ResourceReceiveResponse` with statusCode >= 400)
- Redundant Fetches (same URL fetched multiple times)
- Script Eval (`EvaluateScript` / `CompileScript` with dur > 50000)
- Long Animation Frames (`LoAF` / `LongAnimationFrame`)
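In code form, a duration-based detection pass is just a filter over parsed events. This is a hedged sketch: the thresholds mirror the list above, but detection-heuristics.md remains the source of truth, and `detect` is a hypothetical helper name.

```python
# Thresholds mirror the category list (trace durations are microseconds);
# detection-heuristics.md is authoritative.
LONG_TASK_DUR_US = 50_000     # RunTask longer than 50ms
STYLE_RECALC_DUR_US = 5_000   # UpdateLayoutTree longer than 5ms
PAINT_DUR_US = 3_000          # Paint longer than 3ms

def detect(events, name: str, min_dur_us: int):
    """Return events of the given name whose duration exceeds the threshold."""
    return [
        ev for ev in events
        if ev.get("name") == name and ev.get("dur", 0) > min_dur_us
    ]

def detect_long_tasks(events):
    return detect(events, "RunTask", LONG_TASK_DUR_US)
```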
Step 4: Aggregate findings
For each detection category:
- Compute total count of flagged events
- Extract the worst offender (max duration or highest score)
- Classify severity: Critical (red) or Warning (yellow) based on the thresholds in `detection-heuristics.md`
- Skip categories with zero findings
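The aggregation step can be sketched as follows. The function name and the Critical cutoff parameter are illustrative; the real thresholds come from detection-heuristics.md.

```python
def aggregate(category: str, flagged: list, critical_dur_us: int):
    """Summarize one category: count, worst offender, severity; None if empty."""
    if not flagged:
        return None  # zero findings: the category is skipped in the report
    worst = max(flagged, key=lambda ev: ev.get("dur", 0))
    severity = "Critical" if worst.get("dur", 0) > critical_dur_us else "Warning"
    return {
        "category": category,
        "count": len(flagged),
        "worst_dur_us": worst.get("dur", 0),
        "severity": severity,
    }
```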
Step 5: Identify timeline hotspots
Group flagged events by timestamp into time windows (e.g., 500ms buckets). Identify windows where multiple issue categories overlap; these are hotspot ranges that represent the most problematic sections of the trace.
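A minimal sketch of the bucketing, assuming flagged events have already been grouped per category and that timestamps are microseconds (`find_hotspots` is a hypothetical name):

```python
from collections import defaultdict

BUCKET_US = 500_000  # 500ms windows (trace timestamps are microseconds)

def find_hotspots(flagged_by_category: dict, min_overlap: int = 2) -> dict:
    """Map bucket start time -> sorted category names, keeping only windows
    where at least min_overlap distinct issue categories coincide."""
    buckets = defaultdict(set)
    for category, events in flagged_by_category.items():
        for ev in events:
            buckets[ev["ts"] // BUCKET_US].add(category)
    return {
        bucket * BUCKET_US: sorted(cats)
        for bucket, cats in sorted(buckets.items())
        if len(cats) >= min_overlap
    }
```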
Step 6: Generate report
Output the report using the structure defined in report-format.md. The report should be:
- Actionable: every issue links to a concrete fix
- Scannable: use tables, severity badges, and clear headings
- Complete: cover all categories, even if just to say “no issues found”
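As one hypothetical rendering of a severity summary table (the real report structure is defined in report-format.md), the aggregated findings might be emitted like this:

```python
def render_summary(findings: list) -> str:
    """Render aggregated findings as a markdown table with severity badges."""
    badge = {"Critical": "🔴", "Warning": "🟡"}
    rows = [
        "| Category | Count | Worst (ms) | Severity |",
        "| --- | --- | --- | --- |",
    ]
    for f in findings:
        rows.append(
            f"| {f['category']} | {f['count']} | "
            f"{f['worst_dur_us'] / 1000:.1f} | {badge[f['severity']]} {f['severity']} |"
        )
    return "\n".join(rows)
```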