reporting-pipelines
Install command
npx skills add https://github.com/bobmatnyc/claude-mpm-skills --skill reporting-pipelines
Skill Documentation
Reporting Pipelines
Overview
Your reporting pattern is consistent across repos: run a CLI or script that emits structured data, then export CSV/JSON/markdown reports with timestamped filenames into reports/ or tests/results/.
GitFlow Analytics Pattern
# Basic run
gitflow-analytics -c config.yaml --weeks 8 --output ./reports
# Explicit analyze + CSV
gitflow-analytics analyze -c config.yaml --weeks 12 --output ./reports --generate-csv
Outputs include CSV + markdown narrative reports with date suffixes.
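A minimal wrapper sketch for automating this pattern from Python; it uses only the flags shown above, and the helper name and report-listing logic are illustrative assumptions rather than part of gitflow-analytics:

```python
import subprocess
from pathlib import Path

def run_gitflow_report(config: str = "config.yaml", weeks: int = 12,
                       output: str = "./reports") -> list[Path]:
    """Run gitflow-analytics with the flags documented above and list what it wrote."""
    out = Path(output)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["gitflow-analytics", "analyze", "-c", config,
         "--weeks", str(weeks), "--output", output, "--generate-csv"],
        check=True,
    )
    # Date-suffixed CSV and markdown narrative reports land in the output directory.
    return sorted(out.glob("*"))

if __name__ == "__main__":
    for report in run_gitflow_report():
        print(report)
```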
EDGAR CSV Export Pattern
edgar/scripts/create_csv_reports.py reads a JSON results file and emits:
- executive_compensation_<timestamp>.csv
- top_25_executives_<timestamp>.csv
- company_summary_<timestamp>.csv
This script uses pandas for sorting and percentile calculations.
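A hedged sketch of that flow; the JSON field names (company, name, total_compensation) and the output directory are illustrative assumptions, not the script's actual schema:

```python
import json
import time
from pathlib import Path

import pandas as pd

def export_compensation_csvs(results_json: str, out_dir: str = "reports") -> None:
    """Read a JSON results file and emit the three timestamped CSVs described above.
    Field names below (company, total_compensation) are assumptions for illustration."""
    timestamp = time.strftime("%Y%m%d_%H%M%S")
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    records = json.loads(Path(results_json).read_text())
    df = pd.DataFrame(records)

    # Full export, sorted by compensation, with a percentile-rank column.
    df = df.sort_values("total_compensation", ascending=False)
    df["compensation_percentile"] = df["total_compensation"].rank(pct=True) * 100
    df.to_csv(out / f"executive_compensation_{timestamp}.csv", index=False)

    # Top 25 executives by compensation.
    df.head(25).to_csv(out / f"top_25_executives_{timestamp}.csv", index=False)

    # Per-company summary.
    summary = df.groupby("company")["total_compensation"].agg(["count", "sum", "median"])
    summary.to_csv(out / f"company_summary_{timestamp}.csv")
```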
Standard Pipeline Steps
- Collect base data (CLI or JSON artifacts)
- Normalize into rows/records
- Export CSV/JSON/markdown with timestamp suffixes
- Summarize key metrics in stdout
- Store outputs in reports/ or tests/results/ (see the sketch below)
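A minimal end-to-end sketch of these five steps, assuming a JSON artifact as input; the artifact path and field names are illustrative:

```python
import csv
import json
import time
from pathlib import Path

def run_pipeline(artifact: str = "tests/results/raw_results.json",
                 out_dir: str = "reports") -> None:
    # 1. Collect base data from a JSON artifact (could also be a CLI's output).
    records = json.loads(Path(artifact).read_text())

    # 2. Normalize into flat rows with a consistent set of keys.
    rows = [{"id": r.get("id"), "metric": r.get("metric"), "value": r.get("value")}
            for r in records]

    # 3. Export CSV and markdown with a timestamp suffix.
    stamp = time.strftime("%Y%m%d_%H%M%S")
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    csv_path = out / f"comprehensive_export_{stamp}.csv"
    with csv_path.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["id", "metric", "value"])
        writer.writeheader()
        writer.writerows(rows)
    md_path = out / f"narrative_report_{stamp}.md"
    md_path.write_text(f"# Report {stamp}\n\nRows exported: {len(rows)}\n")

    # 4. Summarize key metrics on stdout; 5. outputs stay in reports/.
    print(f"Exported {len(rows)} rows to {csv_path} and {md_path}")
```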
Naming Conventions
- Use YYYYMMDD or YYYYMMDD_HHMMSS suffixes
- Keep one output directory per repo (reports/ or tests/results/)
- Prefer explicit prefixes (e.g., narrative_report_, comprehensive_export_); a naming helper is sketched below
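A small helper that encodes these conventions in one place; the function name and signature are illustrative:

```python
import time
from pathlib import Path

def report_path(prefix: str, ext: str = "csv", out_dir: str = "reports",
                with_time: bool = False) -> Path:
    """Build e.g. reports/comprehensive_export_20260101.csv or ..._20260101_142530.csv."""
    suffix = time.strftime("%Y%m%d_%H%M%S" if with_time else "%Y%m%d")
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return out / f"{prefix}{suffix}.{ext}"
```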
Troubleshooting
- Missing output: ensure output directory exists and is writable.
- Large CSVs: filter or aggregate before export; keep summary CSVs for quick review (both fixes are sketched below).
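A quick sketch of both fixes; the file glob and column names ("company", "value") are assumptions for illustration:

```python
import os
from pathlib import Path

import pandas as pd

out_dir = Path("reports")
out_dir.mkdir(parents=True, exist_ok=True)            # missing-output fix: create the directory
assert os.access(out_dir, os.W_OK), f"{out_dir} is not writable"

# Large-CSV fix: aggregate each export into a compact summary CSV for quick review.
for big_csv in out_dir.glob("comprehensive_export_*.csv"):
    df = pd.read_csv(big_csv)
    summary = df.groupby("company")["value"].describe()
    summary.to_csv(big_csv.with_name(f"summary_{big_csv.name}"))
```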
Related Skills
- universal/data/sec-edgar-pipeline
- toolchains/universal/infrastructure/github-actions