gpu

📁 gpu-cli/gpu 📅 8 days ago
Total installs: 1
Weekly installs: 1
Site rank: #55050
Install command
npx skills add https://github.com/gpu-cli/gpu --skill gpu

Agent install distribution

replit 1
amp 1
opencode 1
kimi-cli 1
github-copilot 1

Skill documentation

GPU CLI

Run NVIDIA GPU workloads from your Mac. `gpu run <command>` provisions a cloud GPU, syncs your code, and streams the output back.

Current Status

!gpu status 2>/dev/null || echo "No active pods"

Config: !ls gpu.jsonc 2>/dev/null || echo "No config"

Commands

| Command | Purpose |
| --- | --- |
| `gpu run <cmd>` | Execute on remote GPU |
| `gpu use <template>` | One-click apps (ComfyUI, vLLM) |
| `gpu status` | Show pods, jobs, costs |
| `gpu logs [-f]` | View/stream job output |
| `gpu stop` | Stop pod immediately |
| `gpu events` | Stream all activity |
| `gpu inventory` | List GPUs with pricing |
| `gpu dashboard` | Interactive TUI |

Routing

Read the appropriate reference based on user intent:

| User Intent | Reference File |
| --- | --- |
| Create project, run ML task, "I want to…" | references/create.md |
| Error, OOM, failed, stuck, debug | references/debug.md |
| Cost, GPU selection, optimize, pricing | references/optimize.md |
| Config help, gpu.jsonc fields | references/config.md |
| Volumes, persistent storage, large models | references/volumes.md |

Quick Config

```jsonc
{
  "$schema": "https://gpu-cli.sh/schema/v1/gpu.json",
  "project_id": "my-project",
  "gpu_types": [{ "type": "RTX 4090" }],
  "outputs": ["results/", "*.pt"]
}
```
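Because `gpu.jsonc` is JSONC (JSON plus comments), plain `json.loads` will reject it. A minimal sketch of a loader in Python — the comment-stripping regex is my own, not the CLI's actual parser, and may differ from it on edge cases:

```python
import json
import re

def load_jsonc(text: str) -> dict:
    """Parse JSONC by stripping // and /* */ comments outside strings."""
    # The string alternative matches first, so comment markers inside
    # string values (e.g. the $schema URL) are left untouched.
    pattern = r'("(?:\\.|[^"\\])*")|//[^\n]*|/\*.*?\*/'
    stripped = re.sub(pattern, lambda m: m.group(1) or "", text, flags=re.S)
    return json.loads(stripped)

config = load_jsonc("""
{
  // project settings
  "$schema": "https://gpu-cli.sh/schema/v1/gpu.json",
  "project_id": "my-project",
  "gpu_types": [{ "type": "RTX 4090" }],
  "outputs": ["results/", "*.pt"]
}
""")
print(config["project_id"])  # my-project
```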

GPU Quick Reference

| VRAM | GPU | $/hr | Best For |
| --- | --- | --- | --- |
| 12 GB | RTX 4070 Ti | $0.25 | Small models |
| 24 GB | RTX 4090 | $0.44 | SD, FLUX, 7B LLMs |
| 48 GB | RTX A6000 | $0.80 | Large training |
| 80 GB | A100 PCIe | $1.79 | 70B LLMs, video |
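The rough arithmetic behind the table: fp16 weights take about 2 bytes per parameter, plus headroom for activations and KV cache. A hedged Python sketch — the 1.2× overhead factor is an assumption of mine, and the table's 70B-on-80GB pairing relies on quantization, which this fp16-only estimate does not model:

```python
# Pick the smallest GPU from the table above that fits a model's fp16 weights.
GPUS = [("RTX 4070 Ti", 12), ("RTX 4090", 24), ("RTX A6000", 48), ("A100 PCIe", 80)]

def min_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                overhead: float = 1.2) -> float:
    """Estimated VRAM (GB) for fp16 weights with activation/KV headroom."""
    return params_billion * bytes_per_param * overhead

def smallest_fit(params_billion: float) -> str:
    need = min_vram_gb(params_billion)
    for name, vram in GPUS:
        if vram >= need:
            return name
    return "multi-GPU needed"

print(smallest_fit(7))   # 7B fp16 needs ~16.8 GB, so RTX 4090
print(smallest_fit(70))  # 70B fp16 exceeds 80 GB; needs quantization or multi-GPU
```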

Sync Behavior

  • TO pod: .gitignore controls what syncs (gitignored files don't sync)
  • FROM pod: the outputs patterns in config control what syncs back (only matching files return)
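The FROM-pod direction amounts to glob filtering against the outputs patterns. A minimal sketch of that selection logic, assuming patterns ending in "/" are directory prefixes and the rest are filename globs — the real CLI's matching rules may differ:

```python
from fnmatch import fnmatch

def matches_outputs(path: str, patterns: list[str]) -> bool:
    """Return True if a pod-side file path should sync back."""
    for pat in patterns:
        if pat.endswith("/"):
            # Directory prefix, e.g. "results/" matches anything under it.
            if path.startswith(pat):
                return True
        elif fnmatch(path.rsplit("/", 1)[-1], pat):
            # Filename glob, e.g. "*.pt" matches by basename anywhere.
            return True
    return False

outputs = ["results/", "*.pt"]
print(matches_outputs("results/metrics.json", outputs))   # True
print(matches_outputs("checkpoints/model.pt", outputs))   # True
print(matches_outputs("src/train.py", outputs))           # False
```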