cc-skill-project-guidelines-example
Total installs: 184
Weekly installs: 184
Site-wide rank: #1454
Install command:
npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill cc-skill-project-guidelines-example
Agent install distribution:
- claude-code: 154
- opencode: 133
- antigravity: 130
- cursor: 104
- codex: 102
Skill Documentation
Project Guidelines Skill (Example)
This is an example of a project-specific skill. Use this as a template for your own projects.
Based on a real production application: Zenith – AI-powered customer discovery platform.
When to Use
Reference this skill when working on the specific project it’s designed for. Project skills contain:
- Architecture overview
- File structure
- Code patterns
- Testing requirements
- Deployment workflow
Architecture Overview
Tech Stack:
- Frontend: Next.js 15 (App Router), TypeScript, React
- Backend: FastAPI (Python), Pydantic models
- Database: Supabase (PostgreSQL)
- AI: Claude API with tool calling and structured output
- Deployment: Google Cloud Run
- Testing: Playwright (E2E), pytest (backend), React Testing Library
Services:
┌───────────────────────────────────────────────────────────────┐
│                           Frontend                            │
│             Next.js 15 + TypeScript + TailwindCSS             │
│                 Deployed: Vercel / Cloud Run                  │
└───────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                           Backend                             │
│                FastAPI + Python 3.11 + Pydantic               │
│                      Deployed: Cloud Run                      │
└───────────────────────────────────────────────────────────────┘
                                │
                ┌───────────────┼───────────────┐
                ▼               ▼               ▼
          ┌──────────┐    ┌──────────┐    ┌──────────┐
          │ Supabase │    │  Claude  │    │  Redis   │
          │ Database │    │   API    │    │  Cache   │
          └──────────┘    └──────────┘    └──────────┘
File Structure
project/
├── frontend/
│   └── src/
│       ├── app/              # Next.js app router pages
│       │   ├── api/          # API routes
│       │   ├── (auth)/       # Auth-protected routes
│       │   └── workspace/    # Main app workspace
│       ├── components/       # React components
│       │   ├── ui/           # Base UI components
│       │   ├── forms/        # Form components
│       │   └── layouts/      # Layout components
│       ├── hooks/            # Custom React hooks
│       ├── lib/              # Utilities
│       ├── types/            # TypeScript definitions
│       └── config/           # Configuration
│
├── backend/
│   ├── routers/              # FastAPI route handlers
│   ├── models.py             # Pydantic models
│   ├── main.py               # FastAPI app entry
│   ├── auth_system.py        # Authentication
│   ├── database.py           # Database operations
│   ├── services/             # Business logic
│   └── tests/                # pytest tests
│
├── deploy/                   # Deployment configs
├── docs/                     # Documentation
└── scripts/                  # Utility scripts
Code Patterns
API Response Format (FastAPI)
from pydantic import BaseModel
from typing import Generic, TypeVar, Optional

T = TypeVar('T')

class ApiResponse(BaseModel, Generic[T]):
    success: bool
    data: Optional[T] = None
    error: Optional[str] = None

    @classmethod
    def ok(cls, data: T) -> "ApiResponse[T]":
        return cls(success=True, data=data)

    @classmethod
    def fail(cls, error: str) -> "ApiResponse[T]":
        return cls(success=False, error=error)
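The router below is a minimal usage sketch of this envelope. The /projects/{project_id} endpoint, ProjectOut model, and get_project_by_id helper are hypothetical placeholders, not actual project code; they only show how ApiResponse.ok and ApiResponse.fail are returned from FastAPI handlers.

from fastapi import APIRouter

router = APIRouter()

class ProjectOut(BaseModel):
    id: str
    name: str

async def get_project_by_id(project_id: str) -> Optional[ProjectOut]:
    # Hypothetical lookup; the real implementation would live in database.py.
    ...

@router.get("/projects/{project_id}", response_model=ApiResponse[ProjectOut])
async def read_project(project_id: str) -> ApiResponse[ProjectOut]:
    project = await get_project_by_id(project_id)
    if project is None:
        return ApiResponse.fail("Project not found")
    return ApiResponse.ok(project)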
Frontend API Calls (TypeScript)
interface ApiResponse<T> {
  success: boolean
  data?: T
  error?: string
}

async function fetchApi<T>(
  endpoint: string,
  options?: RequestInit
): Promise<ApiResponse<T>> {
  try {
    const response = await fetch(`/api${endpoint}`, {
      ...options,
      headers: {
        'Content-Type': 'application/json',
        ...options?.headers,
      },
    })

    if (!response.ok) {
      return { success: false, error: `HTTP ${response.status}` }
    }

    return await response.json()
  } catch (error) {
    return { success: false, error: String(error) }
  }
}
Claude AI Integration (Structured Output)
from anthropic import Anthropic
from pydantic import BaseModel

class AnalysisResult(BaseModel):
    summary: str
    key_points: list[str]
    confidence: float

async def analyze_with_claude(content: str) -> AnalysisResult:
    client = Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-5-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": content}],
        tools=[{
            "name": "provide_analysis",
            "description": "Provide structured analysis",
            "input_schema": AnalysisResult.model_json_schema()
        }],
        tool_choice={"type": "tool", "name": "provide_analysis"}
    )

    # Extract tool use result
    tool_use = next(
        block for block in response.content
        if block.type == "tool_use"
    )
    return AnalysisResult(**tool_use.input)
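One caveat worth noting: next(...) raises StopIteration if the response contains no tool_use block (for example, when max_tokens truncates the output before the tool call completes). A small defensive helper, shown as a sketch rather than the project's actual code:

def extract_tool_input(response) -> dict:
    # Return the input of the first tool_use block, or fail with a clear error.
    for block in response.content:
        if block.type == "tool_use":
            return block.input
    raise ValueError("Claude response did not include a tool_use block")

With this helper, analyze_with_claude would end with return AnalysisResult(**extract_tool_input(response)).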
Custom Hooks (React)
import { useState, useCallback } from 'react'

interface UseApiState<T> {
  data: T | null
  loading: boolean
  error: string | null
}

export function useApi<T>(
  fetchFn: () => Promise<ApiResponse<T>>
) {
  const [state, setState] = useState<UseApiState<T>>({
    data: null,
    loading: false,
    error: null,
  })

  const execute = useCallback(async () => {
    setState(prev => ({ ...prev, loading: true, error: null }))
    const result = await fetchFn()
    if (result.success) {
      setState({ data: result.data!, loading: false, error: null })
    } else {
      setState({ data: null, loading: false, error: result.error! })
    }
  }, [fetchFn])

  return { ...state, execute }
}
Testing Requirements
Backend (pytest)
# Run all tests
poetry run pytest tests/
# Run with coverage
poetry run pytest tests/ --cov=. --cov-report=html
# Run specific test file
poetry run pytest tests/test_auth.py -v
Test structure:
import pytest
from httpx import AsyncClient
from main import app

@pytest.fixture
async def client():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        yield ac

@pytest.mark.asyncio
async def test_health_check(client: AsyncClient):
    response = await client.get("/health")
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"
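Tests for the ApiResponse envelope follow the same shape. The example below assumes the hypothetical /projects/{project_id} route sketched in the Code Patterns section, so treat it as illustrative only:

@pytest.mark.asyncio
async def test_missing_project_returns_error_envelope(client: AsyncClient):
    response = await client.get("/projects/does-not-exist")
    assert response.status_code == 200
    body = response.json()
    # The envelope reports failure in the body rather than via HTTP status.
    assert body["success"] is False
    assert body["error"] == "Project not found"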
Frontend (React Testing Library)
# Run tests
npm run test
# Run with coverage
npm run test -- --coverage
# Run E2E tests
npm run test:e2e
Test structure:
import { render, screen, fireEvent } from '@testing-library/react'
import { WorkspacePanel } from './WorkspacePanel'

describe('WorkspacePanel', () => {
  it('renders workspace correctly', () => {
    render(<WorkspacePanel />)
    expect(screen.getByRole('main')).toBeInTheDocument()
  })

  it('handles session creation', async () => {
    render(<WorkspacePanel />)
    fireEvent.click(screen.getByText('New Session'))
    expect(await screen.findByText('Session created')).toBeInTheDocument()
  })
})
Deployment Workflow
Pre-Deployment Checklist
- All tests passing locally
- npm run build succeeds (frontend)
- poetry run pytest passes (backend)
- No hardcoded secrets
- Environment variables documented
- Database migrations ready
Deployment Commands
# Build and deploy frontend
cd frontend && npm run build
gcloud run deploy frontend --source .
# Build and deploy backend
cd backend
gcloud run deploy backend --source .
Environment Variables
# Frontend (.env.local)
NEXT_PUBLIC_API_URL=https://api.example.com
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...
# Backend (.env)
DATABASE_URL=postgresql://...
ANTHROPIC_API_KEY=sk-ant-...
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_KEY=eyJ...
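On the backend, one common way to load and validate these variables is pydantic-settings; whether this project actually uses it is an assumption here, so read the sketch below as illustrative only:

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Field names map case-insensitively to the env vars listed above.
    model_config = SettingsConfigDict(env_file=".env")

    database_url: str
    anthropic_api_key: str
    supabase_url: str
    supabase_key: str

settings = Settings()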
Critical Rules
- No emojis in code, comments, or documentation
- Immutability – never mutate objects or arrays
- TDD – write tests before implementation
- 80% coverage minimum
- Many small files – 200-400 lines typical, 800 max
- No console.log in production code
- Proper error handling with try/catch
- Input validation with Pydantic/Zod
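To illustrate the input validation rule on the backend, here is a hedged sketch; the SessionCreate model and /sessions route are hypothetical, and in this project the real request models would live in models.py:

from fastapi import APIRouter
from pydantic import BaseModel, Field

router = APIRouter()

class SessionCreate(BaseModel):
    # Field constraints reject malformed input before any business logic runs.
    title: str = Field(min_length=1, max_length=200)
    project_id: str = Field(min_length=1)

@router.post("/sessions")
async def create_session(payload: SessionCreate) -> dict:
    # FastAPI returns HTTP 422 automatically when validation fails.
    return {"success": True, "data": {"title": payload.title}}

On the frontend, the same rule maps to Zod schemas validating the corresponding request payloads.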
Related Skills
- coding-standards.md – General coding best practices
- backend-patterns.md – API and database patterns
- frontend-patterns.md – React and Next.js patterns
- tdd-workflow/ – Test-driven development methodology