npx skills add https://github.com/composiohq/awesome-claude-skills --skill "Apify Automation"
Skill Documentation
Apify Automation
Run Apify web scraping Actors and manage datasets directly from Claude Code. Execute crawlers synchronously or asynchronously, retrieve structured data, create reusable tasks, and inspect run logs without leaving your terminal.
Toolkit docs: composio.dev/toolkits/apify
Setup
- Add the Composio MCP server to your configuration: https://rube.app/mcp
- Connect your Apify account when prompted. The agent will provide an authentication link.
- Browse available Actors at apify.com/store. Each Actor has its own unique input schema — always check the Actor’s documentation before running.
Core Workflows
1. Run an Actor Synchronously and Get Results
Execute an Actor and immediately retrieve its dataset items in a single call. Best for quick scraping jobs.
Tool: APIFY_RUN_ACTOR_SYNC_GET_DATASET_ITEMS
Key parameters:
- `actorId` (required) — Actor ID in format `username/actor-name` (e.g., `compass/crawler-google-places`)
- `input` — JSON input object matching the Actor’s schema. Each Actor has unique field names — check apify.com/store for the exact schema.
- `limit` — max items to return
- `offset` — skip items for pagination
- `format` — `json` (default), `csv`, `jsonl`, `html`, `xlsx`, `xml`
- `timeout` — run timeout in seconds
- `waitForFinish` — max wait time (0–300 seconds)
- `fields` — comma-separated list of fields to include
- `omit` — comma-separated list of fields to exclude
Example prompt: “Run the Google Places scraper for ‘restaurants in New York’ and return the first 50 results”
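As a sketch, the example prompt above maps to a single tool call. Assembling its arguments might look like the following Python; the helper function is hypothetical, and the Google Places field name `searchStringsArray` comes from that Actor’s schema (verify it on apify.com/store):

```python
def build_sync_run_args(actor_id: str, actor_input: dict, limit: int = 50) -> dict:
    """Assemble arguments for a synchronous Actor run that returns dataset items."""
    return {
        "actorId": actor_id,
        "input": actor_input,          # must match the Actor's own schema
        "limit": limit,                # max items to return
        "format": "json",              # default, most reliable for automation
        "waitForFinish": 300,          # maximum allowed wait, in seconds
    }

# "Run the Google Places scraper for 'restaurants in New York',
# return the first 50 results"
args = build_sync_run_args(
    "compass/crawler-google-places",
    {"searchStringsArray": ["restaurants in New York"]},
    limit=50,
)
```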
2. Run an Actor Asynchronously
Trigger an Actor run without waiting for completion. Use for long-running scraping jobs.
Tool: APIFY_RUN_ACTOR
Key parameters:
- `actorId` (required) — Actor slug or ID
- `body` — JSON input object for the Actor
- `memory` — memory limit in MB (must be a power of 2, minimum 128)
- `timeout` — run timeout in seconds
- `maxItems` — cap on returned items
- `build` — specific build tag (e.g., `latest`, `beta`)
Follow up with APIFY_GET_DATASET_ITEMS to retrieve results using the run’s datasetId.
Example prompt: “Start the web scraper Actor for example.com asynchronously with 1024MB memory”
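The memory constraint is easy to get wrong. A minimal validity check (a hypothetical helper, not part of the toolkit) can be written as:

```python
def valid_actor_memory(mb: int) -> bool:
    """True when mb is at least 128 and an exact power of two,
    matching the constraint on the `memory` parameter."""
    return mb >= 128 and (mb & (mb - 1)) == 0

# 1024 MB (as in the example prompt) is valid; 1000 MB is not,
# since 1000 is not a power of two.
```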
3. Retrieve Dataset Items
Fetch data from a specific dataset with pagination, field selection, and filtering.
Tool: APIFY_GET_DATASET_ITEMS
Key parameters:
- `datasetId` (required) — dataset identifier
- `limit` (default/max 1000) — items per page
- `offset` (default 0) — pagination offset
- `format` — `json` (recommended), `csv`, `xlsx`
- `fields` — include only specific fields
- `omit` — exclude specific fields
- `clean` — remove Apify-specific metadata
- `desc` — reverse order (newest first)
Example prompt: “Get the first 500 items from dataset myDatasetId in JSON format”
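Because `limit` caps at 1000, larger datasets need an offset loop. A minimal sketch, with `get_items` standing in for an APIFY_GET_DATASET_ITEMS call (the stub below is invented purely for illustration):

```python
def fetch_all(get_items, page_size: int = 1000) -> list:
    """Collect every item by stepping `offset` until a short page comes back."""
    items, offset = [], 0
    while True:
        page = get_items(limit=page_size, offset=offset)
        items.extend(page)
        if len(page) < page_size:   # short (or empty) page: dataset exhausted
            return items
        offset += page_size

# Stub "dataset" of 2500 items; the loop terminates after three pages.
data = list(range(2500))
stub_get_items = lambda limit, offset: data[offset:offset + limit]
all_items = fetch_all(stub_get_items)
```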
4. Inspect Actor Details
View Actor metadata, input schema, and configuration before running it.
Tool: APIFY_GET_ACTOR
Key parameters:
- `actorId` (required) — Actor ID in format `username/actor-name` or hex ID
Example prompt: “Show me the details and input schema for the apify/web-scraper Actor”
5. Create Reusable Tasks
Configure reusable Actor tasks with preset inputs for recurring scraping jobs.
Tool: APIFY_CREATE_TASK
Configure a task once, then trigger it repeatedly with consistent input parameters. Useful for scheduled or recurring data collection workflows.
Example prompt: “Create an Apify task for the Google Search scraper with default query ‘AI startups’ and US location”
6. Manage Runs and Datasets
List Actor runs, browse datasets, and inspect run details for monitoring and debugging.
Tools: APIFY_GET_LIST_OF_RUNS, APIFY_DATASETS_GET, APIFY_DATASET_GET, APIFY_GET_LOG
For listing runs:
- Filter by Actor and optionally by status
- Get `datasetId` from run details for data retrieval
For dataset management:
- `APIFY_DATASETS_GET` — list all your datasets with pagination
- `APIFY_DATASET_GET` — get metadata for a specific dataset
For debugging:
- `APIFY_GET_LOG` — retrieve execution logs for a run or build
Example prompt: “List the last 10 runs for the web scraper Actor and show logs for the most recent one”
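The example prompt chains two calls: list runs, then fetch the log for the newest one. Selecting the most recent run from the list response might look like this; the `startedAt` field name is an assumption about the response shape, so inspect the actual run objects before relying on it:

```python
def most_recent_run(runs: list) -> dict:
    """Return the run with the latest ISO-8601 `startedAt` timestamp.
    ISO-8601 strings in the same timezone sort lexicographically,
    so plain string comparison is enough here."""
    return max(runs, key=lambda r: r["startedAt"])

runs = [
    {"id": "run1", "startedAt": "2024-05-01T10:00:00.000Z"},
    {"id": "run2", "startedAt": "2024-05-02T09:30:00.000Z"},
]
newest = most_recent_run(runs)   # pass newest["id"] to the log-retrieval tool
```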
Known Pitfalls
- **Actor input schemas vary wildly:** every Actor has its own unique input fields. Generic field names like `queries` or `search_terms` will be rejected. Always check the Actor’s page on apify.com/store for the exact field names (e.g., `searchStringsArray` for Google Maps, `startUrls` for web scrapers).
- **URL format requirements:** always include the full protocol (`https://` or `http://`) in URLs. Many Actors require URLs as objects with a `url` property: `{"startUrls": [{"url": "https://example.com"}]}`.
- **Dataset pagination cap:** `APIFY_GET_DATASET_ITEMS` has a max `limit` of 1000 per call. For large datasets, loop with `offset` to collect all items.
- **Enum values are lowercase:** most Actors expect lowercase enum values (e.g., `relevance` not `RELEVANCE`, `all` not `ALL`).
- **Sync timeout at 5 minutes:** `APIFY_RUN_ACTOR_SYNC_GET_DATASET_ITEMS` has a maximum `waitForFinish` of 300 seconds. For longer runs, use `APIFY_RUN_ACTOR` (async) and poll with `APIFY_GET_DATASET_ITEMS`.
- **Data volume costs:** large datasets can be expensive to fetch. Prefer moderate limits and incremental processing to avoid timeouts or memory pressure.
- **JSON format recommended:** while CSV/XLSX formats are available, JSON is the most reliable for automated processing. Avoid CSV/XLSX for downstream automation.
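The first two pitfalls above (missing protocols and bare URL strings) can be guarded against in one step. A hypothetical helper that wraps plain URL strings into the object form many Actors expect:

```python
def normalize_start_urls(urls: list) -> dict:
    """Wrap plain URL strings into {"url": ...} objects and reject
    any URL that omits its protocol."""
    wrapped = []
    for u in urls:
        if not (u.startswith("http://") or u.startswith("https://")):
            raise ValueError(f"URL must include protocol: {u}")
        wrapped.append({"url": u})
    return {"startUrls": wrapped}

payload = normalize_start_urls(["https://example.com"])
```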
Quick Reference
| Tool Slug | Description |
|---|---|
| `APIFY_RUN_ACTOR_SYNC_GET_DATASET_ITEMS` | Run Actor synchronously and get results immediately |
| `APIFY_RUN_ACTOR` | Run Actor asynchronously (trigger and return) |
| `APIFY_RUN_ACTOR_SYNC` | Run Actor synchronously, return output record |
| `APIFY_GET_ACTOR` | Get Actor metadata and input schema |
| `APIFY_GET_DATASET_ITEMS` | Retrieve items from a dataset (paginated) |
| `APIFY_DATASET_GET` | Get dataset metadata (item count, etc.) |
| `APIFY_DATASETS_GET` | List all user datasets |
| `APIFY_CREATE_TASK` | Create a reusable Actor task |
| `APIFY_GET_TASK_INPUT` | Inspect a task’s stored input |
| `APIFY_GET_LIST_OF_RUNS` | List runs for an Actor |
| `APIFY_GET_LOG` | Get execution logs for a run |
Powered by Composio