cognite

📁 cognite/com 📅 Jan 29, 2026
4
Total installs
4
Weekly installs
#27778
Site-wide rank
Install command
npx skills add https://docs.cognite.com

Agent install distribution

claude-code 3
github-copilot 3
codex 3
gemini-cli 3
opencode 3

Skill documentation

Capabilities

Cognite Data Fusion (CDF) is an industrial DataOps platform that enables organizations to stream, integrate, transform, and analyze industrial data at scale. Agents can leverage CDF to build data pipelines, create AI-powered applications, manage industrial assets, and automate complex workflows. The platform provides comprehensive APIs, SDKs, and tools for data ingestion, contextualization, visualization, and intelligent automation across industrial operations.

Skills

Data Integration and Extraction

  • Extract data from multiple sources: Connect to OPC UA servers, PI Data Archive, SAP systems, databases (SQL Server, Oracle, PostgreSQL, MySQL), and other industrial systems using prebuilt extractors
  • Stream real-time data: Use OPC UA extractor for real-time subscriptions and time series streaming with parallel historical backfill
  • Batch extract data: Extract data from databases using ODBC drivers, PI Asset Framework elements, SAP OData endpoints, and WITSML sources
  • Hosted extractors: Configure serverless extractors for REST APIs, Kafka, MQTT, EventHub, and custom data sources without managing infrastructure
  • Extract from files: Use file extractor to ingest data from SharePoint, local file systems, and other file sources
  • Extraction pipelines: Monitor and manage extraction pipelines with state tracking and heartbeat reporting
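
State tracking and heartbeat reporting for extraction pipelines can be sketched as follows. This is an illustrative Python model of the concept, not Cognite's Extractor Utils API; the class and method names are invented for the sketch:

```python
import time

class ExtractionState:
    """Minimal sketch of extractor state tracking: remembers the last
    successfully extracted timestamp per source so a restart can resume
    (and a historical backfill can run in parallel) without re-reading data."""

    def __init__(self):
        self._high_watermarks = {}  # source id -> last extracted epoch ms

    def record(self, source_id, timestamp_ms):
        # Only move the watermark forward; late-arriving points never regress it.
        current = self._high_watermarks.get(source_id, 0)
        self._high_watermarks[source_id] = max(current, timestamp_ms)

    def resume_from(self, source_id):
        # Where a restarted extractor would pick up for this source.
        return self._high_watermarks.get(source_id, 0)

    def heartbeat(self):
        # Payload an extraction-pipeline monitor could ingest as "still alive".
        return {"status": "seen",
                "sources": len(self._high_watermarks),
                "reported_at": int(time.time() * 1000)}
```

In the real pipelines, the watermark store is persisted and the heartbeat is reported to CDF so stalled extractors surface in monitoring.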

Data Transformation and Contextualization

  • Transform data: Use built-in SQL-based transformations to reshape, enrich, and validate data quality
  • Entity matching: Apply machine learning and rules engines to match entities from different sources to assets with confidence scoring
  • Document parsing: Extract structured data from documents and PDFs using AI-powered parsing into data model views
  • Diagram parsing: Parse engineering diagrams and P&IDs to extract relationships and contextualize assets
  • Build relationships: Create and manage relationships between assets, time series, files, and events to build a connected knowledge graph
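
The confidence-scored entity matching above can be sketched with a simple rules engine. The normalization and scoring rules below are invented for illustration (CDF's matcher combines ML models with configurable rules):

```python
import re

def normalize(tag):
    """Normalize an equipment tag so '21-PT-1019' and '21_PT_1019' compare equal."""
    return re.sub(r"[^A-Z0-9]", "", tag.upper())

def match_entities(sources, assets):
    """Rules-engine sketch: an exact normalized match scores 1.0, a match on a
    trailing token scores 0.6, otherwise no match. Returns
    (source, asset, confidence) triples, mirroring confidence-scored matching."""
    results = []
    for s in sources:
        best = (None, 0.0)
        for a in assets:
            if normalize(s) == normalize(a):
                best = (a, 1.0)
                break
            if normalize(s).endswith(normalize(a)) or normalize(a).endswith(normalize(s)):
                if 0.6 > best[1]:
                    best = (a, 0.6)
        if best[0] is not None:
            results.append((s, best[0], best[1]))
    return results
```

A confidence threshold then decides which matches are applied automatically and which are queued for review.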

Data Modeling and Knowledge Graph

  • Create flexible data models: Define custom data models using GraphQL Data Modeling Language (DML) with containers, views, and instances
  • Property graph queries: Query complex relationships across the knowledge graph using advanced filters, recursive edge traversal, and aggregations
  • GraphQL interface: Query and mutate data using GraphQL with list, get, search, and aggregate operations
  • Core data models: Use Cognite’s pre-built core data model for assets, time series, files, and activities
  • Data spaces and instances: Organize data into spaces and create typed instances with properties and relationships
  • Full-text search: Search instances across the knowledge graph using semantic search capabilities
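
Recursive edge traversal over the property graph can be illustrated with a small breadth-first walk. The edge map below is a stand-in for graph instances and edges, not CDF's query API:

```python
from collections import deque

def traverse(edges, start, max_hops):
    """Sketch of recursive edge traversal over a property graph: breadth-first
    walk from `start`, following directed edges up to `max_hops`, returning
    every reachable instance. `edges` maps a node's external id to the ids it
    points at (e.g. asset -> time series, asset -> document)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen
```

Multi-hop questions like "all time series under this plant" reduce to exactly this kind of bounded traversal, which the query service executes server-side with filters applied at each hop.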

Time Series and Events Management

  • Ingest time series: Stream numeric, string, and state time series data with millisecond resolution
  • Time series aggregations: Calculate aggregations (average, sum, min, max, count) over time ranges
  • Synthetic time series: Create calculated time series based on expressions and other time series
  • Events management: Store complex events with start/end times, metadata, and relationships to assets
  • Data point subscriptions: Subscribe to changes in time series data for real-time monitoring
  • State time series: Track state changes over time (on/off, connected/disconnected, etc.)
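
The shape of time series aggregation can be sketched locally: bucket datapoints into fixed windows and compute per-bucket statistics. The server-side aggregation endpoints return rows of this shape; the implementation below is only an in-memory illustration:

```python
def aggregate(datapoints, granularity_ms, aggregates=("average", "min", "max", "count")):
    """Bucket (timestamp_ms, value) pairs into fixed windows of
    `granularity_ms` and compute the requested statistics per bucket."""
    buckets = {}
    for ts, value in datapoints:
        # Align each timestamp down to the start of its window.
        buckets.setdefault(ts - ts % granularity_ms, []).append(value)
    out = []
    for start in sorted(buckets):
        values = buckets[start]
        row = {"timestamp": start}
        if "average" in aggregates:
            row["average"] = sum(values) / len(values)
        if "min" in aggregates:
            row["min"] = min(values)
        if "max" in aggregates:
            row["max"] = max(values)
        if "count" in aggregates:
            row["count"] = len(values)
        out.append(row)
    return out
```

Requesting pre-computed aggregates instead of raw datapoints is what keeps dashboard queries fast over years of high-frequency data.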

Asset and Resource Management

  • Asset hierarchies: Build and manage asset hierarchies representing physical plant structures
  • Asset contextualization: Link assets to time series, files, 3D models, and events
  • Files and documents: Store and manage documents, images, and files linked to assets
  • Labels and classifications: Create managed terms to annotate and group assets, files, and relationships
  • Sequences: Store ordered data sequences with metadata and asset relationships
  • Raw data staging: Use RAW database for staging unstructured data before transformation
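
An asset hierarchy is a parent-child tree, and subtree lookup (used for both navigation and access scoping) can be sketched in a few lines. The dict field names here are placeholders, not the SDK's asset schema:

```python
def subtree(assets, root_external_id):
    """Asset hierarchy sketch: `assets` is a list of dicts with 'external_id'
    and 'parent_external_id' (None at a root). Returns every external id in
    the subtree under `root_external_id` — the same scoping used when access
    is restricted to an asset subtree."""
    children = {}
    for asset in assets:
        children.setdefault(asset["parent_external_id"], []).append(asset["external_id"])
    result, stack = [], [root_external_id]
    while stack:
        node = stack.pop()
        result.append(node)
        stack.extend(children.get(node, []))
    return result
```

Time series, files, and events linked to any asset in the subtree are then reachable from the root, which is what makes plant-level queries possible.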

3D Models and Visualization

  • Upload 3D models: Support CAD formats (RVM, OBJ, FBX, STEP, IFC, DWG, SOLIDWORKS, Parasolid, NWD)
  • Point cloud management: Upload and visualize point cloud data (LAS, LAZ, E57, PTX, PTS formats)
  • 360-degree images: Upload and contextualize 360-degree images for spatial visualization
  • 3D contextualization: Link 3D model nodes to assets and time series for interactive visualization
  • Scene management: Create unified 3D scenes combining CAD, point clouds, and 360 images
  • REVEAL 3D SDK: Integrate 3D visualizations in web applications with high-performance rendering

AI and Automation

  • Atlas AI agents: Build and deploy generative AI agents for industrial workflows using language models and knowledge graphs
  • Agent tools: Access built-in tools for REST API calls, Python code execution, and data queries
  • Agent evaluation: Test agents with specific prompts to verify performance and identify improvements
  • Data workflows: Automate multi-step processes with scheduled or event-triggered workflows
  • Workflow tasks: Execute transformations, functions, API requests, and simulations as workflow tasks
  • Cognite Functions: Deploy serverless Python functions for custom logic, calculations, and integrations
  • Function scheduling: Schedule functions to run on intervals or trigger on data changes
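
A data workflow is essentially a dependency-ordered set of tasks (transformations, functions, API requests). A minimal sketch of that execution model, using the standard library's topological sorter rather than the actual workflow service:

```python
from graphlib import TopologicalSorter

def run_workflow(tasks, dependencies):
    """Data-workflow sketch: `tasks` maps a task name to a callable (standing
    in for a transformation, function, or API request); `dependencies` maps a
    task to the set of tasks that must finish first. Runs the tasks in
    dependency order and returns the execution order."""
    order = list(TopologicalSorter(dependencies).static_order())
    for name in order:
        tasks[name]()
    return order
```

The real workflow service adds scheduling, event triggers, retries, and parallel execution of independent tasks on top of this ordering.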

Dashboards and Analytics

  • Grafana integration: Connect CDF as a data source for Grafana dashboards with real-time monitoring
  • Power BI and Excel: Query CDF data using OData and REST API connectors for analysis and reporting
  • OData services: Access time series, events, and data model instances through OData V4 endpoints
  • Custom queries: Build complex queries with filtering, aggregation, and parameterization
  • Template variables: Create dynamic dashboards with nested asset hierarchy navigation
  • Incremental refresh: Implement efficient data refresh strategies for large datasets
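
The queries Power BI and Excel issue against the OData endpoints boil down to URLs with `$filter`, `$select`, and `$top` options. A sketch of building such a URL (the base URL and entity-set names below are placeholders, not the documented CDF endpoint layout):

```python
from urllib.parse import quote

def odata_query(base_url, entity_set, filter_expr=None, select=None, top=None):
    """Build an OData V4 query URL with optional $filter, $select, and $top
    system query options, percent-encoding the filter expression."""
    params = []
    if filter_expr:
        params.append("$filter=" + quote(filter_expr))
    if select:
        params.append("$select=" + ",".join(select))
    if top is not None:
        params.append(f"$top={top}")
    return f"{base_url}/{entity_set}" + ("?" + "&".join(params) if params else "")
```

Incremental refresh strategies work by parameterizing the `$filter` expression over a timestamp column so each refresh only pulls new rows.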

Field Operations and Maintenance

  • InField application: Enable field workers to complete tasks, capture measurements, and record observations
  • Work order management: Plan, schedule, and assign maintenance work orders from SAP or other systems
  • Checklists and templates: Create reusable checklists from templates for field activities
  • Observations and measurements: Capture field data including images, videos, and sensor readings
  • Maintenance orders: Track maintenance activities with status, priority, and asset relationships
  • Activity tracking: Monitor field work progress and completion status in real-time

APIs and SDKs

  • REST API: Access all CDF resources through comprehensive RESTful endpoints with versioning support
  • Python SDK: Full-featured SDK for data science, backend development, and high-throughput operations
  • JavaScript SDK: Browser and Node.js support for web applications with React and Angular integration
  • Specialized SDKs: Java, Scala, .NET, Rust SDKs for various development scenarios
  • Pygen: Generate Python SDKs from data models for type-safe data access
  • Extractor Utils: Python and .NET libraries for building custom extractors with configuration and state management
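
Every SDK ultimately speaks the REST API with a bearer token. A stdlib-only sketch of constructing (not sending) such a request — the `{cluster}.cognitedata.com/api/v1/projects/{project}/...` URL shape follows the pattern used in the docs, but treat it as an assumption here:

```python
import json
import urllib.request

def build_cdf_request(cluster, project, resource, token, payload):
    """Construct an authenticated POST against a CDF-style REST endpoint.
    Only builds the urllib Request object; nothing is sent."""
    url = f"https://{cluster}.cognitedata.com/api/v1/projects/{project}/{resource}"
    body = json.dumps(payload).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
```

The official SDKs wrap this with token refresh, retries, pagination, and typed resource classes, which is why they are preferred over raw HTTP for anything non-trivial.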

Access Control and Security

  • Identity and access management: Integrate with OIDC providers (Microsoft Entra ID, Amazon Cognito, Cognite CDF)
  • Capabilities-based access: Define granular permissions by resource type, action, and scope
  • Groups and roles: Manage user and service account permissions through group membership
  • API keys and tokens: Authenticate applications with API keys or bearer tokens
  • Private endpoints: Configure private link access for secure cloud connectivity (AWS, Azure)
  • Data set scoping: Restrict access to specific data sets and asset subtrees
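
The resource/action/scope model above can be sketched as a simple capability check. This models the concept only, not CDF's actual policy engine, and the field names are invented:

```python
def is_allowed(capabilities, resource, action, scope_id=None):
    """Capability-check sketch: each capability grants a set of actions on one
    resource type, optionally scoped to specific data set ids (an empty scope
    list means all data sets). Returns True if any capability permits the
    requested action on the requested scope."""
    for cap in capabilities:
        if cap["resource"] != resource or action not in cap["actions"]:
            continue
        if not cap["scope"] or scope_id in cap["scope"]:
            return True
    return False
```

In practice capabilities are attached to groups, so a principal's effective permissions are the union of the capabilities of every group it belongs to.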

Workflows

Build a Complete Data Pipeline

  1. Extract data: Use OPC UA extractor to stream real-time time series from industrial equipment
  2. Stage data: Store raw data in RAW database for initial validation
  3. Transform: Apply SQL transformations to reshape data and match entities to assets
  4. Contextualize: Use entity matching to link time series to asset hierarchies
  5. Model: Create data model instances representing equipment, measurements, and relationships
  6. Query: Use GraphQL to retrieve contextualized data for analytics
  7. Visualize: Display results in Grafana dashboards with real-time updates
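
Steps 2-4 of this pipeline can be shown in miniature: stage raw rows, apply a SQL transformation, and contextualize the output. Here sqlite3 stands in for CDF's SQL transformations, and the table, column, and asset names are made up for the sketch:

```python
import sqlite3

def run_pipeline(raw_rows):
    """Miniature pipeline: stage raw (tag, ts, value) rows, transform with SQL
    (reshape, validate by dropping NULL values, aggregate per tag), then
    contextualize each tag by linking it to a hypothetical asset external id."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_sensor (tag TEXT, ts INTEGER, value REAL)")
    con.executemany("INSERT INTO raw_sensor VALUES (?, ?, ?)", raw_rows)
    rows = con.execute(
        "SELECT tag, COUNT(*), AVG(value) FROM raw_sensor "
        "WHERE value IS NOT NULL GROUP BY tag ORDER BY tag"
    ).fetchall()
    con.close()
    return [{"asset": f"asset:{tag}", "count": n, "average": avg}
            for tag, n, avg in rows]
```

In CDF the staging table would live in RAW, the SQL would run as a scheduled transformation, and the asset link would come from entity matching rather than string formatting.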

Deploy an AI Agent for Maintenance

  1. Prepare data: Ensure assets, time series, maintenance orders, and documents are properly linked
  2. Build agent: Create Atlas AI agent with prompts for maintenance analysis
  3. Configure tools: Enable REST API calls and Python code execution for agent actions
  4. Test: Evaluate agent responses to maintenance-related questions
  5. Deploy: Activate agent for production use
  6. Monitor: Track agent performance and refine prompts based on results

Automate Field Operations

  1. Import work orders: Extract maintenance work orders from SAP using SAP extractor
  2. Create templates: Define field checklists and task templates in InField
  3. Assign work: Create and assign checklists to field workers from work orders
  4. Capture data: Field workers record observations, measurements, and images
  5. Sync results: Automatically sync completed work back to work management system
  6. Track progress: Monitor field work completion and asset status updates

Implement Real-Time Monitoring Dashboard

  1. Connect data sources: Configure OPC UA, PI, or database extractors
  2. Create time series: Stream sensor data into CDF time series service
  3. Link to assets: Contextualize time series with asset hierarchies
  4. Build queries: Create Grafana queries with filtering and aggregation
  5. Design dashboard: Build interactive dashboards with template variables
  6. Set alerts: Configure event annotations for threshold violations
  7. Share insights: Publish dashboards across organization
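
The threshold alerting in step 6 can be sketched as collapsing consecutive out-of-range datapoints into intervals — the shape a CDF event or Grafana annotation for a violation would take. The function below is an illustration, not the alerting engine:

```python
def alert_intervals(datapoints, high):
    """Collapse consecutive (timestamp, value) datapoints with value > high
    into (start, end) intervals, one per violation episode."""
    intervals, start, last = [], None, None
    for ts, value in datapoints:
        if value > high:
            if start is None:
                start = ts  # episode begins
            last = ts
        elif start is not None:
            intervals.append((start, last))  # episode ended
            start = None
    if start is not None:
        intervals.append((start, last))  # still violating at end of data
    return intervals
```

Each interval maps naturally onto an event with start/end times linked to the offending asset, which is how violations become queryable history rather than transient dashboard markers.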

Integration

  • Industrial systems: OPC UA, PI Data Archive, SAP, databases, WITSML, OSDU, Studio for Petrel
  • Cloud platforms: AWS (private link), Microsoft Azure (private link), Google Cloud
  • BI tools: Grafana, Power BI, Excel with OData and REST connectors
  • Data platforms: PostgreSQL gateway for ETL tools, Kafka, MQTT, EventHub for streaming
  • CI/CD systems: GitHub Actions, Azure DevOps, GitLab for automated deployments
  • External APIs: Call any REST API from workflows, functions, and Atlas AI agents
  • File systems: SharePoint, local file systems, cloud storage via file extractor
  • Simulators: PROSPER and GAP simulator connectors for well performance and production network analysis

Context

Industrial DataOps: CDF is purpose-built for industrial operations, handling high-volume time series data, asset hierarchies, and complex relationships between equipment, documents, and operational events.

Knowledge Graph Architecture: Data is organized as a property graph where assets, time series, files, and events are interconnected through relationships, enabling multi-hop queries and semantic understanding.

Real-time and Historical Data: The platform handles both streaming real-time data (millisecond resolution) and historical data backfill, supporting operational monitoring as well as historical analysis.

Contextualization First: Raw data from multiple sources is automatically matched and linked to create a unified, contextualized view of industrial assets and operations.

Scalability: Designed for enterprise scale with support for millions of assets, billions of time series data points, and complex data models across global operations.

Security and Compliance: Enterprise-grade security with OIDC authentication, granular access control, private endpoints, and compliance with industrial data governance requirements.

Extensibility: Agents can extend CDF with custom functions, data models, extractors, and integrations while maintaining data governance and access control.


For additional documentation and navigation, see: https://docs.cognite.com/llms.txt