integrate-flowlines-sdk-python

Repository: flowlines-ai/skills · updated 2 days ago
Install command
npx skills add https://github.com/flowlines-ai/skills --skill integrate-flowlines-sdk-python


Skill Documentation

Flowlines SDK for Python — Agent Skill

What is Flowlines

Flowlines is an observability SDK for LLM-powered Python applications. It instruments LLM provider APIs using OpenTelemetry, automatically capturing requests, responses, timing, and errors. It filters telemetry to only LLM-related spans and exports them via OTLP/HTTP to the Flowlines backend.

Supported LLM providers: OpenAI, Anthropic, AWS Bedrock, Cohere, Google Generative AI, Vertex AI, Together AI. Supported frameworks/tools: LangChain, LlamaIndex, MCP, Pinecone, ChromaDB, Qdrant.

Installation

Requires Python 3.11+.

pip install flowlines

Then install instrumentation extras for the providers used in the project:

# Single provider
pip install flowlines[openai]

# Multiple providers
pip install flowlines[openai,anthropic]

# All supported providers
pip install flowlines[all]

Available extras: openai, anthropic, bedrock, cohere, google-generativeai, vertexai, together, pinecone, chromadb, qdrant, langchain, llamaindex, mcp.

Integration

There are three integration modes. Pick the one that matches the project's existing OpenTelemetry setup.

Mode A — No existing OpenTelemetry setup (default)

Use this when the project does NOT already have its own OpenTelemetry TracerProvider. This is the most common case.

from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>")

This single call:

  1. Creates an OpenTelemetry TracerProvider
  2. Auto-detects which LLM libraries are installed and instruments them
  3. Filters spans to only export LLM-related telemetry
  4. Sends data to the Flowlines backend via OTLP/HTTP
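Why must Flowlines() run before any LLM client is created (see "Critical rules" below)? The following toy illustration (NOT Flowlines internals) assumes instrumentation hooks the client constructor, so only clients built after instrumentation are wrapped:

```python
# Toy illustration: instrumentation that patches the client constructor.
# Clients created BEFORE instrument() runs keep the unwrapped methods.
calls = []

class Client:
    """Stand-in for an LLM client such as openai.OpenAI."""
    def chat(self, msg):
        return f"echo:{msg}"

def instrument():
    original_init = Client.__init__
    def patched_init(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        orig_chat = self.chat
        def traced_chat(msg):
            calls.append(msg)      # record a "span" for this call
            return orig_chat(msg)
        self.chat = traced_chat    # wrap this instance's method
    Client.__init__ = patched_init

early = Client()   # created BEFORE instrument(): never traced
instrument()       # conceptually, what Flowlines() init does
late = Client()    # created after: traced

early.chat("a")
late.chat("b")
print(calls)       # only "b" was captured
```

The same ordering constraint applies to the real SDK: instantiate Flowlines() first, then create OpenAI(), Anthropic(), etc.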

Mode B1 — Existing OpenTelemetry setup (has_external_otel=True)

Use this when the project already manages its own TracerProvider.

from flowlines import Flowlines
from opentelemetry.sdk.trace import TracerProvider

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_external_otel=True)

provider = TracerProvider()

# Add the Flowlines span processor to the existing provider
processor = flowlines.create_span_processor()
provider.add_span_processor(processor)

# Instrument providers using the Flowlines instrumentor registry
for instrumentor in flowlines.get_instrumentors():
    instrumentor.instrument(tracer_provider=provider)
Notes:

  • create_span_processor() must be called exactly once.
  • get_instrumentors() returns instrumentor instances only for libraries that are currently installed.
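The Mode B1 wiring follows the standard OpenTelemetry span-processor pattern: the provider fans ended spans out to its processors, and the Flowlines processor forwards only LLM-related spans to its exporter. A toy model (plain classes, not the real OpenTelemetry API surface):

```python
class LlmFilteringProcessor:
    """Forwards only LLM-related spans, mirroring Flowlines' filtering."""
    def __init__(self, exported):
        self.exported = exported
    def on_end(self, span):
        if span["name"].startswith("llm."):
            self.exported.append(span)

class ToyTracerProvider:
    """Minimal stand-in for an OpenTelemetry TracerProvider."""
    def __init__(self):
        self._processors = []
    def add_span_processor(self, processor):
        self._processors.append(processor)
    def end_span(self, span):
        for processor in self._processors:
            processor.on_end(span)

exported = []
provider = ToyTracerProvider()
provider.add_span_processor(LlmFilteringProcessor(exported))
provider.end_span({"name": "llm.chat", "model": "gpt-4o"})
provider.end_span({"name": "db.query"})   # non-LLM span: dropped
print(exported)                           # only the llm.chat span
```

The span name prefix `llm.` is an assumption made for the sketch; the real processor's filtering criteria are internal to the SDK.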

Mode B2 — Traceloop already initialized (has_traceloop=True)

Use this when Traceloop SDK is already initialized. Traceloop must be initialized BEFORE Flowlines.

from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_traceloop=True)

Flowlines adds its span processor to the existing Traceloop TracerProvider. No instrumentor registration needed.

Critical rules

  1. Initialize Flowlines BEFORE creating LLM clients. The Flowlines() constructor must run before any LLM provider client is instantiated (e.g., OpenAI(), Anthropic()). If the client is created first, its calls will not be captured.

  2. Flowlines is a singleton. Only one Flowlines() instance may exist. A second call raises RuntimeError. Store the instance and reuse it. Do NOT instantiate it multiple times.

  3. has_external_otel and has_traceloop are mutually exclusive. Setting both to True raises ValueError.

  4. user_id is mandatory in context(). The context manager requires user_id as a keyword argument. conversation_id is optional.

  5. Context does not auto-propagate to child threads/tasks. If using threads or async tasks, set context in each thread/task explicitly.
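Rule 5 can be seen in stdlib terms. Flowlines' context presumably rides on Python contextvars, which do NOT flow into threads started with threading.Thread; propagation must be explicit:

```python
import contextvars
import threading

user_id = contextvars.ContextVar("user_id", default=None)
user_id.set("user-42")

seen = []
def worker():
    seen.append(user_id.get())

# Plain thread: starts with a fresh context and sees the default (None).
t1 = threading.Thread(target=worker)
t1.start(); t1.join()

# Explicit propagation: snapshot the current context, run inside it.
ctx = contextvars.copy_context()
t2 = threading.Thread(target=lambda: ctx.run(worker))
t2.start(); t2.join()

print(seen)   # [None, 'user-42']
```

With Flowlines, the equivalent fix is to call flowlines.context(...) (or set_context) inside each thread or task body.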

User and conversation tracking

Tag LLM calls with user/conversation IDs using the context manager:

with flowlines.context(user_id="user-42", conversation_id="conv-abc"):
    client.chat.completions.create(...)  # this span gets user_id and conversation_id

conversation_id is optional:

with flowlines.context(user_id="user-42"):
    client.chat.completions.create(...)

For cases where a context manager doesn’t fit (e.g., across request boundaries in web frameworks), use the imperative API:

token = Flowlines.set_context(user_id="user-42", conversation_id="conv-abc")
try:
    client.chat.completions.create(...)
finally:
    Flowlines.clear_context(token)

set_context() / clear_context() are static methods on the Flowlines class.

Constructor parameters

Flowlines(
    api_key: str,                    # Required. The Flowlines API key.
    endpoint: str = "https://ingest.flowlines.ai",  # Backend URL.
    has_external_otel: bool = False,  # True if project has its own TracerProvider.
    has_traceloop: bool = False,      # True if Traceloop is already initialized.
    verbose: bool = False,            # True to enable debug logging to stderr.
)

Public API summary

  • Flowlines(api_key, ...) - Constructor. Initializes the SDK (singleton).
  • flowlines.context(user_id=..., conversation_id=...) - Context manager to tag spans with user/conversation.
  • Flowlines.set_context(user_id=..., conversation_id=...) - Static. Imperative context setting; returns a token.
  • Flowlines.clear_context(token) - Static. Restores the previous context using the token.
  • flowlines.create_span_processor() - Returns a SpanProcessor. Mode B1 only. Call once.
  • flowlines.get_instrumentors() - Returns a list of available instrumentor instances.
  • flowlines.shutdown() - Flush and shut down. Called automatically via atexit.

Imports

The public API is exported from the top-level package:

from flowlines import Flowlines
from flowlines import FlowlinesExporter  # only needed for advanced use

Verbose / debug mode

Pass verbose=True to print debug information to stderr:

flowlines = Flowlines(api_key="...", verbose=True)

This logs instrumentor discovery, span filtering, and export results.

Shutdown

flowlines.shutdown() is registered as an atexit handler automatically. It is idempotent — safe to call multiple times. You can call it explicitly if you need to ensure spans are flushed before the process ends (e.g., in serverless environments).
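Idempotent shutdown in miniature (a sketch, not Flowlines source): a guard flag makes repeated calls no-ops, and atexit.register() ensures a final flush at interpreter exit:

```python
import atexit

class Sdk:
    def __init__(self):
        self._shut_down = False
        self.flush_count = 0
        atexit.register(self.shutdown)   # final flush at process exit
    def shutdown(self):
        if self._shut_down:              # second and later calls are no-ops
            return
        self._shut_down = True
        self.flush_count += 1            # stand-in for "flush pending spans"

sdk = Sdk()
sdk.shutdown()   # explicit call, e.g. before a serverless function returns
sdk.shutdown()   # safe: idempotent
print(sdk.flush_count)   # 1
```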

Common mistakes to avoid

  • Do NOT create the LLM client before initializing Flowlines — spans will be missed.
  • Do NOT instantiate Flowlines() more than once — it raises RuntimeError.
  • Do NOT set both has_external_otel=True and has_traceloop=True.
  • Do NOT forget to install the instrumentation extras for the providers you use (e.g., flowlines[openai]).
  • Do NOT assume context propagates to child threads — set it explicitly in each thread/task.
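The singleton guard behind the second bullet can be sketched in a few lines (an illustration of the documented behavior, not Flowlines source): the second construction attempt raises RuntimeError.

```python
class SingletonSdk:
    _instance = None

    def __init__(self):
        if SingletonSdk._instance is not None:
            raise RuntimeError("SingletonSdk is already initialized")
        SingletonSdk._instance = self

first = SingletonSdk()
try:
    SingletonSdk()              # second instantiation
except RuntimeError as exc:
    print(exc)                  # SingletonSdk is already initialized
```

The practical consequence: create the Flowlines instance once at application startup, store it (e.g., in a module-level variable), and import that instance everywhere else.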