Page Inspect

https://langfuse.com/

Internal Links: 60
External Links: 5
Images: 4
Headings: 1

Page Content

Title: Langfuse
Description: Traces, evals, prompt management and metrics to debug and improve your LLM application. Integrates with Langchain, OpenAI, LlamaIndex, LiteLLM, and more.
HTML Size: 68 KB
Markdown Size: 3 KB
Fetched At: November 18, 2025

Page Structure

h1: Open Source LLM Engineering Platform

Markdown Content

Langfuse


# Open Source LLM Engineering Platform
Traces, evals, prompt management and metrics to debug and improve your LLM application.


- Observability
- Metrics
- Prompt Management
- Playground
- Evaluation
- Annotations
- Public API
- Python SDK
- JS/TS SDK

```python
from langfuse import observe

# drop-in wrapper adds OpenTelemetry tracing to OpenAI
# many other llm/agent integrations are available
from langfuse.openai import openai

@observe()  # decorate any function; all nested calls are auto-linked
def handle_request(text: str) -> str:
    res = openai.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": "Summarize in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return res.choices[0].message.content
```


Quickstart

Observability

Capture complete traces of your LLM applications/agents. Use traces to inspect failures and build eval datasets. Based on OpenTelemetry with support for all popular LLM/agent libraries.
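The "nested calls are auto-linked" behavior of the `@observe()` decorator can be illustrated with a minimal, hypothetical sketch. This is not the real Langfuse SDK: it is a stand-in that shows one way such a decorator can derive trace nesting from call context, using a context variable that tracks the currently open span.

```python
# Conceptual sketch only, NOT the Langfuse implementation: a context
# variable holds the currently open span, so nested decorated calls
# attach themselves as children of their caller's span automatically.
import contextvars
from functools import wraps

_current_span = contextvars.ContextVar("current_span", default=None)
traces = []  # completed root spans, one per top-level call


class Span:
    def __init__(self, name: str):
        self.name = name
        self.children = []


def observe():
    """Hypothetical stand-in for a @observe()-style decorator factory."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            span = Span(fn.__name__)
            parent = _current_span.get()
            if parent is None:
                traces.append(span)           # top-level call starts a new trace
            else:
                parent.children.append(span)  # nested call links to its parent
            token = _current_span.set(span)
            try:
                return fn(*args, **kwargs)
            finally:
                _current_span.reset(token)    # restore caller's span
        return wrapper
    return decorator


@observe()
def summarize(text: str) -> str:
    return text.split(".")[0]


@observe()
def handle_request(text: str) -> str:
    return summarize(text)


handle_request("First sentence. Second sentence.")
print(traces[0].name, [c.name for c in traces[0].children])
# handle_request ['summarize']
```

The real SDK additionally records timings, inputs/outputs, and errors, and exports spans via OpenTelemetry; the sketch only shows how the parent/child structure of a trace can fall out of the call stack without any manual wiring.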


* * *

Platform

- LLM Tracing
- Prompt Management
- Evaluation
- Human Annotation
- Datasets
- Metrics
- Playground

Integrations

- Python SDK
- JS/TS SDK
- OpenAI SDK
- Langchain
- Llama-Index
- Litellm
- Dify
- Flowise
- Langflow
- Vercel AI SDK
- Instructor
- API

Resources

- Documentation
- Interactive Demo
- Video demo (10 min)
- Changelog
- Roadmap
- Pricing
- Enterprise
- Self-hosting
- Open Source
- Why Langfuse?
- AI Engineering Library
- Status
- 🇯🇵 Japanese
- 🇰🇷 Korean
- 🇨🇳 Chinese

About

- Blog
- Careers
- About us
- Customers
- Support
- Talk to us
- OSS Friends
- Twitter
- LinkedIn

Legal

- Security
- Imprint
- Terms
- Privacy
- SOC 2 Type II
- ISO 27001
- GDPR
- HIPAA

© 2022-2025 Langfuse GmbH / Finto Technologies Inc.