Connect LLMs & GenAI with OpenLIT

Open Source Platform for AI Engineering


Privacy first

See exactly what our code does. Or host it yourself.

OpenLIT allows you to simplify your AI development workflow, especially for Generative AI and LLMs. It streamlines essential tasks like experimenting with LLMs, organizing and versioning prompts, and securely handling API keys.

Visualize your Traces

Application and Request Tracing

Provides end-to-end tracing of requests across different providers to improve performance visibility.

Detailed Span Tracking
Monitor each span for response time and efficiency.
Supporting OpenTelemetry
Automatically track your AI apps with OpenTelemetry to gain insights into performance and behavior.
Cost tracking
Tracks your costs per request, making it easier to make informed budgeting and pricing decisions.
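The cost tracking above boils down to multiplying token usage by per-model pricing. Here is a minimal illustrative sketch; the model names and per-1K-token prices below are placeholder assumptions, not OpenLIT's actual pricing table.

```python
# Placeholder per-1K-token prices (USD) -- illustrative only.
PRICING_PER_1K = {
    "gpt-4o": {"prompt": 0.005, "completion": 0.015},
    "claude-3-haiku": {"prompt": 0.00025, "completion": 0.00125},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single LLM request."""
    rates = PRICING_PER_1K[model]
    return (prompt_tokens / 1000) * rates["prompt"] + (
        completion_tokens / 1000
    ) * rates["completion"]

# A request with 1200 prompt tokens and 300 completion tokens:
print(round(estimate_cost("gpt-4o", 1200, 300), 4))  # 0.0105
```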

Exceptions Monitoring

Track Application Errors

Monitors and logs application errors to help detect and troubleshoot issues.

Automatic Exception Monitoring
With our Python and TypeScript SDKs, monitor exceptions seamlessly without significant changes to your application code.
Detailed Stacktraces
Access comprehensive stacktrace information for all caught exceptions, providing insight into where things went wrong.
Integration with Traces
Leverage OpenTelemetry-powered trace data to capture exceptions within request flows.
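Conceptually, capturing an exception inside a trace means converting it into span attributes. The sketch below uses Python's standard `traceback` module and OpenTelemetry's `exception.*` attribute names to show the idea; it is an illustration of the pattern, not OpenLIT's internal implementation.

```python
import traceback

def record_exception(exc: BaseException) -> dict:
    """Convert a caught exception into attributes a trace span could carry."""
    return {
        "exception.type": type(exc).__name__,
        "exception.message": str(exc),
        "exception.stacktrace": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
    }

# Simulate an application error being caught and recorded:
try:
    1 / 0
except ZeroDivisionError as e:
    attrs = record_exception(e)

print(attrs["exception.type"])  # ZeroDivisionError
```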

Explore OpenGround

OpenLIT Playground

Test and compare different LLMs side-by-side based on performance, cost, and other key metrics.

Side-by-Side Comparison
Simultaneously evaluate multiple LLMs to understand how they perform in real-time across various scenarios.
Cost Analysis
Evaluate the cost implications of using different LLMs, helping you balance budget constraints with performance needs.
Comprehensive Reporting
Generate detailed reports that compile and visualize comparison data, supporting informed decision-making.
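The essence of a side-by-side report is aggregating each model's metrics and ranking them. This tiny sketch does that over made-up data; the model names and numbers are placeholders, not output from OpenGround.

```python
# Hypothetical per-model results from a comparison run (placeholder data).
results = {
    "model-a": {"latency_s": 1.8, "cost_usd": 0.012},
    "model-b": {"latency_s": 0.9, "cost_usd": 0.031},
}

# Rank models per metric to support a decision.
fastest = min(results, key=lambda m: results[m]["latency_s"])
cheapest = min(results, key=lambda m: results[m]["cost_usd"])

print(f"fastest: {fastest}, cheapest: {cheapest}")
```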

Manage your prompts

Centralized Prompt Repository

Allows for organized storage, versioning, and usage of prompts with dynamic variables across different applications.

Prompt Management
Create, edit, and track different versions of your prompts.
Versioning
Supports major, minor, and patch updates for clear version management. You can even create a draft state.
Variable Substitution
Customize prompts with dynamic variables using the {{variableName}} convention, substituted at runtime.
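A minimal sketch of how {{variableName}} substitution can work at runtime; OpenLIT's own implementation may differ, this just shows the convention in action.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace every {{name}} placeholder with its value from `variables`."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render_prompt(
    "Summarize {{doc}} in {{n}} bullet points.",
    {"doc": "the report", "n": 3},
))  # Summarize the report in 3 bullet points.
```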

Secure Secrets Management

Vault Hub

Vault offers a secure way to store and manage sensitive application secrets.

Secrets Management
Seamlessly create, edit, and monitor the secrets associated with your applications.
Secure Access
Retrieve secrets based on keys or tags, and safely integrate them into your Node.js or Python environments.
Environment Integration
Set secrets directly as environment variables for ease of use in applications using our SDKs.
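The pattern behind environment integration is simple: fetch secrets, then expose them to the process as environment variables. The sketch below illustrates it with a stubbed `fetch_secrets()`; the function, secret names, and values are all placeholders, not the Vault SDK's real API.

```python
import os

def fetch_secrets(tag: str) -> dict:
    """Stand-in for a real Vault lookup by tag (hardcoded for illustration)."""
    return {"OPENAI_API_KEY": "sk-demo", "DB_PASSWORD": "hunter2"}

def set_as_env(secrets: dict) -> None:
    """Expose each secret to the current process as an environment variable."""
    for key, value in secrets.items():
        os.environ[key] = value

set_as_env(fetch_secrets(tag="production"))
print(os.environ["OPENAI_API_KEY"])  # sk-demo
```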

Easy to integrate

Just add `openlit.init()` to start collecting data from your LLM application.
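In context, the integration is a one-time initialization snippet; this sketch assumes the `openlit` Python package is installed and an OTLP endpoint is reachable, and the endpoint URL below is a placeholder.

```python
import openlit

# One-line setup; the endpoint is a placeholder for your OpenLIT deployment.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# From here on, supported LLM client calls in this app are traced automatically.
```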

Open source project

An open source LLM & GenAI observability tool that's easy to start: just run `docker-compose up -d`.
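Self-hosting boils down to cloning the repository and starting the stack; the repository URL below is assumed to be the official one.

```shell
# Clone the OpenLIT repository (URL assumed) and start the stack.
git clone https://github.com/openlit/openlit.git
cd openlit
docker-compose up -d
```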

OpenTelemetry native

Seamless integration: OpenLIT's native support makes adding it to your projects feel effortless and intuitive.

Granular Usage Insights

Analyze LLM, Vectordb & GPU performance and costs to achieve maximum efficiency and scalability.

Real-Time Data Streaming

Streams data in real time so you can visualize it and make quick decisions and adjustments.

Low Latency

Ensures that data is processed quickly without affecting the performance of your application.

Prompt & Vault Management

OpenLIT helps you manage your prompts and secrets to ease the development of your application.

Observability Platforms

Connect to popular observability systems with ease, including Datadog and Grafana Cloud, to export data automatically.
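If you route telemetry through an OTLP-aware pipeline, the export destination is typically set with standard OpenTelemetry environment variables; the endpoint and credentials below are placeholders for your Grafana Cloud or Datadog intake details, not real values.

```shell
# Standard OTLP exporter settings (values are placeholders).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp-gateway.example.grafana.net/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic%20<base64-token>"
```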