Tracing captures the details of each step of an AI application’s execution. This lets you debug issues, understand model behavior, and optimize performance in production.
Wrap your AI client with Braintrust to trace all LLM requests. This approach gives you maximum flexibility and control, and works with any integration.

1. Sign up

If you’re new to Braintrust, sign up free at braintrust.dev.

2. Get API keys

Create API keys for Braintrust and your AI provider, then set them as environment variables:
export BRAINTRUST_API_KEY="<your-braintrust-api-key>"
export OPENAI_API_KEY="<your-openai-api-key>" # or ANTHROPIC_API_KEY, GEMINI_API_KEY, etc.
This quickstart uses OpenAI. For other providers, see Integrations.

3. Install SDKs

Install the Braintrust SDK and AI provider SDK for your programming language:
# pnpm
pnpm add braintrust openai ts-node
# npm
npm install braintrust openai ts-node

4. Trace LLM calls

Make a simple LLM request and see it automatically traced in Braintrust. Initialize Braintrust and wrap your OpenAI client:
  • TypeScript & Python: Use wrapOpenAI / wrap_openai wrapper functions
  • Go: Use the tracing middleware with the OpenAI client
  • Ruby: Use Braintrust::Trace::OpenAI.wrap to wrap the OpenAI client
  • Java: Use the tracing interceptor with the OpenAI client
  • C#: Use BraintrustOpenAI.WrapOpenAI() to wrap the OpenAI client
quickstart.ts
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

// Initialize Braintrust logger
const logger = initLogger({ projectName: "Tracing quickstart" });
const client = wrapOpenAI(new OpenAI());

// All API calls are automatically logged
const result = await client.responses.create({
  model: "gpt-5-mini",
  input: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is machine learning?" },
  ],
});

console.log(result.output_text);
Run this code:
npx ts-node quickstart.ts
All API calls are automatically logged to Braintrust.
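Conceptually, a client wrapper works by intercepting each method call, recording its input and output, and then delegating to the real client. The sketch below illustrates that idea with a Proxy and a stand-in client; it is not Braintrust’s actual implementation, and `wrapClient`, `fakeClient`, and `LogEntry` are hypothetical names used only for illustration:

```typescript
// Illustrative sketch of the wrapping idea: intercept method calls,
// record input and output, then delegate to the underlying client.
type LogEntry = { method: string; args: unknown[]; result: unknown };

function wrapClient<T extends object>(client: T, log: LogEntry[]): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        // Delegate to the real method, then record the call.
        const result = value.apply(target, args);
        log.push({ method: String(prop), args, result });
        return result;
      };
    },
  });
}

// Usage with a stand-in client (not the real OpenAI client):
const log: LogEntry[] = [];
const fakeClient = { complete: (prompt: string) => `echo: ${prompt}` };
const wrapped = wrapClient(fakeClient, log);
wrapped.complete("hi"); // returns "echo: hi" and records one log entry
```

The real `wrapOpenAI` does the equivalent for the OpenAI client’s API methods, sending each recorded call to Braintrust as a trace rather than to an in-memory array.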

5. View traces

In the Braintrust UI, go to the “Tracing quickstart” project and select Logs. You’ll see a trace for each request. Click into any trace to see:
  • Complete input prompt and model output
  • Token counts, latency, and cost
  • Model configuration (temperature, max tokens, etc.)
  • Request and response metadata
This is the value of observability: you can see every request, identify issues, and understand how your application behaves in production.

Troubleshoot

Check your API key:
echo $BRAINTRUST_API_KEY
Make sure it’s set and starts with sk-.

Verify the project name: Check that you’re looking at the correct project in the UI.

Look for errors: Check your console output for any error messages from Braintrust. Common issues:
  • Invalid API key
  • Network connectivity issues
  • Firewall blocking requests to api.braintrust.dev
Check wrapper coverage: Make sure you wrap the client before making API calls. Calls made with an unwrapped client won’t be traced.

Verify async/await: If you use async functions, ensure you await API calls. Unawaited promises may not be fully traced.

Check for errors: If your LLM call throws an error, the trace may be incomplete. Check logs for error messages.
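The API key check above can be scripted as a quick local diagnostic. This sketch only verifies that the variable is set and has the sk- prefix mentioned above; it does not validate the key against the API, and `check_key` is a hypothetical helper name:

```shell
# Check a key's local format: non-empty and starting with "sk-".
check_key() {
  key="$1"
  if [ -z "$key" ]; then
    echo "missing"
  elif [ "${key#sk-}" = "$key" ]; then
    echo "bad-prefix"
  else
    echo "ok"
  fi
}

# Run against the exported key:
check_key "$BRAINTRUST_API_KEY"
```

If this prints anything other than `ok`, fix the environment variable before debugging further.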

Next steps