Scorers evaluate AI output quality by assigning scores between 0 and 1 based on criteria you define, such as factual accuracy, helpfulness, or correct formatting.

Overview

Braintrust offers three types of scorers:
  • Autoevals - Pre-built, battle-tested scorers for common evaluation tasks like factuality checking, semantic similarity, and format validation. Best for standard evaluation needs where reliable scorers already exist.
  • LLM-as-a-judge - Use language models to evaluate outputs based on natural language criteria and instructions. Best for subjective judgments like tone, helpfulness, or creativity that are difficult to encode in deterministic code.
  • Custom code - Write custom evaluation logic in TypeScript or Python with full control over the scoring algorithm. Best for specific business rules, pattern matching, or calculations unique to your use case.
You can define scorers in three places:
  • Inline in SDK code - Define scorers directly in your evaluation scripts for local development, access to complex dependencies, or application-specific logic that’s tightly coupled to your codebase.
  • Pushed via CLI - Define scorers in code files and push them to Braintrust for version control in Git, team-wide sharing across projects, and automatic evaluation of production logs.
  • Created in UI - Build scorers in the Braintrust web interface for non-technical users to create evaluations, rapid prototyping of scoring ideas, and simple LLM-as-a-judge scorers.
Most teams prototype in the UI, develop complex scorers inline, then push production-ready scorers to Braintrust for team-wide use.

Score with autoevals

The autoevals library provides pre-built, battle-tested scorers for common evaluation tasks like factuality checking, semantic similarity, and format validation. Autoevals are open-source, deterministic (where possible), and optimized for speed and reliability. They can evaluate individual spans, but not entire traces. Available scorers include:
  • Factuality: Check whether the output is factually consistent with the expected answer
  • Semantic: Measure semantic similarity to expected output
  • Levenshtein: Calculate edit distance from expected output
  • JSON: Validate JSON structure and content
  • SQL: Validate SQL query syntax and semantics
See the autoevals library for the complete list.
Use scorers inline in your evaluation code:
import { Eval, initDataset } from "braintrust";
import { Factuality } from "autoevals";

Eval("My Project", {
  experimentName: "My experiment",
  data: initDataset("My Project", { dataset: "My Dataset" }),
  task: async (input) => {
    // Your LLM call here
    return await callModel(input);
  },
  scores: [Factuality],
  metadata: {
    model: "gpt-5-mini",
  },
});
Autoevals automatically receive these parameters when used in evaluations:
  • input: The input to your task
  • output: The output from your task
  • expected: The expected output (optional)
  • metadata: Custom metadata from the test case
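To see how these parameters flow into a scorer, you can also call an autoeval directly outside of Eval(). A minimal sketch (the values are illustrative, and Factuality needs an OPENAI_API_KEY since it is LLM-based):
import { Factuality } from "autoevals";

async function main() {
  // Each autoeval is an async function that accepts the parameters listed above
  const result = await Factuality({
    input: "Which country has the highest population?",
    output: "People's Republic of China",
    expected: "China",
  });

  console.log(result.score); // number between 0 and 1
  console.log(result.metadata); // scorer rationale, when provided
}

main();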

Score with LLMs

LLM-as-a-judge scorers use a language model to evaluate based on natural language criteria. They are best for subjective judgments like tone, helpfulness, or creativity that are difficult to encode in code. They can evaluate individual spans, but not entire traces. Your prompt template can reference these variables:
  • {{input}}: The input to your task
  • {{output}}: The output from your task
  • {{expected}}: The expected output (optional)
  • {{metadata}}: Custom metadata from the test case
Use scorers inline in your evaluation code:
llm_scorer.eval.ts
import { Eval } from "braintrust";
import { LLMClassifierFromTemplate } from "autoevals";
import OpenAI from "openai";

const client = new OpenAI();

// Inline dataset: movie descriptions and expected titles
const MOVIE_DATASET = [
  {
    input:
      "A detective investigates a series of murders based on the seven deadly sins.",
    expected: "Se7en",
  },
  {
    input:
      "A thief who steals corporate secrets through the use of dream-sharing technology is given the inverse task of planting an idea into the mind of a C.E.O.",
    expected: "Inception",
  },
];

async function task(input: string): Promise<string> {
  const response = await client.responses.create({
    model: "gpt-4o-mini",
    input: [
      {
        role: "system",
        content:
          "Based on the following description, identify the movie. Reply with only the movie title.",
      },
      { role: "user", content: input },
    ],
  });
  return response.output_text ?? "";
}

// LLM-as-judge: scores whether the model's output matches the expected title (exact or equivalent)
const llmJudge = LLMClassifierFromTemplate({
  name: "Correctness",
  promptTemplate: `
You are evaluating a movie-identification task.

Output (model's answer): {{output}}
Expected (correct movie): {{expected}}

Does the output correctly identify the same movie as the expected answer?
Consider alternate titles (e.g. "Harry Potter 1" vs "Harry Potter and the Sorcerer's Stone") as correct.

Return "correct" if the output is the right movie (exact or equivalent title).
Return "incorrect" otherwise.
`,
  choiceScores: { correct: 1, incorrect: 0 },
  model: "gpt-4o-mini",
});

Eval("Movie Matcher", {
  data: MOVIE_DATASET,
  task,
  scores: [llmJudge],
});

Score with custom code

Write custom evaluation logic in TypeScript or Python. Custom code scorers give you full control over the evaluation logic and can use any packages you need. They are best when you have specific rules, patterns, or calculations to implement. Custom code scorers can evaluate individual spans or entire traces.

Score spans

Span-level scorers evaluate individual operations or outputs. Use them for measuring single LLM responses, checking specific tool calls, or validating individual outputs. Each matching span receives an independent score.
Define scorers in code and push to Braintrust. Your handler function receives these parameters:
  • input: The input to your task
  • output: The output from your task
  • expected: The expected output (optional)
  • metadata: Custom metadata from the test case
Return a number between 0 and 1, or an object with score and optional metadata.
code_scorer.ts
import braintrust from "braintrust";
import { z } from "zod";

const project = braintrust.projects.create({ name: "my-project" });

project.scorers.create({
  name: "Equality scorer",
  slug: "equality-scorer",
  description: "Check if output equals expected",
  parameters: z.object({
    output: z.string(),
    expected: z.string(),
  }),
  handler: async ({ output, expected }) => {
    const matches = output === expected;
    return {
      score: matches ? 1 : 0,
      metadata: { exact_match: matches },
    };
  },
  metadata: {
    __pass_threshold: 0.5,
  },
});
Push to Braintrust:
npx braintrust push code_scorer.ts  # TypeScript
braintrust push code_scorer.py  # Python
Important notes for Python scorers:
  • Scorers must be pushed from within their directory (e.g., braintrust push scorer.py); pushing with relative paths (e.g., braintrust push path/to/scorer.py) is unsupported and will cause import errors.
  • Scorers using local imports must be defined at the project root.
  • Braintrust uses uv to cross-bundle dependencies to Linux. This works for binary dependencies except libraries requiring on-demand compilation.
In TypeScript, Braintrust uses esbuild to bundle your code and dependencies. This works for most dependencies but does not support native (compiled) libraries like SQLite. If you have trouble bundling dependencies, file an issue in the braintrust-sdk repo.
Python scorers created via the CLI have these default packages:
  • autoevals
  • braintrust
  • openai
  • pydantic
  • requests
For additional packages, use the --requirements flag. For scorers with external dependencies:
scorer-with-deps.py
import braintrust
from langdetect import detect  # External package
from pydantic import BaseModel

project = braintrust.projects.create(name="my-project")

class LanguageMatchParams(BaseModel):
    output: str
    expected: str

@project.scorers.create(
    name="Language match",
    slug="language-match",
    description="Check if output and expected are same language",
    parameters=LanguageMatchParams,
    metadata={"__pass_threshold": 0.5},
)
def language_match_scorer(output: str, expected: str):
    return 1.0 if detect(output) == detect(expected) else 0.0
Create a requirements file:
requirements.txt
langdetect==1.0.9
Push with requirements:
braintrust push scorer-with-deps.py --requirements requirements.txt

Score traces

Trace-level scorers evaluate entire execution traces including all spans and conversation history. Use these for assessing multi-turn conversation quality, overall workflow completion, or when your scorer needs access to the full execution context. The scorer runs once per trace. Your handler function receives the trace parameter, which provides two methods for accessing execution data:
  • trace.getThread() / trace.get_thread(): Returns an array of conversation messages extracted from LLM spans. Use for evaluating conversation quality and multi-turn interactions.
  • trace.getSpans({ spanType: ["llm"] }) / trace.get_spans(span_type=["llm"]): Returns spans matching the filter. Each span includes input, output, metadata, span_id, and span_attributes. Omit the filter to get all spans, or pass multiple types like ["llm", "tool"].
Use scorers inline in your evaluation code:
trace_code_scorer.eval.ts
import { Eval, type Scorer } from "braintrust";

// Inline dataset
const CONVERSATION_DATASET = [
  {
    input: "What is the capital of France?",
    expected: "multi-turn",
  },
  {
    input: "Tell me about quantum physics",
    expected: "multi-turn",
  },
];

// Simulated multi-turn conversation task
async function conversationTask(input: string): Promise<string> {
  // In a real scenario, this would be a multi-turn conversation
  return `Here's information about ${input}`;
}

// Trace-level scorer using conversation thread
const threadLengthScorer: Scorer = async ({ trace }) => {
  if (!trace) return 0;

  // Get preprocessed conversation thread
  const thread = await trace.getThread();

  const conversationLength = thread.length;

  return {
    name: "Thread length",
    score: conversationLength >= 3 ? 1 : 0,
    metadata: {
      conversation_length: conversationLength,
      has_minimum_turns: conversationLength >= 3,
    },
  };
};

// Trace-level scorer using span analysis
const llmCallCounter: Scorer = async ({ trace }) => {
  if (!trace) return 0;

  // Get only LLM spans for workflow analysis
  const llmSpans = await trace.getSpans({ spanType: ["llm"] });

  // Or get all spans: await trace.getSpans()
  // Or multiple types: await trace.getSpans({ spanType: ["llm", "tool"] })

  return {
    name: "LLM call count",
    score: llmSpans.length > 0 ? 1 : 0,
    metadata: { llm_count: llmSpans.length },
  };
};

Eval("Conversation Quality", {
  data: CONVERSATION_DATASET,
  task: conversationTask,
  scores: [threadLengthScorer, llmCallCounter],
  traceScorers: true, // Enable trace-level scoring
});

Set pass thresholds

Define minimum acceptable scores to automatically mark results as passing or failing. When configured, scores that meet or exceed the threshold are marked as passing (green highlighting with checkmark), while scores below are marked as failing (red highlighting).
Add __pass_threshold to the scorer’s metadata (value between 0 and 1):
metadata: {
  __pass_threshold: 0.7,  // Scores below 0.7 are considered failures
}
Example with a custom code scorer:
project.scorers.create({
  name: "Quality checker",
  slug: "quality-checker",
  handler: async ({ output, expected }) => {
    return output === expected ? 1 : 0;
  },
  metadata: {
    __pass_threshold: 0.8,
  },
});

Create reusable scorers

Test scorers

Scorers need to be developed iteratively against real data. When creating or editing a scorer in the UI, use the Run section to test your scorer with data from different sources. Each variable source populates the scorer’s input parameters (like input, output, expected, metadata) from a different location.

Test with manual input

Best for initial development when you have a specific example in mind. Use this to quickly prototype and verify basic scorer logic before testing on larger datasets.
  1. Select Editor in the Run section.
  2. Enter values for input, output, expected, and metadata fields.
  3. Click Test to see how your scorer evaluates the example.
  4. Iterate on your scorer logic based on the results.

Test with a dataset

Best for testing specific scenarios, edge cases, or regression testing. Use this when you want controlled, repeatable test cases or need to ensure your scorer handles specific situations correctly.
  1. Select Dataset in the Run section.
  2. Choose a dataset from your project.
  3. Select a record to test with.
  4. Click Test to see how your scorer evaluates the example.
  5. Review results to identify patterns and edge cases.

Test with logs

Best for testing against actual usage patterns and debugging real-world edge cases. Use this when you want to see how your scorer performs on data your system is actually generating.
  1. Select Logs in the Run section.
  2. Select the project containing the logs you want to test against.
  3. Filter logs to find relevant examples:
    • Click Add filter and choose just root spans, specific span names, or a more advanced filter based on specific input, output, metadata, or other values.
    • Select a timeframe.
  4. Click Test to see how your scorer evaluates real production data.
  5. Identify cases where the scorer needs adjustment for real-world scenarios.
To create a new online scoring rule with the filters automatically prepopulated from your current log filters, click Online scoring. This enables rapid iteration from logs to scoring rules. See Create scoring rules for more details.

Scorer permissions

Both LLM-as-a-judge scorers and custom code scorers automatically receive a BRAINTRUST_API_KEY environment variable that allows them to:
  • Make LLM calls using organization and project AI secrets
  • Access attachments from the current project
  • Read and write logs to the current project
  • Read prompts from the organization
For custom code scorers that need expanded permissions beyond the current project (such as logging to other projects, reading datasets, or accessing other organization data), you can provide your own API key using the PUT /v1/env_var endpoint.
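For example, a pushed code scorer can use the injected key to make its LLM calls through the Braintrust AI proxy, drawing on your project's AI secrets instead of a separately managed provider key. A minimal sketch (the scorer name and prompt are hypothetical, and the base URL is assumed to be the standard https://api.braintrust.dev/v1/proxy endpoint):
import braintrust from "braintrust";
import OpenAI from "openai";
import { z } from "zod";

const project = braintrust.projects.create({ name: "my-project" });

project.scorers.create({
  name: "Politeness judge",
  slug: "politeness-judge",
  description: "LLM call via the AI proxy using the injected BRAINTRUST_API_KEY",
  parameters: z.object({ output: z.string() }),
  handler: async ({ output }) => {
    // BRAINTRUST_API_KEY is injected automatically; routing the OpenAI client
    // through the Braintrust AI proxy lets the scorer use project AI secrets.
    const client = new OpenAI({
      apiKey: process.env.BRAINTRUST_API_KEY,
      baseURL: "https://api.braintrust.dev/v1/proxy",
    });
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "user",
          content: `Answer "yes" or "no": is the following response polite?\n\n${output}`,
        },
      ],
    });
    const answer = response.choices[0].message.content?.trim().toLowerCase();
    return answer === "yes" ? 1 : 0;
  },
});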

Optimize with Loop

Generate and improve scorers using Loop. Example queries:
  • “Write an LLM-as-a-judge scorer for a chatbot that answers product questions”
  • “Generate a code-based scorer based on project logs”
  • “Optimize the Helpfulness scorer”
  • “Adjust the scorer to be more lenient”
Loop can also tune scorers based on manual labels from the playground.

Best practices

  • Start with autoevals: Use pre-built scorers when they fit your needs. They're well-tested and reliable.
  • Be specific: Define clear evaluation criteria in your scorer prompts or code.
  • Use multiple scorers: Measure different aspects (factuality, helpfulness, tone) with separate scorers.
  • Choose the right scope: Use trace scorers (custom code with trace parameter) for multi-step workflows and agents. Use output scorers for simple quality checks.
  • Test scorers: Run scorers on known examples to verify they behave as expected.
  • Version scorers: Like prompts, scorers are versioned automatically. Track what works.
  • Balance cost and quality: LLM-as-a-judge scorers are more flexible but cost more and take longer than custom code scorers.

Next steps