Create an LLM result

An LLM result captures both the generated output from an LLM provider and associated metadata such as token usage. There are two ways to generate an LLM result:

  • By calling an experiment endpoint: This randomly selects one of the two prompt configurations you've set up, based on the assigned weights.
  • By calling a specific prompt configuration endpoint: This uses the exact prompt template you specify.

Endpoints

Every endpoint requires your AB Prompt API key, which you can locate in the workspace dashboard. Include this key in the HTTP Authorization header as Bearer <api-key> for each request.
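As a minimal sketch, the auth header looks like this in Python (the key value is a placeholder, not a real key):

```python
# Every request carries the AB Prompt API key in the Authorization
# header using the standard Bearer scheme.
headers = {
    "Authorization": "Bearer YOUR_AB_PROMPT_API_KEY",
    "Content-Type": "application/json",
}
```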

Create an LLM result for a specific prompt configuration

Each prompt configuration is assigned a unique ID. You can find this ID by navigating to the prompt configuration's page within the AB Prompt dashboard.

Endpoint

POST https://api.abprompt.dev/prompt-configurations/:promptConfigurationId/llm-results

Parameters

Include the following fields as JSON in the body of your request:

  • openAIAPIKey (string): Your OpenAI API key.

  • templateVars (object): Key-value pairs to dynamically populate your prompt template.

    • Example: For a template like What is the capital of {{ countryName }}?, use { "countryName": "France" }.
  • metadata (object, optional): Extra key-value pairs, visible in the experiment result dashboard.

Returns

The JSON response body includes:

  • id: Unique ID of the LLM result.
  • createdAt: Timestamp of creation.
  • input: The prompt sent to the LLM provider.
  • output: Generated output from the LLM provider.
  • inputTokens: Token count of the input.
  • outputTokens: Token count of the output.
  • metadata: The metadata you provided, if any.
  • promptConfigurationId: ID of the prompt configuration used.
  • responseTime: Time taken to generate the result, in milliseconds.

Example:

Request:

POST https://api.abprompt.dev/prompt-configurations/RkQPmtCt_hehoQAWG/llm-results

{
  "openAIAPIKey": "YOUR_API_KEY_HERE",
  "templateVars": {
    "countryName": "France"
  },
  "metadata": {
    "userId": "123"
  }
}

Response:

{
  "id": "NO62BvEe-FHYxbfO35NzZ",
  "createdAt": "2023-10-10T00:00:00.000Z",
  "input": "What is the capital of France?",
  "output": "The capital of France is Paris.",
  "inputTokens": 7,
  "outputTokens": 8,
  "metadata": {
    "userId": "123"
  },
  "promptConfigurationId": "y6hRIHHgUi",
  "responseTime": 1883
}
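Putting the pieces together, here is a minimal client-side sketch in Python (standard library only). The function names and placeholder key values are ours, not part of the API:

```python
import json
import urllib.request

API_BASE = "https://api.abprompt.dev"

def build_llm_result_request(ab_prompt_api_key, prompt_configuration_id,
                             openai_api_key, template_vars, metadata=None):
    """Build the POST request for the prompt-configuration endpoint."""
    body = {"openAIAPIKey": openai_api_key, "templateVars": template_vars}
    if metadata is not None:
        body["metadata"] = metadata  # optional; shown in the result dashboard
    return urllib.request.Request(
        f"{API_BASE}/prompt-configurations/{prompt_configuration_id}/llm-results",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {ab_prompt_api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_llm_result(*args, **kwargs):
    """Send the request and return the parsed LLM result JSON."""
    with urllib.request.urlopen(build_llm_result_request(*args, **kwargs)) as resp:
        return json.load(resp)
```

Splitting request construction from sending keeps the payload easy to inspect and test before any network call is made.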

Create an LLM result from an experiment

Each experiment is assigned a unique ID. You can find this ID by navigating to the experiment's page within the AB Prompt dashboard.

Endpoint

POST https://api.abprompt.dev/experiments/:experimentId/llm-results

Parameters

Include the following fields as JSON in the body of your request:

  • openAIAPIKey (string): Your OpenAI API key.

  • templateVars (object): Key-value pairs to dynamically populate your prompt template.

    • Example: For a template like What is the capital of {{ countryName }}?, use { "countryName": "France" }.
  • metadata (object, optional): Extra key-value pairs, visible in the experiment result dashboard.

Returns

The JSON response body includes:

  • id: Unique ID of the LLM result.
  • createdAt: Timestamp of creation.
  • input: The prompt sent to the LLM provider.
  • output: Generated output from the LLM provider.
  • inputTokens: Token count of the input.
  • outputTokens: Token count of the output.
  • metadata: The metadata you provided, if any.
  • promptConfigurationId: ID of the prompt configuration used.
  • responseTime: Time taken to generate the result, in milliseconds.

Example:

Request:

POST https://api.abprompt.dev/experiments/LvC1ZB8T4O629/llm-results

{
  "openAIAPIKey": "YOUR_API_KEY_HERE",
  "templateVars": {
    "countryName": "France"
  },
  "metadata": {
    "userId": "123"
  }
}

Response:

{
  "id": "NO62BvEe-FHYxbfO35NzZ",
  "createdAt": "2023-10-10T00:00:00.000Z",
  "input": "What is the capital of France?",
  "output": "The capital of France is Paris.",
  "inputTokens": 7,
  "outputTokens": 8,
  "metadata": {
    "userId": "123"
  },
  "promptConfigurationId": "y6hRIHHgUi",
  "responseTime": 1883
}
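The experiment endpoint can be called the same way; only the URL changes. A minimal Python sketch (standard library only; function names and placeholder keys are ours, not part of the API):

```python
import json
import urllib.request

def build_experiment_request(ab_prompt_api_key, experiment_id,
                             openai_api_key, template_vars, metadata=None):
    """Build the POST request for the experiment endpoint. AB Prompt then
    picks one of the experiment's prompt configurations by weight."""
    body = {"openAIAPIKey": openai_api_key, "templateVars": template_vars}
    if metadata is not None:
        body["metadata"] = metadata  # optional; shown in the result dashboard
    return urllib.request.Request(
        f"https://api.abprompt.dev/experiments/{experiment_id}/llm-results",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {ab_prompt_api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def create_experiment_llm_result(*args, **kwargs):
    """Send the request and return the parsed LLM result JSON. The
    promptConfigurationId field in the response tells you which of the
    two configurations the experiment selected."""
    with urllib.request.urlopen(build_experiment_request(*args, **kwargs)) as resp:
        return json.load(resp)
```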