Introduction

What is AB Prompt?

AB Prompt is a low-code tool designed to streamline your team's workflow by simplifying the management, experimentation, and monitoring of prompts for Large Language Model (LLM) API calls.

Managing prompts

Currently, managing prompts largely falls to the engineering team. Prompt configurations are often tucked away in the codebase or a database, making them hard for other teams to access. As a result, any modification has to wait for an engineer to be available, leading to delays and inefficiencies. AB Prompt changes this: any team member can create a prompt template, use template variables for dynamic data, select an LLM such as gpt-3.5-turbo or gpt-4, and set parameters like temperature.
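
To make this concrete, here is a minimal sketch of what such a template could look like and how it might be rendered and sent to a model. The schema, the render_and_call helper, and every field name are illustrative assumptions rather than AB Prompt's actual format; only the OpenAI client calls are standard.

```python
# Illustrative sketch only: the template schema and render step are
# assumptions, not AB Prompt's actual format or API.
from openai import OpenAI  # standard OpenAI Python client

# A prompt template a non-engineer might define: the {placeholders}
# are template variables filled in with dynamic data at call time.
template = {
    "name": "support-reply",
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "prompt": (
        "You are a support agent for {product}. "
        "Reply to this customer message:\n{message}"
    ),
}

def render_and_call(tpl: dict, **variables) -> str:
    """Fill in the template variables, then call the configured model."""
    rendered = tpl["prompt"].format(**variables)
    client = OpenAI()
    response = client.chat.completions.create(
        model=tpl["model"],
        temperature=tpl["temperature"],
        messages=[{"role": "user", "content": rendered}],
    )
    return response.choices[0].message.content

print(render_and_call(template, product="Acme", message="How do I reset my API key?"))
```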

Experimenting with prompt variations

Experimenting with prompt variations is essential to honing the quality of the output. Results that look promising in local sample testing often run into unforeseen challenges in production. AB Prompt empowers your team to tackle this by facilitating:

  1. The gradual rollout of new prompts, with the option to easily revert to a previous version if necessary.
  2. A/B testing with varying tones, prompt styles (such as few-shot prompting), and instructions to pinpoint the configuration that best serves your business objectives (see the sketch after this list).
  3. Comparing the performance of different LLMs to choose the most effective one.
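
For intuition, the sketch below shows the weighted, per-user variant selection that a gradual rollout or A/B test implies: each user is hashed into a bucket so they always see the same variant, and the weights control the rollout split. AB Prompt manages this for you; the variants list, the weights, and the pick_variant helper are hypothetical, shown only to illustrate the mechanics.

```python
# Hypothetical sketch of sticky, weighted variant selection; AB Prompt
# handles this internally, so none of these names are its real API.
import hashlib

variants = [
    {"name": "control",  "weight": 0.9, "prompt": "Summarize: {text}"},
    {"name": "few-shot", "weight": 0.1,
     "prompt": "Example: <input> -> <summary>\nNow summarize: {text}"},
]

def pick_variant(user_id: str) -> dict:
    """Hash the user into [0, 1) and walk the cumulative weights, so
    the same user always lands on the same variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000
    cumulative = 0.0
    for variant in variants:
        cumulative += variant["weight"]
        if bucket < cumulative:
            return variant
    return variants[-1]  # guard against floating-point rounding

print(pick_variant("user-42")["name"])
```

Reverting a rollout then amounts to shifting the weights back toward the control variant, and comparing models is the same mechanism with a different model field per variant.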

Monitoring prompts in production

The AB Prompt dashboard is your central hub for deep insight into prompt performance. It lets teams monitor token usage, response times, and the rendered prompt inputs alongside their generated outputs, providing a transparent view of the input-output dynamics. By closely observing these metrics, teams can iterate on and fine-tune their prompt configurations more effectively, converging on the best-performing setup quickly.
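
As a rough illustration of the raw data behind the dashboard, the sketch below captures token usage, latency, and the rendered input/output pair for one call. The call_and_record helper and the shape of the returned record are assumptions; the usage fields mirror the standard OpenAI response object.

```python
# Hypothetical sketch of per-call metric capture; the record shape is
# an assumption, while the usage fields come from the OpenAI response.
import time
from openai import OpenAI

client = OpenAI()

def call_and_record(rendered_prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Call the model and return the metrics a dashboard would chart."""
    start = time.monotonic()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": rendered_prompt}],
    )
    return {
        "rendered_prompt": rendered_prompt,  # transparent view of the input
        "output": response.choices[0].message.content,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "latency_seconds": time.monotonic() - start,
    }
```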