Quick Start

Building a Web Scraper with AB Prompt

To show what AB Prompt can do, let's dive into a real-world example. We're going to build a web scraper for the monthly "Who’s Hiring" post on Hacker News. We'll target the August 2023 post and use OpenAI's GPT models to convert unstructured job descriptions into structured JSON. This makes tasks like filtering jobs by salary or tracking trends in popular tech stacks much easier.

For this experiment, we'll compare two models: GPT-3.5 Turbo and GPT-4. GPT-3.5 Turbo is fast and inexpensive, while GPT-4 is known for stronger reasoning. We want to find out whether GPT-4 actually performs better for our use case, and whether it's worth the extra cost.

Prerequisites

Before we jump in, make sure you have:

  • An OpenAI API key
  • An AB Prompt API key (If you don't have one, just sign up to get it!)
  • Node.js version 18.0.0 or higher

Preparing a prompt

Let's set up our prompt to guide GPT. Our goal is to take a job description and transform it into structured JSON that conforms to a schema defined with Zod, a popular JavaScript validation library.


Take the following job description and output JSON that adheres to the provided Zod (the popular JavaScript validation library) schema.

Zod schema:

```ts
import { z } from "zod";

z.object({
  companyName: z.string().describe("the name of the hiring company"),
  companyDescription: z
    .string()
    .describe("a brief description of what the company does"),
  openings: z
    .array(
      z.object({
        title: z.string().describe("the position of the opening"),
        compensation: z
          .object({
            salaryRange: z
              .tuple([
                z.number().describe("the minimum salary"),
                z.number().describe("the maximum salary"),
              ])
              .optional(),
            currency: z.string().describe("the currency of the salary"),
            other: z
              .array(z.string())
              .describe("other forms of compensation")
              .optional(),
          })
          .optional(),
        seniority: z
          .enum(["intern", "junior", "intermediate", "senior", "senior+"])
          .describe(
            "the estimated target seniority for this position. 'senior+' being anything above the 'senior' level.",
          ),
        tech: z
          .array(z.string())
          .describe("If stated, the tech that is used for this opening"),
        location: z.string().describe("the location of the role"),
      }),
    )
    .describe("A list of objects describing the advertised openings"),
  howToApply: z
    .string()
    .describe("instructions for how to apply to the position"),
});
```

Job description:

===
{{ jobPosting }}
===

Take note of the {{ jobPosting }} placeholder at the end. This mechanism allows us to inject dynamic data into our prompt when we make the API call to AB Prompt.
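Under the hood, placeholders like this are typically filled in by simple string substitution. As a rough sketch of how that likely behaves (renderTemplate is a hypothetical helper for illustration, not part of the AB Prompt API, whose actual templating rules may differ):

```js
// Hypothetical sketch of {{ placeholder }} substitution; the real AB Prompt
// templating may differ in syntax and escaping rules.
function renderTemplate(template, vars) {
  // Replace every {{ name }} with the matching value from `vars`,
  // leaving unknown placeholders untouched.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (full, name) =>
    name in vars ? String(vars[name]) : full,
  );
}

const template = "Job description:\n===\n{{ jobPosting }}\n===";
const rendered = renderTemplate(template, { jobPosting: "Acme is hiring!" });
// `rendered` now contains the job posting text in place of {{ jobPosting }}
```

When we later pass `templateVars: { jobPosting }` to the API, AB Prompt performs this kind of substitution on the prompt template for us.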

Setting Up Your Experiment

Kick things off by heading to your dashboard:

  1. Click on New experiment.
  2. For Experiment name, enter “HN Who’s Hiring”.
  3. Configure Variant A:
    • Name: “GPT 3.5”
    • Model: “GPT 3.5 Turbo”
    • Temperature: 1
    • Template: Copy and paste the prompt we crafted earlier.
  4. Now, set up Variant B:
    • Name: “GPT 4”
    • Model: “GPT 4”
    • Temperature: 1
    • Template: Again, use the prompt from our previous section.
  5. Once everything looks good, hit Create experiment.

After creating the experiment, you'll see an Experiment Id and some sample code. Hold onto those; we'll put them to use in a moment.

Building the Application

The Hacker News “Who’s Hiring” thread is populated with top-level comments, each showcasing one or multiple job openings. To build our application, we'll take the following steps:

  1. Fetch the Hacker News “Who’s Hiring” post: This will be our primary data source.
  2. Ask GPT for Conversion: For each top-level comment, instruct GPT to transform the job description into structured JSON.
  3. JSON Parsing: We’ll use a simple heuristic to parse the JSON from the response text.
  4. Data Storage: Save the JSON collection for further use.

1. Fetch the Hacker News “Who’s Hiring” post

Let's kick things off by fetching the data we need. To do this, we'll tap into the Hacker News API.

A simple GET request to https://hacker-news.firebaseio.com/v0/item/${id}.json will do the trick. And, for our purposes, we’ll be using the post for August 2023 which conveniently comes with an id of 36956867.

When we make the request, the response will contain a bunch of JSON data. Crucially, it includes a kids key: an array of ids for the top-level comments, and these are our job postings. With this in mind, let’s start coding.

Fire up your preferred code editor and create a new app.js file. Add the following code:

```js
const WHOS_HIRING_POST_ID = 36956867;

async function fetchHackerNewsItem(id) {
  const response = await fetch(
    `https://hacker-news.firebaseio.com/v0/item/${id}.json`,
  );
  return response.json();
}

async function generateJobPostingJSON() {
  const whosHiringPost = await fetchHackerNewsItem(WHOS_HIRING_POST_ID);

  for (const kid of whosHiringPost.kids) {
    // Each comment in a HN post is treated as an 'item' and has a 'text' field.
    // In our case, this 'text' is the job description.
    const { text } = await fetchHackerNewsItem(kid);
  }
}

generateJobPostingJSON();
```

2. Convert Each Top-Level Comment to JSON with GPT

For this step we’ll need:

  1. Your Experiment Id: Found within the experiment dashboard.
  2. Your OpenAI API Key.
  3. Your AB Prompt API Key.

With these details in hand, we'll make a call to the AB Prompt API using our previously set up experiment's prompt configuration to convert the job description into JSON.

```js
const WHOS_HIRING_POST_ID = 36956867;

async function fetchHackerNewsItem(id) {
  const response = await fetch(
    `https://hacker-news.firebaseio.com/v0/item/${id}.json`,
  );
  return response.json();
}

async function createCompletion(jobPosting) {
  // IMPORTANT: Replace <EXPERIMENT-ID> with your specific experiment ID
  const response = await fetch(
    "https://api.abprompt.dev/experiments/<EXPERIMENT-ID>/run",
    {
      method: "POST",
      headers: {
        // IMPORTANT: Replace <AB-PROMPT-API-KEY> with your actual API key.
        Authorization: "Bearer <AB-PROMPT-API-KEY>",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        // IMPORTANT: Replace <OPEN-AI-API-KEY> with your OpenAI API key.
        openAIAPIKey: "<OPEN-AI-API-KEY>",
        // This injects our job posting into the prompt template.
        templateVars: { jobPosting },
      }),
    },
  );
  const json = await response.json();
  // AB Prompt will relay the exact JSON from OpenAI. For more info, refer to:
  // https://platform.openai.com/docs/api-reference/chat/object
  return json.choices[0].message.content;
}

async function generateJobPostingJSON() {
  const whosHiringPost = await fetchHackerNewsItem(WHOS_HIRING_POST_ID);

  for (const kid of whosHiringPost.kids) {
    // Each HN comment is an 'item' with a 'text' field. Here, this 'text' is our job description.
    const { text } = await fetchHackerNewsItem(kid);
    const completion = await createCompletion(text);
  }
}

generateJobPostingJSON();
```

3. JSON Parsing

Because GPT's output is unstructured, responses can arrive in various formats. The formats we might encounter include:

  1. A response that only includes JSON.
  2. A response where JSON is wrapped in Markdown code blocks (either ``` or ```json).
  3. Text that has embedded JSON.
  4. Responses with no JSON at all.

To handle these varying structures, we'll implement a heuristic to extract the JSON. First, we'll try to interpret the entire response string as JSON. If that's unsuccessful, we'll search for content within Markdown code blocks and attempt to extract the JSON from there.

```js
const WHOS_HIRING_POST_ID = 36956867;

async function fetchHackerNewsItem(id) {
  // ...
}

async function createCompletion(jobPosting) {
  // ...
}

function extractJSON(completion) {
  try {
    // First, attempt to interpret the completion text as pure JSON
    const json = JSON.parse(completion);
    return json;
  } catch (e) {}

  // If the above fails, try to find a Markdown code block and interpret its contents as JSON
  const regex = /```(?:json)?\s*([\s\S]*?)\s*```/g;
  const match = regex.exec(completion);

  if (match && match[1]) {
    try {
      return JSON.parse(match[1].trim());
    } catch (e) {}
  }

  console.warn("No valid JSON found in completion");
  return null;
}

async function generateJobPostingJSON() {
  const whosHiringPost = await fetchHackerNewsItem(WHOS_HIRING_POST_ID);

  for (const kid of whosHiringPost.kids) {
    // ...
    const json = extractJSON(completion);
  }
}

generateJobPostingJSON();
```
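To sanity-check the heuristic, we can run it against each of the four response shapes listed above. The sample completions below are invented for illustration:

```js
// The extraction heuristic from above (minus logging), repeated here so the
// demo is self-contained.
function extractJSON(completion) {
  try {
    return JSON.parse(completion);
  } catch (e) {}

  const regex = /```(?:json)?\s*([\s\S]*?)\s*```/g;
  const match = regex.exec(completion);
  if (match && match[1]) {
    try {
      return JSON.parse(match[1].trim());
    } catch (e) {}
  }
  return null;
}

// 1. A response that only includes JSON
const pure = extractJSON('{"companyName":"Acme"}');
// 2. JSON wrapped in a ```json code block
const fenced = extractJSON('Sure!\n```json\n{"companyName":"Acme"}\n```');
// 3. Text with embedded JSON in a plain ``` block
const embedded = extractJSON('Here you go:\n```\n{"companyName":"Acme"}\n```');
// 4. No JSON at all
const none = extractJSON("Sorry, I could not find a job posting.");
// pure, fenced, and embedded all yield the parsed object; none yields null
```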

4. Save The Output

For this demo, we’ll save the results to an output.json file. In a more advanced application, one would typically store this data in a database for future queries and analysis.

```js
const fs = require("fs");

const WHOS_HIRING_POST_ID = 36956867;

async function fetchHackerNewsItem(id) {
  // ...
}

async function createCompletion(jobPosting) {
  // ...
}

function extractJSON(completion) {
  // ...
}

async function generateJobPostingJSON() {
  const whosHiringPost = await fetchHackerNewsItem(WHOS_HIRING_POST_ID);

  const results = [];
  for (const kid of whosHiringPost.kids) {
    // ...
    const json = extractJSON(completion);
    if (json) {
      results.push(json);
    }
  }

  // Save the gathered job postings to a local JSON file
  fs.writeFileSync("./output.json", JSON.stringify(results, null, 2));
}

generateJobPostingJSON();
```

Your basic application is now ready to roll! Run it with node app.js in your terminal, then review the experiment dashboard to see how our prompt configurations are performing.

Observe and Tweak

The effectiveness of AI experiments, especially with models like GPT, often hinges on continuous observation and iteration. By studying the generated outputs and evaluating them against your desired outcomes, you can refine the process to get even better results.

Within the experiment dashboard, you’ll find a table displaying a row for each GPT inference call. By clicking on a row, you can inspect the input and output of that specific call. This enables you to assess how the prompt configuration is performing.

From these observations, you can gather several insights:

  1. Configuration Effectiveness:
    • Which configuration most consistently produces outputs that adhere to our desired schema?
  2. Data Extraction Accuracy:
    • Which setup more frequently extracts the correct information? For instance, does it reliably identify the elements that should populate the tech array?
  3. Model Cost vs. Value:
    • Does opting for a pricier model justify its enhanced reasoning capabilities in the context of our specific use case?

Based on these insights, you can fine-tune the ongoing experiment. This might involve allocating more weight to the more effective configuration or revising the prompt text based on the patterns observed. Remember, the key is to maintain a cycle of observation, tweaking, and re-observation to continually hone the performance of your application.