Creating and Running a Prompt

For this example, we will be creating a template (prompt) on the Prompt Workshop and walking it through the various stages of the evaluation lifecycle.

Check out the Getting Set Up page to set up access keys before running your first prompt!

  1. Upon logging into the Evaluable AI portal, create a template on the Prompt Workshop. Click the pencil icon to edit the template's name so that it briefly describes the intent of this prompt.

  2. Next, you can add tag(s). Each tag functions as an identifier, enabling users to group prompt runs and responses for easier access and analysis. You can either choose a preexisting tag from the dropdown list or begin typing to create a new one. If you choose not to add tags in this step, you will be able to do so later on the Prompt Runs page after running your prompt. Additionally, when you choose the model(s) you want to run your prompt against (next step), these tags are automatically attached to the resulting prompt runs.

  3. Choose the model(s) you want to run your prompt against from the dropdown menu. You can then expand the "Model Configuration" section to modify the temperature, frequency penalty, logprobs, max tokens, n-value, presence penalty, and Top P. For this example, we will be keeping all the default values as is.
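
These settings correspond to the standard sampling parameters exposed by most LLM APIs. As a rough sketch, the default configuration might resemble the Python dictionary below; the parameter names follow common API conventions, and the values shown are assumed defaults rather than Evaluable AI's documented ones.

```python
# Illustrative model configuration. Names follow common LLM API
# conventions; the values are assumed defaults, not the platform's.
model_config = {
    "temperature": 1.0,        # randomness of token sampling
    "frequency_penalty": 0.0,  # penalize tokens by how often they appear
    "logprobs": False,         # whether to return token log-probabilities
    "max_tokens": 256,         # upper bound on response length
    "n": 1,                    # number of completions generated per prompt
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "top_p": 1.0,              # nucleus-sampling probability cutoff
}
```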

  4. Enter the prompt (template). Think of this as a sentence where you can swap out certain words. In this example, we will be asking the model "Who is the CEO of {{company}}?", where the variable company is the word being swapped out. At this point, the page should look similar to the screenshot below.
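
Conceptually, this templating is plain string substitution: each {{variable}} placeholder is replaced with a concrete value at run time. A minimal Python sketch of the idea (render_prompt is a hypothetical helper for illustration, not part of the Evaluable AI platform):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda match: str(variables[match.group(1)]),
        template,
    )

print(render_prompt("Who is the CEO of {{company}}?", {"company": "Apple"}))
# Who is the CEO of Apple?
```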

  5. You can then add context and metadata if you so choose.

    1. Context: Information provided within the prompt that helps the model understand what is expected in its response. This text guides the model's focus and helps determine the relevance of the generated response.

    2. Metadata: Supplementary details about a prompt that users can provide if they consider them useful for evaluating responses. This metadata, structured as nested JSON, does not influence the inference model's processing or its responses; it is used solely during evaluation to enhance the assessment process. A sketch of what such metadata might look like follows this list.

    For this example, we will be leaving these two fields blank.
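
Because the metadata is free-form nested JSON, it can hold whatever evaluation hints you find useful. A minimal sketch of what such an object might look like, expressed here as a Python dictionary (the keys are purely illustrative, not a schema the platform requires):

```python
# Illustrative metadata. The structure is free-form nested JSON; it is
# consulted only during evaluation and never sent to the inference model.
metadata = {
    "source": "quickstart-example",
    "evaluation": {
        "expected_format": "short answer",
        "notes": "response should name a single person",
    },
}
```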

  6. Lastly, you can use the Add Data functionality to provide values for the variables in your prompt. The Evaluable AI platform automatically generates the Add Data section based on the number of variables in the prompt. There is no limit to how many rows you can add; just press the "Add row" button at the bottom. Pressing "Delete row" deletes the last added row. For this example, we will be adding two data points: two companies and their expected outputs based on the prompt.
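
For a prompt with the single variable {{company}}, each row pairs a value for that variable with the output we expect the model to return. The two data points used in this example might look like the following; the key names are illustrative, not the platform's exact field names.

```python
# Example data rows: one value for the {{company}} variable per row,
# plus the answer we expect the model to produce.
rows = [
    {"company": "Apple", "expected_output": "Tim Cook"},
    {"company": "Microsoft", "expected_output": "Satya Nadella"},
]
```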

  7. After inputting the information you want for this template, click "Run Prompt" at the top right corner of the page. The results of this run will open in a window on the right side of the same page, as shown below.

Next, we will open up the Prompt Runs page to view these results in more detail and prepare to run our first evaluation!
