✏️Prompt Workshop

This page is your workspace for crafting prompts and testing them across models. Use it to kick-start your evaluation workflow by creating prompts directly on our platform.


  • Prompt Templates: Easily define a template once and run it across multiple models to see how each responds.

  • Model Settings: Select from a range of settings to fine-tune your experiments:

    • temperature

    • frequency penalty

    • logprobs

    • max tokens

    • n (number of completions)

    • presence penalty

    • top P

  • Access to Leading Models: Use popular models like GPT, Mistral, and Gemini directly on our platform.

  • Upload Your Model: You can also upload and test your own model. This option is currently in beta — please reach out to us at contactus@evaluable.ai for access.

  • Contextual Responses: Add context to your prompts for more relevant answers, enhancing the realism and utility of the responses.

    • Context: Information provided within the prompt that helps the model understand what is expected in its response. This text guides the model's focus and helps determine the relevance of the generated response.

    • Metadata: Supplementary details about a prompt that you can attach when they matter for evaluating responses. Metadata is structured as nested JSON and does not influence the model's inference or its responses; it is used only during evaluation, to enrich the assessment process.

  • Variable Integration: Incorporate variables into your prompts and context, allowing for extensive testing with a single template setup.
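The model settings listed above correspond to common sampling parameters. As an illustrative sketch only (field names follow the widely used OpenAI-style convention; the exact names and ranges on our platform may differ), they could be collected into a request payload like this:

```python
# Hypothetical sketch: the sampling settings above as an OpenAI-style payload.
# Field names and documented ranges are assumptions, not the platform's API.

def build_settings(temperature=1.0, frequency_penalty=0.0, logprobs=False,
                   max_tokens=256, n=1, presence_penalty=0.0, top_p=1.0):
    """Collect sampling settings, validating typical documented ranges."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be in [0, 2]")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be in [0, 1]")
    return {
        "temperature": temperature,              # randomness of sampling
        "frequency_penalty": frequency_penalty,  # discourages verbatim repetition
        "logprobs": logprobs,                    # return token log-probabilities
        "max_tokens": max_tokens,                # cap on generated tokens
        "n": n,                                  # number of completions to sample
        "presence_penalty": presence_penalty,    # encourages new topics
        "top_p": top_p,                          # nucleus-sampling cutoff
    }

settings = build_settings(temperature=0.2, n=3)
```

Lower temperatures make outputs more deterministic, which is often what you want when comparing models head-to-head on the same prompt.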
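Because metadata is nested JSON that travels with the prompt but never reaches the model, a prompt record might look like the following sketch (the keys `experiment` and `expected` are hypothetical examples, not required fields):

```python
import json

# Illustrative only: metadata is nested JSON attached to a prompt. It is
# ignored at inference time and consulted only when responses are evaluated.
metadata = {
    "experiment": {"name": "support-bot-v2", "owner": "qa-team"},
    "expected": {"tone": "formal", "max_words": 120},
}

prompt_record = {
    "prompt": "Summarize the ticket for the on-call engineer.",
    "metadata": metadata,  # not sent to the model; used during evaluation
}

payload = json.dumps(prompt_record)
```

Keeping evaluation criteria (like expected tone or length) in metadata rather than in the prompt itself keeps the model's input clean while still giving evaluators structured ground truth to score against.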
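Variable integration can be sketched in plain Python: one template with placeholders, rendered against several variable sets so a single setup covers many test cases. This assumes `{placeholder}` syntax for illustration; the template syntax on the platform may differ.

```python
# One template, many test cases: each dict of variables fills the same
# placeholders, producing one concrete prompt per case.
template = "You are a {role}. Context: {context}\nQuestion: {question}"

cases = [
    {"role": "travel agent", "context": "flights from NYC",
     "question": "What is the cheapest route to Lisbon?"},
    {"role": "chef", "context": "a vegan pantry",
     "question": "What is a quick dinner idea?"},
]

rendered = [template.format(**case) for case in cases]
```

Each rendered string can then be run across every selected model, so adding a test case is just adding a dict of variables rather than writing a new prompt.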

Creating a Prompt

See Creating and Running a Prompt for an in-depth walkthrough of how to run your first prompt.
