
· 3 min read
Aurélien Franky

Over the past month we have improved the developer experience when working with prompt templates. The latest version of Prompt Studio allows you to include external text into your prompts like lists and files. We have also been busy preparing a new feature allowing you to create knowledge bases from your files and integrate them into your prompts and chats. Come back next week to learn more!

Here is what we have been working on over the past weeks:

Prompt Versions and Completions

We have reworked how we present prompt completions. Every time you create a completion with Prompt Studio, that version of your prompt is saved. All completions made with the same prompt but with different models or settings (temperature, top P, or maximum length) are grouped together, and you can now compare prompt completions side by side. This makes it easy to see how different language models perform, or how parameters like temperature affect your results.
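The grouping idea can be sketched in a few lines. This is not Prompt Studio's actual data model, just a minimal illustration assuming completion records shaped like `{ promptVersion, model, temperature, text }`:

```javascript
// Group completions by the prompt version that produced them, so that
// runs with different models or settings can be compared side by side.
function groupByPromptVersion(completions) {
  const groups = new Map();
  for (const c of completions) {
    const list = groups.get(c.promptVersion) ?? [];
    list.push(c);
    groups.set(c.promptVersion, list);
  }
  return groups;
}

// Hypothetical completion records for two prompt versions.
const completions = [
  { promptVersion: "v1", model: "gpt-3.5-turbo", temperature: 0.2, text: "A" },
  { promptVersion: "v1", model: "gpt-3.5-turbo", temperature: 0.9, text: "B" },
  { promptVersion: "v2", model: "gpt-4", temperature: 0.2, text: "C" },
];

const groups = groupByPromptVersion(completions);
// groups.get("v1") now holds both v1 completions, ready for a side-by-side view.
```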


A draft is a prompt created from your prompt template that you have not saved yet. In the example below, your prompt template results in two different drafts because the option "generate prompt for each item" is selected for the "storyTone" variable. When you click Run Prompts, all your drafts are run at once, so you can easily compare results and revert to a previous version of your prompt later.
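The "generate prompt for each item" behavior amounts to expanding one template into several drafts, one per value of a list variable. Here is a minimal sketch; the `{{variable}}` syntax and all names are illustrative assumptions, not Prompt Studio's actual template format:

```javascript
// Expand a template into one draft per value of a list variable.
// fixedVars holds the variables that stay the same across all drafts.
function expandTemplate(template, listVarName, values, fixedVars = {}) {
  return values.map((value) => {
    const vars = { ...fixedVars, [listVarName]: value };
    // Substitute every {{name}} placeholder with its variable value.
    return template.replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? "");
  });
}

const template = "Write a short story in a {{storyTone}} tone about {{topic}}.";
const drafts = expandTemplate(template, "storyTone", ["dark", "whimsical"], {
  topic: "a stranded astronaut",
});
// Two drafts, differing only in the storyTone value.
```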


Preview a Prompt

You can preview the content of your prompt by clicking on the draft item in the prompt version list. All variables are inserted into the prompt, and you can make sure everything is configured correctly before running it.


Comparing completions

Once you run your prompts, completions will be generated and you can now compare them with each other:


With that foundation in place, here are features we will be adding soon:

  • Save a specific version of a prompt template
  • Revert to a version of your prompt template
  • Delete Prompt versions and completions you don't need
  • Run individual prompts instead of all draft prompts

Working with Files

To test your prompts, you need to try them out on different sets of data; the same prompt might perform very differently given different user inputs. That is why we added a new feature that lets you create files and link them into your prompts. This way you can check whether your prompt performs the way you expect in different scenarios. You can also preview the number of tokens a file contains.
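For a sense of what a token preview involves: real tokenizers are model-specific BPE encoders, but a common rule of thumb is roughly four characters per token for English text. The sketch below uses that approximation only because it needs no dependencies; it is not how Prompt Studio counts tokens:

```javascript
// Rough token-count estimate: ~4 characters per token is a common
// rule of thumb for English text. Model tokenizers will differ.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const fileContent = "The quick brown fox jumps over the lazy dog.";
console.log(estimateTokens(fileContent)); // 11 for this 44-character string
```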


Currently you need to copy and paste the file content into Prompt Studio. We know this is not ideal, and we are working on improving it. Down the line you will be able to:

  • Drag and drop files into the editor
  • Tokenize PDFs and other Documents directly in Prompt Studio


Chat

Interacting in a chat format allows you to build up a question or request across several prompts. This is incredibly useful when you have more complex tasks for a language model. Our current chat interface is a starting point for many new features we intend to add in the coming weeks.


There are many things we want to add to this early preview, but we need your help to understand which features you need the most and what we should work on first. Reach out to us on GitHub, or get in touch on Discord.

· 4 min read
Aurélien Franky


We are excited to release an early preview version of Prompt Studio! After several weeks of work, we feel that we have a solid foundation for something that will be useful to anyone building applications on top of language models. Prompt Studio is an attempt at providing prompt engineers with the kind of tools we are accustomed to in software development. How Prompt Studio will shape up will depend on you and other early adopters.

Our goal is to tailor this tool to provide the best developer experience for anyone needing to experiment, fine-tune or test prompts. Our mission is to help you find the right words to interact with your AI systems.

Let us know what features you need on GitHub, or get in touch on Discord.

Why Prompt Studio?

Last month I created a video game as part of the "All by AI" game jam competition. The game is a digital novel entirely generated by GPT-3, where each action you take changes what happens next in the story. Each scene, in turn, defines what actions are available to you; those actions are also created by GPT-3.


The challenge was finding the correct words to not only get text that makes sense but also follows a certain writing style and exhibits a certain mood.

A prompt in the game would look like this:

const prompt = `You are a narrator writing in the second person.
The story you tell is a sci-fi story set in the distant future.

Previously the following happened:
1. The protagonist's spaceship crashed on a desert planet.

Lastly, the protagonist did the following action:
The protagonist explored the surroundings.

Describe what happens next in the story, within a paragraph of 80 words or less.`;

This prompt is composed of several variable pieces that each can affect the quality of the result in different ways. For example, in the case above we need to feed everything that has happened in the story so far back to the model since it has no memory.
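Feeding the story back in can be done by rebuilding the prompt from a history array on every call. This is a minimal sketch mirroring the prompt above; the function name and history format are assumptions, not the game's actual code:

```javascript
// Rebuild the full prompt on each call, since the model has no memory:
// the running story history is numbered and spliced back into the template.
function buildPrompt(history, lastAction) {
  const events = history.map((e, i) => `${i + 1}. ${e}`).join("\n");
  return `You are a narrator writing in the second person.
The story you tell is a sci-fi story set in the distant future.

Previously the following happened:
${events}

Lastly, the protagonist did the following action:
${lastAction}

Describe what happens next in the story, within a paragraph of 80 words or less.`;
}

const prompt = buildPrompt(
  ["The protagonist's spaceship crashed on a desert planet."],
  "The protagonist explored the surroundings."
);
```

After each model response, the new scene would be appended to the history array so the next prompt carries the full story so far.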

But how do you know what would happen if you change a word or the order of a sentence? There are no good tools for this type of work as of now (I looked).

So I decided to build my own.

Working with Prompt Studio

When building more complicated applications, you quickly outgrow tools like the GPT playground. I wanted a tool that allows me to:

  1. keep track of my prompts and their completions
  2. easily compare results between prompts
  3. create large numbers of prompts from a template
  4. compare results across different language models


Over the past month I have been working on implementing these features. There is still work ahead but first I want to check what features you might need from such a tool.

Check out our documentation to learn more about the current capabilities of Prompt Studio.

Prompt Engineering is taking off

As AI becomes capable of performing various tasks at levels similar to those of humans, we will rely on it more to automate certain work. The more we rely on language models for automated tasks, the more prompts, and how we write them, matter. This is not a transitory phase while our AI models become better.

Human language is not precise, and while models might get better at discerning intent in the requests we make of them, there will always be a gap inherent to natural language. With people, we are accustomed to a certain back and forth to clear up misunderstandings, but we are much less tolerant of these inefficiencies when it comes to machines.

From this perspective, prompt engineering is the search for a way to formalize intent in natural language. I think that as AI becomes more human-like in its capabilities, making sure the results it produces are aligned with our intent will only become more important.

If you want to learn more about Prompt Engineering here are a few amazing resources:


What's next for Prompt Studio?

Our development roadmap is open and you are welcome to tell us what we should focus on next.

Check it out on GitHub.

We are looking forward to hearing from you!