
5 posts tagged with "features"


· 3 min read
Aurélien Franky

Today is a very special day for me. I've decided to leave my full-time job at Klarna and dedicate myself to building Prompt Studio, and hopefully to become part of something we are all witnessing at the moment: this amazing push towards AI becoming a greater part of our daily lives. Until now I could only invest a few hours of my free time every day, a compromise that left me daydreaming about what Prompt Studio could be and unhappy with the progress I was making. Today is my first day working full-time on Prompt Studio, and I know it is the start of an exciting journey. So what do we want Prompt Studio to become?

Prompt Engineering and Reasoning Engines

Language models are obviously great at processing language, from translation to text generation. But what excites me most about them is their ability to reason about things. If you haven't seen it yet, this presentation by Andrej Karpathy provides a comprehensive overview of the topic. If we draw a parallel between a language model and the way we think, then a pretrained model is somewhat akin to System 1 thinking: fast and automatic.

With prompt engineering we add another layer: a more deliberate and targeted approach to getting the results we want. Many novel approaches are emerging, from frameworks like Tree of Thoughts to prompting languages like Guidance. We want Prompt Studio to be not only the place where you build and track text inputs for language models, but also where you can build and share the more complex processes that form System 2 thinking on top of them.

More than a Collection of Libraries

Prompt engineering will become a lot more mainstream, and people working with prompts will come from a variety of backgrounds. Given the current shortage of developers and the continuous need for software, we think most prompt engineers will come from other domains. This is why we want Prompt Studio to bridge the gap between a tool that is only useful to software engineers and one that can be used by everyone. Our main focus needs to be usability and collaboration features.

Becoming Open Core

We cannot keep up with the enormous strides AI development has made in the past months on our own. This is why we want to focus our efforts on the aspects of prompt engineering where our expertise matters most. We want to provide the collaboration and real-time editing layer that large organizations need, while offering an open-source version of Prompt Studio that anyone can adapt and modify for their own needs. This way the editor will always be free and open source, with a layer of additional functionality for paying customers that allows us to dedicate our time to making Prompt Studio better.

Thank you for your support and stay tuned for more updates!

· 2 min read
Aurélien Franky

Until now, OpenAI was the only language model provider you could connect to from Prompt Studio. Over the past week we have been working to decouple ourselves from the OpenAI API and let you connect to other providers as well. Here is a list of providers we plan to integrate soon:

Let us know what providers you are interested in connecting with here!

Custom APIs

Being LLM agnostic means you can now use the new Custom API model provider to connect to a language model you host yourself behind your own API. This requires a bit of configuration, but it is the most flexible setup, and you can already use it while we add the integrations listed above. The image below shows the setup for a custom GPT-2 model deployed on Huggingface.

huggingface setup

Above is a deployed GPT-2 API on Huggingface Inference Endpoints. Below is the corresponding configuration for the Custom API Model Provider.
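For illustration, here is roughly what a request from such a custom provider configuration looks like on the wire. The endpoint URL and token below are placeholder assumptions (substitute your own deployment's values); the payload follows the standard Inference Endpoints text-generation format:

```python
import json
import urllib.request

# Placeholder values -- use your own deployed endpoint URL and access token.
ENDPOINT_URL = "https://my-gpt2-endpoint.endpoints.huggingface.cloud"
API_TOKEN = "hf_xxx"

def build_request(prompt: str) -> urllib.request.Request:
    """Build the HTTP request a custom API provider would send."""
    body = json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 50}})
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Once upon a time")
# Sending it is one call: urllib.request.urlopen(req).read()
```

The Custom API Model Provider form essentially captures the same pieces: the URL, the auth header, and the shape of the request body.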


You can now select the custom provider when making your requests:

using custom provider

Prompt Studio and Your Local Language Models

One feature we are looking forward to is a desktop version of Prompt Studio that would make it easy to connect to a language model running on your machine. With the work we have done on custom LLM providers, we are one step closer to this.

Trying out Prompt Studio

You want to try out Prompt Studio, but you don’t want to add your own OpenAI key? No problem! You now get 10 free prompts per day, without needing to provide your own OpenAI API key. Simply select “Prompt Studio” as your LLM provider in the editor to use your free prompts against OpenAI.

What’s next?

Next on our roadmap are collaborative features. We will start work on adding workspaces, sharing prompts, and teams to Prompt Studio. Stay tuned!

· 2 min read
Aurélien Franky

This week we made working with files a lot easier by introducing inline assets. You no longer need to create separate file assets and pass them to your prompts or knowledge bases. You can now drop the file directly onto your prompt variable or onto your knowledge base.

Stay tuned for next week when we bring a new custom API provider for you to connect to your own language models.

Inline Assets

Inline assets are just like normal assets, with the difference that they live only within the scope of another asset. This makes it a lot easier to keep your workspace decluttered. If you are only going to use a file in a single knowledge base, there is no point in it living in your workspace; simply create the inline file asset by dropping the file onto the knowledge base. If you no longer need the knowledge base, all inline assets created as part of it will be deleted along with it. If you later want to use the file elsewhere, you can always convert it into a primary asset.

Adding files

To add a file to a knowledge base in Prompt Studio, you can now simply drag and drop it onto the knowledge base:

drop file on kb

You can do the same in a prompt. Keep in mind that the file size might push your prompt above the limit for your selected model.

drop file on prompt

UX improvements

This week we further improved how prompt versions are presented in Prompt Studio. The new setup separates the template view from the prompts and completions, so that you can view all of them at the same time. Figuring out the most convenient layout for this new type of development experience is still a work in progress; let us know what you think of our most recent changes!

prompt version list

· 2 min read
Aurélien Franky

Our focus this week was adding knowledge bases to Prompt Studio, allowing you to circumvent limitations with prompt lengths, and test interactions with your own data.

Knowledge Bases

When generating content through prompts, you sometimes want to make information available to the language model that it was not trained on. This could be because the data was not available at the time of training, or because the data you want it to use is very specific.

To get the results you want, you need to pass that information as part of the prompt you send to the language model. With that context the model will be more accurate in its responses.

However, language models limit the number of tokens you can provide as part of a prompt. This means that if you have a lot of information you want to include, you need a way to decide which parts are most relevant in a given situation. There are many ways to do that; a very popular approach is vector similarity search, where the parts of the text most similar to the user query are passed as context.

You can now set up a knowledge base to do just that in Prompt Studio.

Adding files to a knowledge base

To build a knowledge base in Prompt Studio, create a new asset of type "knowledge base" and link the files you want to be part of it. Then click "generate knowledge base".
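The post doesn't spell out what "generate knowledge base" does under the hood, but a typical first step is splitting each file into overlapping chunks that can later be embedded and searched. A minimal sketch (the window size and overlap values are illustrative assumptions):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into overlapping character windows.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and stored, so that a query can be matched against chunks rather than whole files.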

generate knowledge base

Chat Context

We added the concept of chat contexts to Prompt Studio. You can use your knowledge base in a chat by selecting it under "chat context". This allows you to ask questions in the chat about the files in your knowledge base.

chat context

Let us know how you use knowledge bases and what features you would like to see added next.

UX improvements

This week we also improved the usability of prompt versions. You can now easily revert to a previous version of your prompt template, and preview the number of tokens each prompt will use.

prompt versions

· 3 min read
Aurélien Franky

Over the past month we have improved the developer experience of working with prompt templates. The latest version of Prompt Studio allows you to include external text, like lists and files, in your prompts. We have also been busy preparing a new feature that lets you create knowledge bases from your files and integrate them into your prompts and chats. Come back next week to learn more!

Here is what we have been working on over the past weeks:

Prompt Versions and Completions

We have reworked how we present prompt completions: every time you create a completion with Prompt Studio, that version of your prompt is saved. All completions made with the same prompt but with different models or settings (temperature, top P, or maximum length) are grouped together, and you can now compare prompt completions side by side. This makes it really easy to see how different language models perform, or how parameters like temperature affect your results.


A draft is a prompt created from your prompt template that you have not saved yet. In the example below, your prompt template results in two different drafts because the option "generate prompt for each item" is selected for the "storyTone" variable. When you click "Run Prompts", all your drafts run at once, so you can easily compare results and revert to a previous version of your prompt later.
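The "generate prompt for each item" behavior amounts to expanding a template over list-valued variables, producing one draft per combination. A small sketch of that idea (the template syntax and function names are our illustration, not Prompt Studio's actual internals):

```python
from itertools import product

def expand_drafts(template: str, variables: dict[str, list[str]]) -> list[str]:
    """Produce one draft per combination of list-valued variable items."""
    names = list(variables)
    drafts = []
    for combo in product(*(variables[n] for n in names)):
        draft = template
        for name, value in zip(names, combo):
            draft = draft.replace("{" + name + "}", value)
        drafts.append(draft)
    return drafts

drafts = expand_drafts(
    "Write a {storyTone} story about {topic}.",
    {"storyTone": ["whimsical", "dark"], "topic": ["a lighthouse"]},
)
# Two items for "storyTone" and one for "topic" -> two drafts.
```

Running all drafts at once then gives you a side-by-side view of how each variable combination performs.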


Preview a Prompt

You can preview the content of your prompt by clicking on a draft item in the prompt version list. All variables are inserted into the prompt, so you can make sure everything is configured correctly before running it.


Comparing completions

Once you run your prompts, completions will be generated and you can now compare them with each other:


With that foundation in place, here are features we will be adding soon:

  • Save a specific version of a prompt template
  • Revert to a version of your prompt template
  • Delete Prompt versions and completions you don't need
  • Run individual prompts instead of all draft prompts

Working with Files

To test your prompts you need to try them out on different sets of data; the same prompt might perform very differently given different user inputs. This is why we added a new feature that lets you create files and link them into your prompts. This way you can check whether your prompt performs as expected in different scenarios. You can also preview the number of tokens a file will use.
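Exact token counts depend on the model's tokenizer, but a rough estimate is enough to see why large files matter for prompt limits. A minimal sketch using the common rule of thumb of roughly four characters per token for English text (the helper name is ours; use a real tokenizer such as tiktoken for exact counts):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic for English text with GPT-style BPE tokenizers:
    # about one token per ~4 characters. Not exact -- a real tokenizer
    # is needed for precise counts.
    return len(text) // 4
```

For example, a 20 KB text file comes out to roughly 5,000 tokens, which already exceeds the context window of some models once the rest of the prompt is added.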


Currently you need to copy-paste file content into Prompt Studio. We know this is not ideal, and we are working on improving it. Down the line you will be able to:

  • Drag and drop files into the editor
  • Tokenize PDFs and other documents directly in Prompt Studio


Chat

Interacting in a chat format allows you to build up a question or request across several prompts. This is incredibly useful for more complex tasks. Our current chat interface is a starting point for many new features we intend to add in the coming weeks.


There are many things we want to add to this early preview, but we need your help to understand which features you need most and what we should work on first. Reach out to us on GitHub, or get in touch on Discord.