
· 2 min read
Aurélien Franky

One thing you will notice quickly when working with prompts is that regardless of how well you craft them, they are by themselves no guarantee of consistent results. The language model you use, as well as the parameters you give it, all affect the end result. A prompt that works well with GPT-4 might generate very different content on another LLM like Mistral, and even the same LLM might produce different results after an update. LLM providers are constantly improving their models, but while a model might give better results on general tasks after an update, there is no guarantee that it will still perform well with your prompt.

AI Presets

Another thing you will notice is that each LLM has different parameters, and these can drastically change the tone, accuracy, and quality of the results. Even when LLMs have parameters with the same name, their effects on the result will vary. One common parameter, “temperature”, has a wide variety of ranges across models and sometimes different meanings: the same setting will generate great results with one LLM and incoherent babble in another.

All this, of course, makes it tricky to know which settings to pick for which model to achieve a specific goal. To help you manage these settings centrally within your organization, you can now group LLMs and their parameters in what we call “AI Presets”. These AI Presets can be reused across your projects. We provide several AI Presets out of the box: “Accurate”, “Creative”, “Casual”, … but you will be able to expand this list with your own creations, including connecting your own fine-tuned models.
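To make the idea concrete, you can think of an AI Preset as a named bundle of a model and its parameters. Here is a minimal sketch in TypeScript; the field names are illustrative, not the actual Prompt Studio schema:

```typescript
// A sketch of what an AI Preset bundles together.
// Field names are illustrative, not the actual Prompt Studio schema.
interface AIPreset {
  name: string;           // e.g. "Accurate", "Creative", "Casual"
  provider: string;       // which LLM provider to call
  model: string;          // model identifier at that provider
  parameters: {
    temperature?: number; // range and meaning differ per provider
    topP?: number;
    maxTokens?: number;
  };
}

const accurate: AIPreset = {
  name: "Accurate",
  provider: "openai",
  model: "gpt-4",
  parameters: { temperature: 0.2, topP: 1, maxTokens: 512 },
};
```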

What are instructions?

Another concept we added to the latest version of Prompt Studio is instructions. An instruction is a combination of a prompt and an AI Preset. This distinction helps you keep the pieces you need to generate content together, which is especially useful for versioning and for monitoring the quality of the results you get.
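Building on the hypothetical AIPreset sketch above, an instruction simply ties a prompt to a preset so the two can be versioned and monitored as one unit:

```typescript
// Sketch: an instruction pairs a prompt template with an AI Preset.
// Names are again illustrative, not the actual Prompt Studio schema.
interface Instruction {
  prompt: string;   // the prompt template text
  preset: AIPreset; // the model and parameters to run it with
}

const summarize: Instruction = {
  prompt: "Summarize the following text:\n{{text}}",
  preset: accurate,
};
```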

Next week we will talk more about how you can combine instructions into chains in the new version of Prompt Studio.

· 3 min read
Sara Fatih

In the past few months, we have been on a mission to make AI accessible to anybody regardless of their technical background. We spoke with over 40 domain experts and learned so much in the process!

So, back in August, we designed our visual editor where users could build AI workflows. It's all drag-and-drop and totally code-free. We were pretty sure that this was the ultimate way to simplify AI for everyone. Then came our closed beta launch in October, and that was just the start of a very interesting ride.

During our closed-beta phase, we teamed up with some amazing companies using AI in production. The insights provided by these domain experts were incredibly thorough, as they meticulously examined every aspect of our editor. Initially, we believed our solution had perfectly met the requirements, but it turns out we were a bit off the mark.

Roksana, our new UX designer, came on board just after we kicked off our closed beta, and that was quite pivotal for us: she's got a great eye for stuff that we engineers often overlook. She noticed that our editor, which we thought was super straightforward, was actually kind of speaking in code. It was a bit of a surprise to us. She showed us how we were actually making AI interaction more complex without realizing it. That was a real lightbulb moment for us!

We started digging deeper with our pilot customers and domain experts. Their insights were pure gold. It dawned on us that we weren’t just supposed to ditch code; we needed to rethink the whole shebang, putting the domain expert's perspective front and center.

When we set out to make things easier for domain experts, we ended up creating a visual form of coding. We slowly started to realize that the visual editor was only obvious to people who already knew programming. It hit us then – we were wrapped up in our own tech language, which wasn't really helping the real-world experts. They needed something in their own language, not the engineer's speak. We thought we'd cracked it, but really, we were just thinking like engineers again.

It was time to head back to the drawing board to create something as intuitive as a text editor. And that is exactly what we plan to release soon – a brand new way to interact with AI. Stay tuned, we are launching in just a couple of weeks 🚀🌟

· 4 min read
Sara Fatih

With the latest generation of LLMs (large language models), artificial intelligence (AI) applications are becoming increasingly important for businesses, large and small. AI can be used to process large amounts of data quickly and accurately, automate tedious tasks, and open up new opportunities in many industries. But as this is relatively new, teams are still trying to navigate different ways of working to build AI applications. In this article, I will discuss the differences between traditional software development and AI development, how domain experts are better placed to judge the quality of outputs of LLMs, and the importance of collaboration between domain experts and software developers in order to create high-quality AI solutions.

AI software development is very different from traditional software development

AI software development presents a unique set of challenges that are distinct from traditional software development. Building an AI application is about guiding the AI to perform different tasks, effectively connecting the dots in a way that the AI cannot do on its own. Figuring out the clearest and most effective prompts for LLMs is essential for building on top of them. Although the fundamentals of algorithms remain the same for LLM applications and traditional software, with conditionals, loops, and logical sequencing still being necessary, the non-deterministic nature of LLMs makes their output difficult to predict. Consequently, AI development requires much more testing than traditional software development, as there is no single source of truth for the outputs of AI models. In traditional software development, the behavior of the application can be tested to a very high degree; this is not the case with LLM applications. That calls for a different mindset in AI development, as the goal is to create systems that can learn and improve over time.
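One practical consequence: instead of asserting a single expected output, you sample the same prompt several times and measure how often the result meets your acceptance criteria. A rough sketch, where callModel stands in for whatever LLM client you actually use:

```typescript
// Sketch: LLM outputs are non-deterministic, so rather than one
// exact-match assertion we sample several completions and check
// each against a domain-specific acceptance test.
// callModel is a placeholder for your actual LLM client.
declare function callModel(prompt: string): Promise<string>;

async function evaluatePrompt(
  prompt: string,
  accepts: (output: string) => boolean,
  samples = 5,
): Promise<number> {
  let passed = 0;
  for (let i = 0; i < samples; i++) {
    const output = await callModel(prompt);
    if (accepts(output)) passed++;
  }
  return passed / samples; // a pass rate, not a binary verdict
}
```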

In a sense, LLMs have opened the door to a new programming language that does not require any syntax but is based entirely on logic. It’s almost like pure algorithmic logic guided by the specific domain that the tasks are about. This is powerful! This paradigm shift means that anyone can “code” now, but it’s a different way of coding where domain expertise is the most essential part. That’s why the entire team, including domain experts and software engineers, is going to need to collaborate when building, testing, and maintaining AI applications.

Domain experts are better placed to judge the quality of outputs of LLMs than software engineers

Software developers may not always have the necessary skills and knowledge to accurately evaluate an LLM’s output because they may not be familiar with the specific domain or industry that the LLM is being used for. In contrast, domain experts know the ins and outs of the specific industry that the LLM is being used for and they understand the language nuances of this domain. For example, consider a financial advice chatbot that uses an LLM to respond to customer queries. A domain expert in the financial industry would be able to identify whether the LLM is producing accurate and relevant financial advice. In contrast, a software developer without this background may not have the ability to effectively evaluate the output of an LLM.

Language nuances are at the core of interacting with LLMs

This was a pivotal realisation for me as I was speaking with different teams working with LLMs. Every word and punctuation mark you use when writing your prompt can make a big difference in the output of the LLM. Elizabeth, a fiction writer well versed in the field of prompt engineering, has been exploring different prompts for fiction writing with AI. She runs Future Fiction Academy, an online community for fiction writers who want to use AI in their writing. If I hadn’t met her, it would have taken me a lot longer to realize that software engineers shouldn’t be the ones writing prompts. I am a software engineer myself, and as I was watching Elizabeth craft her prompts, I couldn’t believe how many language subtleties I was taking for granted when writing my own.

Team dynamics and the tooling around building with LLMs need to address the gap between domain experts and software engineers

If your team is building software and you are using LLMs for domain-specific tasks, the whole team should be involved in the development. Domain experts should be able to prototype and test LLM-powered product features without relying on engineers. And the tooling around that needs to reflect this. In the next blog posts, I will write about what such dynamics and tools could look like.

· 3 min read
Aurélien Franky

During the 1990s, as computers reached a level of processing power capable of supporting 3D animation, several 3D modeling packages emerged, giving artists the opportunity to express themselves in this new medium.

The technology at that time had limitations; rendering complex scenes was time-consuming due to extensive computational requirements. Additionally, achieving realism and intricate detail was almost impossible, and artists wanting to enter the field encountered a steep learning curve. Nonetheless, these obstacles did not prevent the 3D modeling software of that era from setting the stage for significant progress in digital art and animation.

3D animation wasn't just another method for crafting traditional animations; it introduced an entirely new and distinct style of animation, replete with its own challenges and opportunities. Today, both approaches coexist and complement one another.

We are currently experiencing a similar revolution with text. The content we generate through LLMs has a distinct feel to it and sometimes struggles to match our expectations in terms of quality or accuracy. But it also comes with its own advantages: we can shape it using instructions and guidelines, and explore different ideas simultaneously and automatically. This procedural text invites interaction and, when provided with the right boundaries and structure, has the potential to independently develop into substantial narratives.

The big question is how we retain creative control over it, and what the tools we use to interact with and shape it will look like. So let me talk a bit about the approach we are taking at Prompt Studio.

Modifiers

A modifier is simply a tool that transforms your text without making irreversible changes. A modifier contains a set of rules, and given some text, it returns a modified version of it. A very simple modifier is a template. A template takes text and variables: given the text “hello {{name}}!” and the variable { name: “mumin” }, the template returns the text “hello mumin!”.
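A minimal template modifier could look like this sketch (not the actual Prompt Studio implementation):

```typescript
// Sketch of a template modifier: replaces {{name}} placeholders
// with matching variable values, leaving the original text untouched.
function template(text: string, variables: Record<string, string>): string {
  return text.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in variables ? variables[key] : match,
  );
}

template("hello {{name}}!", { name: "mumin" }); // => "hello mumin!"
```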

There are thousands of modifiers we can think of, from collapsing text above a certain number of tokens by summarizing it, to transforming text into a different writing style, to extending text with information from a knowledge base.

Since Prompt Studio allows you to work with text and chat, our modifiers will support both formats as well.

[Image: A stack of modifiers in Blender.]

Flows

Sometimes you need to create flows to achieve the results you are looking for. A flow is a combination of modifiers, adapters (external tools), and inputs (UI/API) that generates either text or a chat message. In a flow, you pass data from one node to another by connecting them, giving you full control over how text is generated without having to write code. Node editors like this are great for prototyping and are very common when working with procedural assets in other tools like Blender.
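If you squint, a flow is just modifiers wired together, with each node's output feeding the next node's input. A simplified sketch, not our flow engine's actual API:

```typescript
// Sketch: a flow as a pipeline of modifiers. Connecting nodes
// amounts to composing their transformations in order.
type Modifier = (text: string) => string;

function flow(...nodes: Modifier[]): Modifier {
  return (input) => nodes.reduce((text, node) => node(text), input);
}

// Two hypothetical modifiers combined into one flow.
const shout: Modifier = (t) => t.toUpperCase();
const exclaim: Modifier = (t) => t + "!";
const greet = flow(shout, exclaim);
greet("hello mumin"); // => "HELLO MUMIN!"
```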

While we continue working on collaboration features, we are also migrating our backend to work with flows. Under the hood, our prompt and chat interaction editors will be editing a flow and tracking its results. This makes our whole platform a lot more flexible and modular: adding new functionality can simply be done by creating new modifiers/adapters. While we currently don’t allow flows to be edited directly, we plan to add a node editor to Prompt Studio when we introduce our workflow features.

[Image: Shader editor in Blender, used to create materials.]

Both our flow engine and our modifiers are open source and available (as early versions) on GitHub.

Have ideas for modifiers you would like to see us add to Prompt Studio? Let us know on our Discord!

· 2 min read
Aurélien Franky

Over the past weeks, we have met with many of you and learned a lot about the problems you encounter when working with language models. All this helps guide us towards what Prompt Studio needs to become and what features we should provide to best help you in your daily tasks.

For most people, working with language models is a new thing, and a lot of the workflows that involve language models are lacking from a usability point of view. Tedious copy-pasting between different tools and manually iterating over prompts is far from what we imagine the AI-powered future to be.

LLMs are incredibly powerful tools that are now at our disposal and can help us become more efficient, but the user experience hasn't caught up with them yet.

This is why we are building Prompt Studio, a collaborative prompt engineering platform for teams that work with LLMs. We want to provide you with features that help you interact with LLMs, test prompts, and build tools to automate text generation for your specific use case.

Our Roadmap

With new feature requests appearing every day, we now need to organize our roadmap accordingly. We are making it public so you know what we are working on and when we will implement the features you need.

You can find our roadmap on GitHub. And as always, you are welcome to make feature requests and get in touch on Discord.

[Image: roadmap]

Our current tasks

These are the tasks at the top of our backlog: foundational changes that take some time to implement but are important to get right from the start.

  1. Collaborative workspaces: work with your team, share your prompts, invite team members, and manage roles. Most work nowadays is collaborative, and prompt engineering is no different.
  2. Prompt Studio on Premise: keeping your company data on your own cloud was a common request. By making Prompt Studio cloud agnostic, we are also one step closer to our vision of what an open-source version of Prompt Studio should be.

Thank you for using Prompt Studio, and stay tuned for future updates!

· 3 min read
Aurélien Franky

Managing and organizing your language model initiatives for automation can be a real challenge. If you and your colleagues are constantly exploring different approaches to generate content and automate tasks, you're likely familiar with the frustration of switching between ChatGPT and Office tools to save and share prompts. And if you want to reuse your best ideas, it often involves a time-consuming process of searching for them and copy-pasting them back into ChatGPT.

Workspaces

[Image: workspaces]

The latest version of Prompt Studio offers you a solution to effortlessly organize your initiatives into dedicated workspaces. Each workspace provides you with a convenient playground where you can access prompts, files, and chats specific to that project.

Currently, the workspaces you create are private, but we have an exciting team collaboration feature in the pipeline that will soon enable you to share your workspaces with your colleagues, enhancing collaboration and productivity.

Chats

[Image: chats]

Last week, we made improvements to the chat feature in Prompt Studio. These enhancements include several important features that are particularly useful for those who wish to explore different scenarios for chatbots.

  • System Prompt: A system prompt enables you to specify how the language model should behave regardless of the user input. By using this prompt, you can partially override the default behavior of the language model.
  • Roles: Each message in the chat can be assigned a role, such as system, user, assistant, or function. This tagging helps the language model understand how to handle each message. Currently, we follow the OpenAI format for roles (illustrated in the sketch after this list), but in the future, it may vary depending on the selected provider.
  • Completions: Every message within the chat contains a set of completions. These can be messages generated by the language model itself or provided by a user. You can easily switch between different completions to compare and evaluate them without worrying about accidentally overwriting your work.
  • Disable Messages: If you want to see how a completion would appear with or without a specific message, there's no need to delete the message entirely. Instead, you can simply disable it temporarily, allowing you to assess the impact of that message on the overall conversation.
  • Editor and Completion Modes: In completion mode, the chat operates like ChatGPT, where you make a request and the language model responds. On the other hand, in editor mode, you have the flexibility to manually add new messages and define their content. This way, you can explore and observe how your language model behaves in specific scenarios.
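For reference, here is a sketch of a chat in the OpenAI role format the editor follows. The enabled flag illustrates how message disabling could be modeled; it is not part of the OpenAI API:

```typescript
// Sketch of a chat in the OpenAI role format. The "enabled" flag
// models message disabling and is not an OpenAI API field.
type Role = "system" | "user" | "assistant" | "function";

interface ChatMessage {
  role: Role;
  content: string;
  enabled: boolean; // disabled messages are left out of the request
}

const chat: ChatMessage[] = [
  { role: "system", content: "You are a polite support agent.", enabled: true },
  { role: "user", content: "Where is my order?", enabled: true },
  { role: "assistant", content: "Let me check that for you.", enabled: false },
];
```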

Are you looking for additional features when exploring chat scenarios for your chatbots?

Reach out to us on Discord, we’re happy to help.

· 3 min read
Aurélien Franky

Today is a very special day for me. I've decided to leave my full-time job at Klarna and dedicate myself to building Prompt Studio, and hopefully become part of something we are all witnessing at the moment: this amazing push towards AI becoming a greater part of our daily lives. Until now, I could only invest a few hours of my free time every day, a compromise that left me daydreaming of what Prompt Studio could be and unhappy with the progress I was making. Today is my first day working full-time on Prompt Studio, and I know it is the start of an exciting journey. So what do we want Prompt Studio to become?

Prompt Engineering and Reasoning Engines

Language models are obviously great at processing language, from translations to text generation. But what excites me the most about them is their ability to reason about things. If you haven't seen it yet, this presentation by Andrej Karpathy provides a comprehensive overview of the topic. If we draw a parallel between a language model and the ways we think, then a pretrained model is somewhat akin to system 1 thinking: it is fast and automatic.

With Prompt Engineering we add another layer, a more deliberate and targeted approach to get the results we want. We are seeing many novel approaches emerging, from frameworks like Tree of Thought and prompting languages like Guidance. We want Prompt Studio to not only be the place where you build and track text inputs for language models but also where you can build and share these more complex processes that form system 2 thinking on top of language models.

More than a Collection of Libraries

Prompt Engineering will become a lot more mainstream and people working with prompts will come from a variety of backgrounds. With the current shortage of developers, and a continuous need for software, we think most prompt engineers will come from other domains. This is why we want Prompt Studio to bridge the gap between a tool that is only useful for software engineers and a tool that can be used by everyone. Our main focus needs to be its usability and collaboration features.

Becoming Open Core

We cannot keep up with the enormous strides in the development of AI we have seen in the past months on our own. This is why we want to focus our efforts on the aspects of prompt engineering where our expertise matters the most. We want to provide the layer for collaboration, real-time editing, and tooling that large organizations need, while providing an open-source version of Prompt Studio that can be adapted and modified by anyone for their own needs. This way, the editor will always be free and open source, with a layer of additional functionality for paying customers that will allow us to dedicate our time to making Prompt Studio better.

Thank you for your support and stay tuned for more updates!

· 2 min read
Aurélien Franky

Until now, OpenAI was the only language model provider you could connect to from Prompt Studio. We have spent the past week decoupling ourselves from the OpenAI API so you can connect to other providers as well. Here is a list of providers we plan to integrate soon:

Let us know what providers you are interested in connecting with here!

Custom APIs

Being LLM agnostic means you can now use the new Custom API Model Provider to connect to a language model you are hosting yourself behind your own API. This requires a bit of configuration, but it is the most flexible setup, and you can already use it while we add the integrations listed above. The image below shows the setup to integrate with a custom GPT-2 model deployed on Huggingface.

[Image: huggingface setup]

Above is a deployed GPT-2 API on Huggingface Inference Endpoints. Below is the corresponding configuration for the Custom API Model Provider.

[Image: Custom API Model Provider configuration]
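In broad strokes, the configuration tells Prompt Studio where your endpoint lives, how to authenticate, how to wrap the prompt into a request body, and where to find the completion in the response. The sketch below assumes the Huggingface Inference Endpoints request/response shape; the field names are illustrative, not the exact Prompt Studio form:

```typescript
// Illustrative sketch of a custom API provider configuration.
// Field names are hypothetical; the request/response shape assumes
// Huggingface Inference Endpoints for a text-generation model.
const customProvider = {
  endpoint: "https://<your-endpoint>.endpoints.huggingface.cloud",
  headers: {
    Authorization: "Bearer <your-huggingface-token>",
    "Content-Type": "application/json",
  },
  // How the prompt is wrapped into the request body...
  requestTemplate: { inputs: "{{prompt}}" },
  // ...and where the generated text lives in the response.
  responsePath: "[0].generated_text",
};
```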

You can now select the custom provider when making your requests:

[Image: using the custom provider]

Prompt Studio and your Local Language models

One feature we are looking forward to is a desktop version of Prompt Studio that would make it easy for you to connect to a language model running on your own machine. With the work we have done on custom LLM providers, we are one step closer to this.

Trying out Prompt Studio

You want to try out Prompt Studio, but you don’t want to add your own OpenAI key? No problem! You now get 10 prompts per day to try out Prompt Studio without needing to provide your own OpenAI API key. Simply select “Prompt Studio” as your LLM provider in the editor to use your free prompts towards OpenAI.

What’s next?

Next on our roadmap are collaborative features. We will start work on adding workspaces, sharing prompts, and teams to Prompt Studio. Stay tuned!

· 2 min read
Aurélien Franky

This week we made working with files a lot easier by introducing inline assets. You no longer need to create separate file assets and pass them to your prompts or knowledge bases. You can now drop the file directly onto your prompt variable or onto your knowledge base.

Stay tuned for next week when we bring a new custom API provider for you to connect to your own language models.

Inline Assets

Inline assets are just like normal assets, with the difference that they live only within the scope of another asset. This makes it a lot easier for you to keep your workspace de-cluttered. If you are only going to use a file in a single knowledge base, there is no point in it living in your workspace. Simply create the inline file asset by dropping the file onto the knowledge base. If you no longer need the knowledge base, all inline assets that were created as part of it will be deleted along with it. If you later want to use your file elsewhere, you can always convert it into a primary asset.

Adding files

To add a file to a knowledge base in Prompt Studio, you can now simply drag and drop it onto the knowledge base:

[Image: dropping a file onto a knowledge base]

You can do the same in a prompt. Keep in mind that the file size might push your prompt above the limit for your selected model.

[Image: dropping a file onto a prompt]

UX improvements

This week we further improved how prompt versions are presented in Prompt Studio. The new setup separates the template view from the prompts and completions so that you can view all of them at the same time. Figuring out the most convenient setup for this new type of development experience is still a work in progress; let us know what you think of our most recent changes!

[Image: prompt version list]

· 2 min read
Aurélien Franky

Our focus this week was adding knowledge bases to Prompt Studio, allowing you to circumvent limitations with prompt lengths, and test interactions with your own data.

Knowledge Bases

When generating content through prompts, you sometimes want to make information available to the language model that it was not trained on. This could be because the data was not available at the time of training, or because the data you want it to use is very specific.

To get the results you want, you need to pass that information as part of the prompt you send to the language model. With that context the model will be more accurate in its responses.

However, language models limit the number of tokens that can be provided as part of a prompt. This means that if you have a lot of information you want to include in the prompt, you need a way to decide which parts are the most relevant in a given situation. There are many ways to do that; a very popular approach is vector similarity search, where the parts of the text most similar to the user query are passed as context.
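Under the hood, this boils down to embedding your text chunks once, embedding the query at request time, and ranking chunks by cosine similarity. A bare-bones sketch, where embed is a placeholder for whatever embedding model you use:

```typescript
// Sketch of vector similarity search: embed the query, compare it
// against pre-computed chunk embeddings, return the best matches.
// "embed" is a placeholder for your actual embedding model.
declare function embed(text: string): Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function topK(
  query: string,
  chunks: { text: string; vector: number[] }[],
  k = 3,
): Promise<string[]> {
  const q = await embed(query);
  return chunks
    .map((c) => ({ text: c.text, score: cosine(q, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((c) => c.text);
}
```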

You can now set up a knowledge base to do just that in Prompt Studio.

Adding files to a knowledge base

To build a knowledge base in Prompt Studio, create a new asset of type "knowledge base", link the files you want to be part of the knowledge base, and click "generate knowledge base".

[Image: generate knowledge base]

Chat Context

We added the concept of chat contexts to Prompt Studio. You can use your knowledge base in a chat by selecting it under "chat context". This allows you to ask questions in the chat about the files in your knowledge base.

[Image: chat context]

Let us know how you use knowledge bases and what features you would like to see added next.

UX improvements

This week we also improved the usability of prompt versions. You can now easily revert to a previous version of your prompt template, including previewing the number of tokens each prompt will use.

[Image: prompt versions]