
From Code to Logic - The Paradigm Shift in AI Software Development

Sara Fatih · 4 min read

With the latest generation of large language models (LLMs), artificial intelligence (AI) applications are becoming increasingly important for businesses, large and small. AI can be used to process large amounts of data quickly and accurately, automate tedious tasks, and open up new opportunities in many industries. But because this way of building is still relatively new, teams are still working out how to organize themselves around AI applications. In this article, I will discuss the differences between traditional software development and AI development, why domain experts are better placed to judge the quality of LLM outputs, and the importance of collaboration between domain experts and software developers in creating high-quality AI solutions.

AI software development is very different from traditional software development

AI software development presents a unique set of challenges that are distinct from traditional software development. Building an AI application is about guiding the AI to perform different tasks, effectively connecting the dots in a way that the AI cannot do on its own. Figuring out the clearest and most effective prompts is essential when building on top of LLMs. The fundamentals of algorithms remain the same as in traditional software, with conditionals, loops, and logical sequencing still being necessary, but the non-deterministic nature of LLMs makes it difficult to predict the output of LLM applications. Consequently, AI development requires much more testing than traditional software development, because there is no single source of truth for the outputs of AI models. In traditional software development, the behavior of the application can be tested to a very high degree, which is not the case with LLM applications. This calls for a different mindset in AI development, as the goal is to create systems that can learn and improve over time.
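To make the testing point concrete, here is a minimal sketch in Python. The `answer_support_ticket` function is a hypothetical stand-in for whatever code calls the LLM; the point is that the tests assert properties of the output and repeat the same prompt several times, rather than comparing against a single expected string:

```python
def answer_support_ticket(ticket_text: str) -> str:
    """Hypothetical placeholder for a function that prompts an LLM and returns its reply."""
    raise NotImplementedError  # plug in your actual LLM call here


def test_refund_reply_stays_on_topic():
    reply = answer_support_ticket("I was charged twice, can I get a refund?")
    # Property checks instead of exact-match checks:
    assert "refund" in reply.lower()         # stays on topic
    assert len(reply) < 1200                 # respects length constraints
    assert "guarantee" not in reply.lower()  # avoids promises we cannot make


def test_reply_holds_up_across_runs():
    # Because the model is non-deterministic, one passing sample is not enough.
    # Run the same prompt several times and require every run to pass.
    for _ in range(5):
        reply = answer_support_ticket("I was charged twice, can I get a refund?")
        assert "refund" in reply.lower()
```

Even a small suite like this looks different from a traditional unit test: it describes acceptable behavior rather than a single correct answer.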

In a sense, LLMs have opened the door to a new programming language that requires no syntax and is based entirely on logic. It’s almost pure algorithmic logic, guided by the specific domain the tasks are about. This is powerful! This paradigm shift means that anyone can “code” now, but it’s a different way of coding, one where domain expertise is the most essential part. That’s why the entire team, including domain experts and software engineers, will need to collaborate when building, testing, and maintaining AI applications.
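As a rough illustration of what this kind of “coding in logic” can look like, here is a hypothetical prompt for a customer-support task. The numbered rules are the program: conditionals and sequencing expressed in plain language that a domain expert can read, critique, and rewrite without touching application code:

```python
# Hypothetical prompt template: the "program" is the numbered logic in plain English.
SUPPORT_PROMPT = """
You are a support agent for an online bookshop.

1. If the customer asks about a late order, apologise once, then share the
   tracking link from the order details below.
2. If the customer asks for a refund and the order is less than 30 days old,
   explain the refund steps; otherwise, offer store credit instead.
3. Never promise a delivery date that is not in the order details.

Order details:
{order_details}

Customer message:
{customer_message}
"""

prompt = SUPPORT_PROMPT.format(
    order_details="Order #1042, shipped 3 days ago, tracking link available",
    customer_message="My book still hasn't arrived. Can I get my money back?",
)
```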

Domain experts are better placed than software engineers to judge the quality of LLM outputs

Software developers may not always have the skills and knowledge to accurately evaluate an LLM’s output, because they may not be familiar with the specific domain or industry the LLM is being used for. Domain experts, in contrast, know the ins and outs of that industry and understand its language nuances. For example, consider a financial advice chatbot that uses an LLM to respond to customer queries. A domain expert in the financial industry would be able to identify whether the LLM is producing accurate and relevant financial advice. A software developer without this background may not be able to evaluate that output effectively.

Language nuances are at the core of interacting with LLMs

This was a pivotal realization for me as I spoke with different teams working with LLMs. Every word and punctuation mark you use when writing a prompt can make a big difference to the output of the LLM. Elizabeth, a fiction writer well versed in prompt engineering, has been exploring different prompts for fiction writing with AI. She runs Future Fiction Academy, an online community for fiction writers who want to use AI in their writing. If I hadn’t met her, it would have taken me a lot longer to realize that software engineers shouldn’t be the ones writing prompts. I am a software engineer myself, and as I watched Elizabeth craft her prompts, I couldn’t believe how many language subtleties I had been taking for granted when writing my own.

Team dynamics and the tooling around building with LLMs need to bridge the gap between domain experts and software engineers

If your team is building software and using LLMs for domain-specific tasks, the whole team should be involved in the development. Domain experts should be able to prototype and test LLM-powered product features without relying on engineers, and the tooling around that needs to reflect this. In the next blog posts, I will write about what such dynamics and tools could look like.
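As a very rough sketch of one possible direction (the file layout and the `load_prompt` helper here are hypothetical), prompts can live in plain-text files that domain experts own and edit, while the application code only loads and fills them:

```python
from pathlib import Path

# Hypothetical layout: the prompts/ directory is owned and edited by domain
# experts; application code only loads a template and substitutes variables.
PROMPT_DIR = Path("prompts")


def load_prompt(name: str, **variables: str) -> str:
    """Read a plain-text prompt template and fill in the given variables."""
    template = (PROMPT_DIR / f"{name}.txt").read_text(encoding="utf-8")
    return template.format(**variables)


# e.g. prompts/financial_advice.txt is written and refined by a financial
# expert; engineers never need to edit it to ship a wording change.
prompt = load_prompt(
    "financial_advice",
    customer_question="Should I pay off my mortgage early?",
)
```

With a split like this, a wording change suggested by a domain expert is a text edit, not a code deployment.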