How to Build Software With LLMs (Part 1)
On how LLMs have given rise to a new computational paradigm.

by Iulian Serban

A New Computational Paradigm

Traditional software development is dead. Large Language Models (LLMs) and “Copilots” are fundamentally changing how we build software. Soon software engineers will no longer “write code”. Instead, they will create software by using natural language and combining intelligent building blocks to rapidly create new systems and new experiences.

In an interview a few weeks ago, Thomas Dohmke, GitHub’s CEO, said:

“Sooner than later, 80% of the code is going to be written by Copilot.”

This is coming from the CEO of GitHub, the world’s largest software hosting platform. GitHub created Copilot, is home to almost all major open source projects, has built thousands of software engineering tools and processes, and has millions of data points from millions of developers and companies. They know software development inside out, and they know when a fundamental change in the software engineering industry is coming.

Replit is one of the fastest-growing browser-based IDEs, with 20+ million users. Like GitHub, they have also adopted LLMs and are pursuing an ultra-rapid approach to software development. They recently published a blog post about the same fundamental change, stating:

“The newest LLM chat apps can generate code for full programs with simple natural language prompts, enabling the creation of full websites with no coding experience in minutes.”

This is a really, really big deal. And no doubt it’s going to be hard for many people to grasp.

Let me walk you through it.

The “Naive View”: LLMs Are Just Another Tool in the Toolbox

One viewpoint today is that LLMs are just a new tool in the software developer’s toolbox.

This viewpoint says that LLMs are a powerful tool, which can be accessed as an API or microservice and used to solve many types of tasks. However, it maintains that LLMs are essentially just another tool, no different from other APIs and microservices.

This viewpoint in turn gives rise to a problem-solving framework centered around the API query.

It might look something like this:

Step 1. Gather data necessary for query prompt

Step 2. Construct prompt

Step 3. Query LLM API with prompt (e.g. OpenAI’s GPT-4 or Llama 2)

Step 4. Parse the results and inject them into your application

This is a deterministic, linear flow that treats the LLM as an API with a fixed set of inputs and outputs. The framework can be expanded to run multiple queries (chains), with the output of one query feeding into the next; one simply repeats the flow for every additional query.
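
To make the four steps concrete, here is a minimal sketch in Python. The query_llm helper, the support-ticket task and the JSON output format are all hypothetical stand-ins rather than any particular provider’s API; the point is only the shape of the flow, including a simple two-query chain at the end.

    import json

    def query_llm(prompt: str) -> str:
        """Placeholder for whichever LLM API you call (e.g. OpenAI's GPT-4
        or a hosted Llama 2 endpoint). Takes a prompt, returns raw text."""
        raise NotImplementedError

    def summarize_ticket(ticket_text: str) -> dict:
        # Step 1: gather the data needed for the prompt.
        data = ticket_text.strip()
        # Step 2: construct the prompt.
        prompt = (
            "Summarize the following support ticket and return JSON "
            "with the keys 'summary' and 'priority':\n\n" + data
        )
        # Step 3: query the LLM API with the prompt.
        raw_output = query_llm(prompt)
        # Step 4: parse the result and inject it into the application.
        return json.loads(raw_output)

    # A simple two-query chain: the output of one query feeds the next.
    def draft_reply(ticket_text: str) -> str:
        summary = summarize_ticket(ticket_text)
        follow_up = (
            "Write a short, polite reply to a customer whose issue is: "
            + summary["summary"]
        )
        return query_llm(follow_up)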

Although simple, it’s a powerful framework with huge potential. One could even show that, given an appropriate LLM and an arbitrarily long chain of queries, such a system is Turing complete (i.e. that it can be used to implement any computable algorithm).

Despite its simplicity and ability to tackle arbitrarily complex problems, I call this the “naive view” because it puts LLMs into a traditional API format and ignores both what’s unique about LLMs and the rest of the software development life-cycle.

This naive view is fundamentally flawed and limited, because LLMs are profoundly different from any other type of software system or tool.

LLMs Are General-Purpose Computational Units

LLMs are probabilistic machine learning models trained on vast amounts of data. They have multiple core and emergent properties that set them apart from any other type of software or cloud service:

• General-Purpose Problem Solvers: LLMs are extremely powerful, general-purpose problem solvers. When coupled with the right prompts and few-shot learning (examples of input-output pairs), they can quickly solve hard problems which a few years ago even some of the world’s most sophisticated ML/software systems couldn’t solve.

• Natural Language Prompts: LLMs can receive prompts (such as instructions, examples, etc.) in natural language, including language that is ambiguous, incomplete or even inconsistent.

• Multi-Modal Processors: LLMs can take as input and produce as output natural language text, numbers, JSON objects, CSV files, tables, images, videos, Excel sheets, source code, etc.

• Probabilistic Generators: Given the same input, LLMs can generate multiple outputs, with different probabilities associated with each one. This is fundamentally different from deterministic algorithms, which always output the same answer given the same input.

• Composable, Recursive Structure: The output from one LLM can be given as input to another LLM (or back to itself), often leading to better performance on complex tasks requiring multiple steps of reasoning. This has enabled the rise of a powerful, new problem-solving framework called Chain of Thought (CoT), which we’ll discuss later.

• Few-Shot Learners: LLMs can learn a new task from just a few examples, often reaching a high level of accuracy or performance. For example, an LLM can be “taught” to replicate any Python function given pairs of inputs and outputs.

In theory, LLMs can replicate any given function with arbitrary accuracy, including any Python function, microservice or black-box API. Therefore, we should really think of LLMs as a new type of computational unit: a general-purpose computational unit.
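
To illustrate the few-shot and probabilistic properties above, here is a hedged sketch: a prompt that “teaches” the model a simple date-reformatting function purely from input-output pairs, sampled several times to show that a probabilistic generator can return more than one candidate answer. The sample_llm helper and its temperature parameter are hypothetical stand-ins for whatever LLM API you use.

    def sample_llm(prompt: str, temperature: float = 0.8) -> str:
        """Hypothetical stand-in for an LLM API call that samples one
        completion at the given temperature."""
        raise NotImplementedError

    def build_few_shot_prompt(examples, new_input: str) -> str:
        # "Teach" the task purely from input-output pairs; the model never
        # sees the underlying function, only its behaviour.
        lines = ["Convert each input to the corresponding output."]
        for x, y in examples:
            lines.append(f"Input: {x}\nOutput: {y}")
        lines.append(f"Input: {new_input}\nOutput:")
        return "\n\n".join(lines)

    examples = [
        ("2023-07-01", "July 1, 2023"),
        ("2021-12-24", "December 24, 2021"),
        ("2019-03-09", "March 9, 2019"),
    ]
    prompt = build_few_shot_prompt(examples, "2024-05-30")

    # Because the model is a probabilistic generator, sampling the same
    # prompt several times at a non-zero temperature can yield different
    # candidate outputs, each with its own probability under the model.
    candidates = [sample_llm(prompt) for _ in range(3)]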

We’ve never had anything like this in the history of software development. We’ve had plenty of sophisticated software systems. We’ve even had software systems that can intelligently process vast amounts of structured data as input, but we’ve never had systems that can process human natural language as input, let alone solve such a wide range of problems across domains.

Unlike most software systems and services, these computational units are probabilistic by design and can be “taught” to solve new tasks through few-shot learning and fine-tuning. For the same reasons, and thanks to their ability to parse natural language prompts, they are also far better suited to integration into human-in-the-loop (HITL) systems. This means that we need to adopt new design patterns when building software.1
1. I am using the word “design pattern” broadly here to mean a general, reusable solution to a commonly occurring problem within a given context in software design.
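
As one example of what such a design pattern might look like, here is a minimal, assumed sketch of a human-in-the-loop review loop: the model drafts an answer, a reviewer either accepts it or replies with natural-language feedback, and the feedback is folded into the next prompt. The query_llm helper is again a hypothetical placeholder, not part of any specific framework.

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call."""
        raise NotImplementedError

    def human_in_the_loop(task: str, max_rounds: int = 3) -> str:
        # Generate-review-revise loop: the model drafts, a human reviews,
        # and natural-language feedback drives the next draft.
        draft = query_llm(task)
        for _ in range(max_rounds):
            print(draft)
            feedback = input("Press Enter to accept, or type feedback: ").strip()
            if not feedback:
                return draft  # the reviewer accepted the draft
            draft = query_llm(
                f"Task: {task}\n\nPrevious draft:\n{draft}\n\n"
                f"Reviewer feedback: {feedback}\n\nWrite an improved draft."
            )
        return draft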

LLMs Are A New Computational Paradigm

If LLMs are general-purpose computational units, if they are extremely powerful problem solvers, if they can replicate any existing function, if they have their own core and emergent properties, if they are probabilistic, and if they can be taught to solve new tasks with new design patterns, then we must embrace them as a fundamentally new computational paradigm.

We must think differently about how we build software.

We need a new model for software engineering.

In the next post, we’ll generalize the concept of computational units, put them into the context of a flowchart and discuss the probabilistic flow of data and human-in-the-loop systems.

----------

Korbit has been building its AI Mentor with LLMs for well over a year now. LLMs are a new computational paradigm, and the team has applied this in practice and learned a ton about how LLMs work, how to build real-world applications with them, and which architectures and design patterns make them work.
Try Korbit AI Mentor:
https://www.korbit.ai/get-started
Further Reading:
How to Build Software with LLMs (Part 2)
How to Build Software with LLMs (Part 3)