
You Don’t Need AI Agents!

So you’ve heard all the hype about AI agents and now you think you’re behind the curve, right? 

Well, yes and no… 

Yes, because you should understand how they work if you want to stay competitive. 

No, because you don’t need to use AI agents for 99% of your problems. 

Let’s talk about it →

Defining Terms

There’s a clear distinction between AI workflows and AI agents, according to Anthropic:

“AI workflows are systems where LLMs and tools are orchestrated through predefined code paths.” 

“AI agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.” 

In simpler terms, LLMs are responsible for decision-making in agentic systems, but not in workflows. 
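To make that distinction concrete, here’s a minimal Python sketch of both styles. The `call_llm` function is a hypothetical placeholder (a stub that just echoes its prompt), standing in for whatever chat-completion API you’d actually use.

```python
# Hypothetical stand-in for a real LLM API call; it just echoes for illustration.
def call_llm(prompt: str) -> str:
    return f"response to: {prompt}"

# AI workflow: the code decides the steps; the LLM only fills them in.
def workflow(topic: str) -> str:
    outline = call_llm(f"Outline a post about {topic}")
    draft = call_llm(f"Write a draft from this outline: {outline}")
    return draft

# AI agent: the LLM decides what to do next, in a loop, until it says it's done.
def agent(goal: str, max_steps: int = 5) -> str:
    history = [goal]
    for _ in range(max_steps):
        action = call_llm(f"Given {history}, pick the next action or reply DONE")
        if action.strip() == "DONE":
            break
        history.append(action)
    return history[-1]
```

Notice that in the workflow, every step is fixed in code; in the agent, the number and order of steps depend entirely on what the model returns.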

So with those definitions, let’s look at why you actually don’t want to use AI agents in most situations… 

Problems With AI Agents

This is by no means an exhaustive list, but these are probably the most important issues you should be aware of: 

Reliability

Ask yourself: Would you rather have a product that works extraordinarily well only 10% of the time, or a product that gets the job done 90% of the time? 

Probably the latter, right?

That’s the advantage AI workflows have over AI agents. 

It’s extremely difficult to get reliable results from an AI agent because you’re handing over decision-making to an LLM. 

In other words, you may not know how it’s going to carry out the task you’ve given it, how long it’s going to take, whether it’s even able to complete the task, etc. 

There’s a lot more uncertainty with AI agents because by definition, you only use agents when you can’t predict all possible ways for carrying out a particular task. 

Agents make the most sense when organizations are tackling large problems that demand flexible, scalable solutions; that’s when the investment pays off. 

But for simpler problems, they’re currently just not worth the effort to develop. 

Cost

Most LLM providers price their models based on how many tokens you use. 

The more tokens you use, the more money your LLM application is going to cost you. 

Here’s how OpenAI defines tokens: as a rough rule of thumb, one token corresponds to about four characters of English text, or roughly three-quarters of a word. 

Source: https://platform.openai.com/tokenizer 

Keep in mind that the total number of tokens includes both the prompt you give the model, and the output that it generates. 
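For a back-of-the-envelope feel for how this adds up, here’s a small Python sketch. The per-token prices below are made-up placeholders; check your provider’s pricing page for real rates.

```python
# Assumed placeholder prices; real per-token rates vary by provider and model.
PRICE_PER_1M_INPUT = 3.00    # $ per 1M input (prompt) tokens
PRICE_PER_1M_OUTPUT = 15.00  # $ per 1M output (completion) tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one call: input and output tokens are billed at different rates."""
    return (input_tokens / 1_000_000) * PRICE_PER_1M_INPUT + \
           (output_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT

# A predictable workflow call: ~500 prompt tokens, ~800 output tokens.
print(round(estimate_cost(500, 800), 4))         # 0.0135

# An agent looping with a growing context can easily consume far more:
print(round(estimate_cost(200_000, 50_000), 2))  # 1.35
```

The point of the second line: an agent that iterates dozens of times, re-sending its growing history each step, can cost 100x a single workflow call, and you can’t know that number in advance.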

So what does this mean for agentic applications? 

Since it’s very difficult to predict how long it will take an AI agent to carry out a particular task, it’s also hard to predict how many tokens the agent will use to complete that task. 

As a result, you likely won’t have fixed costs when using AI agents… 

And unless those costs are tied to performance, which isn’t the case here, you probably aren’t a fan of that 🙂

For example, it’s entirely possible that an agent takes several hours to work on a task you give it, only to leave it incomplete. 

Even though it didn’t do its job, you’ll still have to pay for all the tokens the agent consumed… 

Now this isn’t to say that there’s no uncertainty with how many tokens AI workflows consume – there absolutely is. 

However, it’s much better when compared to agents because designing AI workflows requires you to know what the LLM is going to output at any given stage, so it’s easier to predict costs. 

Complexity

So the word “complexity” is a little misleading here… 

Agentic patterns are actually the least “complex” because you aren’t predefining any paths (refer to Figure 3 below). 

Rather than having different paths in the workflow mapped out for every possible scenario, you just have a cycle of iteration and feedback between a human and the AI agent. 

Complexity in this context really refers to how difficult it is to extract consistent results from an agent. 

This is because you need to get a lot of things right: 

  • Data
  • Prompt 
  • RAG (retrieval augmented generation), which is like a library of extra info for LLMs 
  • Tools that the agent has access to 
  • Environment
  • And more… 
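To give one concrete flavor of that list, here’s a toy sketch of the RAG idea: retrieve relevant snippets and prepend them to the prompt. Real systems use embeddings and a vector store; plain keyword overlap keeps this illustration self-contained.

```python
# Toy document store; in practice this would be a vector database.
DOCS = [
    "Our refund window is 30 days.",
    "Support is available Monday through Friday.",
    "Shipping takes 5-7 business days.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive keyword overlap with the question (stand-in for embeddings)."""
    words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend the retrieved context so the LLM answers from your data."""
    context = "\n".join(retrieve(question, DOCS))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Getting each of these pieces right (and getting them right *together*) is where the real difficulty lives.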

Also, with this increased difficulty comes a couple of things that you as a business owner probably aren’t a fan of, such as: 

  • Higher prices from vendors who can build these applications 
  • A longer time commitment because of how much testing is required 
  • Lower chances of succeeding if you try to build it yourself (but this is still a great way to learn!) 

So those were some problems with AI agents, which don’t make them ideal for most situations. Now, let’s answer the following question: 

Which AI is the right solution?

More specifically, should you use AI workflows or AI agents? 

In short, the answer is: you should use both (kinda).

Why?

Because the level of autonomy an AI system has is a spectrum.

In other words, there are different levels to how “agentic” a system can be.

For example, consider three different AI systems: 

Figure 1: Example System 1

The first system is designed such that all decisions have already been made by the person who designed it. 

The LLM makes no decisions. It just does what it’s told. 

Figure 2: Example System 2

The second system is designed to be a little more agentic than the first. 

Here, the LLM gets to decide which path in the workflow to take given some criteria that it screens the input against. 

So it gets to make decisions within a very limited scope. 
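Here’s a minimal sketch of that second system, with a hypothetical `classify` function standing in for the real LLM call: the model’s only decision is picking a route, and the routes themselves are predefined code.

```python
def classify(ticket: str) -> str:
    # Placeholder router; in a real system this would be an LLM call that
    # must return one of the allowed labels.
    return "refund" if "refund" in ticket.lower() else "general"

def handle_refund(ticket: str) -> str:
    return "routed to refund workflow"

def handle_general(ticket: str) -> str:
    return "routed to general support workflow"

# Every possible path is predefined by the designer.
ROUTES = {"refund": handle_refund, "general": handle_general}

def system_2(ticket: str) -> str:
    label = classify(ticket)
    # Guard against the LLM inventing a label outside its allowed scope.
    handler = ROUTES.get(label, handle_general)
    return handler(ticket)
```

The guard on the last step matters: even when you let the model choose, you constrain its choices to paths you’ve already built.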

Figure 3: Example System 3

The third system is the most agentic of the three. 

The LLM is now making most of the decisions and only checking in with a human when needed. 

As you can see, it’s not always straightforward to categorize systems as purely AI workflows or AI agents – there’s a middle ground. 

With all of that being said, a great way to solve any kind of problem using AI is to start with a workflow that’s not very agentic. 

Once that workflow can consistently solve the problem, then you can start to improve the system by making it more agentic (if it’s even necessary). 

Remember, when it comes to systems, simple is always better 😉

How can you start using AI in your business this week?

A great way for you to get started is by learning about common patterns that engineers use to design AI systems. 

By understanding these patterns, you’ll start to come up with ideas for how you can solve different problems in your business. 

This will also help you determine which problems are worth the trouble, and which ones aren’t. 

Let’s walk through a simple pattern here to get you started: 

Figure 4: Prompt-Chaining Example Workflow

This pattern is called prompt-chaining.

It’s basically when you break down a larger task into smaller sub-tasks that you can complete sequentially using separate LLM calls that build on top of each other.

For example, say you’re writing a blog post. 

The first step would be gathering information, which can be accomplished using “LLM Call 1” as shown in Figure 4. 

Once you have that information, assuming the output is correct (passes “Check 1”), then you can give that information to “LLM Call 2” where you generate an outline for the post. 

At the end of the workflow shown in Figure 4, the output will be a nice outline for your blog post. 

Obviously, you can extend this to write the entire blog post. 

The next LLM call would be to write the introduction based on the outline you’ve generated, then paragraph 1, then paragraph 2, and so on… 
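As a hedged sketch of that chain (with `call_llm` as a placeholder stub, not a real API client), the Figure 4 workflow might look like this in Python:

```python
# Placeholder for any chat-completion API; it echoes so the sketch is runnable.
def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:40]}...]"

def check(output: str) -> bool:
    # Real gate checks might validate length, structure, or required keywords.
    return bool(output.strip())

def blog_outline_chain(topic: str) -> str:
    research = call_llm(f"Gather key facts about {topic}")        # LLM Call 1
    if not check(research):                                       # Check 1
        raise ValueError("research step failed the gate check")
    outline = call_llm(f"Draft a blog outline using: {research}") # LLM Call 2
    return outline
```

Extending the chain to write the full post just means adding more call-then-check steps after the outline.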

The benefit of prompt-chaining is that by giving the LLM a smaller task to complete in each call, it’s more likely to do each one well. 

The downside is that the system will take longer to run. 

So you’re trading speed for accuracy here. 

If you want to learn more about common patterns like this, consider checking out this fantastic blog post from Anthropic.

TL;DR

  • The main difference between AI workflows and AI agents is that LLMs make their own decisions in agentic systems, but not in workflows. 
  • The three major problems that AI agents currently have are: reliability, cost, and complexity. 
  • How “agentic” an AI system is can vary – it’s a spectrum. 
  • Prompt-chaining is a common pattern used by AI engineers. 
  • Prompt-chaining involves breaking a task down into smaller subtasks, and completing each subtask one at a time using separate LLM calls. 

That’s all for this week. 

See you next Saturday 🙂 

Whenever you’re ready, here’s how we can help you: 
  1. Newsletters: Our newsletters provide tactical information that innovative entrepreneurs, investors, and other forward-thinking people can use to scale their impact.
  2. Community: Coming soon! You’ll automatically be added to the waitlist by joining any of our newsletters.