> If you have a really great prompt containing lots of careful instructions, and provide all the data needed to perform that task, plus lots of chances to reflect and try again, with as much regular code as possible, LLMs are going to be able to perform that task. If you have a whole lot of these collections of prompts and data and code you can create an agent that can perform *lots* of tasks. At that point, it's tempting to look at somebody's whole job and say "this job is really just these 20 tasks, I have created an agent that can do all of these tasks, therefore I can replace this person". It's tempting but I have never, ever seen it work.
>
> Jobs are more than collections of tasks. Jobs require prioritization, judgement of exceptional situations, the ability to communicate ad-hoc with other sources of information like colleagues or regulations, the ability to react to entirely unforeseen circumstances, and a whole lot of experience. As I said, LLMs can deal with a certain amount of ambiguity and complexity, but the less the better. Giving them a whole, human-sized job is way more ambiguity and complexity than they can handle.

Source: https://seldo.com/posts/what-ive-learned-about-writing-ai-apps-so-far
This is exactly what I was inelegantly trying to express here.
The rest of Laurie's piece is excellent.
Hat tip to Simon Willison for the pointer.