“Don’t Believe His Lies.”
In the film Memento, the protagonist, Leonard, suffers from anterograde amnesia – his memory resets every few minutes, and he can't form new long-term memories – so he attaches this note to a photograph of a man called Teddy to continually remind himself not to trust him.
At the end of the film, however, we learn – spoiler – that Teddy may have been telling the truth all along, and that the note was really about Leonard's desire to lie to himself to avoid difficult truths.
To use a current phrase, Leonard fed himself bad context – which ultimately led to bad decisions.
Like Leonard, AI models forget everything between interactions. Every call starts from scratch, so you have to feed them all the previous context each time they wake up – or they won't remember anything that's happened before.
But if the context you feed them is messy, irrelevant, contradictory – or just plain wrong – they'll still use it with confidence. Like Leonard, they'll work from bad context: getting distracted, drawing wrong conclusions, or – with sufficient autonomy – acting on really bad decisions.
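To make that concrete, here's a rough sketch of what "feeding them all previous context" looks like in code – assuming an OpenAI-style chat API, with an illustrative model name rather than anything from the article:

```python
# Minimal sketch: the API is stateless, so the caller must replay
# the whole conversation on every request. (OpenAI-style client;
# model name is illustrative.)
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,      # the *entire* context, every single call
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Everything the model "remembers" is whatever you chose to put in that history list – which is exactly why its quality matters so much.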
In my latest article for diginomica, I dive into the emerging discipline of context engineering – an approach that focuses not just on guiding models via initial prompts, or enriching them with external data through techniques such as retrieval-augmented generation (RAG), but on continually managing, enhancing and pruning the evolving context in which a model does its work.
Intentionally shaping their worldview to reduce the risk they’ll fool themselves by trusting the wrong data.
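One toy flavour of that pruning – my own illustrative sketch, not the method from the article – is to keep the system prompt, keep the newest turns, and drop older ones once a rough token budget is exhausted:

```python
# Toy context-pruning sketch (illustrative only): protect the system
# prompt, then keep the most recent turns that fit a rough budget.
def prune_context(history: list[dict], max_tokens: int = 2000) -> list[dict]:
    def rough_tokens(msg: dict) -> int:
        # Crude heuristic: roughly 1 token per 4 characters.
        return len(msg["content"]) // 4

    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]

    budget = max_tokens - sum(rough_tokens(m) for m in system)
    kept: list[dict] = []
    # Walk backwards so the newest turns survive first.
    for msg in reversed(rest):
        cost = rough_tokens(msg)
        if budget - cost < 0:
            break
        kept.append(msg)
        budget -= cost
    return system + list(reversed(kept))
```

Real systems do smarter things – summarising old turns, scoring relevance – but the principle is the same: the context window is curated, not just accumulated.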
🔗 https://lnkd.in/gQcqHwwY
#AI #LLMs #ContextEngineering #AgenticAI #EnterpriseAI #PromptEngineering #Automation