Wednesday, April 22, 2026
What Claude Can't See — and What to Do About It
Something happened earlier this year that I've been thinking about ever since.
I was working with claude.ai on an outreach letter — a carefully considered piece of communication with real stakes. We'd drafted it, reviewed it, refined it. Then new information arrived. A single fact I hadn't known. Within moments, the letter we'd built together looked completely different. Not because anything we'd written was wrong. Because the box had shifted.
That's what I want to talk about.
The box
By default, Claude doesn't carry memory from one conversation to the next. It doesn't browse the internet unless a search tool has been turned on. It doesn't know what's happening outside the conversation. What Claude can see, reason about, and work with is bounded by what's currently in context: the conversation, the documents, the facts you've shared, the framing you've established.
That boundary is the box.
This isn't a Claude quirk. It's how all large language models work. The box just has different dimensions depending on the tool.
Most people understand this at some level. What's less understood is that the box is not fixed. It shifts. And it shifts through three distinct mechanisms — only one of which most people think about.
Three ways the box moves
New external data. Information arrives from outside that wasn't there before. You share a document, a link, a finding. The box expands. Claude can now see things it couldn't see before, and the work adjusts accordingly. This is the mechanism most people imagine when they think about context.
Previously known, newly relevant. The information was already in the box. But something — a new question, a different angle, a connection that wasn't visible before — makes it suddenly significant. No new data arrived. The box reconfigured around what was already there.
Human input. You tell Claude something. It might be accurate. It might be mistaken. It might be incomplete. It might be deliberately deceptive.
From inside the box, all of these look identical in the moment they arrive. The box adjusts the same way regardless.
That last one is the one worth sitting with.
The key insight
Claude cannot distinguish a correction from a deception when it arrives.
Both shift the box. Both change what Claude sees. Both change what Claude produces. That's not a malfunction — it's a structural feature of how the system works.
What follows from that is important: the human in the collaboration carries the responsibility for what goes into the box. You are not just directing the work. You are curating the context. What you put in, how you frame it, what you withhold — all of it shapes what Claude can see. A skilled practitioner manages the box deliberately. An unskilled one manages it accidentally.
What this means for "hallucinations"
Most explanations of AI hallucinations make them sound like lying, or malfunction, or a model that simply invents things. That framing makes the problem feel unpredictable and the tool feel untrustworthy.
Here's a more accurate frame: an AI error is often the result of data selection happening in an instant — inference filling a gap the model didn't know was there. The reasoning is coherent. The box was incomplete. The output reflected the box.
That's a different problem than "the AI made something up." It's a problem of curation. And curation is a human skill.
The practical implication: before you ask what went wrong with the output, ask what was in the box when it was produced. Often that's the answer.
The box shifts inside a session too
Here's something that happened to me recently that illustrates the subtler version of this problem.
Early in a working session, I asked Claude to produce a document I could use in Word — a specific deliverable, clearly stated. Claude confirmed. We moved on. Over the next hour, other work accumulated in the conversation. New decisions were made. New context arrived.
When the document came, it was a Markdown file.
The commitment hadn't been forgotten. It had dropped out of the visible portion of the box as context accumulated. Claude wasn't being unreliable. The box had shifted, and the earlier agreement had moved out of frame.
This is the version of the problem that catches serious practitioners off guard — not the obvious gaps, but the accumulated drift. Two things address it:
Standing rules loaded at session open. If the commitment matters across sessions, it belongs in a document that gets loaded at the start of every session — not left to persist in a single conversation thread.
Explicit confirmation before delivery. "I'm about to send you a Word file — confirm this is correct before I proceed." A two-second checkpoint before action. The cost is nearly zero. The catch rate is high.
Both solutions come back to the same principle: the human manages the box. The tool works within it.
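Neither habit requires code if you work in claude.ai itself. But for readers who reach Claude through the API instead, both moves translate directly. Here is a minimal sketch, assuming the Anthropic Python SDK; the standing_rules.md file name and the rule text are hypothetical placeholders, not a prescription.

```python
# Minimal sketch: load standing rules at session open and ask Claude to
# confirm deliverable format before producing anything. File name and
# rule text are hypothetical; adapt them to your own project.
from pathlib import Path
import anthropic

# Standing rules live in a file, not in any one conversation thread,
# so every new session starts with the same commitments in the box.
standing_rules = Path("standing_rules.md").read_text()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # any current Claude model ID works here
    max_tokens=1024,
    # The rules ride along as the system prompt, so they sit at the top
    # of the box rather than buried somewhere mid-conversation.
    system=standing_rules
    + "\nBefore delivering any file, state its format and ask me to confirm.",
    messages=[
        {"role": "user", "content": "Let's resume work on the outreach letter."}
    ],
)
print(response.content[0].text)
```

The point isn't the code. It's that the rules live outside any single conversation and get re-injected at the top of the box every time a session opens.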
Managing the box deliberately
A few things that follow from this model:
When you correct a fact mid-session, don't assume the correction propagated. Claude doesn't maintain a unified working model — the corrected version and the original may both exist in context. Ask explicitly for an audit of all instances.
What you withhold shapes the output as much as what you include. A box that's missing a constraint produces work that ignores the constraint — not because Claude failed, but because the box didn't contain it.
Clean context produces clean work. The infrastructure I've built for multi-session AI collaboration — handoff documents, decision logs, session protocols — exists largely to keep the box well-curated at every session open. The forgetting isn't the problem. A poorly curated box is the problem.
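That infrastructure doesn't have to be elaborate. As a rough sketch of how small a session-open curation check can be, here is one way to assemble the opening context and flag anything missing before the session starts; the file names (standing_rules.md, handoff.md, decision_log.md) are hypothetical stand-ins for whatever documents your own project keeps.

```python
# Rough sketch of a session-open curation check. The file names are
# hypothetical stand-ins for a project's actual curation documents.
from pathlib import Path

SESSION_FILES = [
    "standing_rules.md",   # commitments that apply to every session
    "handoff.md",          # where the last session left off
    "decision_log.md",     # decisions already made, so they aren't relitigated
]

def assemble_box(project_dir: str = ".") -> str:
    """Concatenate the curation documents into one block of opening context,
    warning about anything missing before the session starts."""
    parts = []
    for name in SESSION_FILES:
        path = Path(project_dir) / name
        if not path.exists():
            print(f"WARNING: {name} is missing; the box will open without it")
            continue
        parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    opening_context = assemble_box()
    print(f"Loaded {len(opening_context)} characters of opening context.")
```

Whether it's a script or a checklist on paper, the mechanism is the same: the box opens with exactly what you chose to put in it.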
The system, briefly
The box model is one of the principles I've been developing while building a real publishing project with Claude as my collaborator — a series of consumer reference guides, now spanning multiple states. The architecture that makes serious multi-session AI work possible is becoming a book: Building a Book With Claude, a practitioner's guide for anyone doing long-form work with an AI tool.
If you're working seriously with AI and want to understand the mechanism, not just the output — watch for it.
In the meantime: you manage the box. The work will follow.
Do you have standing rules for your use of AI? I'd like to hear about them. Find me on LinkedIn at linkedin.com/in/jayelkes — or connect if this problem is yours too.
Jay Elkes finds the simple solution hiding inside the complicated problem. He has a talent for finding the right shoulders to stand on. He's been assembling a toolkit for doing that — across domains, in practice — long enough to give it a name: Extreme Common Sense. Follow along at extremecommonsense.net.