Law 3: Machines Don’t Read Like We Do
Welcome back to the Ten Laws of AIO.
Here’s a funky question: how do LLMs read, and does it matter?
First, it matters a lot. How we structure our content makes a big difference in whether the robots, the algorithms, actually read it and get it.
Do it right, you make it into the LLM summaries and the reference links.
Do it wrong, you’re invisible to the bots.
Let’s get back to how the machines read.
In jargon, they push text through a neural network architecture.
In practice, they break text down into small units called tokens. A token can be a whole word, a piece of a word, or just a character.
The tokens are then converted into numerical IDs the model can work with.
Then the LLMs analyze patterns across the text, focusing attention on important words and the relationships between them.
That process lets an LLM predict the most likely next word, or token, in a sequence. The more the model learns about these relationships, the better it can generate meaningful, coherent output.
In short, LLMs read by constantly refining their representation of the text through vector-based computations.
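A minimal sketch of those steps in Python. This is a toy, not a real LLM: it uses whole words as tokens (real tokenizers usually split words into subword pieces), a tiny made-up vocabulary, and simple bigram counts in place of a neural network. It just illustrates the pipeline: text → tokens → IDs → next-token prediction.

```python
from collections import defaultdict

text = "machines do not read like we do"

# 1. Break text into tokens (here, whole words; real tokenizers
#    often split words into smaller subword pieces).
tokens = text.split()

# 2. Convert tokens to numerical IDs via a vocabulary.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]

# 3. Predict the next token. A real LLM scores every vocabulary
#    entry with a neural network; this toy version just counts
#    which word tends to follow which.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else None

print(ids)                 # [2, 0, 3, 4, 1, 5, 0]
print(predict_next("we"))  # "do"
```

The point of the toy: to the machine, your carefully crafted sentence is just a sequence of numbers and the statistical relationships between them.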
By the way, some theorists believe the human mind also works on distributed patterns, vectors that encode meaning. So you can predict what word is coming next, too…
But let’s not get off track.
The big difference is that LLMs care about answering questions, not about the storytelling and narrative that keep a reader going. Writer Stephen King calls that pull the “forward flow,” and when you’re writing for people, you don’t want to interrupt it.
The bots don’t give a hoot about “forward flow.”
They care about those things only as mathematical relationships between tokens or word groups, so they can fulfill their mission and answer your query.
So don’t forget law three when you’re trying to get discovered by answer engines.
Machines don’t read like we do.
Watch our next video to learn about Law 4: Write for Robots, Not Just People.