Last reviewed on April 24, 2026.
The traditional textbook picture treats perception as a one-way street: light hits the retina, signals travel up the visual hierarchy, the brain assembles features into objects, and the result is delivered to higher cognition for use. Predictive processing turns this picture around. It claims that the brain is, throughout the hierarchy, generating predictions about the input it is about to receive, and that what most of the system actually transmits is not raw signal but error — the difference between predicted and actual input.
This idea, also known as predictive coding or, in some formulations, the Bayesian brain, has become one of the most discussed unifying frameworks in contemporary cognitive science. It is associated with Karl Friston, Andy Clark and Jakob Hohwy, among others. The aim of this page is to explain what the framework actually claims, what evidence has built up around it, what it does well, and where it stops.
The core claim in plain terms
Three intertwined claims sit at the centre.
- The brain is a prediction machine. At every level of processing, neural populations encode a model of what the input below is expected to look like, given everything the system currently believes about the world.
- What flows up is mostly error. When the input matches the prediction, little signal needs to travel upward; when it does not, the mismatch — the prediction error — is propagated to update the model. This is an unusually efficient design: the network spends its bandwidth on what it did not already know.
- Action minimises error too. The system can reduce prediction error in two ways: by updating its model to fit the world (perception) or by acting on the world to make it fit the model (action). On this view, perception and action are two sides of the same inferential machinery.
This last move is what gives the framework its sweep. If acting and perceiving are both ways of minimising the gap between a model and its inputs, then a single computational principle is doing work that previously required separate stories.
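The first two claims describe a simple loop: predict the input, measure the mismatch, update the model so the mismatch shrinks. A minimal numerical sketch of that loop (the function name, learning rate and values are illustrative, not taken from any published model):

```python
def predictive_loop(world_value, initial_belief, learning_rate=0.1, steps=50):
    """Iteratively reduce prediction error by updating the belief."""
    belief = initial_belief
    for _ in range(steps):
        prediction = belief               # the model's guess about the input
        error = world_value - prediction  # prediction error: what was not expected
        belief += learning_rate * error   # update the model to reduce the error
    return belief

# The belief converges toward the input it is trying to predict.
final = predictive_loop(world_value=5.0, initial_belief=0.0)
```

Note that once the belief matches the input, the error term is near zero and almost nothing remains to be transmitted, which is the efficiency point made above.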
Why "Bayesian"?
Bayesian inference is a normative procedure for updating beliefs in light of evidence. A learner combines a prior — what they already believe — with a likelihood — how compatible the new data are with each possibility — to produce a posterior, an updated belief. It is the standard mathematical answer to "how should a rational agent change its mind?"
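The update rule itself fits in a few lines. A sketch for a discrete set of hypotheses, with an illustrative coin example (all names and numbers are made up for exposition):

```python
def bayes_update(prior, likelihood):
    """Combine a prior with a likelihood over the same hypotheses
    to produce a normalised posterior."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# A learner starts fairly confident a coin is fair, then sees some flips.
# The likelihood gives P(observed flips | hypothesis) for each hypothesis.
prior = {"fair": 0.9, "biased": 0.1}
likelihood = {"fair": 0.25, "biased": 0.75}
posterior = bayes_update(prior, likelihood)
```

Here the data favour "biased" three to one, but the strong prior keeps the posterior mostly on "fair": exactly the prior-versus-evidence trade-off the framework attributes to perception.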
Predictive processing is often called the Bayesian brain because the prediction-error machinery can be read as an approximate, neurally tractable way of doing Bayesian inference. The brain's predictions correspond to priors; sensory input enters through the likelihood; the updated representation corresponds to the posterior. Strictly speaking, the brain is not doing exact Bayesian computation — exact inference is intractable for almost any real-world problem. What the framework claims is that the brain is implementing a clever approximation, often called variational inference, that converges on the same kind of answer.
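One concrete way to see the connection: for a Gaussian prior and a Gaussian likelihood, repeatedly nudging an estimate along two precision-weighted prediction errors converges on the exact Bayesian posterior mean. A sketch under those assumptions (the function name, step size and numbers are illustrative):

```python
def posterior_mean_by_error_descent(x, mu_prior, var_prior, var_obs,
                                    lr=0.05, steps=500):
    """Gradient ascent on the log posterior of a Gaussian model,
    phrased as minimising two precision-weighted prediction errors."""
    phi = mu_prior  # current estimate of the hidden cause
    for _ in range(steps):
        err_prior = (mu_prior - phi) / var_prior  # error against the prior
        err_obs = (x - phi) / var_obs             # error against the data
        phi += lr * (err_prior + err_obs)
    return phi

# Exact Bayesian answer for comparison: a precision-weighted average.
x, mu_prior, var_prior, var_obs = 2.0, 0.0, 1.0, 1.0
exact = (mu_prior / var_prior + x / var_obs) / (1 / var_prior + 1 / var_obs)
approx = posterior_mean_by_error_descent(x, mu_prior, var_prior, var_obs)
```

The iterative scheme never computes Bayes' rule explicitly; it only ever follows local error signals, yet it lands on the same answer. That is the sense in which error minimisation can implement approximate inference.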
Evidence that has built up around the framework
Perception against expectation
If perception is partly driven by predictions, then perception should be measurably different when predictions and inputs disagree. A large literature on repetition suppression, oddball responses and illusions fits this pattern. Predictable stimuli produce smaller neural responses than unpredictable ones; expectations bias what people see in ambiguous figures; classical illusions like the hollow-mask illusion can be read as strong priors overriding the incoming data.
Action and forward models
Motor neuroscience has long used forward models: internal models that predict the sensory consequences of one's own actions. They explain why you cannot tickle yourself (the brain cancels the sensation it has itself predicted) and why patients with certain motor disorders perceive their own movements as foreign. Predictive processing inherits these forward models and generalises their logic.
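The cancellation logic can be caricatured in a couple of lines. This is a cartoon of the subtraction idea only, not a model of actual somatosensory processing, and all names and values are illustrative:

```python
def perceived_intensity(actual_input, predicted_input):
    """A forward model subtracts the predicted sensory consequence
    of one's own action from the incoming signal."""
    return actual_input - predicted_input

# Self-produced touch: the forward model predicted it, so little is felt.
self_touch = perceived_intensity(actual_input=1.0, predicted_input=1.0)

# Externally produced touch: nothing was predicted, so the signal gets through.
external_touch = perceived_intensity(actual_input=1.0, predicted_input=0.0)
```

The same subtraction is just the prediction-error computation again, which is why the framework can absorb forward models so readily.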
Attention as precision weighting
The framework reinterprets attention as precision weighting: turning up the influence of prediction errors that are expected to be reliable, and turning down the influence of those that are not. This unifies several attention phenomena under the same machinery as perception. It also generates testable predictions about how attention changes the gain of neural signals at different levels of the hierarchy.
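Precision weighting amounts to a gain term multiplying the error before it updates the model. A toy sketch (the precision values are illustrative, and real accounts put learned precision estimates in place of these constants):

```python
def precision_weighted_update(belief, error, precision, lr=1.0):
    """Scale the influence of a prediction error by its expected reliability."""
    return belief + lr * precision * error

belief, error = 0.0, 1.0

# Attended input: the error is treated as reliable, so it drives a large update.
attended = precision_weighted_update(belief, error, precision=0.9)

# Unattended input: the same error is down-weighted and barely moves the belief.
unattended = precision_weighted_update(belief, error, precision=0.1)
```

The identical error signal produces very different updates depending only on the gain, which is the sense in which attention is "turning up" some errors and "turning down" others.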
Learning and developmental change
If the system is constantly updating its model based on prediction error, then learning is no longer a separate process bolted onto perception. Predictive processing offers a single mechanism for moment-to-moment perception, longer-term skill learning and the developmental shifts described on the cognitive development page. Whether the same machinery really covers all of these is contested, but the unification is part of the appeal.
Clinical applications
Researchers have applied predictive-processing accounts to autism, schizophrenia, anxiety and chronic pain, often by proposing that priors are weighted unusually strongly or weakly. These accounts are still under active development and many specific proposals are debated, but the framework has given clinical research a common vocabulary it previously lacked.
What predictive processing does well
The framework's appeal is partly aesthetic — a single computational story across many phenomena — and partly practical. It pulls together perception, action, attention and learning in a way the older modular picture struggled to do. It connects neatly to connectionist models in machine learning, where similar gradient-based update rules are used to train deep networks. And it provides a natural way to think about embodied claims: if action is part of the inference loop, then the body and environment are not adjuncts to cognition but components of it.
Common misconceptions
- "The brain just hallucinates and corrects itself." The slogan is catchy but misleading. Top-down predictions and bottom-up signals interact at every level; neither dominates the other in general. What predictive processing claims is that the relative weight depends on how reliable each signal is, not that perception is fundamentally a hallucination.
- "Bayesian means the brain is rational." Approximate Bayesian inference can systematically diverge from exact rationality. The framework predicts certain biases — reliance on strong priors, slow updating in low-precision contexts — that look very much like the cognitive biases studied in decision research.
- "It explains everything, so it explains nothing." This is a real worry, but the right response is not dismissal — it is asking which specific predictions distinguish the framework from rivals. Several have been formulated and tested; others remain to be.
Where the framework stops
Predictive processing is a strong candidate for a unifying computational story, but it is not a finished theory. Several questions remain open.
- Implementation. Cortical microcircuits compatible with predictive processing have been proposed, but the mapping from the abstract framework to actual neural mechanisms is far from settled.
- Scope. How well the framework handles symbolic reasoning, language production and abstract thought — the domains where classical, symbol-manipulating models still have purchase — is debated. The article on problem solving and reasoning covers the kinds of cognition that strain a purely error-minimising account.
- Free-energy formulations. The most ambitious version, Friston's free-energy principle, claims that all self-organising systems — not just brains — minimise variational free energy. Whether this principle is empirically substantive or a near-tautology is one of the most active arguments in the field.
Why it matters even if you do not buy it
Even readers sceptical of the strongest version benefit from understanding the framework. Predictive processing has reshaped how researchers describe perception, attention and clinical phenomena; it has imported a precise mathematical vocabulary into discussions that previously relied on metaphor; and it has given the field a serious candidate for the kind of unifying theory that cognitive science has wanted since its founding. Whether or not it ends up being the right story, learning to think in its terms is now part of cognitive-scientific literacy.
The page on research methods covers the computational and imaging tools used to test predictive-processing claims, and the glossary defines the technical terms — Bayesian inference, predictive processing, reinforcement learning — that appear here.