In neuroscience, predictive coding (also known as predictive processing) is a theory of brain function that postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, this mental model is used to predict sensory input; the predictions are then compared with the actual signals arriving from the senses.[1] With the rising popularity of representation learning, the theory is being actively pursued and applied in machine learning and related fields.[2][3]
The phrase 'predictive coding' is also used, in loosely related or unrelated senses, in several other disciplines, such as signal processing and law.
Origins
Theoretical ancestors of predictive coding date back as early as 1860, with Helmholtz's concept of unconscious inference.[4] Unconscious inference refers to the idea that the human brain fills in visual information to make sense of a scene. For example, if one object appears smaller than another in the visual field, the brain uses that difference as a likely cue of depth, so that the perceiver ultimately (and involuntarily) experiences depth. The understanding of perception as the interaction between sensory stimuli (bottom-up) and conceptual knowledge (top-down) was developed further by Jerome Bruner, who, starting in the 1940s, studied the ways in which needs, motivations and expectations influence perception, research that came to be known as 'New Look' psychology. In 1981, McClelland and Rumelhart, in their seminal paper,[5] examined the interaction between processing features (lines and contours), which form letters, which in turn form words. They found that, although features alone can suggest the presence of a word, people identified letters faster when the letters appeared in the context of a word than in a non-word lacking semantic context. McClelland and Rumelhart's parallel processing model describes perception as the meeting of top-down (conceptual) and bottom-up (sensory) elements.
In the late 1990s, the idea of top-down and bottom-up processing was translated into a computational model of vision by Rao and Ballard.[6] Their paper demonstrated that there could be a generative model of a scene (top-down processing), which would receive feedback via error signals (how much the visual input varied from the prediction), which would subsequently lead to updating the prediction. The computational model was able to replicate well-established receptive field effects, as well as less understood extra-classical receptive field effects such as end-stopping.
In 2004, Rick Grush proposed a model of neural perceptual processing according to which the brain constantly generates predictions based on a generative model (what Grush called an 'emulator') and compares those predictions to the actual sensory input.[7] The difference, or 'sensory residual', would then be used to update the model so as to produce a more accurate estimate of the perceived domain. On Grush's account, the top-down and bottom-up signals would be combined in a way sensitive to the expected noise (i.e., uncertainty) in the bottom-up signal, so that in situations where the sensory signal was known to be less trustworthy, the top-down prediction would be given greater weight, and vice versa. The emulation framework was also shown to be hierarchical, with modality-specific emulators providing top-down expectations for sensory signals and higher-level emulators providing expectations about the distal causes of those signals. Grush applied the theory to visual perception, visual and motor imagery, language, and theory-of-mind phenomena.
Today, the fields of computer science and cognitive science incorporate these same concepts to create the multilayer generative models that underlie machine learning and neural nets.[8]
General framework
Most of the research literature in the field concerns sensory perception, particularly vision, which is the most easily conceptualized case; the predictive coding framework can, however, be applied to other neural systems as well. Taking the sensory system as an example, the brain solves the seemingly intractable problem of modelling the distal causes of sensory input through a version of Bayesian inference. It does this by modelling predictions of lower-level sensory inputs via backward connections from relatively higher levels in a cortical hierarchy.[9] Constrained by the statistical regularities of the outside world (and certain evolutionarily prepared predictions), the brain encodes top-down generative models at various temporal and spatial scales in order to predict and effectively suppress sensory inputs rising up from lower levels. A comparison between predictions (priors) and sensory input (likelihood) yields a difference measure (e.g. prediction error, free energy, or surprise) which, if sufficiently large relative to the expected statistical noise, will cause the generative model to update so that it better predicts sensory input in the future.
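The predict-compare-update cycle described above can be illustrated with a toy numerical sketch. Everything here (the linear generative function, the learning rate, the variable names) is an illustrative assumption, not a model from the literature:

```python
# Toy sketch of the predict/compare/update loop: a single latent cause `mu`
# generates a top-down prediction g(mu); the prediction error (the
# "difference measure") drives gradient updates that reduce future error.
# The linear generative function g(m) = 2*m is a hypothetical choice.

def predictive_update(sensory_input, mu=0.0, lr=0.1, steps=50):
    def g(m):
        return 2.0 * m                    # top-down generative prediction
    for _ in range(steps):
        error = sensory_input - g(mu)     # prediction error
        mu += lr * 2.0 * error            # descend the squared-error gradient
    return mu

# The inferred cause settles where the prediction matches the input:
# g(mu) = 2*mu = 4.0, so mu converges to 2.0.
mu = predictive_update(4.0)
```

In a full model this update would run at every level of the hierarchy simultaneously, with each level's estimate serving as the "sensory input" for the level above.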
Put more simply, the system minimizes surprise (the difference measure). This tendency has been proposed as a basis for what is nowadays called confirmation bias, or what might historically have been called prejudice (a term with more negative connotations), since a model that fits one's accumulated individual experience supports consistency; in today's world, this can turn out to be a disadvantage.[10]
If, instead, the model accurately predicts driving sensory signals, activity at higher levels cancels out activity at lower levels, and the posterior probability of the model is increased. Thus, predictive coding inverts the conventional view of perception as a mostly bottom-up process, suggesting that it is largely constrained by prior predictions, where signals from the external world only shape perception to the extent that they are propagated up the cortical hierarchy in the form of prediction error.
In predictive coding, errors are neither good nor bad, but simply signal the difference between the expected and actual input. The exception is in reward processing, where a better than expected reward produces a positive prediction error and a disappointing result produces a negative prediction error.[11]
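The sign convention for reward prediction errors can be shown with a minimal Rescorla-Wagner-style value update. This is an illustrative sketch, not the specific model of the cited dopamine work:

```python
# Minimal Rescorla-Wagner-style value update (illustrative only).
# The prediction error `delta` is positive when the outcome beats the
# expectation and negative when it disappoints; outside reward processing,
# the error's sign simply indicates the direction of the mismatch.

def update_value(value, reward, alpha=0.3):
    delta = reward - value            # signed reward prediction error
    return value + alpha * delta, delta

value, delta = update_value(value=0.0, reward=1.0)   # better than expected
# delta is positive, and the value estimate moves toward the reward
```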
Precision weighting
Expectations about the precision (or inverse variance) of incoming sensory input are crucial for effectively minimizing prediction error: the expected precision of a given prediction error informs the confidence placed in that error, which in turn determines how heavily the error is weighted when updating predictions.[12] Given that the world is loaded with statistical noise, precision expectations must be represented as part of the brain's generative models, and they must be able to adapt flexibly to changing contexts. For instance, the expected precision of visual prediction errors likely varies between dawn and dusk, such that greater conditional confidence is assigned to errors in broad daylight than to errors at nightfall.[13] It has recently been proposed that such weighting of prediction errors in proportion to their estimated precision is, in essence, attention,[14] and that the process of devoting attention may be neurobiologically accomplished by the ascending reticular activating system (ARAS) optimizing the "gain" of prediction-error units.
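Precision weighting can be made concrete with the standard Bayesian rule for combining two Gaussian estimates, each weighted by its precision (inverse variance). The numbers below are hypothetical, chosen only to mirror the dawn/dusk example above:

```python
# Precision-weighted fusion of a top-down prediction and a sensory signal.
# Precision is inverse variance: the noisier the input is expected to be,
# the more the prior prediction dominates the posterior estimate.

def precision_weighted_estimate(prior, prior_var, obs, obs_var):
    pi_prior = 1.0 / prior_var        # precision of the prediction
    pi_obs = 1.0 / obs_var            # precision of the sensory input
    # Posterior mean of two Gaussian estimates: precisions act as weights.
    return (pi_prior * prior + pi_obs * obs) / (pi_prior + pi_obs)

# Same prior and same input, but different expected sensory noise:
daylight = precision_weighted_estimate(prior=0.0, prior_var=1.0, obs=2.0, obs_var=0.1)
night = precision_weighted_estimate(prior=0.0, prior_var=1.0, obs=2.0, obs_var=10.0)
# In "daylight" the estimate follows the input closely; at "night" it
# stays near the prior prediction.
```

On the attention proposal, turning up the "gain" on a prediction-error unit corresponds to raising `pi_obs` for that channel.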
Active inference
The same principle of prediction error minimization has been used to provide an account of behavior in which motor actions are not commands but descending proprioceptive predictions. In this scheme of active inference, classical reflex arcs are coordinated so as to selectively sample sensory input in ways that better fulfill predictions, thereby minimizing proprioceptive prediction errors.[14] Indeed, Adams et al. (2013) review evidence suggesting that this view of hierarchical predictive coding in the motor system provides a principled and neurally plausible framework for explaining the agranular organization of the motor cortex.[15] This view suggests that “perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive."[15]
Neural theory in predictive coding
Evaluating the empirical evidence for a neurologically plausible basis of predictive coding is a broad and varied task. According to the model, predictive coding occurs at every iterative step in perceptual and cognitive processing; accordingly, its manifestations in the brain range from genetics and the cytoarchitecture of specific cell types to systemic networks of neurons and whole-brain analyses. Because of this range of specificity, different methods of investigating the neural mechanisms of predictive coding have been applied where available; more generally, however, at least in humans, there are significant methodological limitations on investigating the potential evidence, and much of the work is based on computational modeling of microcircuits in the brain. Notwithstanding, substantial (theoretical) work has been applied to understanding predictive coding mechanisms in the brain. This section focuses on evidence specific to the predictive coding phenomenon, rather than on analogues such as homeostasis (which are nonetheless integral to our overall understanding of Bayesian inference, but already heavily supported; see Clark for a review[9]).
Much of the early work that applied a predictive coding framework to neural mechanisms came from sensory neurons, particularly in the visual cortex.[6][16]
More generally, what the theory seems to require are (at least) two types of neurons at every level of the perceptual hierarchy: one set that encodes incoming sensory input via so-called feedforward projections, and one set that sends down predictions via so-called feedback projections. These neurons must also carry properties of error detection; which class of neurons has these properties is still up for debate.[17][18] Neurons of these sorts have found support in superficial and non-superficial pyramidal neurons.
At a more whole-brain level, there is evidence that different cortical layers (also known as laminae) may facilitate the integration of feedforward and feedback projections across hierarchies. The cortex is conventionally divided into six main layers, and cortical regions are classified as granular, agranular, or dysgranular according to their laminar composition; these layers house the subpopulations of neurons mentioned above. The cytoarchitecture is consistent within a given layer but differs across layers. For example, layer 4 of granular cortex contains granule cells, which are excitatory and distribute thalamocortical inputs to the rest of the cortex. According to one model:
“...prediction neurons ... in deep layers of agranular cortex drive active inference by sending sensory predictions via projections ... to supragranular layers of dysgranular and granular sensory cortices. Prediction-error neurons ... in the supragranular layers of granular cortex compute the difference between the predicted and received sensory signal, and send prediction-error signals via projections ... back to the deep layers of agranular cortical regions. Precision cells ... tune the gain on predictions and prediction error dynamically, thereby giving these signals reduced (or, in some cases, greater) weight depending on the relative confidence in the descending predictions or the reliability of incoming sensory signals.”[19]
The theory that the unit of prediction is the cortical column[20] is based on the remarkable correspondence between the microcircuitry of the cortical column and the connectivity implied by predictive coding.[21]
Applying predictive coding
Perception
The empirical evidence for predictive coding is most robust for perceptual processing. As early as 1999, Rao and Ballard proposed a hierarchical visual processing model in which higher-order visual cortical areas send down predictions and feedforward connections carry the residual errors between the predictions and the actual lower-level activities.[6] According to this model, each level in the hierarchical network (except the lowest level, which represents the image) attempts to predict the responses at the next lower level via feedback connections, and the error signal is used to correct the estimate of the input signal at each level concurrently.[6] Emberson et al. established top-down modulation in infants using a cross-modal audiovisual omission paradigm, determining that even infant brains hold expectations about future sensory input that are carried downstream from visual cortices, and that infants are capable of expectation-based feedback.[22] Functional near-infrared spectroscopy (fNIRS) data showed that the infant occipital cortex responded to unexpected visual omission (with no visual information input) but not to expected visual omission. These results establish that in a hierarchically organized perception system, higher-order neurons send down predictions to lower-order neurons, which in turn send back up the prediction-error signal.
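The hierarchical scheme can be sketched as a toy two-level network in the spirit of Rao and Ballard's model: feedback connections carry predictions downward, and feedforward signals carry only residual errors. The scalar states, weights, and learning rate below are illustrative assumptions, not the parameters of the original paper:

```python
# Two-level toy hierarchy in the spirit of Rao & Ballard (1999): each level
# predicts the level below through a feedback weight, and the feedforward
# signal carries only the residual error.

def hierarchical_step(image, r1, r2, w1=1.0, w2=1.0, lr=0.05):
    e0 = image - w1 * r1               # level-0 residual: input vs. prediction
    e1 = r1 - w2 * r2                  # level-1 residual: state vs. prediction
    r1_new = r1 + lr * (w1 * e0 - e1)  # corrected from below, constrained from above
    r2_new = r2 + lr * w2 * e1
    return r1_new, r2_new

r1, r2 = 0.0, 0.0
for _ in range(500):
    r1, r2 = hierarchical_step(image=1.0, r1=r1, r2=r2)
# Both levels settle so that each level's prediction matches the level
# below and the residual errors vanish.
```

Note that each level's update depends only on the error signals immediately above and below it, mirroring the local connectivity the model ascribes to the cortical hierarchy.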
Interoception
There have been several competing models for the role of predictive coding in interoception.
In 2013, Anil Seth proposed that our subjective feeling states, otherwise known as emotions, are generated by predictive models that are actively built out of causal interoceptive appraisals.[18] In relation to how we attribute internal states of others to causes, Sasha Ondobaka, James Kilner, and Karl Friston (2015) proposed that the free energy principle requires the brain to produce a continuous series of predictions with the goal of reducing the amount of prediction error that manifests as “free energy”.[23] These errors are then used to model anticipatory information about what the state of the outside world will be and attributions of causes of that world state, including understanding of causes of others’ behavior. This is especially necessary because, to create these attributions, our multimodal sensory systems need interoceptive predictions to organize themselves. Therefore, Ondobaka posits that predictive coding is key to understanding other people's internal states.
In 2015, Lisa Feldman Barrett and W. Kyle Simmons proposed the Embodied Predictive Interoception Coding model, a framework that unifies Bayesian active inference principles with a physiological framework of corticocortical connections.[19] Using this model, they posited that agranular visceromotor cortices are responsible for generating predictions about interoception, thus, defining the experience of interoception.
Contrary to the inductive notion that emotion categories are biologically distinct, Barrett later proposed the theory of constructed emotion: the account that a biological emotion category is constructed based on a conceptual category—the accumulation of instances sharing a goal.[24][25] In a predictive coding model, Barrett hypothesizes that, in interoception, our brains regulate our bodies by activating "embodied simulations" (full-bodied representations of sensory experience) to anticipate what our brains predict the external world will throw at us sensorially and how we will respond to it with action. These simulations are either preserved if, based on our brain's predictions, they prepare us well for what actually subsequently occurs in the external world, or they, and our predictions, are adjusted to compensate for their error relative to what actually occurs and how well-prepared we were for it. Then, in a trial-error-adjust process, our bodies find similarities in goals among certain successful anticipatory simulations and group them together under conceptual categories. Every time a new experience arises, our brains use this past trial-error-adjust history to match the new experience to the category of accumulated corrected simulations it most resembles. They then apply the corrected simulation of that category to the new experience in the hopes of preparing our bodies for the rest of the experience. If the simulation does not prepare us well, the prediction, the simulation, and perhaps the boundaries of the conceptual category are revised in the hopes of higher accuracy next time, and the process continues.
Barrett hypothesizes that, when prediction error for a certain category of simulations for x-like experiences is minimized, what results is a correction-informed simulation that the body will reenact for every x-like experience, resulting in a correction-informed full-bodied representation of sensory experience—an emotion. In this sense, Barrett proposes that we construct our emotions because the conceptual category framework our brains use to compare new experiences, and to pick the appropriate predictive sensory simulation to activate, is built on the go.
Challenges
As a mechanistic theory, predictive coding has not been mapped out physiologically at the neuronal level. One of the biggest challenges to the theory has been the imprecision of exactly how prediction-error minimization works.[26] In some studies, an increase in BOLD signal has been interpreted as an error signal, while in others it indicates changes in the input representation.[26] A crucial question that needs to be addressed is what exactly constitutes the error signal and how it is computed at each level of information processing.[27] Another challenge concerns predictive coding's computational tractability. According to Kwisthout and van Rooij, the subcomputation at each level of the predictive coding framework potentially hides a computationally intractable problem, amounting to "intractable hurdles" that computational modelers have yet to overcome.[28] Ransom and Fazelpour (2015) identify three problems for the predictive coding theory of attention.[29]
Future research could focus on clarifying the neurophysiological mechanism and computational model of predictive coding.
See also
- Blue Brain Project
- Cognitive biology
- Cognitive linguistics
- Cognitive neuropsychology
- Cognitive neuroscience
- Cognitive science
- Conceptual blending
- Conceptual metaphor
- Cortical column
- Embodied bilingual language
- Embodied cognitive science
- Embodied Embedded Cognition
- Embodied music cognition
- Enactivism
- Extended cognition
- Extended mind thesis
- Externalism
- Heuristic
- Image schema
- Moravec's paradox
- Neuroconstructivism
- Neuropsychology
- Neurophenomenology
- Philosophy of mind
- Plant cognition
- Practopoiesis
- Situated cognition
- Where Mathematics Comes From
References
1. Millidge, Beren; Seth, Anil; Buckley, Christopher (2022-01-19). "Predictive Coding: a Theoretical and Experimental Review". arXiv:2107.12979 [cs.AI].
2. Millidge, Beren; Salvatori, Tommaso; Song, Yuhang; Bogacz, Rafal; Lukasiewicz, Thomas (2022-02-18). "Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?". arXiv:2202.09467 [cs.NE].
3. Ororbia, Alexander G.; Kifer, Daniel (2022-04-19). "The Neural Coding Framework for Learning Generative Models". Nature Communications. 13 (1): 2064. doi:10.1038/s41467-022-29632-7. PMC 9018730. PMID 35440589.
4. "Helmholtz's Treatise on Physiological Optics". Archived from the original on 20 March 2018. Retrieved 2022-01-05.
5. McClelland, J. L.; Rumelhart, D. E. (1981). "An interactive activation model of context effects in letter perception: I. An account of basic findings". Psychological Review. 88 (5): 375–407. doi:10.1037/0033-295X.88.5.375.
6. Rao, Rajesh P. N.; Ballard, Dana H. (1999). "Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects". Nature Neuroscience. 2 (1): 79–87. doi:10.1038/4580. PMID 10195184.
7. Grush, Rick (2004). "The emulation theory of representation: Motor control, imagery, and perception". Behavioral and Brain Sciences. 27 (3): 377–396. doi:10.1017/S0140525X04000093. ISSN 0140-525X. PMID 15736871. S2CID 514252.
8. Hinton, Geoffrey E. (2007). "Learning multiple layers of representation". Trends in Cognitive Sciences. 11 (10): 428–434. doi:10.1016/j.tics.2007.09.004. PMID 17921042. S2CID 15066318.
9. Clark, Andy (2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science". Behavioral and Brain Sciences. 36 (3): 181–204. doi:10.1017/S0140525X12000477. PMID 23663408. S2CID 220661158.
10. Kaaronen, R. O. (2018). "A Theory of Predictive Dissonance: Predictive Processing Presents a New Take on Cognitive Dissonance". Frontiers in Psychology. 9: 2218. doi:10.3389/fpsyg.2018.02218. PMC 6262368. PMID 30524333.
11. Schultz, Wolfram (2016). "Dopamine reward prediction-error signalling: a two-component response". Nature Reviews Neuroscience. 17 (3): 183–195. doi:10.1038/nrn.2015.26. PMC 5549862. PMID 26865020.
12. Friston, Karl J.; Feldman, Harriet (2010). "Attention, Uncertainty, and Free-Energy". Frontiers in Human Neuroscience. 4: 215. doi:10.3389/fnhum.2010.00215. PMC 3001758. PMID 21160551.
13. Hohwy, Jakob (2012). "Attention and Conscious Perception in the Hypothesis Testing Brain". Frontiers in Psychology. 3: 96. doi:10.3389/fpsyg.2012.00096. PMC 3317264. PMID 22485102.
14. Friston, Karl (2009). "The free-energy principle: A rough guide to the brain?". Trends in Cognitive Sciences. 13 (7): 293–301. doi:10.1016/j.tics.2009.04.005. PMID 19559644. S2CID 9139776.
15. Adams, Rick A.; Shipp, Stewart; Friston, Karl J. (2013). "Predictions not commands: Active inference in the motor system". Brain Structure and Function. 218 (3): 611–643. doi:10.1007/s00429-012-0475-5. PMC 3637647. PMID 23129312.
16. Bolz, Jürgen; Gilbert, Charles D. (1986). "Generation of end-inhibition in the visual cortex via interlaminar connections". Nature. 320 (6060): 362–365. Bibcode:1986Natur.320..362B. doi:10.1038/320362a0. PMID 3960119. S2CID 4325899.
17. Koster-Hale, Jorie; Saxe, Rebecca (2013). "Theory of Mind: A Neural Prediction Problem". Neuron. 79 (5): 836–848. doi:10.1016/j.neuron.2013.08.020. ISSN 0896-6273. PMC 4041537. PMID 24012000.
18. Seth, Anil K. (2013). "Interoceptive inference, emotion, and the embodied self". Trends in Cognitive Sciences. 17 (11): 565–573. doi:10.1016/j.tics.2013.09.007. PMID 24126130. S2CID 3048221.
19. Barrett, Lisa Feldman; Simmons, W. Kyle (2015). "Interoceptive predictions in the brain". Nature Reviews Neuroscience. 16 (7): 419–429. doi:10.1038/nrn3950. PMC 4731102. PMID 26016744.
20. Bennett, Max (2020). "An Attempt at a Unified Theory of the Neocortical Microcircuit in Sensory Cortex". Frontiers in Neural Circuits. 14: 40. doi:10.3389/fncir.2020.00040. PMC 7416357. PMID 32848632.
21. Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J. (2012). "Canonical Microcircuits for Predictive Coding". Neuron. 76 (4): 695–711. doi:10.1016/j.neuron.2012.10.038. PMC 3777738. PMID 23177956.
22. Emberson, Lauren L.; Richards, John E.; Aslin, Richard N. (2015). "Top-down modulation in the infant brain: Learning-induced expectations rapidly affect the sensory cortex at 6 months". Proceedings of the National Academy of Sciences. 112 (31): 9585–9590. Bibcode:2015PNAS..112.9585E. doi:10.1073/pnas.1510343112. PMC 4534272. PMID 26195772.
23. Ondobaka, Sasha; Kilner, James; Friston, Karl (2017). "The role of interoceptive inference in theory of mind". Brain and Cognition. 112: 64–68. doi:10.1016/j.bandc.2015.08.002. PMC 5312780. PMID 26275633.
24. Barrett, Lisa Feldman (2016). "The theory of constructed emotion: An active inference account of interoception and categorization". Social Cognitive and Affective Neuroscience. 12 (1): 1–23. doi:10.1093/scan/nsw154. PMC 5390700. PMID 27798257.
25. Barrett, L. F. (2017). How Emotions Are Made: The Secret Life of the Brain. New York: Houghton Mifflin Harcourt. ISBN 0544133315.
26. Kogo, Naoki; Trengove, Chris (2015). "Is predictive coding theory articulated enough to be testable?". Frontiers in Computational Neuroscience. 9: 111. doi:10.3389/fncom.2015.00111. PMC 4561670. PMID 26441621.
27. Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J. (2012). "Canonical Microcircuits for Predictive Coding". Neuron. 76 (4): 695–711. doi:10.1016/j.neuron.2012.10.038. PMC 3777738. PMID 23177956.
28. Kwisthout, Johan; van Rooij, Iris (2020). "Computational Resource Demands of a Predictive Bayesian Brain". Computational Brain & Behavior. 3 (2): 174–188. doi:10.1007/s42113-019-00032-3. hdl:2066/218854. S2CID 196045530.
29. Ransom, M.; Fazelpour, S. (2015). "Three Problems for the Predictive Coding Theory of Attention". http://mindsonline.philosophyofbrains.com/2015/session4/three-problems-for-the-predictive-coding-theory-of-attention/