Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Center for Complementary and Integrative Medicine, University Hospital Zurich, Zurich, Switzerland
Address correspondence to Klaas E. Stephan, M.D., Dr.med., Ph.D., Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Wilfriedstrasse 6, CH-8032 Zurich, Switzerland.
Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Max Planck Institute for Metabolism Research, Cologne, Germany; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
This article outlines how a core concept from theories of homeostasis and cybernetics, the inference-control loop, may be used to guide differential diagnosis in computational psychiatry and computational psychosomatics. In particular, we discuss 1) how conceptualizing perception and action as inference-control loops yields a joint computational perspective on brain-world and brain-body interactions and 2) how the concrete formulation of this loop as a hierarchical Bayesian model points to key computational quantities that inform a taxonomy of potential disease mechanisms. We consider the utility of this perspective for differential diagnosis in concrete clinical applications.
). One strategy for computational psychiatry is to learn from internal medicine, where mechanistic frameworks for differential diagnosis enable targeted treatment decisions for individual patients. Importantly, differential diagnosis does not necessarily require molecular mechanisms. Much coarser distinctions—inflammatory, infectious, vascular, neoplastic, autoimmunological, or hereditary causes of disease—can provide crucial guidance for treatment, as they disclose fundamentally distinct disease processes.
This article outlines a framework for differential diagnosis that is motivated by a general computational perspective on brain function. While not the first attempt of its kind (
), this article makes three contributions. First, we adopt a disease-independent motif—the inference-control loop as fundament of cybernetic theories (
)—and consider how this may help in systematizing computational perspectives on brain-world and brain-body interactions. Second, we consider a hierarchical Bayesian implementation that suggests three possible computational quantities (predictions, prediction errors [PEs], and their precisions) at five potential failure loci (sensation, perception, metacognition, forecasting, action). Third, we discuss the potential clinical utility of this taxonomy for differential diagnosis in computational psychiatry and psychosomatics [compare (
Different theories of adaptive behavior exist, but they share common themes. We focus on the closed loop between sensations and actions that is at the core of classical cybernetic theories (
). We first summarize extended cybernetic/homeostatic theories (Figure 1) before considering one particular implementation as a foundation for a joint taxonomy of disease mechanisms in psychiatry and psychosomatics.
Figure 1(A) Simple example of a homeostatic reflex arc as described by classical cybernetics. Sensory inputs (sensations) about an environmental quantity “X” (e.g., current body temperature) are compared with a predefined set-point (e.g., ideal body temperature). Corrective actions occur as a function of the mismatch between input and set-point, such that “X” moves closer to the set-point (e.g., heating or cooling the body). (B) Extension to an inference-control loop, where perception (inference of environmental states) under an individual’s generative model of the world updates beliefs that change the reflex arc’s set-point (e.g., allostatic control of bodily states); in other cases, actions might be chosen based on the perception rather than the sensation (not shown here). (C) Further extension of the inference-control loop to include forecasting and metacognition. We wish to emphasize that this plot is highly schematic and provides a core summary of different types of inference-control loops; it should not be misunderstood as a detailed circuit proposal.
A useful starting point to reflect on adaptive behavior is the observation that it must be constrained by requirements of bodily homeostasis. In the simplest case, actions can be purely reactive. For instance, to maintain constant body temperature, sensor information can be compared with a predefined set-point (e.g., 37°C). Actions, such as heating or cooling the body, are then selected to bring sensory inputs closer to that set-point. This reflex arc—which implements the same feedback control as a simple thermostat—is illustrated in Figure 1A.
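To make the analogy concrete, the reflex arc of Figure 1A can be written as a few lines of code. The following is a minimal sketch in Python; the set-point, gain, and noise level are illustrative values chosen by us, not quantities from the literature:

```python
# Minimal sketch of the homeostatic reflex arc in Figure 1A.
# All numerical values (set-point, gain, sensor noise) are illustrative.
import random

SET_POINT = 37.0   # predefined set-point (ideal body temperature, in deg C)
GAIN = 0.5         # strength of the corrective action

def reflex_arc(sensation: float) -> float:
    """Corrective action as a function of the mismatch between input and set-point."""
    return GAIN * (SET_POINT - sensation)

temperature = 39.0                                   # perturbed initial state
for step in range(10):
    sensation = temperature + random.gauss(0, 0.05)  # noisy sensor reading
    temperature += reflex_arc(sensation)             # action moves "X" toward the set-point
    print(f"step {step}: temperature = {temperature:.2f}")
```

Note that nothing in this loop is inferred or predicted: the set-point is fixed, and the action is a direct function of the momentary mismatch.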
If biological systems were like thermostats, with unambiguous sensory inputs and purely reactive in nature, simple feedback control would be sufficient. However, biological systems face three major challenges.
First, sensations (inputs from sensory channels) (see Glossary in the Supplement) are noisy and often highly ambiguous because the world’s states (body or environment) that excite sensors can interact nonlinearly and/or hierarchically (
). Among many findings supporting this notion, illusions prominently illustrate how learned physical regularities can shape perception profoundly (Figure 2A) (
). When sensations are ambiguous, perception can expand the capacity for control, particularly when action selection requires information about hierarchically deep states of the world that relate nonlinearly to sensations. For example, in social interactions, inferring the nature of others’ acts that generated visual input may not be sufficient; instead, inference on deeper states, such as the intentions of others that generated their acts, may be required (
Figure 2Two examples of perceptual inference. From left to right: prior belief, sensory data, resulting perception (posterior). (A) A classical example of a visual illusion. We perceive the surrounding objects in the image as concave and the center object as convex, even though the sensory data stem from a two-dimensional gray-scale image. The reason is that humans (likely resulting from experience) hold an implicit belief that light comes from above. If light comes from above, the shadow of a concave object should be located at the top, while the shadow of a convex object should be located at the bottom. The resulting percept is thus a reinterpretation of current sensory input based on an implicit a priori belief about lights and shadows. (B) Example of the placebo effect. Treatment with drugs that contain no therapeutic ingredient can alter the perception of a physical condition (e.g., reduce physical pain) and elicit autonomic reactions (e.g., an immune response). Again, the change in perception depends on a prior (implicit) belief—here, that the treatment will be effective. Notably, the placebo effect scales with the predicted efficacy of the intervention (for example, syringes are typically considered more potent than pills).
Second, inference on current states of the world can only finesse reactive control. By contrast, prospective control requires predicting the world’s future states (forecasting), taking into account both the influence of possible actions (
) and the intrinsic dynamics of environment and body. Third, biological systems benefit from monitoring the success of their own inference and control. This self-monitoring of one’s level of mastery in acting on the world is part of metacognition and can be seen as a high-level form of inference about one’s capacity for control (Figure 1C) (
Finally, given an inferred (or forecast) state of the world, actions can be selected to achieve a particular goal (optimize some objective function). This objective function can be defined differently—in terms of utility (
). Figure 1C depicts a schematic illustration of the extended inference-control loop. Importantly, any given action alters the world, thus shaping future sensory input. In other words, sensation, perception, forecasting, and action form a closed loop between the brain and its external world. For brevity, we refer to this entire cycle as the inference-control loop. Its closed-loop nature is fundamentally important, as it creates problems of circular causality that are at the core of the diagnostic challenges we examine below.
Computational Modeling of Inference-Control Loops
We now consider how inference-control loops can be formalized as concrete computational models. We adopt hierarchical Bayesian models (HBMs) here but emphasize that this is not the only possible perspective; for forecasting and control in particular, alternative (and arguably more established) modeling approaches exist [e.g., (
)]. We prefer a hierarchical Bayesian view for two main reasons. First, it uses the same formalism and quantities—precision-weighted predictions and precision-weighted PEs (pwPEs)—for implementing perception, forecasting, reactive/prospective control, and metacognition. This suggests a compact taxonomy of computational dysfunctions and their differential diagnosis [compare (
)]. Second, the formulation of control in HBMs is intimately connected to concepts of homeostatic (reactive) and allostatic (prospective) control, which are of central importance for psychosomatics.
Bayesian Inference
A widely adopted concept of perception is the Bayesian framework (
). This casts perception as inference, where prior beliefs about hidden states of the world are updated in the light of sensory data to yield a posterior belief (Figure 3A) (
). A popular notion is that this computation rests on a (hierarchical) “generative model” of how sensory data are caused by hidden states of the world (Figure 3A) (
). Inverting this model under beliefs about the states’ a priori probability allows for inferring the causes of sensations.
Figure 3Schematic of inference-control in a Bayesian framework. (A) (Top panel) Illustration of Bayes’ rule using Gaussian distributions as an example. Bayes’ rule describes how different information sources—prior beliefs (predictions based on a model of the environment and the body within) and new sensory data (likelihood)—are combined to update the belief (posterior). The amount of belief update is proportional to the prediction error (PE)—the difference between predicted (prior) and actual (sensory) data—weighted by a precision ratio (π, inverse variance) of prior beliefs and sensory inputs (likelihood), respectively. Simply speaking, precise prior beliefs diminish and precise sensory data increase the impact of PEs on belief updates. (Bottom panel) Illustration of the concept of a generative model. A generative model infers hidden states of the world (environment or body) by inverting a probabilistic forward model from those states to possible sensory data (likelihood), under prior beliefs about the values of the hidden states. Inverting a generative model thus corresponds to the application of Bayes’ rule. Notably, the mapping from states to data can be mechanistically interpretable (e.g., biophysical models of neuronal responses) or descriptive, such as noisy fluctuations around a constant value or a periodic function (compare circadian rhythms of bodily states). (B) Example of an inference-control loop that is cast as a hierarchical Bayesian model. This figure is not meant to provide a detailed description, nor does it claim to represent the only possible layout. Briefly, the key premise here is that the brain represents and updates generative models (“model of the body/world”), with hierarchically structured beliefs. A low-level belief about a bodily/environmental state “x” (“prior”) is displayed separately from the rest of the model. The expected sensory inputs (under this prior) can be compared against actual sensations to yield a PE; this PE can be sent up the inference hierarchy and update the model. Switching from perception to action requires (temporarily) abolishing sensory precision [see (
) for details]. Actions can then be implemented in two main ways. Homeostatic (reactive) control unfolds as a direct function of PE and serves to fulfill beliefs about sensory input [as encoded by the prior; this can be seen as a probabilistic set-point
]. Allostatic (predictive) control prospectively shifts this probabilistic set-point to elicit actions; this requires predicting future states as a function of actions and bodily/environmental dynamics (“forecasting”). Finally, metacognition could be implemented as an additional layer in the model that holds (and updates) expectations with regard to the amount of PE at the top of the inference hierarchy
). A key point for this article is that Bayesian belief updates have, for most probability distributions, a generic form: the change in belief is proportional to PE—the difference between actual (sensory) data and predicted data (under the prior)—weighted by a precision ratio (
). The latter is critical, as it determines the relative influence of prior and sensory data: precise predictions (priors) reduce, while precise sensory inputs increase, belief updates (Figure 3A). Generally, abnormal computations and/or signaling of any of these three quantities—PEs, predictions, and precisions—could disrupt inference.
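For the Gaussian case illustrated in Figure 3A, this update rule takes a particularly simple form. The sketch below (Python, with arbitrary example numbers) shows how the same PE yields a large or small belief update depending on the precision ratio:

```python
# Gaussian belief update from Figure 3A: the change in belief equals the
# prediction error weighted by a precision ratio. Values are illustrative.
def update_belief(mu_prior, pi_prior, x, pi_sensory):
    """Posterior mean and precision for a Gaussian prior and Gaussian likelihood."""
    pe = x - mu_prior                              # prediction error (PE)
    weight = pi_sensory / (pi_prior + pi_sensory)  # precision ratio
    mu_post = mu_prior + weight * pe               # update proportional to PE
    pi_post = pi_prior + pi_sensory                # precisions add for Gaussians
    return mu_post, pi_post

# Precise sensory data dominate the update ...
print(update_belief(mu_prior=0.0, pi_prior=1.0, x=2.0, pi_sensory=10.0))   # ~(1.82, 11.0)
# ... whereas a precise prior belief diminishes the impact of the same PE.
print(update_belief(mu_prior=0.0, pi_prior=10.0, x=2.0, pi_sensory=1.0))   # ~(0.18, 11.0)
```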
Hierarchical Bayesian Models
The hierarchical structure of the external world suggests an equivalent (mirrored) structure of the brain’s generative model (
). In these models, each level holds a belief (prediction) about the state of the level below (in predictive coding) or its rate of change (in hierarchical filtering). This prediction is signaled to the lower level, where it is compared against the actual state, resulting in a PE. This PE is sent back up the hierarchy to update the prediction—and thus reduce future PEs. Critically, again, this update is weighted by a precision ratio (Figure 3A): higher precision of bottom-up signals (sensory inputs or PEs) or lower precision of predictions leads to more pronounced belief updates. Neurobiologically, in cortex, predictions are likely signaled via N-methyl-D-aspartate receptors at descending connections, and PEs are likely signaled via alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptors (and possibly N-methyl-D-aspartate receptors) at ascending connections, while precision weighting depends on postsynaptic gain; this is determined by neuromodulators [e.g., dopamine, acetylcholine (
). Notably, PEs can be reduced not only by updating the generative model (as above) but also by changing the precision of sensory channels (attention) or by actions that fulfill predictions. The latter is “active inference” (
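To illustrate the message passing sketched above, consider a deliberately simple two-level scheme with identity mappings between levels. This is a didactic toy under our own simplifying assumptions, not the specific algorithm of any cited model:

```python
# Toy two-level predictive coding scheme (linear Gaussian, identity mappings).
# mu1, mu2: beliefs at levels 1 and 2; pi_u, pi_1: precisions of input and prior.
def predictive_coding_step(u, mu1, mu2, pi_u, pi_1, lr=0.1):
    pe_u = u - mu1       # bottom-up sensory prediction error
    pe_1 = mu1 - mu2     # inter-level prediction error
    mu1 += lr * (pi_u * pe_u - pi_1 * pe_1)  # pulled by data, held by the prior
    mu2 += lr * (pi_1 * pe_1)                # updated by the ascending PE
    return mu1, mu2

mu1, mu2 = 0.0, 0.0
for _ in range(100):     # settle on a constant input
    mu1, mu2 = predictive_coding_step(u=1.0, mu1=mu1, mu2=mu2, pi_u=2.0, pi_1=1.0)
print(mu1, mu2)          # both beliefs converge toward the input, reducing future PEs
```

Raising pi_u (precise sensory input) or lowering pi_1 (imprecise prior) makes the updates larger, in line with the precision-weighting logic above.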
While HBMs are popular models of perceptual inference, they can also implement forecasting, action, and metacognition; again, this rests on pwPEs. Switching from inference to forecasting and actions requires “switching off” sensory precision (sensory attenuation) (
). One challenge for psychiatric and psychosomatic applications is that the model often needs to predict not only the effects of chosen actions but also the intrinsic dynamics of environment and body (
Turning to action, HBMs can implement both reactive and prospective control. The former occurs through a reflex arc at the bottom of the hierarchy (Figure 3B). Specifically, by replacing classical cybernetic set-points with beliefs about hidden states that cause sensory inputs, reactive control can be cast as a reflex where PEs elicit corrective actions that minimize surprise about sensory inputs (
)—a property that allows for new explanations of psychosomatic phenomena and placebo effects (see below). Prospective control can be implemented by dynamically adjusting this belief (e.g., its mean or precision) as a function of predicted future states (
). These predictions could be signaled from higher levels in the HBM that implement forecasting.
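A minimal sketch of these two modes of control, under the simplifying assumption of a Gaussian set-point belief and illustrative parameter values, could look as follows. Note that the vigor of the corrective action scales with the precision of the belief it fulfills, a property we return to when discussing placebo effects:

```python
# Sketch of reactive (homeostatic) and prospective (allostatic) control in an
# HBM, following Figure 3B. All parameter values are illustrative assumptions.
def homeostatic_action(sensation, setpoint_mu, setpoint_pi, gain=0.1):
    """Reflex-like action whose vigor scales with the PE and the belief's precision."""
    pe = setpoint_mu - sensation
    return gain * setpoint_pi * pe

def allostatic_shift(setpoint_mu, forecast_perturbation):
    """Prospective control: shift the probabilistic set-point before a predicted
    perturbation arrives (e.g., raising heart rate before anticipated exercise)."""
    return setpoint_mu + forecast_perturbation

state = 37.0
setpoint_mu, setpoint_pi = 37.0, 1.0
setpoint_mu = allostatic_shift(setpoint_mu, forecast_perturbation=0.5)
for _ in range(50):      # the reflex arc now tracks the shifted belief
    state += homeostatic_action(state, setpoint_mu, setpoint_pi)
print(state)             # ~37.5: actions have fulfilled the (shifted) belief
```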
Action selection in HBMs could, in principle, proceed with respect to optimizing any chosen objective function, e.g., a subject-specific utility function (
). One specific proposal is active inference (introduced above). Simply speaking, this postulates that actions serve to minimize PEs by changing the world (environment or body) to fulfill the brain’s expectation of sensory inputs. We focus on this idea because it is closely related to cybernetics [e.g., perceptual control theory (
)] and represents a probabilistic formulation of the core principle of homeostasis—that regulatory actions minimize discrepancies between expected and actual inputs. It thus provides a basis for formal models of brain-body interactions (
Finally, metacognition could be incorporated into HBMs through an additional layer that holds expectations about the level of PEs throughout an inference hierarchy (Figure 1) (
). This layer infers the performance of the inference-control loop as a whole, enabling a representation (and updating) of mastery or self-efficacy beliefs.
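As a toy illustration, such a layer could track a running average of PE magnitude across the hierarchy and read out a mastery belief; the specific mapping from expected PE to self-efficacy below is our own arbitrary choice:

```python
# Sketch of a metacognitive layer (Figure 1C): a running expectation about PE
# magnitude, read out as a mastery (self-efficacy) belief. Illustrative only.
def update_mastery(expected_pe, observed_pe, lr=0.05):
    expected_pe += lr * (abs(observed_pe) - expected_pe)  # track typical PE level
    mastery = 1.0 / (1.0 + expected_pe)                   # high residual PE -> low mastery
    return expected_pe, mastery

expected_pe, mastery = 0.1, 1.0
for pe in [0.1, 0.2, 1.5, 1.8, 2.0]:   # PEs remain elevated despite control attempts
    expected_pe, mastery = update_mastery(expected_pe, pe)
print(round(mastery, 2))               # mastery declines as PEs stay high
```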
Interoception and Homeostatic/Allostatic Control
HBMs have been used for more than 2 decades to investigate perceptual inference on environmental states (exteroception) (
). Signals from bodily sensors (interosensations)—blood oxygenation and osmolality, temperature, pain, heart rate, or plasma concentrations of metabolites and hormones—reach the brain through various afferent pathways that converge on posterior and/or mid insula cortex (
). This implies a joint computational approach to characterizing disease mechanisms in exteroceptive (psychiatry) and interoceptive (psychosomatics) domains (Figure 4).
Figure 4Highly schematic illustration of the inference-control loop for interoception and exteroception. For exteroception, exterosensations (sensory inputs caused by states of the external environment) originate from receptors (e.g., mechanoreceptors, proprioceptors, photoreceptors) and are transmitted via the classical sensory channels (vision, audition, touch, taste, smell) to reach the brain’s primary sensory areas. From the perspective of perception as inference, exterosensations are combined with a priori beliefs, based on a model of the environment, resulting in a perception of the environment that is referred to as exteroception. For interoception, interosensations (sensory inputs caused by bodily states) originate from various bodily receptors (baroreceptors, chemoreceptors, thermoreceptors, etc.). Interosensations carry information about bodily states, such as temperature, pain, itch, blood oxygenation, intestinal tension, heart rate, hormonal concentration, etc., and reach the brain via two major afferent pathways: small-diameter, modality-specific afferent fibers in lamina 1 of the spinal cord that project to specific thalamocortical nuclei and the vagus and glossopharyngeal nerves projecting to the nucleus of the solitary tract. Both pathways converge on the posterior insula cortex. From the perspective of perception as inference, interosensations are combined with a priori beliefs, based on a model of the body, resulting in a perception of the body that is referred to as interoception. Interoception and exteroception combined yield the percept of the body within its environment that informs action selection with regard to both internally directed (autonomic) and externally directed (motor) actions.
Allostasis refers to prospective control, where actions are taken before homeostasis is violated. Put differently, allostasis is a self-initiated temporary change in homeostatic set-points to prepare for a predicted external perturbation (
). Homeostatic control can then be understood as reflex-like emission of corrective actions that fulfill beliefs about bodily states, and allostatic control can be understood as changing homeostatic beliefs under guidance by higher beliefs or forecasts about future perturbations of bodily states (
). Anterior insula (AI) and anterior cingulate cortex (ACC) play a central role in these proposals, as they are thought to represent current and predicted states of the body within the external world (
). Equipped with projections to regions with homeostatic reflex arcs (e.g., hypothalamus, brainstem), AI and ACC may signal the forecasts that guide allostatic control (
). Furthermore, they likely interface interoceptive and exteroceptive systems and mediate their interactions, such as the influence of interoceptive signals on exteroceptive judgments (Figure 4) (
Taxonomy of Failure Loci and Computational Dysfunctions
Our general thesis is that conceptualizing adaptive behavior in terms of inference-control loops and their concrete implementation as HBMs systematizes potential failure loci and associated computational dysfunctions. The ensuing taxonomy of disease mechanisms could guide differential diagnosis, in analogous ways for computational psychiatry and computational psychosomatics. That is, in the general inference-control loop outlined above, maladaptive behavior could arise from primary disruptions at five major loci (Figure 3B): 1) sensory inputs (sensations), 2) inference (perception), 3) forecasting, 4) control (action), and 5) metacognition.
Clearly, each of these processes could be conceptualized under different computational frameworks. In the specific case of HBMs, failures at any of these levels can arise from disturbances in a small set of computational quantities (Figure 3A): 1) bottom-up signals (sensory input or PEs), 2) top-down signals (expectations or predictions), and 3) their precision (inverse uncertainty).
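For concreteness, the resulting grid can be enumerated directly; crossing the five loci with the three quantities yields 15 candidate classes of primary dysfunction (a plain enumeration, nothing more):

```python
# The conceptual grid as a simple data structure: failure loci x computational quantities.
LOCI = ["sensation", "perception", "forecasting", "action", "metacognition"]
QUANTITIES = ["bottom-up signal (sensory input / PE)",
              "top-down signal (prediction)",
              "precision"]

taxonomy = [(locus, quantity) for locus in LOCI for quantity in QUANTITIES]
print(len(taxonomy))   # 15 candidate disease mechanisms to disambiguate
```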
These two axes may lend useful overarching structure to pathogenetic considerations and provide a conceptual grid for classifying disease mechanisms in computational psychiatry and computational psychosomatics. However, this requires that the above levels and quantities can be inferred noninvasively in individual patients, using computational assays that can be applied to behavioral, (neuro)physiological, and neuroimaging data (
), for example, bodily symptoms caused by beliefs. Classic examples for the influence of beliefs on bodily states are placebo and nocebo effects (Figure 2B) (
). In placebo and nocebo effects, expectations about the effects of an intervention trigger reactions that fulfill the expectation. Importantly, the strength of the placebo effect is known to depend not only on beliefs about effect amplitude but also on the precision of this belief (
). Our framework offers a formal explanation for this empirical phenomenon because in HBM implementations of homeostatic control, the vigor of belief-fulfilling actions depends on the precision of the beliefs (
)]. This may be due to the (perceived) lack of a comprehensive framework that formalizes interoception and homeostatic/allostatic control and makes them measurable in individual patients. In the following section, we consider one concrete problem of differential diagnosis and describe how the conceptual grid described above may guide the search for the locus of the primary (initial) abnormality.
Example: Depression and Somatic Symptoms
Many patients with depression have somatic abnormalities, including cardiac (
). One long-standing explanation of this association highlights maladaptive beliefs. For example, false high-level beliefs about volatility of the world could cause prolonged allostatic responses, with persistent sympathetic activation and ensuing damage to cardiovascular, immunological, and metabolic health (“allostatic load”) (
). In our framework, the influence of high-level beliefs could be mediated via projections from allostatic control regions (e.g., AI, ACC) on sympathetic effector regions (e.g., hypothalamus, amygdala, or periaqueductal gray) where they elicit autonomic actions by altering homeostatic set-points. Notably, HBMs can infer fluctuations in beliefs about environmental volatility from behavioral and peripheral physiological measurements (
)] of the above connection strengths and, by comparing models with and without modulatory effects of these beliefs, identify patients in whom bodily symptoms are possible consequences of beliefs. One might also hypothesize that these connection strengths correlate with peripheral indices of sympathetic activation (
An opposite interpretation views depression as “reactive” to initial somatic disease. In our framework, this can be formalized as a metacognitive response to (real or perceived) chronic dyshomeostasis. One implementation of metacognition in HBMs is through a top-level layer that holds beliefs about the performance of the inference-control loop. In this “allostatic self-efficacy” (
) concept, persistently elevated PEs decrease one’s beliefs of mastery over bodily states; this metacognitive “diagnosis” of lack of control may lead to depression as a form of learned helplessness. This proposal could be tested by correlating model-based indices of interoceptive PE signaling with questionnaire measures of self-efficacy and helplessness.
Critically, our framework emphasizes that dyshomeostasis could be real or perceived and could exist independently from the brain or be caused by it:
1. A real bodily source of dyshomeostasis (that evades cerebral attempts of regulation).
) processes within the insula or functional pathologies of N-methyl-D-aspartate receptors and/or neuromodulators that alter the signaling of pwPEs [for reviews, see (
)]. For example, abnormally high precision of beliefs about bodily states could render unremarkable events, such as normal sensory noise, meaningful; this is an interoceptive analog to “aberrant salience” (
Control—inadequate deployment of autonomic, endocrine, and immunological actions; for example, owing to inflammatory changes in allostatic control regions [AI, ACC (
) or owing to inadequately shifted set-points as a result of false beliefs/forecasts (see above).
Distinguishing these options is hard: the closed-loop nature of the inference-control cycle means that any primary disturbance will cause compensatory changes downstream. Inflammation-sensitive imaging (
) could help but covers only a few possible causes. Instead, we propose that model-based inference [from behavior and functional magnetic resonance imaging data (
)] (Supplement) on pwPE signaling in brainstem-hypothalamic-insular-cingulate circuitry could help identify a primary dysfunction. For example, under experimentally controlled perturbations of a (yet undisturbed) bodily state, pwPE signals in posterior and/or mid insula to predictable and unpredictable interosensations should differ, depending on whether the pathology is located at inference or control levels.
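The following toy simulation illustrates the logic of such a perturbation design; all parameters are hypothetical and serve only to show how the two pathologies could produce dissociable pwPE patterns. An inference-level pathology (abnormally high sensory precision) inflates pwPEs to predictable and unpredictable perturbations alike, whereas a control-level pathology (a shifted set-point with intact inference) selectively changes their relative magnitude:

```python
# Hypothetical simulation: mean |pwPE| to predictable vs. unpredictable
# perturbations of a bodily state, under different (toy) pathologies.
import random

def mean_pwpe(pi_sensory, setpoint_mu, predictable, n=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        perturbation = 0.5                 # experimentally controlled perturbation
        prior_mu = setpoint_mu + (perturbation if predictable else 0.0)
        x = 37.0 + perturbation + rng.gauss(0, 0.2)   # noisy interosensation
        total += abs(pi_sensory * (x - prior_mu))     # precision-weighted PE
    return total / n

for label, pi_s, mu in [("healthy", 1.0, 37.0),
                        ("inference pathology (high sensory precision)", 5.0, 37.0),
                        ("control pathology (shifted set-point)", 1.0, 37.4)]:
    print(label,
          "| predictable:", round(mean_pwpe(pi_s, mu, True), 2),
          "| unpredictable:", round(mean_pwpe(pi_s, mu, False), 2))
```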
Computational Psychiatry
Bayesian perspectives of inference-control impairments feature frequently in computational concepts of depression (
). We briefly discuss one application of this framework to distinguish disease mechanisms in autism spectrum disorder (ASD).
Example: ASD
Hierarchical Bayesian theories of ASD revisit long-standing observations of perceptual anomalies in patients, including the excessive processing of irrelevant details and concomitant difficulties of abstraction. They suggest two competing explanations (
): sensory inputs of overwhelming precision or higher-order beliefs that are too imprecise to provide generalizable predictions. In either case, a child with ASD would incessantly experience large PEs during perception (see equation in Figure 3A). Typical symptoms, such as repetitive behaviors and avoidance of complex and volatile situations (e.g., social interactions), can then be interpreted as coping mechanisms to reduce PEs [see (
). Viscerosensory precision weighting has been linked to oxytocin; associated disturbances during development might compromise the construction of generative models that attribute self versus other agency to interoceptive experiences (
These competing explanations could be disambiguated by psychophysical experiments in combination with Bayesian models of perception. Such models have previously been used to assess individual sensory processing (
). These models could be used in ASD to detect (sub)groups with exaggerated precision estimates of sensory inputs and insufficiently precise predictions, respectively (
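To illustrate how such a disambiguation could work in principle, consider an ideal Gaussian observer: the regression slope of perceptual reports on stimuli and the residual response variability jointly identify sensory precision and prior precision, so the two accounts make separable predictions. The sketch below uses simulated (hypothetical) data:

```python
# Recovering sensory and prior precision of a Gaussian Bayesian observer from
# simulated psychophysical reports. Data and parameters are hypothetical.
import random

def simulate_observer(stimuli, mu_prior, sigma_prior, sigma_sensory, seed=1):
    rng = random.Random(seed)
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # weight on the data
    return [mu_prior + w * (s + rng.gauss(0, sigma_sensory) - mu_prior) for s in stimuli]

def recover_sigmas(stimuli, responses):
    n = len(stimuli)
    ms, mr = sum(stimuli) / n, sum(responses) / n
    w = (sum((s - ms) * (r - mr) for s, r in zip(stimuli, responses))
         / sum((s - ms) ** 2 for s in stimuli))               # regression slope = w
    resid_var = sum((r - (mr + w * (s - ms))) ** 2
                    for s, r in zip(stimuli, responses)) / (n - 2)
    sigma_sensory = resid_var ** 0.5 / w                      # Var(r|s) = w^2 * sigma_s^2
    sigma_prior = sigma_sensory * (w / (1 - w)) ** 0.5        # invert the weight formula
    return sigma_sensory, sigma_prior

stimuli = [s / 10 for s in range(-50, 51)]
responses = simulate_observer(stimuli, mu_prior=0.0, sigma_prior=2.0, sigma_sensory=0.5)
print(recover_sigmas(stimuli, responses))   # approximately (0.5, 2.0)
```

In this toy setting, overly precise sensory input (small sigma_sensory) and an insufficiently precise prior (large sigma_prior) both steepen the slope, but they differ in residual response variability (reduced in the former, not in the latter), which is what allows them to be told apart.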
Assessing the computational anatomy of circuit dysfunctions follows principles of homeostatic thinking, as is commonplace in medicine, and holds great diagnostic potential. However, clinical translation faces nontrivial challenges, particularly in application to psychosomatics.
Chicken and Egg Problems
The inference-control loop represents the conceptual heart of theories of homeostasis, allostasis, and cybernetics (Figure 1). Its closed-loop nature means that a dysfunction in one domain typically invokes a cascade of changes throughout the circuit, making it difficult to differentiate cause from consequence. However, different primary disturbances induce distinct patterns of change that might be discriminable statistically—as commonly done in fields familiar with compensatory changes throughout dyshomeostatic systems, such as internal medicine (compare differential diagnosis of hypothalamic, pituitary, and glandular disturbances in endocrinology). Computational psychiatry and psychosomatics could finesse this by statistical comparison of models embodying alternative disease processes (
). Additionally, in medicine, challenge (perturbation) approaches are often crucial for diagnosis. Combining designed perturbations with model selection and prospective assessments of disease trajectories (
) represents a promising approach to resolve ambiguity created by circular causality.
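As a schematic illustration of such model comparison, consider distinguishing a static from a drifting process underlying a symptom time series. The two "models" are deliberately simplistic placeholders, and BIC serves here as a crude stand-in for log model evidence:

```python
# Toy model comparison via BIC (lower = preferred).
import math

def fit_constant(y):                      # model 1: static process
    mu = sum(y) / len(y)
    return [(v - mu) ** 2 for v in y], 1  # squared residuals, #parameters

def fit_linear(y):                        # model 2: drifting process
    n = len(y)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(y) / n
    b = (sum((x - mx) * (v - my) for x, v in zip(xs, y))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [(v - (a + b * x)) ** 2 for x, v in zip(xs, y)], 2

def bic(sq_resid, k):
    n = len(sq_resid)
    sigma2 = sum(sq_resid) / n            # maximum-likelihood noise estimate
    log_lik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return -2 * log_lik + (k + 1) * math.log(n)   # +1 for the noise parameter

y = [0.1 * t + (-1) ** t * 0.2 for t in range(40)]   # simulated drifting "symptom"
for name, fit in [("static process", fit_constant), ("drifting process", fit_linear)]:
    sq, k = fit(y)
    print(name, round(bic(sq, k), 1))
```

In actual applications, the compared models would of course embody physiologically motivated disease processes, and model evidence would be approximated with more principled methods than BIC.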
One central challenge for computational psychosomatics concerns the availability of somatic perturbation techniques. In contrast to computational psychiatry, where methods for manipulating beliefs and precisions can be adopted from psychology and psychophysics, only a few techniques are available for perturbing the somatocerebral branch of psychosomatics. These include cardiac challenges with short-acting sympathomimetics (
). Developing further challenges that are noninvasive and provide temporal control should become a priority topic for computational psychosomatics.
Universality versus Specificity
The HBM framework suggests pwPEs as a central computational quantity for inference, forecasting, action, and metacognition. This generalizing view has pros and cons. On one hand, it suggests a conceptual grid for differential diagnosis and implies that computational differentiation of pwPE abnormalities could find broad diagnostic application. On the other hand, one may be concerned that we portray cortex as a “nonspecific hierarchical Bayesian machine” (as put by one of our reviewers) without neuroanatomical specificity. We do not wish to convey this impression. The inference problems the brain faces vary, for example, depending on the sensory channels involved and the depth of hierarchical coupling among environmental states. Different tasks require different types of (cortically represented) generative models and thus distinct circuits; compare proposed circuits for interoception/allostasis (
). Empirically, in tasks using the same sensory modality but requiring inference on concrete versus abstract social quantities, pwPEs were reflected by activity in partially overlapping and partially distinct circuits (
Furthermore, we do not claim that the framework presented covers all existing psychiatric and psychosomatic phenomena. Not all symptoms relate to perception, forecasting, action, or metacognition as the core components of our framework. However, where this relation exists, our framework may provide useful guidance in establishing analogous schemes for differential diagnostics in computational psychiatry and computational psychosomatics. Combined with models that can infer pwPE signaling in cortical hierarchies from neuroimaging or electrophysiological data (Supplement), this could allow for noninvasive readouts of circuit function that may support differentiation of potential failure loci. The promise and limitations of this approach require prospective patient studies that evaluate its predictive validity.
Acknowledgments and Disclosures
This work was supported by the René and Susanne Braginsky Foundation (KES), University of Zurich (FHP, KES), Deutsche Forschungsgemeinschaft Grant No. TR-SFB 134 (KES), and University of Zurich Clinical Research Priority Programs Multiple Sclerosis and Molecular Imaging Network Zurich (KES).
The authors report no biomedical financial interests or potential conflicts of interest.