For example, the FFA was selective for the “plants” category in addition to “portraits.” These findings are consistent with earlier work from the same group, which highlighted a distributed representation of categories with smooth, overlapping gradients of category preference along certain cortical directions (Huth et al., 2012). A second way to test the idea that scene categories are represented in specific brain regions is to ask whether it is possible to decode the category viewed by the observer
on the basis of the BOLD activity alone. This approach is similar to that used by the same group to demonstrate how the brain represents specific images and objects (Naselaris et al., 2011). The authors found that BOLD activity successfully predicted the category membership of individual images. Importantly, these images were of novel scenes that were not used to formulate the encoding model, indicating that the model generalized beyond the specific exemplars on which it was trained. Then, they used the LDA model to successfully predict the objects present in individual images
on the basis of predicted category membership alone. This is quite a remarkable result given that objects are only encoded in the model indirectly, through their correlation with scene categories. The success of this decoding approach implies that the distribution of objects in natural scenes contains substantial structure and that this structure can be exploited by the visual system.
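To make this decoding step concrete, the following sketch (illustrative Python, not the authors' code; the array sizes, variable names, and probabilities are assumptions) shows how object probabilities could be read out from decoded category probabilities by marginalizing over the latent categories of an LDA-style model, in which each category defines a distribution over object labels.

```python
import numpy as np

# Hypothetical illustration: each latent scene category k defines a
# distribution over object labels, phi[k, o] = P(object o | category k).
# If decoding from BOLD activity yields theta[k] = P(category k | BOLD),
# object probabilities follow by summing over categories.

rng = np.random.default_rng(0)

n_categories, n_objects = 20, 850                                 # sizes are illustrative
phi = rng.dirichlet(np.full(n_objects, 0.1), size=n_categories)   # P(object | category)
theta = rng.dirichlet(np.full(n_categories, 0.5))                 # decoded P(category | BOLD)

# P(object | BOLD) = sum_k P(object | category k) * P(category k | BOLD)
p_object = theta @ phi

# The most probable objects are the model's guess at the scene's contents
top_objects = np.argsort(p_object)[::-1][:5]
print("most likely object indices:", top_objects)
```

Under this reading, the decoder never sees object labels directly; they are recovered entirely from the category mixture, which is what makes the reported result striking.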
These results might help to explain previous psychophysical findings indicating that, when the gist of a scene is understood, objects within it can be recognized accurately even at extremely low resolutions, in some cases as low as ∼6 × 6 pixels (Torralba, 2009). Performance in these tasks becomes worse when objects are isolated from their context. Similarly, human observers can detect an object more efficiently when it is found within a contextually consistent scene than when it is not (Biederman et al., 1973). Evidently, the problem of inferring object identity from low-level visual features is made much easier by context. Much as low-level vision can make use of prior information to accurately estimate motion direction from noisy observations (Weiss et al., 2002), high-level vision could make use of learned statistical regularities to estimate object identity in ambiguous scenes (Lee and Mumford, 2003). More generally, the approach developed by Stansbury et al. (2013) may provide an objective way to probe the brain’s representation of abstract sensory information.
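The analogy to cue combination can be illustrated with a toy calculation (a minimal sketch; the object labels and probabilities are invented for illustration and do not come from the studies cited): a nearly uninformative appearance likelihood is sharpened by a context-dependent prior via Bayes' rule.

```python
import numpy as np

# Toy illustration of context-guided object inference, in the spirit of
# Lee and Mumford (2003): combine a noisy bottom-up likelihood from
# low-level features with a prior supplied by the scene gist.
# All labels and numbers below are assumptions made for illustration.

objects = ["car", "boat", "mailbox"]

# Prior over object identity given that the gist is a street scene
prior_street = np.array([0.70, 0.05, 0.25])

# Likelihood of an ambiguous, low-resolution patch under each hypothesis;
# on its own it barely favors any object
likelihood = np.array([0.36, 0.34, 0.30])

# Posterior ∝ likelihood × prior, then normalize (Bayes' rule)
posterior = likelihood * prior_street
posterior /= posterior.sum()

for name, p in zip(objects, posterior):
    print(f"P({name} | patch, street gist) = {p:.2f}")
```

Even though the bottom-up evidence barely distinguishes the hypotheses, the contextual prior pulls the posterior strongly toward the contextually consistent object, mirroring the behavioral advantage reported by Biederman et al. (1973).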