Fiser J, Aslin R N, Orbán G, Lengyel M, 2006, "Bayesian model-learning and the emergence of visual features and rules" Perception 35 ECVP Abstract Supplement
Bayesian model-learning and the emergence of visual features and rules
J Fiser, R N Aslin, G Orbán, M Lengyel
Drawing on our human experimental and modeling work over the past few years, in this talk we argue for four related points. First, we suggest that humans develop internal visual representations by incremental learning of the co-occurrence and predictability of visual elements. Second, we present evidence that, despite this statistics-based learning, humans do not encode the full second-order correlational structure of the scene. Rather, they learn a sufficient representation of the underlying independent causes generating the scene, and this strategy naturally leads to the emergence of chunking and of some basic Gestalt rules of perception. Third, we show that these learning processes can be well captured within the framework of Bayesian model-learning using sigmoid belief networks. This learning scheme seeks out the simplest model of the scene and shifts to more complex models only when warranted by additional experience. Fourth, we point out that the same model-learning framework has been shown to be superior to previous associative models in explaining aspects of configural learning in animals, and thus provides a natural link between the learning of visual representations and classical conditioning.
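The "simplest model first" behavior described in the third point is an instance of the automatic Occam's razor built into Bayesian model comparison: integrating parameters out penalizes flexible models until the data demand them. The sketch below is not the authors' sigmoid-belief-network model; it is a minimal generic illustration (all function names are ours) using two models of coin-flip data, a parameter-free "fair coin" model and a flexible model with an unknown bias under a uniform prior, whose marginal likelihood has the closed form B(h+1, n-h+1).

```python
from math import lgamma, log

def log_evidence_simple(heads, n, p=0.5):
    # Parameter-free model: the coin is fixed at bias p.
    # Its marginal likelihood is just the likelihood itself.
    return heads * log(p) + (n - heads) * log(1 - p)

def log_evidence_flexible(heads, n):
    # Flexible model: unknown bias with a uniform (Beta(1,1)) prior,
    # integrated out analytically:
    #   ∫ p^h (1-p)^(n-h) dp = B(h+1, n-h+1)
    return lgamma(heads + 1) + lgamma(n - heads + 1) - lgamma(n + 2)

# Observations actually generated by a coin with bias 0.7.
for n, heads in [(10, 7), (100, 70)]:
    simple = log_evidence_simple(heads, n)
    flexible = log_evidence_flexible(heads, n)
    winner = "simple" if simple > flexible else "flexible"
    print(f"n={n:3d}: log-evidence simple={simple:7.2f}, "
          f"flexible={flexible:7.2f} -> {winner}")
```

With only 10 flips the simple model wins despite the 70% head rate, because the flexible model pays an Occam penalty for its spread-out prior; with 100 flips the evidence shifts to the more complex model, mirroring the abstract's claim that richer models are adopted only when additional experience warrants them.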
[Supported by IST-FET-1940, NIH HD-37082, Gatsby Charitable Foundation.]