Lovell P G, To M, Tolhurst D J, Troscianko T, 2006, "Observer ratings for natural scenes under varying illumination: Separating high-level and low-level processes by inversion and a low-level visual-difference predicting model" Perception 35 ECVP Abstract Supplement
Observer ratings for natural scenes under varying illumination: Separating high-level and low-level processes by inversion and a low-level visual-difference predicting model
P G Lovell, M To, D J Tolhurst, T Troscianko
We are developing a model that predicts the extent to which two images appear different to a human observer (Párraga et al, 2005 Vision Research 45 25-26). The model works by locally comparing image contrast (luminance plus two chromatic channels), and therefore operates purely on low-level information. When people view images, however, they may ignore features that appear unimportant. We hypothesise that image features arising from random changes in illumination are disregarded in this way by humans, but not by the model. Furthermore, we expect that inverting an image makes such illumination noise harder to discount, since the light-from-above assumption is violated. Monochrome presentation may likewise make shadows more difficult to discount without full knowledge of scene structure. We collected observer ratings of the differences between pairs of natural-scene images of the same object taken as time of day and weather conditions varied, presented upright, inverted, in colour, and in monochrome, and compared these ratings with the predictions of the image-difference model. We found discrepancies between observer and model ratings in the predicted direction, indicating that observers tend to discount illumination noise when comparing natural images.