Visual perception and attention

1. Visual perception (John Hargreaves, Manan Vohra)

2. Biology of the visual system (Tom Knoll, Richie Kennedy)

Perceiving the world is the result of light passing into the eye and being projected onto the retina. Several systems regulate how an image is presented to the retina. First, light passes through the cornea, a transparent layer at the front of the eye that also acts as a protective covering. Then, the amount of light allowed into the eye is regulated by the iris, which adjusts the size of the pupil. Finally, the light passes through the lens, which focuses the image (see figure 1). The retina contains millions of receptor cells known as photoreceptors, of which there are two kinds: cones and rods. These photoreceptors change their electrical activity in response to the level of light that they detect.

Cones are the receptor cells responsible for colour vision, while rods are most sensitive to low-intensity light and therefore help us to see at night or in low-light conditions. When a photoreceptor detects a change in the level of light, it sends a message in the form of a neurotransmitter to a bipolar cell, which in turn activates a ganglion cell; the ganglion cell sends the message to the brain through the optic nerve. The optic nerves from each eye meet in the brain at a point known as the optic chiasm. Here the information from both eyes converges and is split into information for the left and right visual fields (as shown in figure 2). Information from the left visual field is subsequently processed in the right hemisphere, and information from the right visual field in the left hemisphere (Gazzaniga et al., 2002).

The optic nerve passes the information from the photoreceptors to the lateral geniculate nucleus (there is one in each hemisphere), which has six layers for analysing information from different areas of the visual field. Layers 2, 3 and 5 receive projections from the ipsilateral eye (on the same side of the body) and layers 1, 4 and 6 from the contralateral eye (on the opposite side of the body). Each layer receives information from different types of ganglion cells in the retina. The lateral geniculate nucleus organises this information and then outputs it to the primary visual cortex (V1) via the optic radiations, where further processing of the retinal image takes place. V1, also known as the striate cortex, is the first cortical processing area for vision. It contains several layers responsible for processing different types of visual data; for example, one area is sensitive to variations in colour and another to variations in movement. As the information passes through the visual system, the data is integrated and begins to form recognisable objects.

The visual data then continues in one of two directions (as shown in figure 3). The ventral stream passes information to the inferior temporal cortex, which determines what an object is. Alternatively, information is sent via the dorsal stream to the visual association cortex, which determines where an object is (Carlson, 2004).

Because of the complexity of the visual system, some striking case studies have been recorded. One worth noting is the case of the colour-blind painter, described by Sacks (1995). An artist lost the ability to see any colour other than shades of grey after a car crash. Colour blindness is usually something someone is born with, it is extremely rare for a person to be unable to see colours at all, and a defect in a person's cones is usually the cause. This colour blindness caused by brain damage, known as cerebral achromatopsia, affected the artist's life significantly: food lost its appeal, he avoided social interaction and he even lost interest in sexual intercourse. One of the most striking aspects of the case was that even when he closed his eyes to eat, the mental image of his food was as grey or black as it appeared to him; he could not even imagine or dream in colour. This was due to the loss of the ability to use V4, the area responsible for higher-order colour generation. He could still use his wavelength-sensitive V1 area, which meant he could still see variation in lightness and darkness. This case study is documented in Sacks' An Anthropologist on Mars (1995).

The Island of the Colour-blind (1996) is another book by Oliver Sacks, in which he describes his visit to the island of Pingelap in Micronesia. On this island, 5-10% of the inhabitants have a hereditary condition that leaves their cones non-functional.

3. Colour perception (John Hargreaves, Dimitrios Kontaris)

4. Perceptual organisation (Monica Visani Scozzi, Richie Kennedy)

5. Object recognition (Jiri Siftar)

Attention (not sure whether this should be part of this section, but it is closely related)

Location or object based?
Visual attention is selective. We have a finite amount of attentional resources – we couldn’t attend to everything if we wanted to!
Sternberg (1999) “Attention acts as a means of focusing limited mental resources on the information and cognitive processes that are most salient at a given moment”

Location - The idea is the same for all location-based models: objects that fall under the "beam" of attention receive further processing with priority (a minimal simulation sketch follows the list below).

  • Spotlight (Posner, 1980) - Spatial cueing tasks
  • Zoom-lens (Eriksen & St. James, 1986)
  • Multiple spotlights (Awh & Pashler, 2000)
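
As a rough illustration of how the location-based ("spotlight") account is tested with spatial cueing tasks (Posner, 1980), the sketch below simulates valid and invalid cue trials and computes the validity effect: responses are faster when the target appears at the cued location. All timing constants and the validity proportion are invented for illustration, not taken from Posner's data.

import random

# Minimal sketch of a Posner spatial cueing simulation (illustrative numbers only).
# Assumption: attending the cued location speeds detection there and slows it elsewhere.

BASE_RT = 300          # hypothetical baseline detection time in ms
CUE_BENEFIT = -30      # target at the cued location: attention is already there
CUE_COST = +40         # target at the uncued location: attention must be redirected

def run_trial(cue_validity=0.8):
    cued_side = random.choice(["left", "right"])
    # With 80% validity the target usually appears at the cued location
    target_side = cued_side if random.random() < cue_validity else \
        ("left" if cued_side == "right" else "right")
    noise = random.gauss(0, 20)
    rt = BASE_RT + (CUE_BENEFIT if target_side == cued_side else CUE_COST) + noise
    return target_side == cued_side, rt

def validity_effect(n_trials=1000):
    valid_rts, invalid_rts = [], []
    for _ in range(n_trials):
        valid, rt = run_trial()
        (valid_rts if valid else invalid_rts).append(rt)
    return sum(invalid_rts) / len(invalid_rts) - sum(valid_rts) / len(valid_rts)

print(f"Validity effect (invalid - valid RT): {validity_effect():.0f} ms")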

Object - Attention selects from objects themselves, rather than potentially empty regions of space

  • In accordance with Gestalt laws (Duncan, 1984) - overlapping-objects experiments

General consensus on location- vs. object-based attention - attention can operate in both ways, depending on the observer's current goals.

Feature binding

Pop-out effect - a target that differs in a single perceptual feature is detected immediately.
When the target differs only in a conjunction of features, attention is needed to glue the features together.

Feature Integration Theory (Treisman & Gelade, 1980)

  • location based
  • feature maps for each type of visual feature + master map of locations
  • feature detection occurs pre-attentively
  • correct binding (search) requires serially applied focused attention

Evidence - illusory conjunctions; dual route - dorsal (spatial) vs. ventral (object) processing paths
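
To make FIT's search prediction concrete, the hedged sketch below simulates the classic pattern the theory is built around: pre-attentive feature (pop-out) search is roughly independent of display size, whereas serial conjunction search produces response times that grow with the number of items. The timing constants and the self-terminating-search assumption are illustrative choices, not parameters from Treisman & Gelade (1980).

import random

# Illustrative simulation of Feature Integration Theory's core prediction:
# feature (pop-out) search is parallel and unaffected by display size, while
# conjunction search requires serial attention and scales with display size.
# All timing constants are hypothetical.

def search_rt(set_size, conjunction):
    base = 400                      # hypothetical perceptual/motor time in ms
    if conjunction:
        # serial, self-terminating search: on average half the items are inspected
        items_checked = (set_size + 1) / 2
        return base + 40 * items_checked + random.gauss(0, 15)
    # pre-attentive feature detection: the odd item "pops out" regardless of set size
    return base + random.gauss(0, 15)

for set_size in (4, 8, 16, 32):
    feat = sum(search_rt(set_size, False) for _ in range(200)) / 200
    conj = sum(search_rt(set_size, True) for _ in range(200)) / 200
    print(f"set size {set_size:2d}: feature {feat:.0f} ms, conjunction {conj:.0f} ms")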

Object Files

  • object representations (result of FIT) are maintained over time and despite movement
  • objects are perceivable while unidentified
  • multiple object tracking (up to 3-5)
  • spatio-temporal continuity matters more than content (changing the colour has no impact)

For objects to be tracked, they must maintain a spatio-temporally plausible path of motion, even when occlusion occurs. Object files are "sticky", but motion must be consistent with reality and expectation.
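
As a minimal sketch of the correspondence problem object files solve during multiple object tracking, the code below updates each object file by choosing the nearest spatio-temporally plausible item in the next frame while ignoring surface features such as colour. The distance threshold and the nearest-neighbour rule are simplifying assumptions for illustration only.

import math

# Sketch of object-file correspondence for multiple object tracking.
# Assumption: an object file follows whichever new item is closest to the object's
# previous position, provided the jump is spatio-temporally plausible;
# surface features such as colour are ignored.

MAX_PLAUSIBLE_JUMP = 50.0   # hypothetical limit on frame-to-frame displacement

def update_object_files(object_files, new_items):
    """object_files: dict id -> (x, y); new_items: list of dicts with x, y, colour."""
    updated = {}
    for obj_id, (ox, oy) in object_files.items():
        # choose the nearest new item, regardless of its colour
        nearest = min(new_items, key=lambda it: math.hypot(it["x"] - ox, it["y"] - oy))
        dist = math.hypot(nearest["x"] - ox, nearest["y"] - oy)
        if dist <= MAX_PLAUSIBLE_JUMP:
            updated[obj_id] = (nearest["x"], nearest["y"])
        # otherwise the motion is implausible and the object file is lost
    return updated

files = {1: (0.0, 0.0), 2: (100.0, 0.0)}
frame = [{"x": 5.0, "y": 2.0, "colour": "red"},     # colour change does not break tracking
         {"x": 102.0, "y": 3.0, "colour": "green"}]
print(update_object_files(files, frame))   # both files survive the colour change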

Memory

Binding in Working memory: Phonological Loop + Visuo-Spatial Sketchpad -> Central Executive (Episodic Buffer)
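
The toy sketch below is one hedged way to picture this binding step: a verbal code held in the phonological loop and a visuo-spatial code held in the sketchpad are combined by the central executive into a single bound episode, standing in for the episodic buffer. The class and field names are mine for illustration and are not part of any published model's notation.

from dataclasses import dataclass, field

# Toy illustration of binding in working memory: a verbal code (phonological loop)
# and a visuo-spatial code (sketchpad) are combined into one bound episode,
# here standing in for the episodic buffer. Names are illustrative, not a model API.

@dataclass
class WorkingMemory:
    phonological_loop: list = field(default_factory=list)       # verbal/acoustic codes
    visuospatial_sketchpad: list = field(default_factory=list)  # visual/spatial codes
    episodic_buffer: list = field(default_factory=list)         # bound multi-modal episodes

    def bind(self):
        # the central executive combines the most recent verbal and visual codes
        if self.phonological_loop and self.visuospatial_sketchpad:
            episode = (self.phonological_loop[-1], self.visuospatial_sketchpad[-1])
            self.episodic_buffer.append(episode)
            return episode

wm = WorkingMemory()
wm.phonological_loop.append("the word 'blue'")
wm.visuospatial_sketchpad.append("red ink, upper left of the screen")
print(wm.bind())   # one bound episode containing both codes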

Object recognition

Models of object recognition need to allow accurate performance regardless of viewing conditions.

Viewpoint-Invariant Theories

Viewpoint-invariant theories suggest that object recognition is based on structural information, such as individual parts, allowing for recognition to take place regardless of the object’s viewpoint.
3 stages (Marr & Nishihara, 1983) - 2D, 2 1/2D, 3D
Recognition by components (Biederman, 1987) - recognition occurs by matching a set of 3D primitives (geons) to a stored representation in visual memory. While the model can account for much data, it cannot account for performance with novel objects.
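
A hedged sketch of the viewpoint-invariant idea: if an object is described only by the set of geons it contains, that description is the same from any viewpoint, so recognition reduces to matching the extracted geon set against stored structural descriptions. The stored objects, geon labels and the overlap-based matching rule below are illustrative stand-ins, not Biederman's actual formulation.

# Toy sketch in the spirit of recognition-by-components (Biederman, 1987):
# an object is described by the set of 3D primitives (geons) it contains, so the
# description does not change when the viewpoint changes.
# The stored descriptions below are invented examples.

STORED_OBJECTS = {
    "mug":      frozenset({"cylinder", "curved handle"}),
    "suitcase": frozenset({"brick", "curved handle"}),
    "torch":    frozenset({"cylinder", "cone"}),
}

def recognise(extracted_geons):
    """Return the stored object whose geon set best overlaps the extracted set."""
    extracted = frozenset(extracted_geons)
    best, best_score = None, 0.0
    for name, geons in STORED_OBJECTS.items():
        score = len(extracted & geons) / len(extracted | geons)  # Jaccard overlap
        if score > best_score:
            best, best_score = name, score
    return best, best_score

# The same geons are recovered from any viewpoint, so recognition is unaffected by
# rotation; a genuinely novel object with no stored description is not handled,
# matching the criticism noted above.
print(recognise(["curved handle", "cylinder"]))   # ('mug', 1.0)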

Viewpoint-Dependent Theories

Viewpoint-dependent theories suggest that object recognition is affected by the viewpoint from which an object is seen, implying that objects seen from novel viewpoints are identified less accurately and more slowly.

Extrapolation from Canonical viewpoints
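
As a hedged sketch of the canonical-viewpoint idea, the code below stores each object as a small set of view angles and scores a test view by its angular distance to the nearest stored view, so match quality (and, by assumption, speed and accuracy) falls off for novel viewpoints. The objects, angles and linear fall-off are invented for illustration.

# Toy sketch of a viewpoint-dependent account: objects are stored as a few canonical
# views (here just orientation angles), and a new view is recognised by comparison
# with the nearest stored view. Match quality drops as the test view rotates away
# from a canonical view. All values are illustrative.

CANONICAL_VIEWS = {"chair": [0, 90], "car": [0, 45, 180]}   # stored view angles (degrees)

def angular_distance(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)

def match_quality(obj, test_angle):
    nearest = min(angular_distance(test_angle, v) for v in CANONICAL_VIEWS[obj])
    return max(0.0, 1.0 - nearest / 180.0)   # 1.0 at a canonical view, lower further away

for angle in (0, 30, 135):
    print(f"chair at {angle:3d} deg: match quality {match_quality('chair', angle):.2f}")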

Context influence

+ Specialised brain regions for recognition - Fusiform Face Area (FFA), Parahippocampal Place Area (PPA),…

6. The Gestalt view (Michal Charemza, George Maninis)


7. Perceptions of groupings (Sam Naylor, Kio McLoughlin)

8. Psychological pop-out (Chi Tsang, Jianchuan Qi)

9. Icon structure and search (Dimitrios Kontaris, Filippas Kotsis, Roham Haddadi)

10. Attention (Susan Zhuang)

One of the earlier famous definitions of attention was given by the pioneering psychologist William James in 1890:

“Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalisation, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.” (James, 1890, pp. 403-404)

This definition highlights the selective nature of attention. Selective attention may for instance occur when you search the item listing of an online shopping website for a specific household item. During tasks such as this, some pieces of information are registered, whereas other information is ignored.

What is excluded from James’ definition of attention, however, is the notion of divided attention. An example of divided attention is trying to type up a coursework assignment while simultaneously chatting with a friend on Facebook. The limit on the amount of incoming information you can process, i.e. your attentional processing capacity, is particularly relevant to divided attention. See Multitasking.

James' definition also focuses on attention as a conscious process - however, this notion has been qualified in later models. One such model is Schneider and Shiffrin's automaticity model (Schneider & Shiffrin, 1977), which makes a distinction between controlled processing and automatic processing:

  • Controlled processing is slow and conscious. Because such processing places substantial demands on an individual's attentional resources, it is of limited capacity. An example of controlled processing is when you first learn to drive a car and need to focus all your attention on the skills and rules required to perform the task successfully.
  • Automatic processing is fast and unconscious. Such processing does not make demands on an individual's attentional resources, and is therefore not constrained by capacity limitations. While controlled processing behaviour is easy to modify, automatic processing behaviour is not. An example of this is when you have become an experienced driver and are able to effortlessly retrieve the skills and rules of driving from memory.

A common criticism of the automaticity model concerns the claim that automatic processing makes no demands on attentional resources. The Stroop effect is a famous demonstration of how attention directed towards a specific task can be detrimentally affected by automatically processed information: when asked to name the colour a word is printed in, and the word spells out a conflicting colour (e.g. the word 'blue' printed in red), participants struggle to perform the task quickly and accurately (Stroop, 1935).
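
The hedged sketch below simulates the basic Stroop pattern: colour naming is slowed on incongruent trials because the automatically read word competes with the controlled naming response. The interference magnitude and baseline times are invented for illustration and are not Stroop's (1935) measurements.

import random

# Illustrative Stroop simulation: colour naming (controlled) is slowed when the
# automatically read word names a conflicting colour. Timing constants are invented.

COLOURS = ["red", "blue", "green"]
BASE_NAMING_RT = 600        # hypothetical colour-naming time in ms
INTERFERENCE = 120          # extra time when the word conflicts with the ink colour

def stroop_trial():
    ink = random.choice(COLOURS)
    word = random.choice(COLOURS)
    congruent = (ink == word)
    rt = BASE_NAMING_RT + (0 if congruent else INTERFERENCE) + random.gauss(0, 40)
    return congruent, rt

congruent_rts, incongruent_rts = [], []
for _ in range(1000):
    congruent, rt = stroop_trial()
    (congruent_rts if congruent else incongruent_rts).append(rt)

print(f"congruent:   {sum(congruent_rts) / len(congruent_rts):.0f} ms")
print(f"incongruent: {sum(incongruent_rts) / len(incongruent_rts):.0f} ms")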

(Note: I was initially planning to write about studies of selective attention (e.g. the dichotic listening task) and relevant theories (Posner's spotlight model, Treisman's feature integration model, Broadbent's early selection filter, the Deutsch-Norman late selection filter) - but I'm wondering whether these fit better into section 11, as they all revolve around auditory vs. visual attention...)

11. Attention in different modalities (Terri Herbert, Tom Greenwood)

12. Focus of attention and anticipating attention (Ara Avakian, Numa Pigelet, Roham Haddadi)

13. Top-down and bottom-up control of attention (Bing Cui, Yun Duan, Chaoyu Ye)

14. Application to systems

15. References

Carlson, N. R. (2004). Physiology of Behaviour (8th ed.). Boston, MA: Allyn and Bacon.

Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (2002). Cognitive Neuroscience: The Biology of the Mind (2nd ed.). New York, NY: W. W. Norton & Company.

James, W. (1890). The Principles of Psychology (Vol. 1). New York, NY: Henry Holt.

Sacks, O. W. (1995). An Anthropologist on Mars: Seven Paradoxical Tales. London: Picador.

Sacks, O. W. (1996). The Island of the Colour-blind and Cycad Island. London: Picador.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1-66.

Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643-662.
