Multitasking

1. Divided attention

Divided attention - Psychological refractory period (PRP) Effect

From Wikipedia: http://en.wikipedia.org/wiki/Psychological_refractory_period

The term psychological refractory period (PRP) refers to the period of time during which the response to a second stimulus is significantly slowed because a first stimulus is still being processed. PRP is a product of the psychological refractory period paradigm, a paradigm in which two different stimuli are presented in rapid succession, each requiring a fast response. If the time between the first stimulus and the second is made shorter, the time to respond to the second will be longer.
This PRP effect is generally taken as evidence for a central bottleneck in initiating responses to stimuli. The bottleneck means that while the first stimulus is still being processed, other stimuli cannot be processed.

PRP effect:

  • slower RT for T2 when the interval between the onset of T1 and T2 is shortened.
  • this PRP effect is generally taken as evidence for a central bottleneck when initiating responses to stimuli (see the sketch below).
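
To make the bottleneck account concrete, the following is a minimal sketch (in Python) of how a central bottleneck would produce the PRP effect. The function name rt2_with_bottleneck and all stage durations are invented for illustration; only the structure - T2's central stage has to wait until T1's central stage is finished - comes from the account above.

    # Toy simulation of the central-bottleneck account of the PRP effect.
    # Stage durations (ms) are made-up values, used only for illustration.

    def rt2_with_bottleneck(soa, a1=100, b1=150, a2=100, b2=150, c2=50):
        """RT to the second stimulus (T2), measured from T2 onset.

        a = perceptual stage, b = central (response-selection) stage,
        c = motor stage. T2's central stage cannot start until T1's
        central stage has finished (the bottleneck)."""
        t1_frees_bottleneck = a1 + b1          # time from T1 onset
        t2_perception_done = soa + a2          # time from T1 onset
        t2_central_start = max(t1_frees_bottleneck, t2_perception_done)
        t2_response_time = t2_central_start + b2 + c2
        return t2_response_time - soa          # RT2 is measured from T2 onset

    for soa in (50, 150, 300, 600):
        print(f"SOA = {soa:3d} ms -> RT2 = {rt2_with_bottleneck(soa):.0f} ms")

Run with these made-up durations, RT2 falls as the SOA grows (400 ms at a 50 ms SOA, levelling off at 300 ms from a 300 ms SOA onwards), which is the basic PRP pattern described above.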

2. Wickens’ Multiple Resource Model

Wickens’ multiple resource theory suggests that several different ‘cognitive resources’ can be used simultaneously. Cognitive resources are represented by boxes in figure 1.

Figure 1: Wickens’ multiple resource theory (MRT) model

In basic terms, the theory proposes that when different tasks require the same cognitive resource (e.g. visual perception), information must be processed in sequence. When the tasks require different resources (e.g. visual perception and auditory perception), they can be processed simultaneously.

The 4-dimensional model

There are four dimensions used in the model (as shown in figure 1): stages, processing codes, input modalities, and visual channels.

1. Stages

In the MRT, dual-task interference depends on whether the multiple tasks require cognitive/perceptual activities or response activities.

These activities represent the stages of the model and are seen as a dichotomy in the sense that dual tasks requiring the same stage are prone to greater interference than dual tasks where one task requires a cognitive activity and the other a response.

Figure 2. Representation of two resources, supplying the different stages of information processing. Sensory processing, the operation of the peripheral visual and auditory systems, is relatively resource-free (automatic).

In figure 2, perceptual and cognitive activities share the same resources and are functionally separate from the processes used to select and execute a response.

Evidence for the stages comes from Shallice et al.'s (1985) study on dual-task performance, which found that speech and motor activity (responses) are often controlled by frontal regions of the brain (anterior to the central sulcus), while perception and language comprehension tend to be carried out in regions posterior to the central sulcus. This indicates that the stage dichotomy can be associated with different brain structures.

Wickens (2002) uses a concrete example to represent this: “the added requirement for an air traffic controller to acknowledge vocally or manually each change in aircraft state (a response demand) would not disrupt his or her ability to maintain an accurate mental picture of the airspace (a perceptual-cognitive demand).”

Wickens also postulates that the stage dimension predicts that interference is likely to be great between resource-demanding perceptual tasks and cognitive tasks that involve working memory to store or transform information. Despite using different information processes, they are supported by common resources. Examples include dual tasks such as visually searching while mentally rotating an object, or understanding speech while rehearsing a speech.

2. Processing codes

Processing codes refer to separate resources used for analogue/spatial processes and categorical/symbolic (verbal) processes.

The model further postulates that these resources are separate and distinct across the three stages of perception, cognition and responding.

Wickens postulates that the separation of resources may account for the lack of interference that may occur when manual and vocal responses are time-shared. In the model, manual responses are seen as spatial in nature (e.g. tracking or steering) while vocal responses are verbal (e.g. speaking).

This allows the model to predict when it may or may not be useful to employ voice vs manual control.

Wickens and Liu (1988) found that manual control may disrupt performance in a task environment that imposes demands on spatial working memory (e.g. dialling a phone number while steering a car), whereas voice control may disrupt performance of a task with heavy verbal demands (or be disrupted by it, depending on resource allocation).

3. Input (modalities)

In the MRT, Wickens discusses perceptual modalities: the visual (V), auditory (A), tactile and olfactory modalities used in time-sharing tasks. In particular, there is a dichotomy between tasks that use separate modalities and tasks that share a single modality. These are referred to as cross-modal time-sharing (e.g. AV) and intra-modal time-sharing (e.g. AA or VV).

The model predicts that there will generally be less interference in cross-modal than in intra-modal time-sharing, because separate perceptual resources are being used at the same time. Wickens, however, is uncertain whether this is really the case and points out that the cross-modal advantage may instead result from peripheral factors that penalise intra-modal conditions, such as confusion or masking. For example, two competing visual channels that are far apart require visual scanning between them, while visual channels that are close together may cause confusion and masking. The same is true of dual tasks that require listening to two messages simultaneously.

Research has also found that non-resource factors, such as attentional processes (i.e. knowing what to look for in two tasks) or the 'pre-emptive' characteristics of auditory information, may contribute to an intra-modal advantage (Wickens and Liu, 1988). Regardless, it can be inferred from the MRT that dual-task interference can generally be reduced by combining one visual task with one auditory task. However, there are exceptions in cases where two visual displays may be more practical than one visual display and one auditory channel.

4. Visual channels

The MRT proposes that there are two visual channels used in visual processing: focal and ambient. These are said to use separate resources, which are characterised by the location within the brain where processing occurs and by the type of processing that is undertaken.

The model predicts that dual tasks involving one focal and one ambient task will result in little interference.

Focal vision is linked to eye movements and is used for fine detail and pattern recognition. It is used in visual search, object recognition and other tasks requiring high visual acuity (e.g. reading text).

Ambient vision involves use of peripheral vision and is used for sensing one’s orientation and motion in the environment.

Examples of dual tasks which use both channels include walking down a corridor (ambient) while reading a book (focal) or keeping a car in the centre of a lane (ambient) while reading a road sign or looking at the rear view mirror (focal).
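
As a rough illustration of how the four dimensions can be used to reason about interference, the sketch below scores a pair of tasks by counting how many dimensions they occupy at the same level. This is not Wickens' published computational model; the task descriptions (lane_keeping, reading_sign, conversation), the shared_dimensions scoring function and the levels assigned to each task are all invented purely to illustrate the qualitative prediction that more shared dimensions means more interference.

    # Toy interference score based on the four MRT dimensions.
    # NOT Wickens' computational model; tasks and scoring are illustrative only.

    DIMENSIONS = ("stage", "code", "modality", "visual_channel")

    def shared_dimensions(task_a, task_b):
        """Count the dimensions on which both tasks demand the same resource."""
        return sum(
            d in task_a and d in task_b and task_a[d] == task_b[d]
            for d in DIMENSIONS
        )

    # Very simplified task descriptions (real tasks load several levels at once).
    lane_keeping = {"stage": "response", "code": "spatial",
                    "modality": "visual", "visual_channel": "ambient"}
    reading_sign = {"stage": "perceptual", "code": "verbal",
                    "modality": "visual", "visual_channel": "focal"}
    conversation = {"stage": "perceptual", "code": "verbal",
                    "modality": "auditory"}

    print("lane keeping vs reading a sign:", shared_dimensions(lane_keeping, reading_sign))
    print("lane keeping vs conversation:  ", shared_dimensions(lane_keeping, conversation))

The higher score for lane keeping plus reading a sign (both visual) than for lane keeping plus a conversation reflects the model's prediction that intra-modal combinations interfere more than cross-modal ones.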

Practical application

The theory allows us to predict behaviour in multitasking activities, e.g. reading a map or talking on the phone while driving. It is particularly useful for predicting dual-task interference compared with earlier cognitive 'filtering' models (e.g. Broadbent, 1958).

Theoretical application

MRT is closely related to both attention and workload; the “multiple” aspect relates to attention and the “resource” aspect relates to workload.

References

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2).

Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3).

van Engelen, D. (2011). Attention Drivers! - Analyzing Driver Distraction. Diploma thesis, RWTH Aachen University.

3. Why do people multitask?

People are often content to do multiple things at the same time; for example, some people like to listen to the radio while driving. Generally, "multitasking" is defined as "doing two or more things at the same time". More precisely, multitasking describes the activity of performing multiple tasks during a specified time period, with some time spent switching between tasks to bring in the necessary information (Dzubak). Delbridge (2001) offers another definition from a different perspective: accomplishing multiple goals in a specified time period, with switches between tasks. From all of these definitions, one main feature can be summarised: switching between tasks within a specified time period.

Another example, "people chew gum while walking" (Pashler, 1994), indicates that multitasking is also related to the attention needed for each task: people concentrate on walking and pay little attention to chewing.

Why multitask? 

  1. From the academic view: there are some benefits

Multitasking can make our lives more varied. With it, people can experience many types of activity in a given time period and enjoy themselves more.

Besides this, the variety brought by multitasking can prevent boredom and keep people thinking more creatively. Imagine one person doing a single thing for an hour; it may be quite easy for them to get bored.

There is controversy about whether multitasking can increase efficiency. Inspired by Caruana's (1997) experiment on multitask learning of multiple related tasks, I think the efficiency of multitasking depends, to some extent, on the relationship between the tasks (an experiment would be needed to prove this).

  2. However, many people also regard multitasking as a waste. Research by the Federal Aviation Administration shows that managing multiple tasks simultaneously may decrease efficiency and actually take extra time when switching from one task to another. In the most severe cases, it can even mean the difference between life and death.

http://www.psychologytoday.com/blog/conquering-cyber-overload/201005/mining-your-inner-moron-why-multitasking-is-such-waste

      i. Many people think they can multitask. For example, you can walk and chew gum at the same time, but only because these two tasks do not require your attention. When you are in a situation where both tasks require your attention, you have to switch your attention back and forth.

      ii. Why multitasking is hard:

  3. But plenty of people still try to multitask anyway. There are five possible reasons why people multitask.

http://www.psychologytoday.com/blog/conquering-cyber-overload/201005/five-reasons-we-multitask-anyway

      i. Employers think they need people who can multitask.

      ii. It's so convenient – I mean, it's right there in your hands.

      iii. We've become impatient.

      iv. We are convinced that the bad rap on multitasking is a hoax perpetrated by oldsters, who just don't get it.

      v. People are bored.

Summary

  • Multitasking is often the rational thing to do.
  • By sharing our time between different tasks we maximize productivity/reward
  • We multitask because there are often benefits to doing so.
  • There are of course situations where it is inappropriate or simply reckless to multitask.
  • We sometimes do a good job at deciding when to engage in other activities but sometimes not.
  • In many settings we choose when to switch.
  • Multimodal devices might be promising (voice and audio combined with traditional GUI interfaces).

References

American Psychological Association (2001). Is multitasking more efficient? Shifting mental gears costs time, especially when shifting to less familiar tasks. [Online, available at: http://www.apa.org/news/press/releases/2001/08/multitasking.aspx]

Dzubak, C. Multitasking: The good, the bad, and the unknown.

Pashler, H. (1994). Dual-task interference in simple tasks: Data and theory.

Caruana, R. (1997). Multitask learning.

4. The costs of switching tasks

In many settings we must make strategic decisions about how to allocate limited resources.

Under a payoff function, performance is predicted by the theory that people will select the strategy that maximizes payoff. Performance is an adaptation to multiple architectural constraints, which include at least noise (task interference) and motor process interference.

Payoff achieved by strategy

  • Defer interleaving to natural breakpoints.
  • Strategic unlocking - Meyer & Kieras (1997): if there is a payoff scheme in place in which participants are given or docked points according to performance, the PRP effect might not be due to a central bottleneck; instead, participants might strategically delay the T2 response until after the T1 response has completed to avoid a response-reversal penalty. This would be sensible, as it would maximize payoff (a toy sketch of this idea follows below).

Adaptation to payoff - Howes, Vera, & Lewis (2009)

  • PRP delay is a strategic response to noise so as to maximize payoff.
  • Having a bottleneck or not made little difference!
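
Here is a small sketch of the idea that the PRP delay can emerge from payoff maximization rather than from a structural bottleneck: the simulated participant chooses how long to defer the T2 response, given noisy response times and a penalty for response reversals. The functions trial_payoff and expected_payoff, the payoff scheme and all numbers are invented for illustration and are not taken from Meyer & Kieras (1997) or Howes, Vera & Lewis (2009).

    # Toy model: pick the T2 deferral that maximizes expected payoff.
    # The payoff scheme and all durations are invented for illustration.
    import random

    def trial_payoff(extra_delay, sigma=40):
        """One simulated PRP trial: noisy RTs, points for speed, penalty for reversal."""
        rt1 = random.gauss(400, sigma)                 # T1 response time (ms)
        rt2 = random.gauss(350, sigma) + extra_delay   # T2 response time (ms)
        payoff = 200 - 0.1 * (rt1 + rt2)               # faster responding earns more points
        if rt2 < rt1:                                  # response reversal: T2 emitted before T1
            payoff -= 100                              # heavy penalty
        return payoff

    def expected_payoff(extra_delay, n=5000):
        return sum(trial_payoff(extra_delay) for _ in range(n)) / n

    for delay in range(0, 301, 50):
        print(f"deferral = {delay:3d} ms -> expected payoff = {expected_payoff(delay):6.1f}")

With these numbers, deferring the T2 response by roughly 150-200 ms gives the best expected payoff: short deferrals risk the reversal penalty, while long deferrals waste time. A PRP-like delay therefore falls out of the payoff structure, as the strategic account suggests.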

5. When to switch?

  • Are some switch points better than others?
  • Focus has been on low-level interleaving behaviour rather than on the big choices.

Miyata & Norman (1986) hypothesized that people will be more likely to switch at task boundaries.

Payne et al. eventually explain their data by assuming that people switch after completing a subgoal (i.e., finding a word in a patch of mixed words). This is related to foraging and to maximising interleaving strategies while comparing two sets of mixed words (a hard set and an easy set).

People tend to switch at natural break points

Brumby, Salvucci & Howes (2009) found that people select strategies to meet a desired dual-task performance tradeoff objective, because the interleaving strategies that people can use when multitasking are limited by the cognitive resources available.

For example, in the context of dialling a number while driving, they found that drivers tend to dial chunks of digits at a time, returning their attention to driving between chunks. Dialling three or four digits at a time is a particularly efficient strategy, because any more interleaving incurs additional time costs without significant improvement in lane keeping, and any less interleaving sacrifices safety.

  • Drivers adapt their strategy to changing objective.
  • Task interleaving strategy shaped by structure of secondary dialing task.
  • It is safer to take the time to interleave even for short tasks.

But how might people decide how to interleave tasks in situations where there are no representational structures or natural cues to guide this decision?

One possibility for how people might adapt their dual-task strategy to meet a specific task objective is that they monitor the amount of time that has elapsed since they last checked on the more important task (Kushleyeva, Salvucci, and Lee, 2005). For example, a safer driver would set a lower threshold for the time they are willing to look away from the road and would therefore interleave between tasks more frequently.
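
A minimal sketch of this time-based monitoring idea: a simulated driver dials digits and returns attention to the road whenever the time spent looking away exceeds a threshold. The function dial_with_threshold and the parameters DIGIT_TIME, DRIFT_RATE and CORRECT_TIME are assumptions made for illustration and are not taken from Kushleyeva, Salvucci and Lee (2005).

    # Toy simulation of threshold-based monitoring while dialling and driving.
    # All parameters are invented for illustration.

    DIGIT_TIME = 0.4     # seconds to key one digit (assumed)
    DRIFT_RATE = 0.15    # metres of lateral drift per second of not steering (assumed)
    CORRECT_TIME = 1.0   # seconds to look back and re-centre the car (assumed)

    def dial_with_threshold(threshold, digits=10):
        """Return (total task time, worst lateral drift) for a given look-away threshold."""
        total_time = drift = worst_drift = time_away = 0.0
        for _ in range(digits):
            total_time += DIGIT_TIME
            time_away += DIGIT_TIME
            drift += DRIFT_RATE * DIGIT_TIME
            worst_drift = max(worst_drift, drift)
            if time_away >= threshold:   # threshold reached: glance back at the road
                total_time += CORRECT_TIME
                drift = time_away = 0.0
        return total_time, worst_drift

    for threshold in (0.4, 0.8, 1.6, 4.0):
        t, d = dial_with_threshold(threshold)
        print(f"threshold = {threshold:.1f} s -> task time = {t:4.1f} s, worst drift = {d:.2f} m")

Lower thresholds mean more frequent interleaving: the whole task takes longer, but the car never drifts far, which is the tradeoff the 'safer driver' above is described as making.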

Another possibility is that people select strategies to meet a desired dual-task performance tradeoff objective. Brumby et al. (2010) used a dual-task paradigm to demonstrate that people can strategically allocate attention in multitask settings to meet specific performance criteria. The benefit of this paradigm over the classic dialling-while-driving paradigm is that it does not have an external representational structure, such as the natural breakpoints of a chunked phone number, that can be used to guide decisions about when to interleave. Thus, participants are free to interleave the tasks however they like.

Briefly, the experiment consisted of trying to keep a vehicle in the centre of the lane while attending to a secondary navigation panel with instructions. The performance criteria referred to above consisted of whether participants were asked to prioritise keeping the vehicle centred in the lane or to concentrate on the navigation panel task. Looking at the navigation panel for instructions switched off the main lane display and caused the vehicle to drift slightly to one side.

Results showed that participants met the required task objective by varying the number and duration of visits to the navigation panel, and by also varying the amount of time given up to steering control between visits. These findings support the idea that people can strategically allocate attention in multitask settings.

Modelling the strategy space offers insights into the kinds of tradeoffs at stake

  • With a no-interleaving strategy, you forget about dialling.
  • With maximum-interleaving strategies, you press a single digit, switch back to look at the road, and keep doing this, so the task takes longer overall (see the sketch below).
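
The tradeoffs can be made explicit by enumerating the strategy space for the dialling-while-driving example, from returning to the road after every digit up to dialling the whole number in one glance. The function evaluate_strategy and the parameters below are the same assumed values used in the earlier threshold sketch; nothing here is fitted to the Brumby, Salvucci & Howes (2009) data.

    # Toy enumeration of the dialling-while-driving strategy space.
    # All parameters are assumed values, used only for illustration.

    DIGIT_TIME = 0.4     # seconds to key one digit (assumed)
    DRIFT_RATE = 0.15    # metres of lateral drift per second of not steering (assumed)
    SWITCH_TIME = 1.0    # seconds to look back and re-centre the car (assumed)
    N_DIGITS = 10

    def evaluate_strategy(chunk_size):
        """Dial chunk_size digits per glance; return (total time, worst lateral drift)."""
        glances = -(-N_DIGITS // chunk_size)               # ceiling division: number of glances away
        worst_drift = chunk_size * DIGIT_TIME * DRIFT_RATE
        total_time = N_DIGITS * DIGIT_TIME + glances * SWITCH_TIME
        return total_time, worst_drift

    for chunk in (1, 2, 3, 4, 5, 10):
        t, d = evaluate_strategy(chunk)
        print(f"{chunk:2d} digits per glance -> time = {t:4.1f} s, worst drift = {d:.2f} m")

Under these assumptions, moving from one digit per glance to three or four digits per glance saves a lot of time while the drift stays small, whereas dialling all ten digits at once more than doubles the worst drift again for only a small further time saving, which echoes the three-or-four-digit chunking finding described above.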

6. Application to systems: how can we mitigate problems associated with mobile devices used on the move? (Daniel Fozzati, Katrine Sannaes)

Horrey and Lesch (2009): driver-initiated distractions
Test track experiment – while driving, even when given the chance to do a secondary in-car task on less demanding parts of the track, people would still be inclined to do it at other points. This means that people will feel compelled to pick up the phone when it starts ringing, no matter how demanding the road they are on is.

A design solution?
Offer a combination of output modalities. Audio is less distracting, but can be slow in how quickly the information is received. Example: incoming text messages are output as audio.

Brumby, D.P., Davies, S.C.E., Janssen, C.P., & Grace, J.J. (2011). Fast or safe? How performance objectives determine modality output choices while interacting on the move. In CHI 2011.
Given that many studies have shown that using audio is safer than traditional GUI interfaces while multitasking (driving, in this case), they examine whether drivers might sacrifice safety for speed (of the output modality). Since audio is generally slower at providing the required information, drivers might want to quickly glance at the screen instead. Using their terminology, the experiment is designed to find out whether "performance objectives have a strong influence on modality choices with multimodal devices". Performance here covers not just the characteristics of the output modality (audio slower, visual quicker) but also the task objectives (whether or not the driver's priority is to access the information quickly).
They found that task completion time is a critical aspect when designing audio interfaces. There is a safety/speed tradeoff to consider when designing multimodal interfaces. If, for example, the user's priority is to access the information quickly, he/she will sacrifice safety and choose the visual modality, as it is quicker.

7. References

8. Exam questions

Previous exam questions

Other exam questions that we've thought of

