These basic multimedia learning principles guide learning designers in creating effective multimedia learning experiences. Using these principles, we aim to:
- Reduce extraneous processing
- Support essential processing
- Encourage generative processing
These principles apply in many situations, though not in all; this article does not cover the conditions under which each principle applies.
To reduce extraneous processing:
We do this by minimising the extraneous cognitive load that learners endure when they have to process poorly designed learning material.
The Split-Attention Principle
Where learning materials draw on different sources of information, designers or instructors should integrate those sources physically and temporally so that learners don’t have to split their attention between them. Learners can then focus their energy on learning the material itself.
The Redundancy Principle
Including redundant material can inhibit learning. Redundancy includes unnecessary elaboration of information or presenting the same information in multiple forms, for example adding on-screen text that repeats the narration accompanying a visual.
The Coherence Principle
People learn better when extraneous material is excluded.
The Signalling (Cueing) Principle
Adding cues that highlight the essential content or the organisation of the learning material (whether text or pictures) can make learning more effective.
The Spatial Contiguity Principle
People learn better when related words and pictures are placed near each other.
The Temporal Contiguity Principle
People learn better when related animation and narration are provided at the same time rather than one after another.
The Segmenting Principle
People learn better when the material is presented in learner-paced segments rather than as one continuous unit.
To support essential processing:
We do this by managing the essential cognitive load that learners incur because of the inherent complexity of the learning material.
The Multimedia Principle
People learn better from words and pictures than from words alone. Words refer to verbal content, i.e. written or spoken words. Pictures are visual forms of content such as photographs, animations, graphs, maps, diagrams and videos.
The Modality (Effect) Principle
In certain conditions, presenting information in both visual and auditory modes is more effective than using only one mode; in particular, people learn better when graphics are explained by narration rather than by on-screen text. Distributing the material across both channels reduces cognitive load and makes better use of limited working-memory capacity.
The Pre-training Principle
People learn more effectively when they already know the names and characteristics of the main concepts before the lesson begins.
To encourage generative processing:
With the exception of the image principle, these social-cue principles help students make sense of the learning material by fostering their motivation to learn.
The Personalisation Principle
People learn better when the words in a multimedia message are presented in conversational style rather than in formal style.
The Voice Principle
People learn more effectively when the text is narrated in a human voice rather than a machine-generated voice.
The Image Principle
People don’t necessarily learn better when the speaker’s image is shown on the screen.
The Embodiment Principle
People learn better when on-screen speakers use gestures, movement, eye contact, and facial expressions.
References
Mayer, R. E. (Ed.). (2014). The Cambridge Handbook of Multimedia Learning (2nd ed.). New York: Cambridge University Press.