One of the most valuable topics to understand when designing digital learning experiences with multimedia is dual channel processing.
This describes how our brains process auditory and visual material through different channels, and has significant ramifications for managing and guiding attention.
Have you ever been watching a presentation where the visuals don’t seem to match the words being spoken by the presenter?
Or been watching a video where words appear on-screen and don’t match the narration (in timing or verbiage)?
I don’t know about you, but to me it’s almost painful. I can feel my brain fighting to try to interpret what’s going on, like gears grinding together and getting jammed.
This is because the incongruence between visual and auditory information is actually making your brain work much harder, and the more layers and distractions there are, the more it all becomes noise.
A common example of this is the information overload that happens when a presenter piles all of their text onto a slide and then reads it out loud to you like a script.
You’ve probably also intuitively found that this is ineffective and what you might call “boring”. But what’s really going on here?
We’re having trouble focusing because there’s too much happening at once, and we’re getting redundant auditory and visual information which stops us from engaging with or processing either channel effectively.
When we need to read large chunks of text, audio doesn’t help; it becomes noise, because reading is much more labour-intensive for our brains.
When we have spoken narration or a presenter giving us auditory information, large chunks of text are distracting. They become noise because it’s harder to process text through the visual channel than the human voice through the auditory channel.
Think about it: reading words is a couple of layers of abstraction up from simply hearing a voice tell you about something.
You’ve probably noticed this if you’re into audiobooks. While most of us can slip in and out of an audiobook fairly seamlessly, sitting down to read requires some effort to get into.
This doesn’t mean you should never use on-screen text and audio. Like all good science, this is not an “always” or “never” rule; there is nuance, and there will be outliers and people who prefer one approach or another because they’ve learned to adapt. There will be special considerations for language learners. There will be political and ideological reasons for choosing to break these rules that depend on your context. There will also be accessibility considerations that mean you need to have multiple things going on at the same time.
Being a designer means being aware of and acknowledging the trade-offs present in everything you do, finding opportunities to advocate for your end user with evidence-based rationale, doing the best you can within your constraints, and continuously improving.
Interactive video is the next logical step in the evolution...