Design Implications of Dual Channel Processing

Why Not Both?

Nothing helps out on a long drive like listening to an audiobook, podcast, or freshly minted playlist. Not only does it make the monotonous activity seem to pass more quickly, but it also helps keep us awake and focused on the task of driving. The concept is so universal that the idea of a modern car being produced without a sound system seems absurd.

But why not a TV?

Wouldn’t a movie or TV series work to keep us awake and alert as well? The screen isn’t outside our field of vision, after all; it’s right there on the dash in front of us, just below the giant glass windshield we’re carefully watching the road through. For that matter, why are we bothered when our co-pilot won’t stop talking on the phone or keeps talking over the story or the music? We can hear both sets of sounds; if we couldn’t, it wouldn’t be irritating us.

So what’s the problem?

The Sketchpad and The Loop

The reason we can’t listen to two conversations or read a book while chatting on the phone is that those activities all take place on the same neurological channel, the phonological loop. The loop processes spoken and written language, and we can’t run more than one activity through it at a time, so each of those activities competes for attention on the same channel. Similarly, we can’t focus on more than one thing in front of our eyes, even though our field of vision encompasses a wide variety of things at any given time; the best we can do is quickly switch back and forth between them. That’s because all of that information comes in on our other channel, the visuo-spatial sketchpad, which, among its many other duties, processes the shape, location, and movement of the things we see.

120-Bit Dual Processor

According to the research of Mihály Csíkszentmihályi (pronounced Chick Sent Me High, which is surprisingly easy to remember), the human nervous system can only process roughly 110 to 120 bits of information per second, and following an average conversation requires about 60 bits per second of that. Add in any other sounds or distractions being taken in by the phonological loop, and we simply lack the processing power to follow more than one at a time; two conversations alone would consume the entire budget. The best we can manage is to quickly switch back and forth between them and try to fill in the broken patchwork of each after one or both has stopped. As for the visuo-spatial sketchpad, there is a literal flood of information to sort through: a pair of human eyes takes in roughly 10,000,000 bits of information per second and dumps it into the unconscious, with only the tiniest sip of that allowed into our conscious mind for processing.
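To see why even two conversations max out the channel, here is a back-of-the-envelope check using the figures quoted above (treat them as rough estimates, not precise measurements):

```latex
% Rough capacity check using the estimates quoted above
% (illustrative figures, not precise measurements)
\[
\underbrace{2 \times 60\ \text{bits/s}}_{\text{two conversations}}
\;=\; 120\ \text{bits/s}
\;\approx\;
\underbrace{110\text{--}120\ \text{bits/s}}_{\text{total conscious bandwidth}}
\]
\[
\frac{120\ \text{bits/s (conscious)}}{10{,}000{,}000\ \text{bits/s (visual intake)}}
\;\approx\; 0.001\%
\]
```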

There is some good news, however. While both of these processing channels are severely limited, they do not interfere with one another, meaning that both can be filled at once without either causing information loss in the other.

Designing with Dual Channel Processing in Mind

I’ll cut straight to the spoilers because this isn’t that complex. Training that uses both auditory and visual cues to reinforce a lesson is more effective than one or the other individually. Kinesthetic learning (learning by doing) is still king of the hill as far as best practices go, but not everything can be hands-on in every moment, and development resources are often limited. That being said, if you have access to video-editing or e-learning software like Camtasia or Articulate Storyline, you have access to both visual and auditory capture and editing tools, so why not utilize both in your designs?

Things like training games, un-narrated videos, simulations, and virtual environments are infinitely better with sound effects reinforcing important points and user decisions, and a fitting soundtrack can go a long way toward improving engagement and immersion in the module.

Conversely, visual cues added to emphasize important points in any kind of narrated work will help cement them in your learners’ minds, creating a visuo-spatial memory link to reference when trying to recall the phonological information later.

Let’s look at an example. Imagine having to memorize a random string of words as a passcode.

FIRE HORSE TWO CLOUD SPINNER THREE RED CROSS BATTERY 

Processing this phrase into long-term memory without losing anything in the content or composition would take quite a bit of effort, because none of the concepts really have anything to do with one another and they can’t easily be linked to existing real-world counterparts. However, by creating a simple image that fabricates a context in which the words do relate to one another, we incorporate the visuo-spatial sketchpad into the otherwise entirely phonological effort, and the effort it takes to recall the phrase suddenly drops significantly.

This psychological effect is well known in many fields of design, so you can also see examples of it in more or less every advertisement, meme, and video game you interact with on a day-to-day basis. If you have an ISD worth their salt, you will find it in your instructional material as well.

While you’re visiting the site, you should stop by the Custom Modules section to see some examples of this principle in action. They might prompt some new and innovative ideas for the kinds of training you could provide your learners in the future.

Searching for an effective way of training your employees or students?

Check out our customized learning modules!

About the author

Chavis N. Comer

Founder and Senior Instructional Systems Designer of Tohmes Training, Mr. Comer has nearly two decades of experience developing training programs for clients representing a variety of industries across dozens of different countries. From the federal government to the banking and tech sectors to local K-12 school systems and universities, he has provided consultation and design services to a truly diverse portfolio of clientele.
