Our Sense of Vision Outshines and Overshadows
Have you ever wondered why graphical user interfaces (GUIs) have been the primary interface to computing? Why not aural (sound), haptic (touch), or gestural interfaces? A simple answer might be that technological evolution drove us toward GUIs: cathode-ray tubes (CRTs) met typewriters and never looked back. It also seems intuitively obvious. All of us want to see something before we touch it or move it with a gesture. And although sound can exist without accompanying visuals, it is more manageable alongside text and images. Recent research in brain science gives us insight (pun intended) into the dominance of our sense of vision.
Neuroscientists have found that approximately half of our mental processing power at any given time is devoted to vision or visual recall. Our evolutionary history may explain why vision matters so much. Threats to our survival were “seen” on the savannah, as were our food supplies and our reproductive opportunities. No wonder sight is our primary sense.
It turns out that the mechanism of vision is far more complicated than previously thought, and that it happens primarily in the brain, not in the eye itself. Sight was once thought to work like this: an image arrived in the brain from the eye, and with a little flipping, depth perception, and resolution, the brain learned to interpret pictures. Current neuroscience suggests something very different. Visual input is broken down into light, color, edges, line direction, and motion; each is processed separately and then assembled into pictures. As a result, perception is far more shaped by the individual than previously believed.
Research has found that the more visual the input to the brain, the more likely it is to be remembered and recognized. The phenomenon is so prevalent that scientists have named it the pictorial superiority effect, or PSE. Past research has shown that people could remember more than 2,500 pictures with at least 90% accuracy several days later, even though subjects spent only about 10 seconds looking at each picture. A year later, accuracy remained around 63%, and some visual information can be recalled reliably for decades. Compare these numbers to other forms of communication, such as oral presentation, and the difference is stunning. People presented with strictly oral content and tested several days later retained only about 10% of the information.
What about text? Are written words different from pictures? It turns out the brain treats words as many tiny pictures. Research has shown that to be readable, a word must have certain identifiable visual features in its letters; the brain must examine each feature and identify it. Reading is strenuous for the brain precisely because it processes text as many, many small pictures. Even though the brain is very adaptive, reading is inefficient: it demands a lot of visual processing power without the high memorability of complex pictures such as scenery and faces. As the research suggests, our visual sense does indeed outshine our other senses, which helps explain why the GUI has been so pervasive.
Source: Brain Rules: 12 Principles for Surviving and Thriving at Work, Home and School – John Medina ©2008 Pear Press