Professional folklore, such as beliefs about the best possible class size, drives much current educational policy and practice. This is not a condemnation, for folklore is the set of practical, powerful beliefs about mysteries that emerges over time within an interactive group. Folklore must seem sensible and work often enough to endure; but it doesn’t always work, and its advocates generally aren’t certain why it works when it does. Many experienced teachers, for instance, have had both positive and negative experiences with various class sizes; while most do prefer smaller classes, they’ll typically suggest that student quality is more important than class size.

Educational research has sought explanations for many teaching/learning mysteries, but it’s difficult to control the variables that complicate educational research, and so it tends to settle for (often weak) group correlations rather than uncovering the underlying behavioral causes. For example, many teacher, student, curricular, and building variables complicate class-size research. Further, many educators don’t understand what scientific research can and can’t discover, and has and hasn’t discovered, because preservice programs rarely provide future educators with apprenticeship opportunities in carefully designed research investigations.

Cognitive neuroscience researchers are now moving us well beyond folklore and the limitations of educational research towards an unprecedented understanding of the brain systems that process cognition. It’s thus now important that educators understand the nature, limitations, and findings of such research.

This column begins a series that will focus on educationally significant elements of cognitive neuroscience and educational research that educators must understand. Let’s begin with a couple of current examples of professional folklore that have pretensions of scientific validation.

Be Wary Of Tidy Graphs
I’ve seen various versions of the following graph many times, and twice quite recently. Chances are that you’ve also seen it. The versions differ somewhat in categories, percentages, and source, but the substance is the same. Although the presenters who used it couldn’t give me a research journal citation, they clearly suggested that the information had been scientifically validated, and so should drive educational practice. My sense of the audience was that many accepted the list at face value and left with the impression that asking students to just read or listen isn’t as valuable as discussions and hands-on experiences.

WE LEARN
10% of what we read
20% of what we hear
30% of what we see
50% of what we see and hear
70% of what is discussed
80% of what we personally experience
95% of what we teach to someone else.

The graph and its common interpretation are nonsense at a number of levels.

It’s not possible to design a study that would get such results, and it’s highly improbable that the data in any study would come out in such a tidy 10/20/30/etc. sequence. For example, how could the study distinguish between seeing and reading, since reading involves seeing? Further, if the study combines seeing and hearing to get a 50% remembrance rate, how could it possibly determine how much was due to seeing and how much to hearing? For example, if you watch a film, do you typically remember 50% of everything that was shown on the screen and said by the actors, divided evenly between the two related experiences — or is it OK to arrive at 50% by remembering 100% of what was seen and none of what was heard? The other categories have similar serious design problems.
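The decomposition problem can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to illustrate that radically different see/hear splits collapse into the same combined score, so the combined number alone tells us nothing about either channel:

```python
# Toy illustration (hypothetical numbers): a single combined
# "see + hear" retention score cannot be decomposed into its parts.

def combined_retention(seen_pct: float, heard_pct: float) -> float:
    """Average retention across the two channels, weighted equally."""
    return (seen_pct + heard_pct) / 2

# Three very different viewing experiences, identical combined score:
for seen, heard in [(50, 50), (80, 20), (100, 0)]:
    print(f"saw {seen}%, heard {heard}% -> combined {combined_retention(seen, heard)}%")
# Every pair above combines to 50.0%.
```

The equal weighting is itself an assumption; any other weighting scheme would simply produce a different family of indistinguishable splits, which is the point.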

The graph (as commonly interpreted) implies that the lower-percentage categories aren’t as valuable in learning as the higher-percentage categories, and that’s an incorrect conclusion. Reading (10%), hearing (20%), and seeing (30%) are receptive acts, in which our brain is trying to assimilate external information, only some of which is novel. Conversely, teaching (95%) is an expressive act that draws on information from a solid internal memory bank. It’s obvious that I learned at least 95% of what I taught. How else could I have taught it? It’s a bogus comparison.

Learning is dependent on a key reductionist cognitive action: our brain must always distinguish between foreground and background (or contextual) information — and then attend principally to the foreground, but simultaneously be aware of the background.

Let’s use speech as an example. The speaker controls the flow of information. I can talk faster than you can understand, because I know what I’m going to say and you don’t. So to enhance your understanding of what I intend to say, I must slow down and simplify my comments to match your ability to comprehend my message. We tend to do this by repeating complicated things in slightly different ways, and by inserting background information and meaningless noise words (ah, OK, well, etc.) into our comments.

Think of the classic jokes (A guy went into a bar…) that go on for several minutes before the five-second punch line emerges. All the background information in the joke is important in that it sets up the hearers so that they can quickly understand the point of the joke. If the joke-teller told only the punch line and didn’t include all the (later mostly unremembered) context, hearers might remember 100% of the punch line but wouldn’t get the point of the joke. So what’s the point of a 100% retention that isn’t embedded in any kind of useful context? The way we remember a joke (and many other things) is that we remember the gist and the punch line, and then invent our own version when we retell it (A logger went into a restaurant…). So although we remember only a small part of the original joke, we can easily retell it.

Reading text differs from hearing speech, in that readers control the rate at which information enters their brain (because they can reread something they don’t yet understand). But writers also generally insert a lot of background and extraneous material that isn’t later remembered, in order to allow readers to move comfortably through the narrative. So it’s much like the jokes discussed above. Think of a novel that you glided through easily: when done, you can compress the entire 200+ pages into a few sentences for a friend who asks what the book’s about. This is far less than the 10% retention rate the graph suggests, but would you really want to remember more — every detail of the novel? The narrative elements that you don’t remember later were nevertheless very important. They did such things as set the mood, provide important character information, and smooth transitions during the reading.
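A rough back-of-the-envelope calculation shows just how far such a summary falls below even the graph’s 10% figure. The word counts here are assumed round numbers, not measurements:

```python
# Rough arithmetic: retention implied by summarizing a novel.
# All counts are assumed round numbers for illustration only.

pages = 200
words_per_page = 300                   # typical paperback density (assumption)
novel_words = pages * words_per_page   # 60,000 words

summary_words = 60                     # "a few sentences" for a friend (assumption)
retention_pct = 100 * summary_words / novel_words

print(f"{retention_pct:.1f}% of the text survives in the summary")
# About 0.1% — two orders of magnitude below the graph's 10% claim.
```

Even if the assumed counts are off by a factor of several in either direction, the summary still captures well under 1% of the words, yet it conveys what the book is about.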

Textbooks and other print materials that compress information are generally difficult to read because students are expected to remember much of what they read — and that’s difficult to do. If it’s important to remember a lot of content, it’s obviously helpful to expand the experience into other (supposedly better remembered) cognitive activities that the graph covers, since multiple explorations of an idea may well enhance the breadth of the retention. But this certainly doesn’t deny the importance of reading, seeing, and hearing as individual learning vehicles that, by their very nature, encourage much selectivity in what will be remembered.

The beautiful thing about our brain is that it has the good sense to ignore most of what occurs in our life, and that its finest moments are those in which decisions are made about what to remember and what to ignore. The all-time-most-intelligent question a student can ask is “Why do we have to learn this?” If the teacher doesn’t have a good answer, the student is well-advised to opt out of the activity. Many beautiful novels are trashed in Lit courses that require students to remember far more detailed information from the story than the novelist expected the reader to remember.

So a 10% reading and 20% hearing remembrance rate is probably high if the categories in this graph were actually researched — and they weren’t. It’s folklore.

Question Pat Assumptions
A similar pseudoscientific allegation is that we only use 5% of our brain — and that this is somehow bad, the fault of poor schools. Our brain carries out a bewildering range of activities, few of which occur simultaneously and continuously. It’s somewhat like a library — only a few of the large number of books are currently being used, and currently only one page in each such book. I have no idea where the 5% came from, but a 100% active brain that is continuously attending to all surrounding sensory information, activating all muscle groups, and recalling all memories of everything would be terribly stressed.

Our brain uses as little cognitive energy as possible to solve current challenges — that’s the whole point of efficient learning and automatic response patterns. Imagine if you had to consciously direct mouth/tongue muscles while speaking. So a 5% activation rate is probably high. But percentages aside, how could someone have credibly researched the issue, and accurately determined what’s going on in 100 billion neurons and a trillion glial support cells over an extended period, in order to come up with a 5% average activation level? It’s a folklore percentage.

When folks make statements about teaching/learning that purport to be based on scientific research but don’t seem right to you, ask them to cite their sources, and ask them the kinds of questions I raised in this column. That’s the best way to get them to stop making such statements. There’s nothing wrong with using folklore, if that’s all you have — but don’t call it science. It isn’t.