When I first met Bill Calvin, it was at a convention of Futurists, in a loud room filled with passionate people discussing important issues such as the societal impact of nanotechnology and alternative fuels. Dr. Calvin’s voice was barely a whisper; his soft-spoken tone sharply contrasts with the impact of his thoughts. An Affiliate Professor at the University of Washington School of Medicine in Seattle, Dr. Calvin has written on subjects as diverse as How Brains Think and how changes in the climate affected our evolution. In all of his writings he blends his unique ability to keep the reader’s interest with the ability to explain complex phenomena in simple terms. I recently had the opportunity to speak with him about his life, his newest thoughts on language, and his perspective on the future of education.
BC: Where did you grow up, and what was the path to where you are right now?
WHC: I grew up in Kansas City and without any particular aspirations, not even to go to college. Neither of my parents went to college, what with the Depression. I got very interested, while I was in high school, in doing journalism and that’s where I learned how to write. I also did a lot of press photography, and worked one summer for LIFE magazine as an assistant. But when I got to college, I discovered physics. I dropped all the photography, all the interest in sports, all the things that had gone on before – and just reveled in the sciences and humanities as an undergraduate.
BC: So it was an experience in college that turned you into a scientist?
WHC: The first week did it. But by the second year, the humanities were almost as important to me.
BC: Was it any particular experience, or just the entire exposure to something you weren’t exposed to before?
WHC: It was clear that science was a fun way of life. By the end of the first year of college at Northwestern, I had a pretty good idea that being a professor was an interesting thing to do, and that doing real research would be even more interesting. Whether I was going to do it in physics, or whether I was going to do something else, never really got decided for the next three or four years.
BC: What did you end up getting your degree in as an undergraduate?
WHC: Physics, but I got very interested in things like color vision – just from reading and looking up the retina in the card catalog and discovering this big fat neuroanatomy book on the retina. I never took a single biology course as an undergraduate, unfortunately. I read my way into things that involved the nervous system, then I gradually spread out and learned other biology. It wasn’t until I undertook to teach an introductory course in biology about fifteen years ago that I really learned biology in the broad sense.
BC: That’s usually the way the learning curve goes: it’s not until you turn around and have to teach it that you really learn it.
WHC: I intentionally put myself in that position, by promising to teach an intro course. Gaps in one’s training are, of course, the usual thing. Most of the people in brain research came to it in funny ways, though there is increasingly a standard way of doing it now.
BC: I still haven’t met a lot of people that take that undergraduate-major route. It seems everyone I know comes to the brain from an external source.
WHC: People who are over thirty probably came to neuro through psychology or pre-med. Or they transferred laterally from physics and math and so forth.
BC: What do you think it is, about brain research or neuroscience, that tends to draw people from many fields?
WHC: Well, the brain is the most complicated and capable mechanism in the universe that we know about. So it’s natural to want to know how it works, to try out the scientific method of formulating a question about some piece of the puzzle. If you phrase it right, you can force nature to give you an answer.
BC: Where did you do your graduate work, and what was your emphasis?
WHC: I first did about a year of graduate work in physics, but that was inadvertent. The physics department at Northwestern persuaded me to graduate early and become a graduate teaching assistant, because they needed another one. But I was very much in the process then of trying to figure out how I could have my cake and eat it too, with respect to physics and the brain.
I got two bits of very good advice from professors at Northwestern. Donald Campbell suggested in 1961 that I go to MIT, even though it wasn’t clear what I would major in there. And indeed, I wound up in the Electrical Engineering Department at MIT, but it exposed me to this broad range of research. It enabled me to go over to Harvard Med School and take a very good neuroscience course, and MIT also taught me a lot of communications and engineering principles. That made up my mind for me, that I would finally give up physics and do physiology and biophysics with a specialization in neurophysiology. Steve Glickman’s 1961 advice was to come out to Seattle because it had a good graduate program in exactly that area, and so that’s how I wound up in Seattle. I took his advice, one year delayed, and I have stayed here since. I’m always careful, when giving career advice to undergraduates, because I know how influential it can be.
BC: Do you think the approach to graduate school has changed, from when you went through?
WHC: Graduate study is a lot more organized now, but that is because there are so many more people to educate. Neuroscience has expanded from a few thousand people back in the early ’60s, mostly scattered in areas called neuroanatomy, neurophysiology, and the clinical areas like neurology. The words ‘neurobiology’ and ‘neuroscience’ were invented in the late ’60s to cover the broader enterprise. Today you get 25,000 people at the Society for Neuroscience meetings. The first one, that I attended in 1971, had 750 people at it. It’s just been an enormous expansion.
BC: What effect do you think it’s going to have on us as a culture at large, given that we’ve had an exponential increase in the number of scientists and in particular the number of scientists doing brain research?
WHC: The public is interested in how the brain works, thanks to the carryover from psychology. People are naturally interested in how other people tick. And such carryover doesn’t similarly exist for other fields. For example, I work some of the time in areas that are closer to geophysics, ocean currents and things like that. They don’t have a ready-made public that is particularly interested in their problems, not without a lot of priming. Brain research, in contrast, has the carryover from both psychology and from things that go wrong with the brain; a lot of people have stroke patients or brain tumor patients in their family or as acquaintances. That double influence is missing in other fields, so we’re privileged in having a ready-made public that will pay attention to us. It may well short-change other important subjects, neuro having the public eye so disproportionately. But it might mean that the public will get a good view of one branch of science, and so come to appreciate science more generally.
BC: You’ve written books for scientists and non-scientists. Do you think differently, when you know you’re talking to a scientific audience, versus when you’re talking to that general population that wants to know about how the brain works?
WHC: One of the things you learn in journalism, right off the first semester, is what constitutes a ‘story.’ There are so many interesting things in life that aren’t really stories, and so don’t appear in the press. Though brain research is full of interesting stories, only a fraction of them can be told to the public. Other stories, just as interesting scientifically, have prerequisites of chemistry, anatomy, electricity, or other things, and that keeps you from inflicting them on a general reader.
You just can’t, in a short article, cover very much ground unless you get very good at making analogies. Even then you’re limited. You can only tell perhaps one-tenth of the good stories. When you’re writing for a more professional audience you can spend time writing about things that some of the general public will not be able to follow. I think I’m a much better writer for professional audiences because of the exercise of learning to write for general readers. I spend a lot of time picking my terminology, simplifying, and finding analogies. If I had started out writing more for professional audiences, I don’t think I would have acquired such skills. And this sensitivity to terminology and ability to spin analogies helps me communicate with scientists in adjacent fields such as linguistics.
BC: There’s a translation step that has to happen, between science terminology and general understanding. Which do you think is more beneficial – for the general population to learn the terminology of science, or for scientists to become more comfortable using less precise terms?
WHC: Oh, easily the second. A writer has to learn to use less precise terms geared to the audience you have in mind. You’ve got to adjust your terminology to help [the audience] make the next step up. And if you make that step too high, then you lose your audience. So you’ve got to take the occasional flak from somebody saying you’re not really right, because of some exception or nuance. But I think we can live with [imprecision] a lot more easily than we can live with not telling the stories at all.
Scientists writing [for a general audience] have a conflict, because when you’re doing science you know the whole thing hasn’t settled, as a solved issue. And you feel you ought to indicate that, usually by weasel words that confuse nonspecialists. A double negative construction may help tell another scientist that you know the objection forming up in her mind and that you have cleverly sidestepped it – but such circumlocution only confuses everyone else. So one’s explanations are always context dependent, and seldom as precise as legal prose.
BC: What about all those things that don’t make a good story?
WHC: One advantage the journalist has, over a scientist from another field trying to tell the same story, is that journalists have a good eye for what constitutes a memorable story. They may be able to select, out of the morass of imprecision and unsettled stuff, the things that will make a good 6,000-word story. That may be particularly hard to do for someone working in the field, as all the untidy threads are so interesting.
But sometimes the story is too big for a science journalist. I waited for ten years for someone else to write up the story of climate flips, the ten-year transitions into an ice-age climate that was cool and dry, followed centuries later by an abrupt flip back into warmer and wetter. Science and Nature had been writing news stories on this climate chattering about twice a year for a decade, but the story never made it out into wider circulation. Finally the editor of The Atlantic Monthly tracked me down and twisted my arm enough to get me to write what was the big-news story for another field. And when I slogged through the terminology of oceanography and ice cores, I knew why journalists had found it so hard to get a story past their editors. I took off half a year to write that story in a form that nonscientists could appreciate. That’s where my cover story “The great climate flip-flop” came from. And now I’m writing it up as a book, embedded in the context which caused me to follow the story since 1984, that of how repeated climate flips promoted an increase in hominid brain size. The working title is Cool, Crash, and Burn: The Once and Future Climate of Human Evolution.
BC: You’ve written on many topics and shown truly impressive breadth in your writings. Of all the things you’ve written on, which one do you think will have the greatest impact, say, 50 years in the future?
WHC: While it might be abrupt climate change, I would like to think that it would be one of my own research topics. Darwin machines, and all the issues around that, will be one of the big things 50 years from now: how your brain does a Darwinian process and how you develop shortcuts so that you don’t have to do it the long slow way the next time. Whether it will be my formulation of it that lasts, that I can’t judge.
I look at the Darwinian process as one of these great underlying principles in science. It serves as a generator of novelty that you can stabilize and build on top of. It’s one of these fabulous principles of the universe, like coding information in DNA. It’s right up there with the formation of chemical bonds, in terms of generating complexity in the universe.
BC: Let’s say we continue to accelerate our knowledge at the rate we have been, and we come up with an understanding of consciousness. Or consciousness becomes a seeable mechanism.
WHC: I think it is now. Read Antonio Damasio’s new book, The Feeling of What Happens. What Tony’s book does is lay a wonderful foundation for all the aspects of consciousness: staying awake, paying attention, and all the neurologist’s conundrums. With his foundation, we can better talk about higher consciousness, all those aspects of structured thought that I write about in The Cerebral Code and How Brains Think.
BC: So are we at a point where we understand consciousness?
WHC: I feel that I understand it to some extent. That is to say, I don’t see any mystery there. In 1991 Dan Dennett published his book Consciousness Explained. He had this wonderful introduction where he says that a mystery is one of those things that we don’t know how to think about – yet. He says the world used to be filled with mysteries, like “What were the origins of the universe?” Well, that’s not a mystery anymore. We don’t have all the answers yet but we certainly know how to think about the problem.
Consciousness is, Dennett said, the last remaining big mystery – the one that people just don’t know how to think about yet. Well, I think in the nine years since [Dennett’s] book was written, a lot has happened and we really do know how to think about it now. There’s a batch of consciousness issues that we still have to address. But we certainly know how to think about it, it’s not a “mystery” any more.
BC: What’s the next big thing you’re thinking about? I know you just came out with a new book on language.
WHC: Lingua ex Machina is about the origins of the brain’s specialization for syntax, what Chomsky started talking about in the Sixties. We’re not talking about all the things that language is good for, once you’ve got it. Rather, we’re asking how you get the ability to do structured things like syntax. And all the contingent planning, so you can say things like “Well, maybe we can go to the country this weekend, unless I have to work on Saturday, in which case maybe we’ll go to a movie on Sunday.”
That kind of contingent planning requires a certain amount of mental structuring. Where did we get that ability? The structure is the issue that [co-author] Derek Bickerton and I really ask about. It’s a question of evolutionary sidesteps, finding what sort of things paved the way for structured communicative ability. Derek points out that, once you get the “Who owes what to whom” mental categories, you can start talking about “Who did what to whom” – and you’ve got a listener that’s already prepared for it because they too have all the right mental categories for doing this kind of structured thinking, thanks to the payoffs of reciprocal altruism. So you can immediately jump into the argument-structure way of doing syntax without having had any natural selection going on for communication per se.
Most structures in the brain are multifunctional. You improve them for one reason, as in making curb cuts for wheelchairs, and then this turns out to be handy for “free lunch” secondary uses like wheeled suitcases and skateboards. Universal grammar is what we’re trying to account for in evolutionary terms, but it may be primarily another “skateboard,” not a “wheelchair.”
BC: Are you reading anything non-scientific right now?
WHC: I read a certain amount of science fiction, mostly Brin-Bear-Benford, and love the Patrick O’Brian 20-part historical novel. I read the futurist literature, particularly the people who are trying to think seriously about the next thousand years. Aside from my interest in climate change, I have a fascination with what people have to say about what society might look like if present trends such as genetic engineering continue.
BC: Any favorite theories of what we may look like in a thousand years?
WHC: The babies born a thousand years from now will probably look pretty much like they do now, but the adults of a thousand years in the future will think substantially differently than we do. That’s because education will have gotten good. A hundred years ago, medicine was influenced only slightly by science. Most of medicine was very empirical – try it and see if it worked. There were a whole lot of things carried along that really didn’t work. When something worked, we didn’t know why it worked. Gradually, through the 20th century, the impact of science upon the practice of medicine became enormous.
I think education is in for exactly that kind of revolution. In another century, we’ll know what’s going on in the brain of a developing child, and we’ll have ways of carrying the science over into education. Education is still “try it and see if it works,” but pretty soon, half of it will be like medicine is today, where we’ll know why the good things work and we’ll know from science how to further improve them. We’ll know the underpinnings, why the system works, and so we’ll infer optimal ways of doing things. We’ll know when to present new information, when to consolidate, when to encourage creativity. All this will change what adults are like. As I say, we’ll still look the same as babies, but some adults are going to be far more capable.