----------> https://archive.ph/5FUvT
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.
A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.
Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
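The byte-level picture the essay paints can be made concrete in a few lines. This is a sketch in Python (my choice, since the essay names no language), and it assumes an ASCII/UTF-8 encoding, which the essay doesn't specify; the exact bit patterns for *d*, *o*, and *g* depend on that assumption:

```python
# Sketch of the encoding described above: each character of "dog"
# is stored as one 8-bit pattern (one byte). ASCII is assumed here;
# the essay does not name a specific encoding.
word = "dog"
encoded = word.encode("ascii")  # three bytes, one per character

for ch, byte in zip(word, encoded):
    print(f"{ch} -> {byte:08b} (decimal {byte})")
# d -> 01100100 (decimal 100)
# o -> 01101111 (decimal 111)
# g -> 01100111 (decimal 103)
```

Side by side, those three bit patterns are the stored word: `b'dog'` is nothing more than the byte sequence 100, 111, 103 sitting in memory.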
Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
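The moving, copying, and rule-governed transforming described above can also be sketched directly. The "rule" below (uppercasing a stored pattern) is a toy stand-in of my own, not an example from the essay, but it is a stored program in exactly the sense meant: a fixed procedure applied to an array of bytes:

```python
# Toy illustration of the operations described above: copy a stored
# byte pattern, then transform it according to a stored rule.
# The uppercasing "rule" is a hypothetical stand-in for a real program.
original = bytearray(b"dog")
backup = bytes(original)  # copy the pattern to another storage location

def shout(pattern: bytearray) -> bytearray:
    """A stored 'rule': map each byte to its uppercase counterpart."""
    return bytearray(pattern.upper())

transformed = shout(original)
print(backup, bytes(transformed))  # b'dog' b'DOG'
```

A set of such rules is a program; bundle enough of them together behind an interface and you have an application.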
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?


While there are zealots who genuinely think the brain is literally a digital computer, I suspect most people would, when pressed, admit that it’s an analogy. The prevailing technology of the day has a long and distinguished history of serving as a metaphor for thought: early modern philosophers loved to describe mental events as a kind of clockwork mechanism, for instance, but those analogies were not generally meant to be taken literally. The mechanists of the 18th century didn’t think there were literally gears inside your skull; they just thought that the idea of an unimaginably intricate clockwork mechanism was a useful way to think about the functioning and organization of cognition (which it is, at least in some ways).
There’s perhaps more literalism about it these days than is standard, but even most of the people who take this analogy very seriously aren’t saying something so trivially false as “your brain has a literal CPU and works exactly the way your laptop does.” That’s pretty obviously not true. But the analogy is a useful one in many ways, and can help us understand what the hell is going on in there that lets a big chunk of meat give rise to such an extraordinary phenomenon as consciousness. The observation that when I do something like add two and two in my head there must be something going on that is, in some relevant sense, functionally identical to what goes on in a desk calculator when I enter 2 + 2 isn’t totally vapid. It’s possible to take all this too seriously, and the move from “there’s some amount of functional parallelism here” to “these two systems are functionally identical in general” is (I think) an unwarranted one. But, again, I don’t really think that’s the interesting thesis here.

As for the claim that computers “really” store, retrieve, and process information: they really don’t, at least no more literally than brains do. Computers don’t “know” anything about numbers, words, formulas, images, or algorithms. They don’t even know anything about 1s and 0s or bits and bytes: every single one of those things is an abstraction that helps us track the real pattern in how an extraordinarily complicated system changes over time. Computers are cleverly designed arrangements of metal, plastic, and other physical material that, when subjected to certain boundary conditions, will evolve over time in predictable ways that we can then use to model various patterns. They’re physical models in about the same sense that a model airplane in a wind tunnel is a physical model of a full-sized airplane in the open air; they’re just far more complicated. There’s nothing magical about this kind of arrangement that makes it “real” information processing while the brain (or anything else) is ersatz; that is, information processing just is that kind of stable, predictable physical change. Computers process information in exactly the same sense that brains do, and in exactly the same sense that any other physical system does. Computers (and brains) have the virtue of being both complex and stable in a way that lets them process a lot of information across a wide variety of contexts, given appropriate inputs and boundary conditions.
Consider the difference between a modern digital computer and something like Babbage’s analytical engine. There are enormous physical differences between those two systems. The analytical engine was purely mechanical: it took its input via punch cards and stored its internal state via wooden or metal pegs inserted into rotating barrels. Is something like that “quite literally processing information” on this view? Is it encoding data as bits and bytes? Or is that something that only electronic digital computers can do? This strikes me as an obviously silly question: there are ways in which the analytical engine and my laptop are functionally similar, and ways in which they are different. Whether the similarities or the differences are more salient depends on what you care about, or on what kinds of things you think are important to track. Either both of them are doing information processing, though, or neither of them is; there’s nothing special about electronic digital computers that makes them “real” instances of computation and everything else just a simulacrum. But if the analytical engine can process information despite huge differences in material constitution and operation from an electronic computer, then surely the brain can as well. That doesn’t mean a brain is a digital computer, just that (again) there are elements of similarity between the two, and that the formalism of information theory can be a useful lens for understanding the operation of both.