Your brain is probably the most complex thing in the universe. Consciousness is a mystery that millennia of philosophers, theologians, and scientists haven’t begun to comprehend. Underestimating that by comparing it to a machine is a mistake.
I feel like this is a point we’re all forgetting, so let’s back up a little and break it down.
Do you have any bandwidth right now? We should sync up, so you can give me a download of your current processes.
You’ve probably heard or said sentences like these, borrowing terms from technology—bandwidth, sync, download—to talk about yourself and your co-workers. For a while now, computers have been the dominant metaphor for the human brain, and for humans in general.
It’s not the first such metaphor. The ancient Greeks, fascinated by the then-cutting-edge science of hydraulics, thought the mind was powered by pumps and liquids. Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology in California, points out that the dominant metaphor kept changing:
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence—again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
None of these metaphors aged well. We still use phrases like “the wheels are turning,” but you probably don’t think of your brain as a mechanical clock, for example, because that’s absurd. People in the future will likely find the computer metaphor similarly strange. You don’t store memories in your mind, like a hard drive. You don’t process data, like a processor. You don’t recharge.
The human brain is nothing like a computer.
The vast majority of Americans have no trouble riding a bike—we even use the expression “like riding a bike” to describe something that’s intuitive or easy to remember. But the Wikipedia article on the physics of riding a bicycle is almost 14,000 words long and really, really dense. Seriously, here’s just the physics of turning:
If you’ve ever ridden a bike, you know, instinctually, how to do all of that—regardless of whether you understand any of the math. A surprising number of routine tasks are similarly complicated, mathematically, yet easily handled by humans. In 1995, researchers at Kent State University examined how outfielders determine where to run to catch fly balls. From the paper:
This work supports the premise that outfielders use spatial rather than just temporal cues to initially guide them toward the fly ball destination point. It confirms that optical information can be simplified when analyzed as a full 2D image rather than separated into vertical and horizontal one-dimensional components. We suggest that the act of maintaining a linear trajectory takes advantage of a perceptual invariant (constancy of relative angle of motion) that can be used generically to pursue and approach moving objects.
You might understand that. I don’t. But if you’ve ever caught a pop fly, you’ve done all of it.
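To make one small piece of that bike physics concrete, here's a minimal sketch (function name and numbers are my own, for illustration) of the standard steady-turn result: balancing centripetal force against gravity gives a lean angle of tan θ = v² / (g·r). Your body solves this without ever writing it down.

```python
import math

def lean_angle_deg(speed_mps, turn_radius_m, g=9.81):
    """Lean angle (degrees from vertical) for a steady, balanced turn.

    Balancing centripetal force against gravity gives tan(theta) = v^2 / (g * r).
    This is only the idealized steady-state case; real riding adds the
    gyroscopic and steering effects that make the full treatment so long.
    """
    return math.degrees(math.atan(speed_mps**2 / (g * turn_radius_m)))

# A relaxed ride: 5 m/s (about 18 km/h) around a 10 m radius corner.
print(round(lean_angle_deg(5.0, 10.0), 1))  # ~14.3 degrees
```

Nobody computes that arctangent while cornering, which is exactly the point: the rider and the equation arrive at the same lean by completely different routes.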
Humans are very good at taking in a variety of information, all at once, and acting on it. None of this is to say that you couldn’t build a machine that could turn a bike or catch a pop fly. I suspect some grad student somewhere is in a basement right now, crunching the numbers needed to make it all work—assuming that it hasn’t happened already. What I mean to point out is that our minds, after millions of years of fine-tuning via evolution, can do these sorts of things in ways that are totally different from—and arguably better than—how a machine would accomplish them.
You can’t run faster than a car or swim faster than a motorboat. It’s so obvious that we don’t think to compare runners to cars or swimmers to boats. We compare ourselves to computers all the time, though, which causes us to miss the ways in which our minds are different.
There are things computers can do better than humans, just like there are things cars, combines, and planes can do better than humans. Computers can store and recall information with perfect fidelity. Computers can crunch numbers with astonishing speed. Computers can send that information to other computers over a network with precision. Comparing yourself to machines for these tasks isn’t useful because humans will simply never measure up.
But you don’t have to measure up. The comparison is irrelevant. Computers aren’t a benchmark for humanity—they’re a tool that we created so we can do more cool stuff. Using computers allows us to focus more on the kinds of things humans are better at.
In 1997, a computer named Deep Blue defeated Garry Kasparov, then the world chess champion. You might think the story ends there—computers are better at chess than humans, humanity is now useless, etcetera.
But it wasn’t the end. The story continued.
The human mind can do things that even the most complex computer systems cannot. Take DeepMind’s AlphaZero, an advanced AI that taught itself to play chess. Patrick Wolff, a former chess champion, talked about it in Solomon’s Code, a book about artificial intelligence written by Olaf Groth and Mark Nitzberg:
[Wolff] realized that AlphaZero, for all its sheer computing and cognitive power, still lacked something innate to human grandmasters: a conceptual understanding of the game. AlphaZero can calculate at four orders of magnitude faster than any human, and it can use that power to generalize across a wide range of potential options across a chess or go board, but it can’t conceptualize a position or put it into language.
Computers are very good at certain aspects of chess, but even now—more than 20 years after Deep Blue—there are things human players can do that computers cannot. Human grandmasters practice playing against computers, learning from them. Computers, in turn, are still learning from humans, who experience the game in ways that no machine can. The book continues:
These days, Wolff says, virtually every grandmaster will train with a computer, integrating it into their routine to help hone their game and develop new concepts—perhaps coming up with a more intricate opening, or contemplating ways to counter novel tactics deployed by opponents. Many of them compete in “advanced chess” or “centaur chess” tournaments, in which human-machine pairs vie against each other. The combination can produce sublime results, says Wolff. “It’s like watching gods play,” he says. “It’s incredible, the quality of chess they play.”
A computer defeated the best chess player in the world. But now, human players are better than ever, in part because they have access to computer players. Computers, in turn, become better because of the things they learn from humans. The game is evolving.
I copied the above excerpts from a paper copy of Solomon’s Code using Google Lens, which is a staggering piece of software. I pointed my phone at the book, tapped a button, and suddenly I could copy the text. I could have typed it out myself, sure, but this was faster.
I write for a living, attempting to plant my thoughts in other people’s minds using 26 characters and some punctuation. I’m really thankful I get to do this now because the tools I have access to make me a lot better at it. I don’t think I could do it if I had to use a pen and paper.
We now have access to tools that make us better. You can set up systems to do the mundane tasks, giving you more time to focus on the sorts of things that humans excel at. Things like creativity, empathy, and improvisation are uniquely human—and computers can make us better at all of them.
There’s no reason to compare yourself to these tools. It’s nonsensical.