The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed. By Christof Koch. MIT Press, 2019, ISBN: 978-0-262-04281-9. 257 pp. Hardcover, $27.95
In 2003, billionaire Paul Allen—best known for not being Bill Gates—founded the Allen Institute for Brain Science, which has done, and continues to do, important work. In 2011, Christof Koch, then a professor at the California Institute of Technology, joined the Institute as its chief scientist, becoming its president in 2015. Koch is the author of over 300 papers and four previous books (two of which are on the subject of consciousness).
Does such an impressive background, training, and experience equip someone to solve the conundrum of consciousness? Apparently not.
As Koch acknowledges in his book The Feeling of Life Itself, the problem of consciousness (a.k.a. the mind-body problem) has puzzled thinkers for millennia. Koch tells us that the problem “formerly the sole province of philosophers, novelists, and moviemakers” (Really? Not also theologians, poets, songwriters, psychologists, and cognitive scientists?) is now being addressed by scientists. Well, yes, addressed—but not actually solved. Koch has married two very different works—one scientific, the other wildly speculative—and used the former to lend credibility to the latter. But the marriage is a bad one.
In the scientific part, Koch states the by now uncontroversial principle no brain, no mind. Koch also acknowledges that, as he quotes Theodosius Dobzhansky, “Nothing in biology makes sense except in the light of evolution.” One would think, then, that consciousness—which as far as we can tell occurs only in animals with a nervous system more or less like ours—can also be understood only in the light of evolution. But no.
In the speculative part, Koch suggests that a single bacterium or even a single atom may have a mind. He predicts that someday we’ll have a consciousness detector like the one in Star Trek. For organisms with brains, this is conceivable even if highly unlikely. But the notion that we would ever have one for bacteria or atoms is, like Star Trek, pure science fiction. And could we ever ask a bacterium to describe its conscious experience so we could confirm the device’s reliability? I doubt it.
The scientific part of the book provides a fairly conventional introduction to brain anatomy, neurophysiology, and the quest for the neural correlates of consciousness, a pursuit in which Koch has been a central figure. This research has already had concrete clinical benefits, especially in assessing whether apparently unresponsive patients are brain dead or actually conscious.
But even here, Koch goes far beyond current clinical evidence to suggest that if two brains could be suitably connected, the individual minds would disappear and a new mind would abruptly appear. Other speculations are even more extreme.
In getting to his boldest claims, Koch begins Chapter 1 with a phenomenological description of his experience of consciousness. At the very end of the chapter, he summarizes his introspection by stating that “Every conscious experience exists for itself, is structured, is the specific way it is, is one, and is definite.”
He follows this, with apparent modesty, with “So that’s how it is for me. How is it for you?” Well, aside from the fact that I find his distillation mostly unintelligible, I’d say that for me consciousness has many other properties—it’s always local; it’s always temporally immediate; it always includes my point of view or perspective; it’s always embedded both in my body and my physical environment; and it always has specific content. How is it for you?
Koch’s modesty turns out to be false, although he takes quite a while to admit it. By Chapter 7, he’s asserting that those five introspectively derived phenomenological aspects of consciousness identify its necessary and sufficient elements. He’s got a theory of consciousness. Well, actually, it’s not original to him—it was developed by Giulio Tononi (a professor at the University of Wisconsin). But Koch endorses it enthusiastically—in fact, it’s “a first in the history of thought.” (I don’t think only a single exclamation point would be sufficient here, so I won’t add any.)
That theory is called “integrated information theory” (IIT). This unfortunate coinage—Tononi’s—would suggest that information theory (itself misnamed—information theory is actually a theory of signal transmission) has been integrated with some other theory or maybe just with itself. But no, IIT (as Koch almost immediately abbreviates it) is a theory of integrated information, whatever that may mean.
Those five properties are now treated as, in Koch’s own words, axioms. As he says, “in geometric or mathematical logic, axioms are foundational statements that serve as a starting point for deducing further valid geometric or logical properties and expressions.” But, as Koch himself explains elsewhere, science isn’t deductive, it’s abductive. Science has principles and models and even dogmas (such as the central dogma of molecular biology) but not axioms.
Koch makes the bald assertion that “any system that obeys these five axioms is conscious.” And we’re off to the races. The theory introduces a mathematical measurement, dubbed ϕ (phi), which can be calculated for any network of elements that cause changes in one another. The greater the ϕ, the greater the mind. He shows a diagram of only three elements capable of assuming only two states each that has a measurable amount of ϕ, so presumably three transistors wired together in such a way would have a mind. Because the three elements aren’t connected to any others that could provide any input, what such a system would be conscious of is entirely unclear. (Apparently, it would have pure consciousness—about which more later.)
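To make the flavor of ϕ concrete, here is a minimal sketch in Python of an integration-style measure for exactly such a network: three elements, two states each. To be clear, this is not the official IIT calculation, which is far more involved; it is a crude proxy in the same spirit, asking how badly the system’s behavior factors across its weakest bipartition. The OR/XOR/AND update rule is an assumed toy example, loosely modeled on the kind of illustrative network Tononi uses.

```python
from itertools import product
from math import log2

# Toy deterministic network of 3 binary elements (assumed example):
# A' = B OR C, B' = A XOR C, C' = A AND B.
def step(state):
    a, b, c = state
    return (b | c, a ^ c, a & b)

STATES = list(product((0, 1), repeat=3))

def part_transition(part_idx, s_part):
    """Distribution over a part's next state when the rest of the
    system is 'cut away' (marginalized uniformly)."""
    counts = {}
    for full in STATES:
        if tuple(full[i] for i in part_idx) != s_part:
            continue
        nxt = step(full)
        key = tuple(nxt[i] for i in part_idx)
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def cut_divergence(part1, part2):
    """Average surprise (in bits) when the true next state is predicted
    by the two cut parts independently instead of by the whole system."""
    bits = 0.0
    for s in STATES:
        nxt = step(s)
        p1 = part_transition(part1, tuple(s[i] for i in part1))
        p2 = part_transition(part2, tuple(s[i] for i in part2))
        # probability the factored (cut) model assigns to what actually happens
        q = p1.get(tuple(nxt[i] for i in part1), 0.0) * \
            p2.get(tuple(nxt[i] for i in part2), 0.0)
        bits += -log2(q) if q > 0 else float("inf")
    return bits / len(STATES)

# phi-proxy: the weakest link -- the minimum divergence over all bipartitions
bipartitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
phi_proxy = min(cut_divergence(p1, p2) for p1, p2 in bipartitions)
print(f"phi-proxy = {phi_proxy:.3f} bits")
```

A result greater than zero means no cut cleanly splits the network into independent parts, which is roughly what IIT means by “integrated.” Whether a positive number on three wired-together transistors licenses any claim about minds is, of course, exactly the question at issue.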
As long ago as 2015, mathematician Ronald Cicurel and neurobiologist Miguel Nicolelis published a monograph titled The Relativistic Brain: How It Works and Why It Cannot Be Simulated by a Turing Machine. In the concluding chapter of that short work, they refer to the concept of ϕ as “leading Koch to a form of panpsychism.” They go on to write:
In our view, despite being an interesting concept, ϕ measures Shannon-Turing information and, as such, is not sufficient to justify the emergence of higher brain functions responsible for fusing, in a single picture, a continuous stream of sensory and mnemonic information.
Koch asserts, without any clinical evidence, that somewhere in the brain’s physical substrate of consciousness (which, as he explains, clinical evidence suggests is in the posterior cortex) will be found a set of neurons (or perhaps clusters of neurons) connected to each other like the three-element diagram. Remember, although he’s been working on brains for a decade or two, he’s not claiming that such a configuration has actually been found; he’s just asserting that it will be.
Because he’s elsewhere suggested that nothing else in a normal brain may be conscious, he’s also claiming that such a configuration almost certainly exists nowhere else in the brain. (The brain contains many tens of billions of neurons, each connected to many thousands of others; from a purely statistical standpoint alone, such a nonoccurrence is hardly likely.) This despite the assertion that consciousness is, to quote the subtitle, “widespread” elsewhere.
Of the five axioms, one is first among equals: what he calls “intrinsic existence.” Like “information theory” in “integrated information theory,” this usage is eccentric. Rather than using intrinsic to mean “belonging to the essential nature or constitution of a thing,” Koch uses it to mean that consciousness exists “for itself, without an observer.” Leaving aside that consciousness is a process that occurs rather than a thing that exists, it’s entirely unclear what this means. Again, by the Dobzhansky principle, consciousness hardly exists “for itself” but for animals to function more effectively in the world. But Koch makes this intrinsic existence something of a religious principle. In the chapter “Of Wholes,” he gives the set of elements that bestows intrinsic existence on itself “the more poetic name the Whole [his emphasis] (with a capital W).” Like the word God, maybe?
In keeping with this religious spirit, Koch repeatedly refers to something he calls “pure consciousness” or “pure experience,” explicitly invoking mystical states reported by practitioners of Buddhist meditation and the like. Koch describes his own experience in a sensory deprivation tank as being such a state. (If you can notice that you’re in such a state well enough to report it later, is your consciousness really devoid of content?)
Although the book’s subtitle asserts that consciousness isn’t computable—that is, can’t be created in a computer—and the book has a chapter titled “Why Computers Can’t Experience,” Koch claims that a “neuromorphic electronic device” (which he emphasizes would be nothing like today’s Universal Turing Machine computers or even tomorrow’s quantum computers) “could have human-level experience.” Well, no. A computer, as Koch himself recognizes, is such only from the external perspective of an observer. An electronic device—even a neuromorphic one—is no more intrinsically a computer than a mechanical clock is intrinsically an instrument to display the time and not, say, a child’s wind-up toy.
Koch makes some other remarkable claims. He asserts that computers can’t experience but they can be genuinely intelligent: “the tech industry will create, within decades, machines with human-level intelligence” and refers to this as the “birth of true artificial intelligence” (emphasis added). Again, no.
Koch is well aware of John Searle’s famous Chinese Room argument that syntax is not semantics: a system that doesn’t know what anything actually means (not in terms of still more symbols—words, usually—but in the world) can’t do anything but simulate intelligence. Such a computer program will never have what we call common sense—the ability to recognize when one of its own conclusions is absurd. But Koch confines his discussion of this issue to a long footnote in which he attacks Searle for applying the Chinese Room to IIT itself. (Koch and Tononi have, he reports, met with Searle several times to explain the theory but without success.)
Koch also claims that consciousness might occur in a network of neurons grown in a Petri dish (“forming a mini-mind”), a single cell, or “perhaps even brute matter itself”—that is, a single atom. He suggests (by framing it as a rhetorical question) that integrated information theory has solved the mind-body problem. And, triumphantly, “Causal power of two kinds [the physical and that supposedly explained by IIT] is the only sort of stuff needed to explain everything in the universe. These powers constitute ultimate reality.” Wow.
Crucially, even if it were true, IIT wouldn’t actually explain how a system embodying his five axioms causes the experience of consciousness itself, much less its detailed contents—the qualia. Even if IIT could somehow be proved, the mechanism itself would still be utterly mysterious. It’s a long way between determining that water is H2O and demonstrating why such a molecule is a colorless liquid at room temperature, why it freezes and boils at the points it does, why it’s less dense in the solid form than in the liquid, why it’s a good solvent, and its many other properties.
On many subjects, Koch mostly displays the sort of skepticism one might expect from a working scientist. For example, he explains why quantum mechanics is unlikely to explain how the brain works. He dismisses panpsychism (other than IIT, that is) as “barren.” His take on computationalism (the brain as computer) is good stuff. (His discussion of deep learning, however, is credulous and inadequate.)
This book kept reminding me of The Physics of Immortality by Frank Tipler. If you’re not familiar with that work, its subtitle is Modern Cosmology, God and the Resurrection of the Dead. (I have to admit that I didn’t actually read that book, which is 560 pages long—I just paged through it in the bookstore in utter stupefaction.) At least The Feeling of Life Itself is (excluding the back matter) only 173 pages.
Koch has erected a remarkable edifice on a remarkably shaky foundation. He decries computationalism as consisting of “convenient but poor tropes” and being “ideology run amok.” Both of these criticisms apply well to integrated information theory and the bizarre notion of intrinsic existence.