Monday, February 16, 2009

The limits of the brain.

In the foreword, Paul and Patricia Churchland summarize quite well the main thesis that John von Neumann puts forward in this book:
Should we simply press past the obvious limitations of biological systems (limitations mostly of speed and reliability) and pursue the dazzling potential of electronic systems, systems that can, in principle and even with a von Neumann architecture, implement or simulate any possible computational activities? Or should we attempt instead, for whatever reasons, to mimic the computational organization displayed in the brains of insects, fishes, birds, and mammals? And what organization is that, anyway? Is it importantly or interestingly different from what goes on in our artificial machines?

Here, the reader may be surprised to discover, John von Neumann weighs in with a prescient, powerful, and decidedly nonclassical answer. He spends the first half of the book leading the reader stepwise through the classical conceptions for which he is so famously responsible, and as he turns finally to address the brain, he hazards the initial conclusion that "its functioning is prima facie digital." But this initial take on the neuronal data is also prima facie procrustean, a fact that von Neumann acknowledges immediately and subsequently turns to pursue at length.

The first problem he notes is that the connections between neurons do not show the telltale "two lines in and one line out" configuration that classical and-gates and or-gates display. Though each cell typically projects exactly one output axon, as the classical take would require, each cell receives more than a hundred, even more than several thousand, inputs from many other neurons. This fact is not decisive —there are, for example, multivalent logics. But it does give him pause.

The plot thickens as von Neumann pursues a point-by-point comparison between the fundamental dimensions of the brain's "basic active organs" (presumably, the neurons) and the computer's "basic active organs" (the various logic gates). Spatially, he observes, neurons have the advantage of being at least 10² times smaller than their presumptive electronic counterparts. (At the time, this estimate was exactly right, but with the unforeseen advent of photo-etched microchips, this size advantage has simply disappeared, at least where two-dimensional sheets are concerned. We can forgive von Neumann this one.)

More important, neurons have a major disadvantage where the speed of their operations is concerned. Neurons are, he estimates, perhaps 10⁵ times slower than vacuum tubes or transistors in the time required to complete a basic logical operation. Here he is portentously correct, in ways about to emerge. If anything, he underestimates the neuron's very considerable disadvantage. If we assume that a neuron can have a "clock frequency" of no better than roughly 10² Hz, then the clock frequencies of almost 1,000 MHz (that is, 10⁹ basic operations per second) now displayed in the most recent generation of desktop machines push the neuron's disadvantage closer to a factor of 10⁷. The conclusion is inescapable. If the brain is a digital computer with a von Neumann architecture, it is doomed to be a computational tortoise by comparison.

Additionally, the accuracy with which the biological brain can represent any variable is also many orders of magnitude below the accuracies available to a digital computer. Computers, von Neumann observes, can easily use and manipulate eight, ten, or twelve decimal places of representation, while the neuron's presumed mode of representation —the frequency of the spike train it sends down its axon— appears limited to a representational accuracy of at most two decimal places (specifically, plus or minus perhaps 1 percent of a frequency maximum of roughly 100 Hz). This is troubling because, in the course of any computation that involves a great many steps, small errors of representation in the early steps regularly accumulate into larger errors at the closing steps. Worse, he adds, for many important classes of computation, even tiny errors in the early steps get exponentially amplified in subsequent steps, which inevitably leads to wildly inaccurate final outputs. Thus, if the brain is a digital computer with only two decimal places of representational accuracy, it is doomed to be a computational dunce.

Conjointly, these two severe limitations —one on speed, and the other on accuracy— drive von Neumann to the conclusion that whatever computational regime the brain is using, it must be one that somehow involves a minimum of what he calls "logical depth". That is, whatever the brain is doing, it cannot be sequentially performing thousands upon thousands of sequentially orchestrated computational steps, as in the super-high frequency, recursive activity of a digital machine's central processor. Given the slowness of its neuronal activities, there isn't enough time for the brain to complete any but the most trivial of computations. And given the low accuracy of its typical representations, it would be computationally incompetent even if it did have enough time.

(Paul & Patricia Churchland: pp. XV - XVIII, foreword)
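
Just to make the arithmetic in the quoted passage concrete, here is a minimal Python sketch of its three quantitative claims: the roughly 10⁷ speed gap, the compounding of a ~1 percent per-step representational error, and the resulting limit on "logical depth". The ~100 Hz neuron rate and the ~10⁹ operations per second for a desktop processor are the figures given in the foreword; the 0.5-second reaction-time budget in the last step is purely my own illustrative assumption.

# Back-of-the-envelope check of the figures quoted above.
# Assumed values (from the foreword, except the reaction-time budget,
# which is only an illustrative guess):
NEURON_HZ = 1e2          # ~100 basic operations per second per neuron
CPU_OPS_PER_SEC = 1e9    # ~10^9 basic operations per second (ca. 2000s desktop CPU)
STEP_ERROR = 0.01        # ~1 percent representational error per step

# 1. Speed disadvantage: how many times slower is a neuron than a CPU?
speed_factor = CPU_OPS_PER_SEC / NEURON_HZ
print(f"neuron is ~{speed_factor:.0e}x slower per basic operation")

# 2. Error accumulation: if each step multiplies the running value by a
#    factor that is off by 1 percent, the relative error compounds geometrically.
for steps in (10, 100, 1000):
    accumulated = (1 + STEP_ERROR) ** steps - 1
    print(f"after {steps:4d} multiplicative steps: ~{accumulated:.0%} relative error")

# 3. "Logical depth": how many strictly serial neuron-steps fit into a
#    0.5 s perceptual judgement? (the 0.5 s budget is my own assumption)
budget_s = 0.5
print(f"serial steps available in {budget_s} s: ~{int(budget_s * NEURON_HZ)}")

Run as is, it prints a speed factor of about 10⁷, a 1 percent error that grows without bound as the number of multiplicative steps increases, and a budget of only about fifty strictly serial steps, which is presumably why von Neumann concludes that whatever the brain is doing must involve a minimum of "logical depth".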

In other words, von Neumann asks himself the same questions that humans have been trying to answer for centuries: is it possible to simulate the human brain? And how does the brain work in the first place? That is precisely what he reflects upon in this short book. If anything, what makes this attempt different is that we now know far more about the human body, and that we have developed a methodology that allows us to reap the fruits of human knowledge in a way we could only dream of centuries ago: the scientific method. Of the two, the latter is perhaps the key, the engine behind the rapid succession of advances we have accomplished in the last 100 years or so. In other words, unlike in the time of Aristotle, we now have good reason to believe that our dream of building intelligent machines is within reach.
