Friday, February 27, 2009

El nombre de la rosa.

A historical mystery novel set in the religious world of the fourteenth century, it quickly became an international best seller in the early eighties and was eventually brought to the screen by the French director Jean-Jacques Annaud. It narrates the investigation of a mysterious string of crimes committed in an abbey in the Italian Alps. It soon became a cult novel in which many readers thought they saw references to Jorge Luis Borges, Arthur Conan Doyle or even William of Ockham. In short, it is one of those novels I had been meaning to read for quite a few years and had never gotten around to until now.

Technical description:
Title: El nombre de la rosa.
Author: Umberto Eco.
Publisher: Debolsillo.
Edition: Seventh pocket edition, Barcelona, October 2006 (1980).
Pages: 781.
ISBN: 84-9759-258-1

Friday, February 20, 2009

Comparing the brain and the computer: parallel versus serial.

Von Neumann's summary of the comparison between the human brain and the computer contains, astonishingly enough, a few insights that still apply to today's technology:
First: in terms of the number of actions that can be performed by active organs of the same total size (defined by volume or by energy dissipation) in the same interval, the natural componentry is a factor 10⁴ ahead of the artificial one. This is the quotient of the two factors obtained above, i.e. of 10⁸ to 10⁹ by 10⁴ to 10⁵.

Second: the same factors show that the natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster, organs. Hence it is to be expected that an efficiently organized large natural automaton (like the human nervous system) will tend to pick up as many logical (or informational) items as possible simultaneously, and process them simultaneously, while an efficiently organized large artificial automaton (like a large modern computing machine) will be more likely to do things successively —one thing at a time, or at any rate not so many things at a time. That is, large and efficient natural automata are likely to be highly parallel, while large and efficient artificial automata will tend to be less so, and rather to be serial. (...)

Third: it should be noted, however, that parallel and serial operation are not unrestrictedly substitutable for each other —as would be required to make the first remark above completely valid, with its simple scheme of dividing the size-advantage factor by the speed-disadvantage factor in order to get a single (efficiency) "figure of merit." More specifically, not everything serial can be immediately paralleled —certain operations can only be performed after certain others, and not simultaneously with them (i.e. they must use the results of the latter). In such a case, the transition from a serial scheme to a parallel one may be impossible, or it may be possible but only concurrently with a change in the logical approach and organization of the procedure. Conversely, the desire to serialize a parallel procedure may impose new requirements on the automaton. Specifically, it will almost always create new memory requirements, since the results of the operations that are performed first must be stored while the operations that come after these are performed. Hence the logical approach and structure in natural automata may be expected to differ widely from those in artificial automata. Also, it is likely that the memory requirements of the latter will turn out to be systematically more severe than those of the former.

(von Neumann: pp. 50-52)

It speaks for the excellence of von Neumann's work that at least parts of it are just as relevant today as they were back in 1956, in spite of all the changes that have taken place in a field as dynamic as this one. Specifically, those reflections about parallel versus serialized computing are still perfectly relevant.
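To make that contrast a little more tangible, here is a small Python sketch of my own (nothing of the sort appears in von Neumann's text, and the function names are simply mine): the same sum computed first with a tree-shaped organization, in which every pair within a round is independent and could be handled simultaneously by many slow organs, and then with a purely serial organization, in which a single fast organ performs one addition after another.

# A toy sketch of my own (not from the book): the same sum organized two ways,
# to make the "logical depth" contrast concrete.

def parallel_style_sum(values):
    """Tree reduction: all pairs in a round are independent, so a machine with
    many slow organs could handle an entire round simultaneously. Depth ~ log2(n)."""
    level = list(values)
    depth = 0
    while len(level) > 1:
        paired = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:            # an odd element is simply carried to the next round
            paired.append(level[-1])
        level = paired
        depth += 1
    return level[0], depth

def serial_style_sum(values):
    """One operation after another, as a single fast organ would do it. Depth ~ n."""
    total, depth = 0, 0
    for v in values:
        total += v
        depth += 1
    return total, depth

data = list(range(1, 17))
print(parallel_style_sum(data))       # (136, 4): same result after 4 rounds of parallel work
print(serial_style_sum(data))         # (136, 16): same result after 16 sequential steps

Both organizations produce the same result, of course; the point is simply that the parallel arrangement needs far fewer rounds, which is precisely the trade-off von Neumann describes between many slow organs and few fast ones.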

Monday, February 16, 2009

The limits of the brain.

In the foreword, Paul and Patricia Churchland summarize quite well the main thesis put forward by John von Neumann in this book:
Should we simply press past the obvious limitations of biological systems (limitations mostly of speed and reliability) and pursue the dazzling potential of electronic systems, systems that can, in principle and even with a von Neumann architecture, implement or simulate any possible computational activities? Or should we attempt instead, for whatever reasons, to mimic the computational organization displayed in the brains of insects, fishes, birds, and mammals? And what organization is that, anyway? Is it importantly or interestingly different from what goes on in our artificial machines?

Here, the reader may be surprised to discover, John von Neumann weighs in with a prescient, powerful, and decidedly nonclassical answer. He spends the first half of the book leading the reader stepwise through the classical conceptions for which he is so famously responsible, and as he turns finally to address the brain, he hazards the initial conclusion that "its functioning is prima facie digital." But this initial take on the neuronal data is also prima facie procrustean, a fact that von Neumann acknowledges immediately and subsequently turns to pursue at length.

The first problem he notes is that the connections between neurons do not show the telltale "two lines in and one line out" configuration that classical and-gates and or-gates display. Though each cell typically projects exactly one output axon, as the classical take would require, each cell receives more than a hundred, even more than several thousand, inputs from many other neurons. This fact is not decisive —there are, for example, multivalent logics. But it does give him pause.

The plot thickens as von Neumann pursues a point-by-point comparison between the fundamental dimensions of the brain's "basic active organs" (presumably, the neurons) and the computer's "basic active organs" (the various logic gates). Spatially, he observes, neurons have the advantage of being at least 10² times smaller than their presumptive electronic counterparts. (At the time, this estimate was exactly right, but with the unforeseen advent of photo-etched microchips, this size advantage has simply disappeared, at least where two-dimensional sheets are concerned. We can forgive von Neumann this one.)

More important, neurons have a major disadvantage where the speed of their operations is concerned. Neurons are, he estimates, perhaps 10⁵ times slower than vacuum tubes or transistors in the time required to complete a basic logical operation. Here he is portentously correct, in ways about to emerge. If anything, he underestimates the neuron's very considerable disadvantage. If we assume that a neuron can have a "clock frequency" of no better than roughly 10² Hz, then the clock frequencies of almost 1,000 MHz (that is, 10⁹ basic operations per second) now displayed in the most recent generation of desktop machines push the neuron's disadvantage closer to a factor of 10⁷. The conclusion is inescapable. If the brain is a digital computer with a von Neumann architecture, it is doomed to be a computational tortoise by comparison.

Additionally, the accuracy with which the biological brain can represent any variable is also many orders of magnitude below the accuracies available to a digital computer. Computers, von Neumann observes, can easily use and manipulate eight, ten, or twelve decimal places of representation, while the neuron's presumed mode of representation —the frequency of the spike train it sends down its axon— appears limited to a representational accuracy of at most two decimal places (specifically, plus or minus perhaps 1 percent of a frequency maximum of roughly 100 Hz). This is troubling because, in the course of any computation that involves a great many steps, small errors of representation in the early steps regularly accumulate into larger errors at the closing steps. Worse, he adds, for many important classes of computation, even tiny errors in the early steps get exponentially amplified in subsequent steps, which inevitably leads to wildly inaccurate final outputs. Thus, if the brain is a digital computer with only two decimal places of representational accuracy, it is doomed to be a computational dunce.

Conjointly, these two severe limitations —one on speed, and the other on accuracy— drive von Neumann to the conclusion that whatever computational regime the brain is using, it must be one that somehow involves a minimum of what he calls "logical depth". That is, whatever the brain is doing, it cannot be sequentially performing thousands upon thousands of sequentially orchestrated computational steps, as in the super-high frequency, recursive activity of a digital machine's central processor. Given the slowness of its neuronal activities, there isn't enough time for the brain to complete any but the most trivial of computations. And given the low accuracy of its typical representations, it would be computationally incompetent even if it did have enough time.

(Paul & Patricia Churchland: pp. XV - XVIII, foreword)

In other words, von Neumann asks himself the same questions that humans have been trying to answer for centuries: is it possible to simulate the human brain? And how does the brain actually work? That is precisely what he reflects upon in this short book. If anything, what makes this attempt different is that we know far more about the human body, and that we have developed a methodology that allows us to reap the fruits of human knowledge in a manner we could only dream of centuries ago: the scientific method. Of the two, the latter is perhaps the key, the engine behind the rapid succession of advances we have accomplished in the last 100 years or so. Unlike in the time of Aristotle, we now have good reason to believe that our dream of building intelligent machines is within reach.
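For what it is worth, the two figures the Churchlands lean on in the passage quoted above are easy to check with a few lines of Python. This is just a back-of-the-envelope sketch of mine, using only the numbers that appear in the quotation; the error-doubling factor in the second half is an arbitrary stand-in for the "exponential amplification" they mention, not anything taken from the book.

# Figures taken from the quoted foreword; this script only redoes the arithmetic.
neuron_rate = 1e2        # ~10^2 Hz assumed as a neuron's "clock frequency"
desktop_rate = 1e9       # ~10^9 basic operations per second for a ca. 2000 desktop CPU
print(f"neuron speed disadvantage: a factor of about {desktop_rate / neuron_rate:.0e}")
# prints: neuron speed disadvantage: a factor of about 1e+07

# Error accumulation: start from the ~1% accuracy of a spike-frequency code and let
# the relative error double at every step (the doubling is purely illustrative of
# the "exponential amplification" mentioned above).
error = 0.01
for _ in range(20):
    error *= 2.0
print(f"relative error after 20 such steps: roughly {error:.0f} times the true value")
# prints: relative error after 20 such steps: roughly 10486 times the true value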

El Hobbit.

The Hobbit must have been the second novel I ever read in English. If I remember correctly, it was back in 1990 or 1991, shortly after reading George Orwell's Animal Farm, which was certainly less intimidating. It is not that I made the all-too-common mistake of assuming that, since it was fantasy literature, it must be aimed at children or young readers and therefore be easy to read, no. By then I already knew how to tell fantasy literature apart from literature written for children. The thing is, I had already read a fair number of non-fiction books by that point, and I figured that if I could read works on economics and international politics in a foreign language, I would have no trouble with The Hobbit either. The truth is that, despite the warnings of more than one friend, I was not mistaken. The reading was moderately difficult, but rewarding.

Well, many years later, I find this comic on one of my children's bookshelves... and I still have not read J.R.R. Tolkien's magnum opus, The Lord of the Rings. On the one hand, I feel a bit lazy whenever I see a book of that size. But fundamentally, truth be told, fantasy literature is just not for me. I can tolerate science fiction under certain conditions (mainly that it not be too fantastic, that it stay within the realm of the relatively possible, even if set several thousand years in the future), but fantasy too often strikes me as... well, precisely that, excessively fantastic. All too often the authors seem to confuse fantasy with childishness, or at least that is the impression one gets. That said, I have to admit that Tolkien is not one of those authors.

In any case, this graphic novel by Charles Dixon and David Wenzel is quite well done. It keeps the spirit of the book, and both the artwork and the lettering suit the fantastic content of the story of Bilbo Bolsón's adventures. It is very enjoyable and can be read in a couple of sittings.

Technical description:
Title: El Hobbit.
Author: Charles Dixon and David Wenzel, based on the story by J.R.R. Tolkien.
Publisher: Norma Editorial.
Edition: Barcelona, 1990.
Pages: 134 pages.
ISBN: 84-8431-432-4

Sunday, February 15, 2009

The Computer and the Brain.

A very short book, written for Yale's Silliman Lectures and published after the author's death, that can definitely be considered part of the very foundations of Computer Science. In it, John von Neumann (yes, the father of the renowned von Neumann architecture upon which we built the whole edifice of modern computing) muses about the differences between machine and biological intelligence. Prominent neuroscientific thinkers Paul M. Churchland and Patricia S. Churchland provide a brief introduction to the book, which represents the final accomplishment of one of the greatest mathematicians of the twentieth century. John von Neumann concludes that the human brain operates, at least in part, digitally, although instead of doing so sequentially it acts in what can only be described as a massively parallel fashion, thus anticipating our contemporary approach to these issues.

Technical description:
Title: The Computer and the Brain.
Author: John von Neumann.
Publisher: Yale Nota Bene/Yale University Press
Edition: Second edition, New York (USA), 2000 (1958).
Pages: 82 pages.
ISBN: 0-300-08473-0

Find it on Amazon (USA, UK).