Sunday, March 29, 2009

Stereotypes of Europe & London.

The following could also be said about the idea many Americans (and Europeans) still have of England:
Kumiko knew the Sprawl from a thousand stims; a fascination with the vast conurbation was a common feature of Japanese popular culture.

She'd had few preconceptions of England, when she'd arrived there: vague images of several famous structures, unfocused impressions of a society her own seemed to regard as quaint and stagnant. (In her mother's stories, the princess-ballerina discovered that the English, however admiring, couldn't afford to pay her to dance.) London, so far, had run counter to her expectations, with its energy, its evident affluence, the Ginza-bustle of its great shopping streets.

(Gibson: p. 142)

Talk about stereotypes. Yes, sure, we all know that London is one of the most important financial centers in the world, but the assumption is that it remains so mostly due to the past vigor of the British Empire. Yet the reality is that British society is quite dynamic and diverse. Due to the influence of popular media, when we try to think of a diverse, lively, dynamic, changing multicultural society, New York City springs to mind almost immediately. However, London can be described with the very same words. To some extent, the same applies to the overall image most Americans have of Europe: many still view it as a group of monolithic societies, but things have changed a great deal in the past two decades.

A futuristic but familiar world.

Gibson portrays a futuristic world that, nevertheless, feels quite familiar and plausible:
A soapbox evangelist spread his arms high, a pale fuzzy Jesus copying the gesture in the air above him. The projection rig was in the box he stood on, but he wore a battered nylon pack with two speakers sticking over each shoulder like blank chrome heads. The evangelist frowned up at Jesus and adjusted something on the belt at his waist. Jesus strobed, turned green, and vanished. Mona laughed. The man's eyes flashed God's wrath, a muscle working in his seamed cheek.

(Gibson: p. 67)

It's one of the things that makes Gibson's type of science-fiction attractive: it feels close enough, realistic, something waiting for us just around the corner. The characters are as flawed as we all are, and for the most part are driven by the same motives (greed, ambition...). They just live and act in a different context.

Sixteen and SINless.

A picture that's starting to look more and more real, even in the Anglo-Saxon countries where there was no tradition of using a national ID:
She was sixteen and SINless, Mona, and this older trick had told her once that that was a song, 'Sixteen and SINless'. Meant she hadn't been assigned a SIN when she was born, a Single Identification Number, so she'd grow up on the outside of most official systems. She knew that it was supposed to be possible to get a SIN, if you didn't have one, but it stood to reason you'd have to go into a building somewhere and talk to a suit, and that was a long way from Mona's idea of a good time or even normal behavior.

(Gibson: p. 64)

If anything, what I find most peculiar is that it is precisely in those countries with no tradition of a national ID that governments have perhaps gone the furthest in making excessive use of their powers in order to preserve national security: the United Kingdom allows the Government to interfere with virtually any sort of private communication, while the United States does the same and has also created an international prison in a legal no-man's-land at Guantanamo Bay... Only a decade ago, these very same countries proudly emphasized their commitment to civil liberties and pointed their fingers at continental Europe as an example of overgrown Governments gone wild. Ten years later, one can hardly believe the things most American and British commentators used to write about Germany or France on these issues. It's almost as if the pendulum had swung from one extreme to the other. Let's just hope that, sooner or later, it stops somewhere in the wise and sensible middle.

An early description of cyberspace.

William Gibson is well known for coining the terms cyberspace and matrix to describe a virtual reality accessed via computers or some other form of technology. The same idea is developed very nicely in Neal Stephenson's novel Snow Crash, among many others, but the seed of the idea is already present, without a doubt, in the Sprawl Trilogy:
There is no there, there. They taught that to children, explaining cyberspace. She remembered a smiling tutor's lecture in the arcology's executive creche, images shifting on a screen: pilots in enormous helmets and clumsy-looking gloves, the neuroelectronically primitive "virtual world" technology linking them more effectively with their planes, pairs of miniature video terminals pumping them a computer-generated flood of combat data, the vibrotactile feedback providing a touch-world of studs and triggers... As the technology evolved, the helmets shrank, the video terminals atrophied...

(Gibson: pp. 55-56)

We are still a long way from the cyberspace described in the cyberpunk novels, but we're definitely getting there pretty quickly. As a matter of fact, judging by the success of Second Life, we just need a leap forward in the commercialization of already existing immersive technologies, coupled with yet another little push in hardware performance. That holy grail of 1980s and 1990s science-fiction now seems within reach. The global reach of the Internet is already a reality in our daily lives: people work from home, share ideas, pictures and videos, watch TV shows and movies, and talk to distant relatives thanks to the Internet. All of this would have sounded like science-fiction fantasy to anyone in the first half of the 1980s.

A nightmarish genetic manipulation.

As science has evolved, it's become a commonplace of contemporary science-fiction to portray the nightmarish consequences of genetic manipulation gone wrong. Yet, in the case of cyberpunk —even more so perhaps in the so-called biopunk genre— it is all presented to us as a given:
Like the time she'd screamed about the bugs, the roaches they called palmetto bugs, but it was because the Goddamn things were mutants, half of them; someone had tried to wipe them out with something that fucked with their DNA, so you'd see these screwed-up roaches dying with too many legs or heads, or not enough, and once she'd seen one that looked like it had swallowed a crucifix or something, its back or shell or whatever it was distorted in a way that made her want to puke.

(Gibson: p. 34)

What I find peculiar about this approach is not so much that the author defends genetic manipulation, but rather that he presents it matter-of-factly, as something normal, common, part of the daily routine. Yes, Mona pukes when she sees the genetically deformed cockroaches, but she doesn't show any surprise at all. She doesn't display any sort of deep moral qualms about what she sees, which some may consider quite scary. And yet, my guess is that this is far more realistic than the deep philosophical and/or theological musings we see in other books. If we look back at our own History, most leaps tend to happen in this manner: bit by bit, inadvertently. After all, isn't it true that, just a few decades ago, most people would have considered a national ID something more typical of dictatorial regimes than of advanced democracies? But here we are.

Cities with history: a blessing and a burden.

One of the things I have found interesting about Americans is their obsession with History and, in general, with cities (and peoples) that have a History behind them. I suppose it makes sense coming from a relatively young nation. Gibson puts it in the mouth of Kumiko as she ponders London:
This was nothing like Tokyo, where the past, all that remained of it, was nurtured with a nervous care. History there had become a quantity, a rare thing, parcelled out by government and preserved by law and corporate funding. Here it seemed the very fabric of things, as if the city were a single growth of stone and brick, uncounted strata of message and meaning, age upon age, generated over the centuries to the dictates of some now all but unreadable DNA of commerce and empire.

(Gibson: pp. 11-12)

To be clear, living in a society with deep roots in the past is both a blessing and a curse. On the one hand, it provides some sort of default identity, something to hang onto when one feels disoriented. But, on the other, it often feels oppressive, like a heavy weight upon one's shoulders, a common set of assumptions about what one should think, how one should behave, what one should like and dislike. In this sense, American cities have it easy: they can redefine themselves without fear. They can look into the future without a need to worry about the connections to their "true self", their "real identity". American cities —American society in general— are naturally postmodern. They don't have an identity set in stone. On the contrary, they get to choose their own identity.

Saturday, March 21, 2009

Mona Lisa Overdrive.

The final part of William Gibson's Sprawl Trilogy, following Neuromancer and Count Zero, and a clear example of cyberpunk literature. As with the previous novels, the story is woven from several interconnected plot threads, some involving characters who also appeared in the earlier books: Mona, a young prostitute who looks like Angie Mitchell, a famous superstar, and who is hired by a group of individuals who intend to kidnap the star; Kumiko, a Japanese girl and daughter of a Yakuza boss, sent to London to keep her away from a gang war among Yakuza leaders; and Slick Henry, a convicted car thief living in a vast wasteland, who is hired to take care of Bobby Newmark, also known as Count Zero.

Technical description:
Title: Mona Lisa Overdrive.
Author: William Gibson.
Publisher: Harper Collins Publishers.
Edition: Paperback edition, reprinted ten times, London (UK), 1995 (1988).
Pages: 316 pages.
ISBN: 978-0-00-648044-0

Thursday, March 19, 2009

On the concept of beauty.

An interesting digression on the concept of beauty:
On other occasions and in other places I saw many scriptoria, but none that, in the streams of physical light that flooded the room, so splendidly illustrated the spiritual principle that light embodies: claritas, source of all beauty and wisdom, inseparable attribute of the just proportion observed in that hall. For beauty depends on three things: first, on integrity or perfection, which is why we consider ugly what is incomplete; then, on just proportion, that is, on consonance; and finally, on clarity and light, and indeed we call beautiful those things of well-defined color. And since the contemplation of beauty entails peace, and since for our appetite it is one and the same to find rest in peace, in the good, or in beauty, I was overcome by a most pleasant feeling and thought how agreeable it must be to work in that place.

(Eco: p. 106)

Affirming universal concepts is hardly in fashion these days, but that seems to be precisely what Umberto Eco is doing here (or, at the very least, the character in whose mouth he puts the words quoted above). Ever since postmodernism spread through the advanced industrial societies, the relative has been in vogue instead, perhaps the inevitable counterpart of what Vattimo called weak thought. Not that one entirely objects to stressing the presence of the relative, truth be told. We spent far too much of our History affirming the absolute and killing in its name. Still, an excess of relativism also leaves a bad taste in the mouth, it must be said. One simply cannot go through life like a ship without a course, drifting with the tide or wherever the wind blows (or perhaps one can?). In any case, it seems to me that we ought to anchor ourselves every now and then in a handful of essential values, which should admittedly be few and broad: basic human rights, mutual respect, and so on. There is no shortage of scientific studies that seem to show that we do in fact share a concept of beauty, however minimal, one that includes such essential elements as symmetry or the appeal of the simple. It cannot be said, therefore, that when it comes to taste nothing is written, as we have so often been told. Many studies remain to be done in this field, but what I have read so far is quite promising... and it points precisely in the direction of what the character quoted above suggests.

Wednesday, March 4, 2009

On Opus Dei's list of banned books.

By one of life's coincidences, I recently stumbled upon a somewhat outdated version of Opus Dei's index of forbidden books, and it turns out that El nombre de la rosa is on the list. Knowing these people, it doesn't surprise me in the least, of course. Eco indirectly touches on too many thorny episodes in the history of the Church, episodes bound to strike a nerve, especially among those little inclined to tolerate opinions that differ from their own. It is enough to portray members of the ecclesiastical hierarchy as mere human beings, with all their flaws, ambitions, and power struggles, for Catholic fundamentalists to brand a work as a clear case of poisonous heresy. In any case, the full list is worth a quick look; it includes books as important as the Metamorphoses, by Apuleius; Camino de perfección and El árbol de la ciencia, by Pío Baroja; several works by Balzac; The Second Sex, by Simone de Beauvoir; a few books by Norberto Bobbio; El Aleph, by Jorge Luis Borges; several books by Albert Camus and Alejo Carpentier; La Colmena, by Camilo José Cela; Erasmus of Rotterdam; Émile Durkheim, Paulo Freire, Erich Fromm, Gadamer, Gabriel García Márquez, Hermann Hesse, Heidegger, Husserl; The Trial, by Kafka; D. H. Lawrence, Maurice Merleau-Ponty; Misión de la Universidad, by Ortega y Gasset; Fortunata y Jacinta and Tristana, by Benito Pérez Galdós; Karl Popper, Rousseau, Sartre, Max Scheler, Stendhal, Teilhard de Chardin, Valle-Inclán, Mario Vargas Llosa, Zola, Max Weber, and even Ludwig Wittgenstein. In short, the cream of twentieth-century culture. One wonders how on earth the members of "la Obra" manage to familiarize themselves with any discipline of knowledge in the light (or rather, obviously, the shadow) of this list.

Friday, February 27, 2009

El nombre de la rosa.

A historical mystery novel set in the religious world of the fourteenth century that quickly became an international best seller in the early eighties and was eventually brought to the screen by the French director Jean-Jacques Annaud. It tells of the investigation into a mysterious string of murders in an abbey in the Italian Alps. It soon became a cult novel in which many saw references to Jorge Luis Borges, Arthur Conan Doyle, or even William of Ockham. In short, it is one of those novels I had wanted to read for many years and had never gotten around to until now.

Technical description:
Title: El nombre de la rosa.
Author: Umberto Eco.
Publisher: Debolsillo.
Edition: Seventh pocket edition, Barcelona, October 2006 (1980).
Pages: 781 pages.
ISBN: 84-9759-258-1

Friday, February 20, 2009

Comparing brain versus computer: parallel versus serial.

Von Neumann's summary of the comparison between the human brain and the computer contains, astonishingly enough, a few insights that still apply to today's technology:
First: in terms of the number of actions that can be performed by active organs of the same total size (defined by volume or by energy dissipation) in the same interval, the natural componentry is a factor 10⁴ ahead of the artificial one. This is the quotient of the two factors obtained above, i.e. of 10⁸ to 10⁹ by 10⁴ to 10⁵.

Second: the same factors show that the natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster, organs. Hence it is to be expected that an efficiently organized large natural automaton (like the human nervous system) will tend to pick up as many logical (or informational) items as possible simultaneously, and process them simultaneously, while an efficiently organized large artificial automaton (like a large modern computing machine) will be more likely to do things successively —one thing at a time, or at any rate not so many things at a time. That is, large and efficient natural automata are likely to be highly parallel, while large and efficient artificial automata will tend to be less so, and rather to be serial. (...)

Third: it should be noted, however, that parallel and serial operation are not unrestrictedly substitutable for each other —as would be required to make the first remark above completely valid, with its simple scheme of dividing the size-advantage factor by the speed-disadvantage factor in order to get a single (efficiency) "figure of merit." More specifically, not everything serial can be immediately paralleled —certain operations can only be performed after certain others, and not simultaneously with them (i.e. they must use the results of the latter). In such a case, the transition from a serial scheme to a parallel one may be impossible, or it may be possible but only concurrently with a change in the logical approach and organization of the procedure. Conversely, the desire to serialize a parallel procedure may impose new requirements on the automaton. Specifically, it will almost always create new memory requirements, since the results of the operations that are performed first must be stored while the operations that come after these are performed. Hence the logical approach and structure in natural automata may be expected to differ widely from those in artificial automata. Also, it is likely that the memory requirements of the latter will turn out to be systematically more severe than those of the former.

(von Neumann: pp. 50-52)

It speaks to the excellence of von Neumann's work that at least parts of it are just as relevant today as they were back in 1956, in spite of all the changes that have taken place in a field as dynamic as this one. Specifically, those reflections on parallel versus serial computing remain entirely relevant.
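As a toy illustration of the parallel-versus-serial point (my own sketch, not from the book), consider adding up n numbers. A running sum is a chain of dependent operations, whereas a pairwise tree reduction exploits independence, which is exactly the kind of reorganization of the procedure von Neumann says the transition between the two schemes may require:

```python
import math

def serial_steps(n):
    # A running sum is a chain of dependent additions: step k needs
    # the result of step k-1, so n numbers take n - 1 sequential steps.
    return n - 1

def parallel_steps(n):
    # A pairwise (tree) reduction halves the operand count each round,
    # so with enough adders working at once, ceil(log2(n)) rounds suffice.
    return math.ceil(math.log2(n))

for n in (8, 1024, 1_000_000):
    print(f"{n:>9} numbers: {serial_steps(n):>7} serial steps, "
          f"{parallel_steps(n):>2} parallel rounds")
```

The serial scheme needs n - 1 steps no matter how many adders are available, while the tree reduction finishes in logarithmic time, but only after restructuring the computation; that is von Neumann's caveat about the two not being "unrestrictedly substitutable."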

Monday, February 16, 2009

The limits of the brain.

In the foreword, Paul and Patricia Churchland neatly synthesize the main thesis put forward by John von Neumann in this book:
Should we simply press past the obvious limitations of biological systems (limitations mostly of speed and reliability) and pursue the dazzling potential of electronic systems, systems that can, in principle and even with a von Neumann architecture, implement or simulate any possible computational activities? Or should we attempt instead, for whatever reasons, to mimic the computational organization displayed in the brains of insects, fishes, birds, and mammals? And what organization is that, anyway? Is it importantly or interestingly different from what goes on in our artificial machines?

Here, the reader may be surprised to discover, John von Neumann weighs in with a prescient, powerful, and decidedly nonclassical answer. He spends the first half of the book leading the reader stepwise through the classical conceptions for which he is so famously responsible, and as he turns finally to address the brain, he hazards the initial conclusion that "its functioning is prima facie digital." But this initial take on the neuronal data is also prima facie procrustean, a fact that von Neumann acknowledges immediately and subsequently turns to pursue at length.

The first problem he notes is that the connections between neurons do not show the telltale "two lines in and one line out" configuration that classical and-gates and or-gates display. Though each cell typically projects exactly one output axon, as the classical take would require, each cell receives more than a hundred, even more than several thousand, inputs from many other neurons. This fact is not decisive —there are, for example, multivalent logics. But it does give him pause.

The plot thickens as von Neumann pursues a point-by-point comparison between the fundamental dimensions of the brain's "basic active organs" (presumably, the neurons) and the computer's "basic active organs" (the various logic gates). Spatially, he observes, neurons have the advantage of being at least 10² times smaller than their presumptive electronic counterparts. (At the time, this estimate was exactly right, but with the unforeseen advent of photo-etched microchips, this size advantage has simply disappeared, at least where two-dimensional sheets are concerned. We can forgive von Neumann this one.)

More important, neurons have a major disadvantage where the speed of their operations is concerned. Neurons are, he estimates, perhaps 10⁵ times slower than vacuum tubes or transistors in the time required to complete a basic logical operation. Here he is portentously correct, in ways about to emerge. If anything, he underestimates the neuron's very considerable disadvantage. If we assume that a neuron can have a "clock frequency" of no better than roughly 10² Hz, then the clock frequencies of almost 1,000 MHz (that is, 10⁹ basic operations per second) now displayed in the most recent generation of desktop machines push the neuron's disadvantage closer to a factor of 10⁷. The conclusion is inescapable. If the brain is a digital computer with a von Neumann architecture, it is doomed to be a computational tortoise by comparison.

Additionally, the accuracy with which the biological brain can represent any variable is also many orders of magnitude below the accuracies available to a digital computer. Computers, von Neumann observes, can easily use and manipulate eight, ten, or twelve decimal places of representation, while the neuron's presumed mode of representation —the frequency of the spike train it sends down its axon— appears limited to a representational accuracy of at most two decimal places (specifically, plus or minus perhaps 1 percent of a frequency maximum of roughly 100 Hz). This is troubling because, in the course of any computation that involves a great many steps, small errors of representation in the early steps regularly accumulate into larger errors at the closing steps. Worse, he adds, for many important classes of computation, even tiny errors in the early steps get exponentially amplified in subsequent steps, which inevitably leads to wildly inaccurate final outputs. Thus, if the brain is a digital computer with only two decimal places of representational accuracy, it is doomed to be a computational dunce.

Conjointly, these two severe limitations —one on speed, and the other on accuracy— drive von Neumann to the conclusion that whatever computational regime the brain is using, it must be one that somehow involves a minimum of what he calls "logical depth". That is, whatever the brain is doing, it cannot be sequentially performing thousands upon thousands of sequentially orchestrated computational steps, as in the super-high frequency, recursive activity of a digital machine's central processor. Given the slowness of its neuronal activities, there isn't enough time for the brain to complete any but the most trivial of computations. And given the low accuracy of its typical representations, it would be computationally incompetent even if it did have enough time.

(Paul & Patricia Churchland: pp. XV - XVIII, foreword)
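The point about low representational accuracy compounding over many steps can be illustrated with a small sketch of my own (the 1.01 factor and the three significant figures are arbitrary choices standing in for the roughly 1 percent accuracy mentioned above):

```python
from math import floor, log10

def round_sig(x, sig=3):
    # Keep only `sig` significant figures: with sig=3 every stored
    # value carries roughly 1 percent relative accuracy, in the spirit
    # of the "two decimal places" attributed to spike frequencies.
    return round(x, -int(floor(log10(abs(x)))) + sig - 1)

def compounded(factor, steps, sig=None):
    # Multiply repeatedly, optionally degrading every intermediate
    # result to a fixed number of significant figures.
    acc = 1.0
    for _ in range(steps):
        acc *= factor
        if sig is not None:
            acc = round_sig(acc, sig)
    return acc

exact = compounded(1.01, 500)          # full float precision
coarse = compounded(1.01, 500, sig=3)  # low-precision intermediates
print(exact, coarse, abs(exact - coarse))
```

The two results diverge because every intermediate rounding error is carried forward and multiplied in all subsequent steps, which is precisely the accumulation the foreword describes.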

In other words, von Neumann asks himself the same questions that humans have been trying to answer for centuries: is it possible to simulate the human brain? And how does the brain function in the first place? That's precisely what he reflects upon in this short book. If anything, what makes this attempt different is that we now know far more about the human body, and that we have developed a methodology that allows us to reap the fruits of human knowledge in a manner we could only dream of centuries ago: the scientific method. Of the two, the latter is perhaps the key, the engine behind the rapid succession of advances we have accomplished in the last 100 years or so. Unlike in the time of Aristotle, we now have good reason to believe that our dream of building intelligent machines is within reach.

El Hobbit.

The Hobbit must have been the second novel I ever read in English. If I remember correctly, it was back in 1990 or 1991, shortly after reading George Orwell's Animal Farm, which was certainly less intimidating. And it's not that I made the all-too-common mistake of thinking that, since it was fantasy literature, it must be aimed at children or young readers and therefore be easy to read, no. By then I already knew how to tell fantasy literature apart from literature written for children. The fact is that I had already read a good number of nonfiction books by that point, and I figured that, if I could read works on economics and international politics in a foreign language, I wouldn't have any trouble with The Hobbit either. The truth is that, despite the warnings of more than one friend, I was right. The reading was moderately difficult, but rewarding.

Well, many years later, I come across this comic on one of my children's bookshelves... and I still haven't read J.R.R. Tolkien's opus magnum, The Lord of the Rings. Partly, the sheer size of the book makes me a little lazy. But, fundamentally, the fact is that fantasy literature just isn't for me, truth be told. I can tolerate science fiction under certain conditions (mainly, that it not be too fantastic, that it stay within the bounds of the relatively possible, even if only thousands of years from now), but fantasy all too often strikes me as... well, precisely that: excessively fantastic. All too often its authors seem to confuse fantasy with childishness, or at least that is the impression one gets. That said, I must admit that Tolkien is not one of those authors.

In any case, this graphic novel by Charles Dixon and David Wenzel is quite well done. It keeps the spirit of the book, and both the artwork and the typography suit the fantastic content of the work, which tells the adventures of Bilbo Baggins. It is a delight to read and can be finished in a couple of sittings.

Technical description:
Title: El Hobbit.
Author: Charles Dixon and David Wenzel, based on the story by J.R.R. Tolkien.
Publisher: Norma Editorial.
Edition: Barcelona, 1990.
Pages: 134 pages.
ISBN: 84-8431-432-4

Sunday, February 15, 2009

The Computer and the Brain.

A very short book, written for Yale's Silliman Lectures and published after the author's death, that can definitely be considered part of the very foundations of Computer Science. In it, John von Neumann (yes, the father of the renowned von Neumann architecture upon which we built the whole edifice of modern computing) muses about the differences between machine and biological intelligence. Prominent neuroscientific thinkers Paul M. Churchland and Patricia S. Churchland provide a brief introduction to the book, which represents the final accomplishment of one of the greatest mathematicians of the twentieth century. John von Neumann concludes that the human brain operates, at least in part, in a digital manner, although instead of doing so sequentially, it acts in what can only be described as a massively parallel form, thus anticipating our contemporary approach to these issues.

Technical description:
Title: The Computer and the Brain.
Author: John von Neumann.
Publisher: Yale Nota Bene/Yale University Press
Edition: Second edition, New York (USA), 2000 (1958).
Pages: 82 pages.
ISBN: 0-300-08473-0


Wednesday, January 28, 2009

The solar terminator.

Shame on me, but I had never heard of the solar terminator. Or, to put it another way, every single time I came across the term in a context other than SCSI cables, I thought it referred to the Terminator of movie fame. I was obviously wrong, and I owe Neal Stephenson for the new tidbit of information I learned today.
They are angling across the terminator —not the robotic assassin of moviedom, but the line between night and day through which our planet incessantly rotates.

(Stephenson: p. 740)