miércoles, 30 de julio de 2008

A geek's love for puns.

Who hasn't "enjoyed" the rapid exchange of puns that geeks quickly engage in as soon as they are set lose in a meeting room? I found Ullman's musings about this quite interesting:
My first reaction to programmer punning had been a certain curiosity. The reason for compulsive punning among computer people seemed interesting, ripe for study. Perhaps this would be my ticket to tenure, I thought. My time spent as a "quality-assurance engineer" would not be seen as falling away from academia but as... fieldwork! I would write papers. I would be invited to the next meeting of the Modern Language Association in San Francisco. There I would tell a learned but computer-illiterate audience how programmers had to work in the relentlessly literal language of code, where one slip of a letter reduced everything to incomprehensibility. The compiler, I would tell them, is a computer program that translates the programmer's code into a set of instructions that can be executed by the microprocessor, the chip at the heart of the machine. The compiler is the entity the programmer must talk to, the creature he or she must make understand the intentions embodied in the code. But this compiler-creature is error-intolerant to a fault; it demands a degree of exactness that is exhausting, painful for an intelligent human being. Leave out a comma, and the compiler halts, affronted at the slightest whiff of error. Fix the comma, run the compiler again, then it halts again, this time at a typo. Puns, I would say, represented a human being's pent-up need for ambiguity. That a word could signify two things at once! And these double meanings could be simultaneously understood! What a relief from the flat-line understanding of a programmer's conversation with the machine!

(People coming up to shake my hand. One of whom tells me about an opening at the University X, for which I should apply immediately. I would be short-listed at once.)

But as my time at Telligentsia progressed, I began to see something more sinister in the programmers' penchant for puns. It wasn't an upwelling of humanistic impulses in the face of the mute machine; it wasn't a cry for the sweet confusion of being human. Quite the contrary, it was an act of disdain for the complicated interchange known as conversation: for its vagaries, lost and meandering trails, half-understandings, and mysterious clarities. For the meaning of a pun is clear, all too clear. It demands a leap in understanding, to the exact place the punner demands. It's the programming of a conversation. Like the GOTO instruction in code, Go here, jump to this place, unconditionally. Forget about the person Chuck Glover, the desire to know something, Raisa Vastnov's hope for an answer to her question. Would. Chuck. How much wood. Duck. Fuck. A ricochet. A question goes out in a direction, to a listener, a potential answerer, then off it flies at some bizarre trajectory. To take a word at the level of sound, to feel absolutely free (and delighted!) to take that word anywhere language suggested, never mind the intention of the person talking to you —there is something fundamentally hostile in a pun.

(Ullman: pp. 59-60)

As for me, I don't know. I think that perhaps Ullman is just over-analyzing here. I like puns. I like the type of jokes that play with our assumptions about language, concepts and situations. Is that typical of a geek? I don't know, but I certainly like it far more than the typical slapstick humor, not to mention "jokes" based on denigrating other people or on bodily functions.

The complexities of event-driven programming.

A good description of event-driven programming:
He was afraid of the mouse. Among everything he had to learn, the mouse seemed the most impenetrable, and the more he read of the e-mail, the more afraid he became. He had never worked with a mouse before; neither had anyone else at Telligentsia, except for Harry, who had once read some code involving a mouse. There were perhaps a few hundred people in the world who knew anything about it, and most of them were working at Apple or at the two or three companies building mice. The documentation from SM Corp., the company that had built the systems Telligentsia was using, consisted of a few barely comprehensible pages, mostly of C-code samples, which Ethan had puzzled over in growing alarm. He felt horribly stupid about this device that had suddenly become so central to the workings of the system. Everything in the graphical user interface was centered on it: Where was the mouse? Was a button pressed? Which one? Single- or double-click? It was an "event-driven" system, code that was constantly being interrupted by the user through the movement and clicking of the mouse, code that was designed to be interrupted by the user's intentions: here, now here, now here. The systems he was used to, in contrast, moved from instruction to instruction under the direction of the programmer's imagination —if this is true, go here; if not, go there. Now he had to rearrange his mind. He had to know how these interruptions were being generated, what happened to them as they percolated up from the mouse, through the mouse device-driver software, through SM's mouse routines, finally to reach his code. He had to understand the mouse.

(Ullman: p. 43)


Obviously, thanks to abstraction layers, we don't need to do this anymore. However, it's still true that event-driven programming requires a completely different mindset. Personally, like Ethan Levin in the novel, I also come from the tradition where you directed the computer instruction after instruction, and I still find the whole event-driven paradigm quite mind-boggling whenever I dare take a quick look. Perhaps that's why I feel more comfortable with scripting and Web development.
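To make the contrast concrete, here is a minimal sketch in C of the two styles (the event names and the canned event stream are made up for illustration; this is not any real windowing API):

    #include <stdio.h>

    /* Sequential style: the programmer's logic dictates the order of execution. */
    static void sequential_style(void)
    {
        int balance = 100;
        if (balance > 50)
            printf("spend\n");
        else
            printf("save\n");
    }

    /* Event-driven style: the user dictates the order; the code only reacts. */
    enum event_type { MOUSE_MOVE, MOUSE_CLICK, QUIT };

    struct event {
        enum event_type type;
        int x, y;
    };

    static void on_mouse_move(const struct event *e)  { printf("pointer at (%d,%d)\n", e->x, e->y); }
    static void on_mouse_click(const struct event *e) { printf("click at (%d,%d)\n", e->x, e->y); }

    static void event_loop(void)
    {
        /* A canned stream of events stands in for the real mouse device driver. */
        struct event stream[] = {
            { MOUSE_MOVE, 10, 20 }, { MOUSE_CLICK, 10, 20 }, { QUIT, 0, 0 }
        };

        for (int i = 0; ; i++) {
            const struct event *e = &stream[i];
            switch (e->type) {            /* dispatch: which interruption is this? */
            case MOUSE_MOVE:  on_mouse_move(e);  break;
            case MOUSE_CLICK: on_mouse_click(e); break;
            case QUIT:        return;
            }
        }
    }

    int main(void)
    {
        sequential_style();
        event_loop();
        return 0;
    }

In the first function the code decides what happens next; in the second, it just sits in a loop and reacts to whatever the user (here, a fake stream of mouse events) throws at it.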

martes, 29 de julio de 2008

Programming as plumbing.

Programming as plumbing:
Ethan sat in his dining room and remembered the moment when his bluff and self-sell suddenly failed him, when he found himself muttering something about there being a "skills mismatch" between himself and the job. And then Harry leaning forward in his chair (with great difficulty, breathing at the mere exertion of shifting his formidable weight), telling him not to worry about not having finished graduate school: "It's all mostly useless by now anyway," he said. "Look, Levin. Programming starts out like it's going to be architecture —all black lines on white paper, theoretical and abstract and spatial and up-in-the-head. Then, right around the time you have to get something fucking working, it has this nasty tendency to turn into plumbing."

"No, no. Lemme think," Harry interrupted himself. "It's more like you're hired as a plumber to work in an old house full of ancient, leaky pipes laid out by some long-gone plumbers who were even weirder that you are. Most of the time you spend scratching your head and thinking: Why the fuck did they do that?"

"Why the fuck did they?" Ethan said.

Which appeared to amuse Harry to no end. "Oh, you know," he went on, laughing hoarsely, "they didn't understand whatever the fuck had come before them, and they just had to get something working in some ridiculous time. Hey, software is just a shitload of pipe fitting you do to get something the hell working. Me," he said, holding up his chewed, nail-torn hands as if for evidence, "I'm just a plumber."

(Ullman: pp. 27-28)

That sentence towards the end should be a classic: "Hey, software is just a shitload of pipe fitting you do to get something the hell working". Who hasn't heard of this or that programming language being praised as an excellent glue to fit things together? For, in the end, that's what a programmer does most of the time. Yes, design is also a part of it. Yes, there are certain techniques learned from engineering disciplines that help. Still, for the most part, the priority is to get the code working, especially in this crazy world of ours where the product had to be released yesterday. The simile is great, I think.

Great description of a programmer mesmerized by his activity.

Sorry for the long quote, but I think this ought to be one of the best descriptions of a programmer (Ethan Levin, the main character in the novel) mesmerized by his programming work:
He was running the compiler when the bug report came across his desk. The compiler, the program that translates a programmer's code into machine-readable bits —Ethan Levin had laid his code before its merciless gaze, and now he stared into his screen to see its verdict. Was he aware of the tester passing behind him? Did he take any notice of how carefully she slipped between his whiteboard and his chair, holding in her stomach so as not even to brush against him? No, of course not. Oh, all right: perhaps peripherally. Literally in his peripheral vision he might have seen her, and the thought "tester" might have formed in his brain, which would have led to the automatically accompanying idea that testers only bring news of trouble, and so why bother with whatever trouble she was bringing him now?

He was occupied by other troubles. Warning messages were scrolling up his screen, the compiler's responses, notices that this and that little thing in his code might not be legal, might not parse according to the rules, might prevent the compiler from turning his routine into an "object file," a little file full of ones and zeros, ready to be linked up with other object files to create the executable program. He wasn't getting fatal errors —no, not on this pass. There was nothing yet that caused the compiler to issue the awful message that began "Fatal error" and then, abruptly, stop. But such fatalities had already befallen him many times this morning, how many times, he'd lost track —twenty, fifty, a hundred? On each occasion, he had fixed whatever little thing wasn't legal, didn't parse, prevented his code from becoming machine-readable bits; and then he ran the compiler again.

Warning. Warning. Warning. Fatal error. Stop.

Fix the little thing in the code that wasn't legal. Run the compiler again.

Warning. Warning. Fatal error. Stop.

Fix the next little thing.

Warning. Warning.

"It'll compile this time" was the only thought Ethan Levin kept steadily in mind, over and over, on that morning of March 5, 1984. Even the phone had to go on ringing —once, twice, three times— until it reached him, three rings to get inside his window of attention so tuned to the nervous refresh rate of the screen: Warning. Warning.

"What! Are you still there?" said the voice on the phone. "You're supposed to be here!"

He sat confused for a moment. Yes, he knew the voice was Joanna's, but the words "there" and "here": What exactly did they mean? Then, all at once, his attention snapped into focus: It was Joanna, he was supposed to take her to the airport, and he was late.

"I'm leaving!", he said.

"I mean now. Are you leaving now?"

"I'm leaving, I'm leaving," he said again, but vacantly, automatically, because despite himself, his eyes had been drawn back to the screen, to the irregular pulse of the messages as they appeared: Warning. Warning.

"Hello? Are you there?"

"Yeah, yeah. I'm here," he said, jusr as the compiler suddenly displayed the message "Fatal error: MAXWINSIZE not defined," and came to a stop.

"Shit!" Ethan Levin muttered under his breath.

"Ethan! You're compiling! I know it!"

"Yeah, yeah, sorry," he said to Joanna McCarty, who, as his girlfriend of four years, knew all too well the sort of exclamations he made when he was programming and compiling. "Sorry," he repeated, but again automatically, because —though he knew better than to do this now— his mind immediately began ranging over the places where MAXWINSIZE should have been defined. On one side of his attention was Joanna, the month-long trip to India she was taking with Paul Ostrick, husband of her best friend, Marsha Ostrick, and the promise Ethan had made to take them to the airport —and be on time! he swore! But on the other was his sudden and unexpected problem of MAXWINSIZE, this branching trail of questions that led off in many directions, curious and puzzling, down which his mind involuntarily started traveling...

"It's an airplane, Ethan! It's not going to wait for a compile!"

"Shit! Yes! I'm leaving now."

"That's like now now, right? Not like a one-more-compile now?"

"Now now," he repeated, as much to himself as to Joanna. "Now now."

Still, even after he'd stood up and grabbed his jacket from the back of his chair, the face of the screen drew him back. He sat down again, reread the messages on the monitor, typed a command to the system that set it searching the entire code library for all occurrences of MAXWINSIZE. Yes, much better now. The system would tell him where the problem was. Then, tomorrow, he'd fix it.

(Ullman: pp. 17-19)

Who hasn't experienced something like this? Programming, unlike many other human activities —especially the ones we do while at work—, does have that extraordinary ability to suck you in. Viewing the code, writing it, compiling it, testing it, you lose track of time, forget whether it's daytime or nighttime, and quite often even disregard basic human needs, such as eating. The only other activity I can think of that's similar in this respect is any artistic endeavor. It shouldn't surprise us then that so many people speak of programming as an art.

Complying with the rules does not guarantee meaning: linguistics and programming.

Interesting reference to Noam Chomsky, unfortunately better known for his political ideas than for his work in linguistics:
Plans, rules, order —I should have known better. The relief I looked forward to at my desk, my perfect test —I should have realized even before I logged on to the system that morning that something else was going to happen. A linguist knows that following the rules is no assurance of anything. "Colorless green ideas sleep furiously," the linguist Noam Chomsky once famously said, his perfectly well-formed sentence: grammatically correct, signifying nothing.

(Ullman: pp. 11-12)

The study of linguistics does have a direct link to that of programming languages —not to mention the field of artificial intelligence. Basically, a language is a system in which patterns, structures, syntax, grammar and rules are all very closely related. How is this any different from a computer or a computer network? More and more we realize that fields that up until now were considered separate are actually far more closely related than we ever thought. I have a feeling this trend will do nothing but deepen in the near future.
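The same point can be made with code. Here is a tiny C program of my own (a contrived example, not anything from the book) that any compiler will happily accept, yet that expresses no meaningful intention whatsoever: grammatically correct, signifying nothing.

    #include <stdio.h>

    /* Well-formed, meaningless: every line parses, the program compiles
       cleanly, and none of it adds up to anything anyone could want. */
    int main(void)
    {
        int colorless = 0;
        int green = colorless * 42;   /* always zero, and never really used */
        double ideas = green + 0.0;   /* a pointless conversion to double   */

        if (ideas != ideas)           /* false for any ordinary number      */
            printf("sleep furiously\n");

        return 0;
    }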

Computers gave us... a life full of waits and pauses.

Roberta Walton, the narrator of the book, starts the novel telling us how she is waiting for an immigration agent to pass her passport through the scanner:
And so we waited. Tick-tock, blink-blink, thirty seconds stretched themselves out one by one, a hole in human experience. Waiting for the system: life today is full of such pauses. The soft clacking of computer keys, then the voice on the telephone telling you, "Just a moment, please." The credit-card reader instructing you "Remove card quickly!" then displaying "Processing. Please, wait." The little hourglass icon on your computer screen reminding you how time is passing and there is nothing you can do about it. The diddler at the bottom of the browser screen going back and forth, back and forth like a caged crazed animal. All the hours the computer is supposedly saving us —I don't believe it, in the sum of things, I thought as I stood there leaning on my luggage cart. It has filled our lives with little wait states like this one, useless wait states, little slices of time in which you can't do anything at all but stand there, sit there, hold the phone —the sort of unoccupied little slices of time no decent computer operating system would tolerate for itself. A computer, waiting like this, would find something useful to do: check for other processes wanting attention, flush a file buffer, refresh a cache, at least.

(Ullman: pp. 4-5)

Sure, it's unfair to depict computers in this manner. For the most part they do indeed save us plenty of time —what if that very same immigration agent had to verify her identity manually, for starters? Still, I think it was an interesting —in the sense of different, unusual— thought. We are perhaps too used to praising the role of new technology in the modern world to notice these small, almost imperceptible waits.

The Bug.

Author of Close to the Machine, yet another cult book among geeks, Ellen Ullman is a computer programmer turned novelist. In The Bug, she describes the fate of a programmer, Ethan Levin, who struggles to fix a UI bug that only shows up sporadically and without any sort of regularity in its behavior. Deeply insecure about his skills because he doesn't have a computer science degree, Levin has to do the debugging in the midst of a personal crisis that drives his neglected girlfriend into the arms of a hippie. Ullman does a pretty decent job of portraying the overall programmer mindset to the lay reader, as well as the environment of extreme competition and even collective madness programmers often have to work in.

Technical description:
Title: The Bug.
Author: Ellen Ullman.
Publisher: Nan A. Talese/Doubleday.
Edition: New York (USA), first edition, 2003.
Pages: 356 pages, including postscripts.
ISBN: 0-385-50860-3

lunes, 28 de julio de 2008

From electricity to fiber optics.

Petzold's parting thoughts in Code are well worth noting here:
While much of this book has focused on using electricity to send signals and information through a wire, a more efficient medium is light transmitted through optical fiber —thin tubes made of glass or polymer that guide the light around corners. Light passing through such optical fibers can achieve data transmission rates in the gigahertz region —some billion bits per second.

So it seems that photons, not electrons, will be responsible for delivering much of the information of the future into our homes and offices; they'll be like faster dots and dashes of Morse code and those careful pulses of blinking light we once used to communicate late-night wisdom to our best friend across the way.

(Petzold: p. 382)

Yet, in spite of all these technological developments, the core of his book remains relevant. He walks us through the main concepts of electrical engineering in order to introduce us to the key concepts of computer science, without which we would miss the foundations of this particular discipline. The concepts we learn along the way (rudimentary notions of information, binary and hexadecimal number systems, circuits, memory addressing, etc.) are still as valuable today as they were twenty years ago.

This is an excellent book. It's highly recommended to anyone who cares to know about how things work in the world of computing.

The limitations of backward compatibility and open architecture.

Petzold considers the problems of backward compatibility:
Even before the introduction of the Macintosh, several companies had begun to create a graphical operating system for the IBM PC and compatibles. In one sense, the Apple developers had an easier job because they were designing the hardware and software together. The Macintosh system software had to support only one type of diskette drive, one type of video display, and two printers. Implementing a graphical operating system for the PC, however, required supporting many different pieces of hardware.

Moreover, although the IBM PC had been introduced just a few years earlier (in 1981), many people had grown accustomed to using their favorite MS-DOS applications and weren't ready to give them up. It was considered very important for a graphical operating system for the PC to run MS-DOS applications as well as applications designed expressly for the new operating system. (The Macintosh didn't run Apple II software primarily because it used a different microprocessor.)

(Petzold: pp. 371-372)


We're still dealing with this problem. Notice that both Linux and Windows have to support a vast array of hardware devices, which Apple doesn't have to worry about. When people talk about the smooth Apple experience, they are simply not comparing apples to apples —yeah, I know, bad pun. Additionally, in the case of Windows, the developers had to deal with backward compatibility for MS-DOS apps until quite recently —was Windows XP the first release that finally broke away from it? Sure, that still doesn't change the reality that one product provides a better experience than the other, but it should definitely put things in perspective when we discuss the relative merits of the software developers involved in these projects.

VisiCalc: the beginning of a revolution.

A tiny little app made all the difference in the world:
The first indication that home computers were going to be much different from their larger and more expensive cousins was probably the application VisiCalc. Designed and programmed by Dan Bricklin (born 1951) and Bob Frankston (born 1949) and introduced in 1979 for the Apple II, VisiCalc used the screen to give the user a two-dimensional view of a spreadsheet. Prior to VisiCalc, a spreadsheet (or worksheet) was a piece of paper with rows and columns generally used for doing series of calculations. VisiCalc replaced the paper with the video display, allowing the user to move around the spreadsheet, enter numbers and formulas, and recalculate everything after a change.

(Petzold: p. 366)

Who would have been able to tell at the time? But it tends to happen this way in different fields of life: an event that few people pay attention to at the very beginning is precisely the one that brings about huge changes. The developments that are welcomed as "revolutionary" —the projects that intend to be revolutionary from the get-go— don't always accomplish much, while a small but innovative idea, something humble that never pretended to bring about any huge change, is quite often what ends up turning things upside down. We have seen it with Linux too. In this case, VisiCalc made it clear that computers could be used by regular folks to perform their routine activities. As a matter of fact, they could help automate some of the more repetitive tasks and, more importantly, speed them up and make them almost totally reliable. That's what the introduction of the personal computer and these little apps changed. Computing was taken out of the university campuses and the research centers, to be brought, first, to businesses and, later, to the home.

C as a high-level assembly language of sorts.

So, how come the C programming language is still so popular after all these years? How come there are so many people out there still using it for their projects? Petzold gives us a clue:
I mentioned earlier that most high-level languages don't include bit-shifting operations or Boolean operations on bits, which are features supported by many processors. C is the exception to this rule. In addition, an important feature of C is its support of pointers, which are essentially numeric memory addresses. Because C has operations that parallel many common processor instructions, C is sometimes categorized as a high-level assembly language. More than any ALGOL-like language, C closely mimics common processor instruction sets.

(Petzold: p. 363)

Simply put, it still is the best programming language for systems programming in general. If you need to build something really complex without resorting to assembly language (an operating system or a heavyweight application, for instance), C is still a safe bet. Which brings up another issue: what if you are not a programmer? Well, I'd say that even if you are not involved in any systems programming at all, you are better off being fluent in C, for most of the operating systems out there are written in that language. You will need it, if only to troubleshoot and understand whichever problem you may come across while running your favorite OS. Let's face it: gaining a decent understanding of how computing works without knowing any C is pretty tough.
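Just to illustrate a couple of the features Petzold mentions, here is a toy example (my own, nothing from the book) showing bit shifts, bitwise Boolean operations and pointers treated as plain memory addresses:

    #include <stdio.h>

    int main(void)
    {
        unsigned char flags = 0x29;               /* binary 0010 1001 */

        /* Bit shifting and bitwise Boolean operations, close to what most
           processors offer as single instructions. */
        unsigned char doubled    = flags << 1;    /* shift left: multiply by 2   */
        unsigned char low_nibble = flags & 0x0F;  /* mask off the high four bits */
        unsigned char with_top   = flags | 0x80;  /* set the topmost bit         */

        printf("%02X %02X %02X %02X\n", flags, doubled, low_nibble, with_top);

        /* Pointers are essentially numeric memory addresses. */
        int value = 1984;
        int *address = &value;    /* take the address of value */
        *address = 2001;          /* write through the address */
        printf("%p now holds %d\n", (void *)address, value);

        return 0;
    }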

UNIX borrowed the hierarchical file system from MS-DOS?

Interesting statement:
The hierarchical file system -and some other features of MS-DOS 2.0- were borrowed from an operating system named UNIX, which was developed in the early 1970s at Bell Telephone Laboratories largely by Ken Thompson (born 1943) and Dennis Ritchie (born 1941). The funny name of the operating system is a play on words: UNIX was originally written as a less hardy version of an earlier operating system named Multics (which stands for Multiplexed Information and Computing Services) that Bell Labs had been codeveloping with MIT and GE.

(Petzold: p. 333)

UNIX borrowed the concept of a hierarchical file system from MS-DOS? Is he sure? As Petzold himself says, UNIX was developed in the early 1970s and MS-DOS didn't see the light until the 1980s. Was UNIX working without any file system until then? If it wasn't a hierarchical file system, what was it then? I find it quite difficult to believe that Petzold's statement is correct, especially since UNIX and a deeply hierarchical file system have come to be almost synonymous over the years, which cannot be said of Microsoft's MS-DOS-derived products. When searching around, all I find is references to Apple's HFS —as, for example, in the link above—, but little or nothing about the general notion of a hierarchical file system. Was it then invented by Apple? No idea, but Petzold's statement sounds quite suspicious to me.

Open architecture and open standards.

By now we all know about the importance of open standards in computing and how the companies that bet on them in the early 1980s ended up winning the home computer war. However, at the time the wisdom of betting on open standards was far from obvious.
If you were designing a new computer system that included a new type of bus, you could choose whether to publish (or otherwise make available) the specifications of the bus or to keep them secret.

If the specifications of a particular bus are made public, other manufacturers —so-called third-party manufacturers— can design and sell expansion boards that work with that bus. The availability of these additional expansion boards makes the computer more useful and hence desirable. More sales of the computer create more of a market for more expansion boards. This phenomenon is the incentive for designers of most small computer systems to adhere to the principle of open architecture, which allows other manufacturers to create peripherals for the computer. Eventually, a bus might be considered an industry-wide standard. Standards have been an important part of the personal computer industry.

The most famous open architecture personal computer was the original IBM PC introduced in the fall of 1981. IBM published a Technical Reference manual for the PC that contained complete circuit diagrams of the entire computer, including all expansion boards that IBM manufactured for it. This manual was an essential tool that enabled many manufacturers to make their own expansion boards for the PC and, in fact, to create entire clones of the PC —computers that were nearly identical to IBM's and ran all the same software.

The descendants of that original IBM PC now account for about 90 percent of the market in desktop computers. Although IBM itself has only a small share of this market, it could very well be that IBM's share is larger than if the original PC had a closed architecture with a proprietary design. The Apple Macintosh was originally designed with a closed architecture, and despite occasional flirtations with open architecture, that original decision possibly explains why the Macintosh currently accounts for less than 10 percent of the desktop market. (Keep in mind that whether a computer system is designed under the principle of open architecture or closed architecture doesn't affect the ability of other companies to write software that runs on the computer. Only the manufacturers of certain video games have restricted other companies from writing software for their systems.)

(Petzold: pp. 302-303)

Sure, we all know who was right. However, we also know that Apple has come back to life and become, once more, one of the most innovative companies in the business. Not only that, but few people would doubt that the main reason why they could afford this level of creativity is precisely because they control both the software and the hardware. It's part of the Apple experience that guarantees a smooth transition between products and a near-perfect compatibility between two different Apple devices.

Likewise, at the end of the 1990s we saw the appearance of a new disruptive phenomenon: the open source movement. Where open standards used to apply mainly to hardware specs —and Microsoft benefitted a lot from that—, Linux and other projects tried to spread the same philosophy to the software world too. That's precisely why the moaning we hear from time to time from Microsoft's top execs about the supposedly Communistic threat of open source is so self-serving. After all, they were the first to benefit from IBM's decision to play fair and share the specs.

In any case, few people doubt that without these open standards the Internet wouldn't exist today and there is a good chance technology wouldn't play such a central role in our societies. It's precisely the adoption of open standards and protocols that made it possible for all this software craze to spread throughout the world in the 1990s.

Subtracting without borrowing.

Petzold's book is full of excellent morsels of computer wisdom... as well as useful tips on mathematics. For example, while discussing the complexities of carrying out addition and subtraction operations on a computer, he shares with us a method to perform subtractions without borrowing (pp. 143-145):
  1. Subtract the subtrahend from a number made up entirely of nines, with as many nines as the larger number in the subtraction has digits (e.g., if the numbers are three digits long, subtract from 999; if they are four digits long, subtract from 9999). The result is the nines' complement of the subtrahend.
  2. Add that result to the original minuend.
  3. Finally, add 1 and subtract the next power of ten (1000 in the three-digit case, 10000 in the four-digit case).
For example, imagine we have to subtract 129 from 216. First of all, we subtract 129 from 999, which gives 870, the nines' complement. Then we add this result to the original minuend, 216, which gives us 1086. Finally, we add 1 (resulting in 1087) and subtract 1000, which leaves a total of 87. All of this is time-consuming for a human being, but easy and fast for a machine, and complement arithmetic of this kind is indeed how subtraction works inside some of the machines we use every day.
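For the curious, here is the same procedure in a few lines of C (my own sketch, fixed to three-digit numbers to keep it simple):

    #include <stdio.h>

    /* Subtract b from a without borrowing, using the nines' complement.
       Both numbers are assumed to be at most three digits, with a >= b. */
    static int subtract_without_borrowing(int a, int b)
    {
        int nines_complement = 999 - b;  /* step 1: subtract the subtrahend from 999 */
        int sum = a + nines_complement;  /* step 2: add the result to the minuend    */
        return sum + 1 - 1000;           /* step 3: add 1 and drop the extra 1000    */
    }

    int main(void)
    {
        /* The worked example from the text: 216 - 129 = 87. */
        printf("%d\n", subtract_without_borrowing(216, 129));
        return 0;
    }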

jueves, 24 de julio de 2008

The bridge-building metaphor.

So much talk over the years about how software development ought to be "like building bridges", and it turns out that perhaps it is closer than we ever thought, although not in the way we intended:
Somewhere, someone's fist is pounding the table again. Why can't we build software the way we build bridges?

Well, maybe we already do.

As OSAF's programmers labored to construct their tower of code, piling bug fixes atop Python objects atop the data repository, I watched the work proceed on the new eastern span of the Bay Bridge. The project, replacing half of the 4.5-mile, double-decker bridge that several hundred thousand vehicles cross each day, was born in the 1990s and first budgeted at a little over a billion dollars. The original design called for a low-slung, unadorned causeway, but political rivalries and local pride led to the adoption of a more ambitious and unique design. The new span, a "self-anchored suspension bridge", would hang from a single tower. A web of cables would stretch down from that lone spire, underneath the roadway and back up to the tower top, in a picturesque array of gargantuan loops. It was going to be not only a beautiful bridge to look at but a conceptually daring bridge, a bootstrapped bridge —a self-referential bridge to warm Douglas Hofstadter's heart.

There was only one problem: Nothing like it had ever been built before. And nobody was eager to tackle it. When the State of California put it out to bid, the lone contractor to throw its hat in the ring came in much higher than expected.

In December 2004, California governor Arnold Schwarzenegger stepped in and suspended the project, declaring that the Bay Area region would have to shoulder more of the ballooning cost of the project and calling for a second look at the bridge design. Never mind that work on half of the bridge, the water-hugging viaduct that would carry motorists for more than a mile on a slow climb up to the future main span, was already very far along, and every morning you could see vehicles swarming up a temporary ramp onto the new roadbed. Schwarzenegger wanted the project scaled back to a less novel and cheaper design. The governor, the state legislature, the state's transportation agency, and local governments spent months bickering and horse-trading. The transportation agency claimed that each day of delay was costing the state $400,000. Finally, in July 2005, a new compromise reaffirmed the fancier single-tower design, to be paid for with bridge toll hikes and other measures, and projected a new finish date for the bridge: 2012 —almost a quarter century after the Loma Prieta earthquake had shaken a chunk of the old bridge deck loose.

As I read about the controversy, I couldn't help thinking of all the software management manuals that used the rigorous procedures and time-tested standards of civil engineering as a cudgel to whack the fickle dreamers of the programming profession over the head. "Software development needs more discipline," they would say. "Nobody ever tried to change the design of a bridge after it was already half-built!"

(Rosenberg: pp. 346-347)

Sounds like a horror story. As a matter of fact, it doesn't sound so different from the story of Chandler's development that Rosenberg narrates in his book: changes of heart, disputes, political controversy, worries about the final cost, delays... In other words, as soon as civil engineers try to build something that has never been done before —something truly innovative— they run into the very same problems software engineers do.

Programming is writing and its corollaries.

After so much pondering about the best possible ways to make software development more like engineering, Rosenberg finally realizes something important: programming, after all, is a lot like writing.
People write programs. That statement is worth pausing over. People write programs. Despite the field's infatuation with metaphors like architecture and bridge-building and its dabbling in alternative models from biology or physics, the act of programming today remains an act of writing —of typing character after character, word after word, line after line. Tools that let programmers create software by manipulating icons and graphic shapes on screen have a long and sometimes successful history (for a time in the 1990s, one such tool, Visual Basic, won phenomenal popularity). But these have generally served as layers of shortcuts sitting on top of the same old text-based code, and sooner or later, to fix any really hard problems, the programmer would end up elbow-deep in that code anyway.

(Rosenberg: pp. 298-299)

Why else would style become such a contentious issue among programmers? He's absolutely right. Programming is like writing, in spite of all the efforts to make it a "scientific" discipline, something reliable, akin to building a bridge. Companies have created lots of IDEs, tools that make it easier for programmers to carry on with their activities, to drag and drop. At the end of the day, though, the programmer is no more than a coder, that label so many would-be engineers find derisive. But it's a reality: the programmer has to write the lines of code, one by one. He may be able to rely on other programmers' work, include libraries, use modules... but whatever he does, he still has to actually write the lines of code. That's precisely why quite a few programmers still prefer to use good old Emacs, vi or some other simple text editor. This wouldn't be possible if the act of programming were something very distinct from writing lines of code. It should be patently obvious. The tools currently available may help in this writing activity, just the same way the computer —and the typewriter before it— helped the writer of a novel, but the core of the activity remains the same: writing line after line.

Now, having said all that, and assuming that the core of programming is actually writing, we can reach some other conclusions. For instance, there are always different writing styles. Some of them may be considered better than others, more efficient than others, even more aesthetically pleasing than others. In other words, programming can be considered, to some extent, an art or, at the very least, a craft. To some people this may be a mind-boggling conclusion, but I think it follows clearly from everything we have seen before.

Jaron Lanier's wild but suggestive ideas.

If the discussion is about software engineering and paradigm shifts, of course the conversation had to turn to Jaron Lanier sooner or later. In particular, Rosenberg mentions his essay Gordian software, published on the Edge website:
Is the entire edifice of software engineering as we know it today a Potemkin village facade? Do we need to start over from the ground up?

Today, one vocal advocate of this view is Jaron Lanier, the computer scientist whose dreadlocked portrait was briefly imprinted on the popular imagination as the guru of virtual reality during that technology's brief craze in the early 1990s. Lanier says that we have fallen into the trap of thinking of arbitrary inventions in computing as "acts of God".

"When you learn about computer science," Lanier said in a 2003 interview, "you learn about the file as if it were an element of nature, like a photon. That's a dangerous mentality. Even if you really can't do anything about it, and you really can't practically write software without files right now, it's still important not to let your brain be bamboozled. You have to remember what's a human invention and what isn't."

The software field feels so much like the movie Groundhog Day, Lanier says today —"It's always the same ideas, over and over again"— because we believe the existing framework of computing is the only one possible. The "great shame of computer science" is that, even as hardware speeds up, software fails to improve. Yet programmers have grown complacent, accepting the unsatisfactory present as immutable.

(...)

Instead of rigid protocols inherited from the telegraph era, Lanier proposed trying to create programs that relate to other programs, and to us, the way our bodies connect with the world. "The world as our nervous systems know it is not based on single point measurements but on surfaces. Put another way, our environment has not necessarily agreed with our bodies in advance on temporal syntax. Our body is a surface that contacts the world on a surface. For instance, our retina sees multiple points of light at once." Why not build software around the same principle of pattern recognition that human beings use to interface with reality? Base it on probability rather than certainty? Have it "try to be an ever better guesser rather than a perfect decoder"?

These ideas have helped the field of robotics make progress in recent times after long years of frustrating failure with the more traditional approach of trying to download perfect models of the world, bit by painful bit, into our machines. "When you de-emphasize protocols and pay attention to patterns on surfaces, you enter into a world of approximation rather than perfection," Lanier wrote. "With protocols you tend to be drawn into all-or-nothing high-wire acts of perfect adherence in at least some aspects of your design. Pattern recognition, in contrast, assumes the constant minor presence of errors and doesn't mind them."

Lanier calls this idea "phenotropic software" (defining it as "the interaction of surfaces"). He readily agrees that his vision of programs that essentially "look at each other" is "very different and radical and strange and high-risk". In phenotropic software, the human interface, the way a program communicates with us, would be the same as the interface the program uses to communicate with other programs —"machine and person access components on the same terms"— and that, he admits, looks inefficient at first. But he maintains that it's a better way to "spend the bounty of Moore's Law," to use the extra speed we get each year from the chips that power computers, than the way we spend it now, on bloated, failure-prone programs.

(Rosenberg: pp. 291-294)


Yes, that's what I thought too. The problem with Lanier's ideas is that they sound great, wild and innovative but... difficult —if not downright impossible— to actually implement in the real world. His appears to be the typical attitude of the researcher always working on the edge or the artist too excited with his own creation to realize that most people care more about functionality than aesthetics, abstract concepts and "potential". Let's face it. Computers are a tool. Yes, they can transform our lives and, to a great extent, have already done so. Still, they are tools, and that's how most people use them. The next time I have to sit down to write some code, if the alternative to paying too much attention to the "file paradigm" is to design some "phenotropic software" that allows for the "interaction of surfaces", I think I will pass. I know. Sorry, I'm boring.

Mind you, this is not to suggest that Lanier's work is useless. I do think it has a place. It's important to have people who always try to push the envelope a bit further, people who are crazy enough to dream up new approaches to the way we do things. That's fine. It's only that those wild ideas obviously need to be polished a bit before they can be applied to our daily lives. As for me, I'm just a regular guy, not a visionary.

miércoles, 23 de julio de 2008

Alan Kay's cellular biology paradigm of software development.

Rosenberg tells us about Alan Kay's wild idea of applying cellular biology to the world of computing:
He is not suggesting, however, that anyone stop at cathedrals. There can be a real science of computing, he believes; we just haven't discovered it yet. "Someday," he says, "we're going to invent software engineering, and we'll be better off." And those discoveries and inventions will take their cues, he thinks, from cellular biology. Ultimately, he says, we need to stop writing software and learn how to grow it instead. We think our computing systems are unmanageably complex, but a biologist —who regularly deals with systems that have many orders of magnitude more moving parts— would see them differently, Kay maintains. To such an observer, "Something like a computer could not possibly be regarded as being particularly complex or large or fast. Slow, small, stupid —that's what computers are."

(Rosenberg: p. 288)

It's the same old idea expressed by Stephen Wolfram in A New Kind of Science: complexity as something that emerges from simple programs, from simple logic. If we were able to find the smallest snippets of computation needed by the different modules that make up a larger program, all we would have to do to create new applications from that moment on is combine and recombine those modules. Of course, the problem is how to find those smallest snippets of computational logic. That's where we all get stuck. Otherwise, the idea is attractive enough, without any doubt.
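As a small illustration of that "complexity out of simple logic" idea, here is the classic elementary cellular automaton exercise (my own toy version, not anything from Kay or Wolfram): a one-line Boolean rule applied over and over produces a surprisingly intricate pattern.

    #include <stdio.h>
    #include <string.h>

    /* Rule 30: each cell's next state is a fixed Boolean function of the cell
       and its two neighbours. The rule is trivial; the output is anything but. */
    #define WIDTH 64
    #define STEPS 24

    int main(void)
    {
        unsigned char row[WIDTH] = {0}, next[WIDTH];
        row[WIDTH / 2] = 1;                       /* start from a single black cell */

        for (int step = 0; step < STEPS; step++) {
            for (int i = 0; i < WIDTH; i++)
                putchar(row[i] ? '#' : ' ');
            putchar('\n');

            for (int i = 0; i < WIDTH; i++) {
                unsigned char left  = row[(i + WIDTH - 1) % WIDTH];
                unsigned char right = row[(i + 1) % WIDTH];
                next[i] = left ^ (row[i] | right);   /* rule 30 as a Boolean expression */
            }
            memcpy(row, next, sizeof row);
        }
        return 0;
    }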

The Law of Leaky Abstractions and understanding the basics.

Sooner or later, software developers will have to come to the realization that theirs is hardly the only human activity that ends up creating bugs. Sure, other disciplines don't call them by that name, but that doesn't mean they are not haunted by human errors, miscalculations, the passage of time, erosion, bad materials, etc. As a matter of fact, what makes software stand out in this respect is that there is an official (and quite often publicly available) list of all these errors associated with every project. As Joel Spolsky once explained, the complexity underlying our shiny tools is prone to leak through:
In an essay titled The Law of Leaky Abstractions, Joel Spolsky wrote, "All non-trivial abstractions, to some degree, are leaky. Abstractions fail. Sometimes a little, sometimes a lot. There's leakage. Things go wrong." For users this means that sometimes your computer behaves in bizarre, perplexing ways, and sometimes you will want to, as Mitch Kapor said in his Software Design Manifesto, throw it out the window. For programmers it means that new tools and ideas that bundle up some bit of low-level computing complexity and package it in a new, easier-to-manipulate abstraction are great, but only until they break. Then all that hidden complexity leaks back into their work. In theory, the handy new top layer allows programmers to forget about the mess below it; in practice, the programmer still needs to understand that mess, because eventually he is going to land in it.

(Rosenberg: pp. 281-282)


We come across this sort of situation fairly often. It wouldn't be the first time my wife runs into a showstopper problem while putting together a set of web pages in iWeb. The application does wonders as long as you don't stray too far from what it deems the regular path. As soon as you need to do something different or special (or sometimes, even if you don't), you run into a problem. The GUI fails to display things just the way you want them. This table or that font just doesn't look the way it should. Yes, WYSIWYG editors are cool. They allow your regular Joe to edit web pages without having a clue as to how HTML truly works. On the other hand, as soon as something fails you need to take a look inside, read the actual code (and, oh, by the way, what an awful job these tools do of writing the code underlying the whole thing!) and know what you are doing if you truly want to fix it. In other words, you still need to be familiar enough with the underlying technology you are using. This is precisely the main problem with these tools. They start out as a way to provide some help, a way to automate boring, mindless chores. Yet very quickly they turn into magic tools that will supposedly let you do whatever you need to do without having a clue as to what you are actually doing or how things truly work behind the pretty interface. That's where we run into trouble. It's like relying on speech recognition software without being able to read. As long as the program gets it right, you're OK. But as soon as it makes a mistake, you will be totally lost. Actually, it's even worse: even when the program gets it right, there is no way for you to know.

Two different roles of the software programmer, according to Simonyi.

Charles Simonyi, the famous Hungarian-American programmer who oversaw the creation of Microsoft Office, distinguishes between two separate roles in software design:
In Simonyi's view, one source of the trouble is that we expect programmers to do too much. We ask them to be experts at extracting knowledge from nonprogrammers in order to figure out what to build; then we demand that they be experts at communicating those wishes to the computer. "There are two meanings to software design," he says. "One is designing the artifact we're trying to implement. The other is the sheer software engineering to make that artifact come into being. I believe these are two separate roles -the subject matter expert and the software engineer."

(Rosenberg: pp. 278-279)

I imagine he is referring to the same sort of separation of roles we see between the architect (or the engineer) and those who lay the bricks and finally turn the original plans into reality. If that's what he means, it definitely is a real distinction. On the one hand, you have the programmer as designer, the professional who makes the long-term decisions about the overall design of the application, the language to be used, the resources that will be needed, etc. Then, on the other hand, you have the programmer as coder, the one who actually writes the code and implements the plans laid out by the designer. We're definitely talking about two separate roles, two different phases of software development. However, what's not so clear to me is that they need to be filled by different people too. Perhaps that should be the case, ideally. However, programming has not reached the level of reliability and abstraction that's so prevalent in other fields, such as the construction business, for instance. It's not so clear to me that we can safely divide software development teams into groups of designers and coders, the latter just implementing whatever the former lay out on paper. There is a good amount of wishful thinking in that idea, I'm afraid. As a matter of fact, you could very well end up with a worst-case scenario: software designers who have little experience in the nitty-gritty business of actual programming (this is not, obviously, Charles Simonyi's case) putting together their pie-in-the-sky schemes, while programmers who have never been asked to take a bird's-eye view of their projects keep grinding out spiritless line of code after line of code without having a clue how the whole thing fits together. I seriously doubt this would represent a step forward for the world of software development as we know it.

The physics of software.

Rosenberg brings up yet another interesting point when it comes to software development:
" 'Software engineering' is something of an oxymoron," L. Peter Deutsch, a software veteran who worked at the fabled Xerox Palo Alto Research Center in the seventies and eighties, has said. "It's very difficult to have real engineering before you have physics, and there isn't anything even close to a physics for software."

(Rosenberg: p. 276)

However, is it even possible to find the "physics of software"? Is there one? The natural answer to that question is that, obviously, there is none. Software doesn't have a physical existence of any type whatsoever. It only exists in the form of bits and code and, prior to that, in a programmer's mind or, at best, in some well-documented specs. In that case, we could assume that Deutsch is right on the money: there is not (and there never will be) any true software engineering discipline. And yet, is Deutsch's premise correct? Is it true that only physical things can be "engineered"? I suppose it all depends on the approach one takes to the issue. If we start from a mechanistic point of view, the assumption that only something physical can be engineered appears to be true. However, if we take a more systemic approach, it would appear as if everything is information, bits that flow around and can be channeled in one form or another. If this latter vision were true, then the problem is not so much that software engineering is impossible but rather that we'd have to modify our own concept of what engineering means.

The invisibility of software.

Rosenberg speculates that the invisibility of software makes it more difficult to design it as reliably as (supposedly) happens in the physical world:
As we have seen, software sometimes feels intractable because it is invisible. Things that we can't see are difficult to imagine, and highly complex things that we can't see are very difficult to communicate. But invisibility isn't the only issue. We can't see electricity or magnetism or gravity, but we can reliably predict their behavior for most practical purposes. Yet simply getting a piece of software to behave consistently so that you can diagnose a problem often remains beyond our reach.

(Rosenberg: p. 275)

Since he discusses this in the context of GUI programming, it automatically triggered an idea in my mind: would it help if we were to build Shockwave Flash mockups? Would that make software more visible and, at the very least, help us understand the way the different GUI components interact with each other? It may be worth a try. Obviously, I don't think it would be of any help when it comes to the internal logic of a program, but it may be of some use in the design of the graphical front-end, especially when it comes to obtaining some quick up-front input from the customers.

A public infrastructure of servers.

At some point during Chandler's development, Kapor rethinks its peer-to-peer foundations:
Kapor was rethinking the merits of peer-to-peer anyway. At a June meeting where he formally announced the adoption of a server-based sharing design to his staff, he explained, "I've had a significant change of point of view. There was a kind of frontier idealism that was well intentioned but not practical on my part. The issue is about empowering people. It's not about the infrastructure. Maybe we need a robust public infrastructure of servers to let people do what they want to do. My and OSAF's original position was, electricity is good, therefore everyone should have their own power plant! Unconsciously, I always imagined that user empowerment somehow meant a server-free or server-light environment. Now I think that's actually wrong."

(Rosenberg: p. 214)

Could that "public infrastructure of servers" be what Google and others have been building in the past few years? The time when individuals can have their files "on the cloud" is here already. Millions of people have their music, pictures and opinions hosted on sites such as Google Documents, Blogger, MySpace, Facebook, Picassa, Shutterfly, Flickr, YouTube and many others. Right now they access all this information via snappy web applications written in AJAX but there is nothing stopping us from adding these services to regular, old-style rich apps. Actually, the folks at the Mozilla Labs are already working on some Firefox extensions to do exactly that. In this respect, it could very well be that the project to build Chandler started too early for its own sake.

martes, 22 de julio de 2008

Boolean algebra as the language of the brain.

Petzold's book is a true gem. Its aim is to introduce us to all the main theories underlying the discipline of Computer Science, at least in a very basic manner. However, along the way it also displays some magnificent examples of philosophical musings about technology and the world of computing, such as the following comments about Boolean algebra:
The title of Boole's 1854 book [An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities] suggests an ambitious motivation: Because the rational human brain uses logic to think, if we were to find a way in which logic can be represented by mathematics, we would also have a mathematical description of how the brain works. Of course, nowadays this view of the mind seems to us quite naive. (Either that or it's way ahead of its time.)

(Petzold: p. 87)

Sure, it sounds like just one more far-fetched attempt to uncover the deep secrets of the human brain —one of many devised by ambitious thinkers throughout the centuries— but, let's be fair, without people like Boole our world wouldn't be what it is today. The development of our civilization as a whole depends on crazy projects like this.

lunes, 21 de julio de 2008

WebDAV, peer-to-peer and the influence of hype.

Who doesn't remember the big peer-to-peer trend of a few years back? Napster showed up on the radar, quickly became the next big thing and... well, quickly went away too. Yes, it also served a purpose. Who thinks we'd have the iTunes store today without that little experiment? Still, it's also true that peer-to-peer became fashionable, trendy, cool, the thing to do, and all of a sudden everybody wanted to fit the framework into whichever project they were working on. According to what we read in Dreaming in Code, that's precisely what happened to Kapor and the team behind Chandler:

If WebDAV could do it, why was it so hard for Chandler? Chandler's peer-to-peer approach meant there was no central server to be what developers call, with a kind of flip reverence, "the source of truth". WebDAV's server stored the document, knew what was happening to it, and could coordinate messages about its status to multiple users. Under a decentralized peer-to-peer approach, multiple copies of a document can proliferate with no master copy to rely on, no authority to turn to.

Life is harder without a "source of truth". For programmers as for other human beings, canonical authority can be convenient. It rescues you from having to figure out how to adjudicate dilemmas on your own. After just a few weeks at OSAF, Dusseault became convinced that the peer-to-peer road to Chandler sharing was likely to prove a dead end. The project had little to show for its efforts to date anyways. "But it was like, we're doing peer-to-peer. We have to. We said we would. We decided to."

(Rosenberg: p. 213)

WebDAV was certainly much better suited to the project they had in mind than peer-to-peer was. The same could be said of the traditional client-server approach. So, what's the moral of the story? Even the best hackers make mistakes when they let themselves be influenced by hype.
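To see why that "source of truth" matters, here is a rough sketch of the difference (a toy example of my own, not Chandler's or WebDAV's actual design): with a central server, stale writes are simply rejected against the server's revision counter; between peers, two concurrent edits are just a conflict that nobody has the authority to resolve.

class Server:
    """Central store: its revision counter is the single source of truth."""
    def __init__(self, text=""):
        self.text, self.revision = text, 0

    def update(self, new_text, based_on_revision):
        """Accept a write only if the author saw the latest revision."""
        if based_on_revision != self.revision:
            raise ValueError("stale copy: refresh before writing")
        self.text, self.revision = new_text, self.revision + 1


def merge_peers(copy_a, copy_b, common_ancestor):
    """Peer-to-peer: with no authority, concurrent edits are just a conflict."""
    if copy_a == copy_b:
        return copy_a
    if copy_a == common_ancestor:   # only B changed since the last sync
        return copy_b
    if copy_b == common_ancestor:   # only A changed since the last sync
        return copy_a
    raise ValueError("conflict: both peers diverged and nobody can adjudicate")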

An innovative feature for a PIM: stamping.

An interesting feature:
Yin explained the latest thinking on items. Most PIM programs required users to decide up front, when they created a new item, what it was: Were you creating a new email? A calendar event? A to-do task? Chandler would instead let you sit down, start typing a note, and decide later what kind of item it was. Like the human body's undifferentiated stem cells, notes would begin life with the potential to grow in different directions. This design aimed to liberate the basic act of entering information into the program from the imprisoning silos. It also made room for Yin's proposed solution to the item mutability problem: The mechanism users would employ to specify the "kind-ness" of an item would be called stamping.

Say you had typed a note —a couple of sentences about a meeting— and then wanted to put the meeting on the calendar. You would stamp the note as an event. Chandler's detail view would add fields that let you specify a date and a time; your generic note was now an event. Later, if you wanted to invite a colleague to the meeting, you could take the same note and stamp it as an email. A "to" field and a subject line would appear in the detail view. You would fill it out and click on a "send" button.

(Rosenberg: pp. 189-190)
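For what it's worth, the stamping idea described in that passage can be sketched in a few lines of Python. The class and field names below are made up for illustration and have nothing to do with Chandler's actual data model:

# Toy sketch of "stamping": a generic note starts life with just a body
# and acquires extra attributes when stamped as an event or as mail.
class Note:
    def __init__(self, body):
        self.body = body
        self.stamps = set()

    def stamp_as_event(self, start, end):
        """Stamping as an event adds calendar fields to the same item."""
        self.stamps.add("event")
        self.start, self.end = start, end

    def stamp_as_mail(self, to, subject):
        """Stamping as mail adds addressing fields; the item is still a note."""
        self.stamps.add("mail")
        self.to, self.subject = to, subject

note = Note("Quarterly planning meeting, agenda TBD")
note.stamp_as_event("2008-07-22 10:00", "2008-07-22 11:00")
note.stamp_as_mail("colleague@example.com", "Quarterly planning meeting")
print(note.stamps)  # both 'event' and 'mail': one item, several kinds at once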

I don't think any other existing PIM software lets you do that. Apparently, many of these innovative features in Chandler are actually carried over from Lotus Agenda, an old piece of software that Kapor wrote back in his Lotus days. It's a pity that today's PIM software is, for the most part, quite run-of-the-mill. When it comes to doing simple things (e.g., dealing with email or keeping appointments on a digital calendar), these applications are too complex and prone to corrupting our (valuable) data. As a matter of fact, it wouldn't be the first time I try one of these fancy applications only to end up reverting to good old, time-tested mutt. No, mutt is not cool and fancy, with no bells and whistles to speak of, but it is extremely flexible, it lets me configure it precisely the way I want and, above all, I can trust it with my email. It won't corrupt the data as soon as the email folders grow too large, forcing me to deal with an emergency at the worst possible moment. So, if I am going to use one of these newfangled applications, it had better be something different, creative, innovative. It had better be something that seamlessly links my email to my calendar, my tasks and my contacts. Otherwise, why even bother with a new app?

Tree structures vs. semi-lattices (and the advantage of organically grown, dynamic structures).

I found the following two paragraphs quite inspiring:
One day in April 2004, Chao Lam sent Mimi Yin a link to an article that he had found in a blog posting by a writer named Clay Shirky, a veteran commentator on the dynamics of online communication. Shirky had written about his rediscovery of an old article by Christopher Alexander, the philosopher-architect whose concept of "patterns" had inspired ferment in the programming world. The 1965 article titled A City Is Not a Tree analyzed the failing of planned communities by observing that typically they have been designed as "tree structures". "Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole. The enormity of this restriction is difficult to grasp. It is a little as though the members of a family were not free to make friends outside the family, except when the family as a whole made a friendship."

Real cities that have grown organically —and real structures of human relationships— are instead laid out as "semi-lattices", in Alexander's terminology. A semi-lattice is a looser structure than a tree; it is still hierarchical to a degree but allows subsets to overlap. Why do architectural designs and planned communities always end up as "tree structures"? Alexander suggests that the semi-lattice is more complex and harder to visualize and that we inevitably gravitate toward the more easily graspable tree. But this "mania every simpleminded person has for putting things with the same name into the same basket" results in urban designs that are artificially constrained and deadening. "When we think in terms of trees, we are trading the humanity and richness of the living city for a conceptual simplicity which benefits only designers, planners, administrators, and developers. Every time a piece of a city is torn out, and a tree made to replace the semi-lattice that was there before, the city takes a further step toward dissociation."

(Rosenberg: pp. 184-185)
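A trivial way to see the difference in code (with invented data): in a tree every item lives in exactly one container, while in a semi-lattice collections are free to overlap, much as tags overlap where folders cannot.

# Tree: strict containment, one parent per item.
tree = {
    "Work": ["budget.xls", "minutes.txt"],
    "Personal": ["trip-photos.zip"],
}

# Semi-lattice: overlapping sets; the same item may belong to several.
semi_lattice = {
    "Work": {"budget.xls", "minutes.txt", "trip-photos.zip"},
    "Travel": {"trip-photos.zip", "minutes.txt"},
}

overlap = semi_lattice["Work"] & semi_lattice["Travel"]
print(overlap)  # the two shared items: impossible to express in a pure tree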

Aside from the obvious implications all this has for urban planning, there are also consequences for software development. The traditional approach viewed programming as something akin to building a bridge: a huge work of engineering that needed to be planned way ahead, discussed, negotiated, documented and supervised before even laying the first stone. That approach ought to give way to a different, more dynamic one, and this is precisely what has been happening over the last decade or so with the rise of agile methodologies and, in general, the open source way of doing things. Software is now understood in a more dynamic manner: as a flow, as something more amorphous, difficult to comprehend and control, an ever-changing set of specifications and features that needs to be navigated through (surfed, even) more than controlled. Reality is too rich and diverse to be shoehorned into a predefined shape.

Now, what I find quite interesting in all this is that the above description is quite compatible with the works of Gilles Deleuze, Félix Guattari and Manuel de Landa. Reality is to be described as a rhizome instead of a tree. Hierarchies may still exist, but in a completely different form. They are not absolute anymore; they depend heavily on the position they (and we) occupy in the whole landscape of things. I find it hard to believe that a philosophy like this will not rise to take center stage in our century, replacing the outdated mechanistic approach of the past.

Linus Torvalds on avoiding large projects.

In accordance with the principles stated in a previous post about developing in Internet time, open source projects tend to stay away from large and ambitious plans. This is not to say that they never reach the status of large projects, of course. However, it does mean that they usually start as small projects that try to "scratch an itch" and, given enough interest and a good amount of contributions, may grow into something as large as the Linux kernel, GNOME or KDE. These ideas were expressed pretty well by Linus Torvalds in an interview published by Linux Times in June 2004, where he was asked whether he had any advice for people starting large open source projects.
"Nobody should start to undertake a large project," Torvalds snapped. "You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or, worse, you might be scared away by the sheer size of the work you envision. So start small and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly overdesigned".

(Rosenberg: p. 174)

Once more, the emphasis is on a project that stays small, releases quickly and responds nimbly to users' feedback. This mindset is at the very core of the open source development model. Commercial software companies may stress that their approach is quite different, refusing to treat customers as beta testers. However, that claim is quite disingenuous. Anybody who has gone through the experience of running the first public release of any software product knows what I mean.

Developing software in "Internet time".

When it comes to software development, something changed in the last 15 years or so. Sadly, we have not figured out how to write software without bugs, nor have we come up with a way that allows non-programmers to put applications together. What has definitely changed, though, is how long programmers are allowed to sit on a project before they release the code. Let's put it this way: code has become big business, which means that the time that can be dedicated to the development of a particular product has shrunk significantly, since there is a need to monetize it as soon as possible. And what changed all that? The Internet and, above all, Netscape. Michael Toy, an ex-Netscape employee, has an obvious bias toward developing software in what has come to be known as Internet time:

"... I frankly admit that I am heavily biased toward: Let's ship something and put it in people's hands and learn from it quickly. That's way more fun, way more interesting, and, I think, a much better way to do things than to be sure that what we're doing is going to last for ten or fifteen years. It looks like Chandler is trying to be very architecturally sound and to be almost infinitely willing to delay the appearance of the thing in order to have good architecture. It's Mitch's money, so he can make that trade-off any way he wants. And it could be that that willingness to go slow is going to pay off hugely in the future. But it's really hard for someone who wants to ship software next week."

(Rosenberg: p. 171)

It's the very same approach taken by open source development: release early and release often, then make changes based on users' feedback. This is quite far from the quest for perfection shared by pretty much every early software engineer. Back then, the aspiration was to "write software like we build bridges". I'd say the overall attitude has changed now. We don't think we can build the perfect program anymore. We prefer to think of it all as a process, not an end product. Even better, it is a process that never ends. We put together a prototype and release it out there. Users give it a try and provide some feedback (some of them may even provide code too!). Then we take that and build upon the original prototype. In other words, software development is less software engineering (building a definite product from well-defined plans) than a craft or an art that is always trying to perfect itself through a set of so-called good practices. This is the huge shift in mentality that Internet time brought about in the field.

viernes, 18 de julio de 2008

The rise of the web-centered paradigm.

Already back in 2003, the Chandler programming team had to answer a key question at the beginning of their development effort: should they write a webapp or a traditional, heavyweight desktop application?

Hertzfeld, impatient to move Chandler along, proposes a radical idea: Mozilla, he points out, is already structured to incorporate other programs as plug-ins. Why not build Chandler itself as one big Mozilla plug-in? Of course, he admits, there'd be problems. A browser-based design would certainly require a lot of rethinking of Chandler's goals. But relying on the browser's interface would save the programmers enormous amounts of the labor involved in building a new interface themselves.

"There's so much work ahead of us. It would be great to strap on some booster rockets," he says.

Michael Toy had often brought up the Mozilla option himself, but this time he raises cautions. "It's been forty thousand years since the invention of the Internet, and we still don't have a way for dumb people to make Web sites that are useful. And a Web browser is not a very good interface to something that is not the Web. It just seems like we'd be strapping a bad backpack on before we start walking".

(Rosenberg: pp. 154-155)

This was obviously before AJAX, Google Maps, Google Mail and the whole slew of other cool web applications that Google (among others) has released in the past few years. I'd say we have finally reached a point where a webapp can do almost as much as a typical desktop application, and that is certainly already the case for things like email, tasks and calendars. As a matter of fact, plenty of users already rely on web email for their work and personal communications. It's reliable, fast enough, OS independent, relieves the user of worrying about backups and, above all, it's mobile. It finally looks as if the time has come for the web to take over and render the underlying operating system almost irrelevant. Many could see this coming as early as the second half of the 1990s, but it didn't become a real possibility until the developments of the past couple of years.

Work meetings that go into an infinite loop.

Ah, how the following exchange reminded me of so many work meetings!
"The question is," said Mitch Kapor, deep in the middle of a long meeting in a series of long meetings, "How do we sequence things to avoid spending an infinite amount of time before anything useful happens?"

"It's only infinite if you're stuck in a loop," Hertzfeld replied.

(Rosenberg: p. 152)

Vitruvius applied to software design.

Inspiration can truly come from anywhere, but when it comes to software design there is clearly a connection with architecture, as Kapor noted:
Software design, Kapor argued, was not simply a matter of hanging attractive graphics atop the programmer's code. It was the fundamental creative act of imagining the user's needs and devising structures in software to fulfill those needs. As the architect draws up blueprints for the construction team, so the software designer should create the floor plans and elevation views from which software engineers would work. Reaching back to ancient Rome, Kapor proposed applying to software the architecture theorist Vitruvius' principles of good design: firmness —sound structure, no bugs; commodity —"A program should be suitable for the purposes for which it was intended"; delight —"The experience of using the program should be a pleasurable one".

(Rosenberg: p. 149)

The terms may sound outdated, but the idea still applies: well-designed software should be solid and consistent, meet its requirements and be a joy to use. Apple's designers have always put this advice to good use.

jueves, 17 de julio de 2008

The paradox of reusable code.

The dream of code reuse has been with us for quite some time. It promises to speed up the time it takes to write a given project and, even more importantly, it also promises sturdy, well-tested components that may, over time, become completely free of bugs. This is what people mean when they talk about modularity. Well, as Rosenberg explains in his book, code reuse is widespread these days and it does help solve some of the problems programmers had to deal with in the past; in particular, it does shorten the time to completion. However, it is not free of problems of its own.
Here is one of the paradoxes of the reusable software dream that programmers keep rediscovering: There is almost always something you can pull off the shelf that will satisfy many of your needs. But usually the parts of what you need done that your off-the-shelf code won't handle are the very parts that make your new project different, unique, innovative —and they're why you're building it in the first place.

(Rosenberg: p. 102)

This is almost true by definition. A reusable component exists because a relatively large number of people had a particular need in the past and the component was written to satisfy that need. However, any project that is truly unique and innovative will, by definition, have needs that didn't exist before. It's one of those "d'oh!" moments, isn't it?

However, there is another side to this paradox that's even more worrisome. What happens if a project decides to use a particular reusable component from the get-go and later on finds the need to innovate and/or make deep changes? Either they have to invest the time to learn the component inside out (and we are talking about large and difficult pieces of software here, such as Zope), or they have no choice but to shoehorn their innovations into it. Needless to say, this leads to more bugs and, eventually, to an unmaintainable project.

miércoles, 16 de julio de 2008

Dreaming in Code.

The author tells us the story of a group of programmers who are trying to write a killer PIM app, something much better than MS Outlook and Exchange, which are overkill for most small businesses (not to mention personal use). They are looking to build an open source, peer-to-peer application that aspires to be far more flexible than anything seen so far. More importantly, the brains behind the whole operation happens to be none other than Mitch Kapor, the legendary programmer behind Lotus 1-2-3, and the name of the new application is Chandler. In any case, perhaps far more important than the particular story Rosenberg tells about the experience of programming this application are all the wonderful reflections about the history (and the job) of programming software that he shares with us.

Technical description:
Title: Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software.
Author: Scott Rosenberg.
Publisher: Crown Publishers.
Edition: New York (USA), 2007.
Pages: 400 pages, including index.
ISBN: 978-1-4000-8246-9