#0055
Chess stories

I've already mentioned my friend and MIT classmate Frank Model in a few entries. Note 1 After receiving his Bachelor's degree from MIT in 1963, Frank went up the Charles River to Harvard to pursue a doctorate in Chemistry. A good chess player, he often played in informal lunchtime competitions. When he found out that I had access to the Greenblatt computer chess program Mac Hack VI, he asked me if he could play against it.

Now, Frank is also a poker player, and perhaps for that reason, is adept at "psyching out" his opposition. In principle, chess is a deterministic game, so in theory, that sort of tactic shouldn't do you any good. But in practice, if you can rattle your opponent, he's apt to make some mistake. I think that Frank was able to make use of such techniques even in chess. If he made a move firmly, and stared meaningfully at the other player, for instance, the other player might start worrying about some important combination that he might have missed.

The trouble was, the computer can't be rattled. All moves are the same to a computer program, just grist for its mathematical analysis engine. The computer monitor displays no particular reaction at all - it just accepts your move, and then remains silent until it has a reply. The only clue as to what it's doing is that it might take a bit longer to compute some replies than others, but there's not even much variation there. So my impression was that it was Frank who got a bit rattled in his first game. He made some mistakes, and he lost.

Frank was not particularly pleased with the result, because I think he knew that he could have played better than he had, and also that he was probably a better player than the computer program was at that point in its development. The computer can't be said to have "psyched him out", at least not on purpose, because there was no such logic in the program. But perhaps it did so indirectly, by having, in effect, the ultimate "poker face".

So Frank insisted on a rematch. I don't know what he did to prepare himself mentally for the second game, but as I recall, he trounced Mac Hack VI handily. Note 2

I've talked in an earlier entry, "I resign", about the anthropomorphization of computers. At the current state of the art, the attribution of human-like emotions to a computer generally comes from the humans involved, and not from the computer's program.

A famous program in the field of Artificial Intelligence, called "ELIZA", was written by Joseph Weizenbaum, a professor at MIT in the sixties. Note 3 Its best-known script, called "DOCTOR", simulated the responses of a Rogerian psychotherapist. Weizenbaum said that ELIZA "[parodied] the responses of a nondirectional psychotherapist in an initial psychiatric interview." Note 4 He made the program operate in a therapeutic context to "sidestep the problem of giving the program a data base of real-world knowledge", Note 5 because a therapist can get away with replying with vague statements without actually understanding what the "patient" is discussing.

ELIZA worked using very simple grammatical parsing of the sentences typed in by a user, often making minor modifications to a sentence and turning it back in its reply. For instance, if a user typed, "I hate my father", the first thing the parser would do would be to change the "I" to "you" and the "my" to "your", giving "you hate your father". It could then reply with "Why do you hate your father?", or "You say you hate your father". Yet with trivial processing like that, the program could be so convincing that people would become emotionally involved in their conversations with it.
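
Just to give the flavor of that kind of keyword-and-reassembly processing, here is a minimal sketch in Python. It is purely illustrative - Weizenbaum's actual program was far more elaborate, and none of the patterns, names, or replies below are taken from it.

    import random
    import re

    # Crude pronoun "reflection", as described above: "I" -> "you", "my" -> "your", ...
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # A couple of keyword patterns with reply templates; {0} is filled in with
    # the reflected remainder of the user's sentence.
    RULES = [
        (re.compile(r"i hate (.*)", re.IGNORECASE),
         ["Why do you hate {0}?", "You say you hate {0}."]),
        (re.compile(r"i am (.*)", re.IGNORECASE),
         ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    ]

    FALLBACKS = ["Please go on.", "Tell me more about that.", "Why do you say that?"]

    def reflect(fragment):
        """Swap first- and second-person words so a phrase can be echoed back."""
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(sentence):
        for pattern, templates in RULES:
            match = pattern.match(sentence.strip().rstrip(".!"))
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(FALLBACKS)   # the vague, therapist-style reply

    print(respond("I hate my father"))    # e.g., "Why do you hate your father?"

Even a toy like this can produce surprisingly plausible exchanges, which is exactly what made ELIZA so seductive.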

I found that to be the case myself when, some time in the seventies, I programmed a version of ELIZA on an early microcomputer that was being used by Micronetic Systems, a start-up company I worked at after leaving MIT. A secretary at the firm, over lunch, used to pour her heart out to the program, "discussing" with it important issues in her personal life. Then, for privacy, she would shred the teletype paper containing the dialog. The extent to which she attributed human characteristics to a very simple computer program, even after we explained it to her, was amazing to me.

Back to Mac Hack VI, Richard Greenblatt's chess program. One of the graduate students in the lab, Carl Hewitt, used to play against the program to improve his chess. A game would proceed until he got into some difficulty, revealing that some earlier move of his had been a bad one. Like many computer programs, Mac Hack VI had an "Undo" command, actually called "Unmove", which was invoked by typing the single character <Ctrl-u> (we are now all familiar with computer keyboards, so you know that means to type the character "u" while holding the "Ctrl" key). Each <Ctrl-u> would back up one "ply" (that is, a move by either the computer or the user, whichever had been last). Upon realizing that he had made a bad decision earlier in a game, Carl would just press <Ctrl-u> repeatedly, until he had backed up to the point of his error. He would then make a different and hopefully better move, and the game would proceed.
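
Providing such a command is easy if the program simply remembers the position after every ply. Here is a hypothetical sketch of the idea in Python - nothing like Greenblatt's actual implementation, just the concept:

    class ChessSession:
        """Toy illustration of "Unmove": every move pushes the resulting
        position onto a history list, and each <Ctrl-u> pops one ply off."""

        def __init__(self, initial_position="starting position"):
            self.history = [initial_position]    # positions after each ply

        def make_move(self, new_position):
            self.history.append(new_position)    # one more ply played

        def unmove(self):
            if len(self.history) > 1:            # never discard the initial position
                self.history.pop()               # back up exactly one ply
            return self.history[-1]              # the position now on the board

    session = ChessSession()
    session.make_move("position after 1. e4")
    session.make_move("position after 1... e5")
    print(session.unmove())    # one <Ctrl-u>:  "position after 1. e4"
    print(session.unmove())    # two <Ctrl-u>s: "starting position"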

This was, and still is, a great way to improve your chess. A human opponent would generally not tolerate that sort of play, but a computer, obviously, doesn't care, so "un-moving" when playing against a computer is a fine way to learn. Sometimes a user would type a single <Ctrl-u> just to take back a mistyped move, but a longer sequence of <Ctrl-u>s could be used to return to an earlier point in the game.

Except that one day, when Carl typed his fourth successive <Ctrl-u> while backing out of a bad position, the program suddenly responded with the message, "OK, Hewitt, I've had it with your cheating", and the operating system then logged him out.

Although Carl was briefly stunned, he of course quickly figured out what had happened. Someone had altered the chess program, adding logic to watch for four successive <Ctrl-u>s if the user's log-in name was "CEH" (that would be Carl). If it saw them, it delivered the above message and issued a "log-out" command. It was just a joke - hacking was always alive and well in the Artificial Intelligence group.
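
In a modern language, the added check would amount to only a few lines. The sketch below is entirely hypothetical - the names, and for that matter the language, have nothing to do with the real program - but it shows the shape of the prank:

    import getpass
    import sys

    CTRL_U = "\x15"          # the control code sent by typing <Ctrl-u>
    consecutive_undos = 0    # reset whenever any other character arrives

    def undo_one_ply():
        print("(backing up one ply)")        # stand-in for the real unmove logic

    def handle_character(ch):
        """Sketch of the prank: four <Ctrl-u>s in a row from user "CEH" end the session."""
        global consecutive_undos
        if ch == CTRL_U:
            consecutive_undos += 1
            if consecutive_undos >= 4 and getpass.getuser().upper() == "CEH":
                print("OK, Hewitt, I've had it with your cheating")
                sys.exit(0)                  # stand-in for the forced log-out
            undo_one_ply()
        else:
            consecutive_undos = 0            # any other input breaks the streak
            print("(processing move input: " + repr(ch) + ")")

    # Four undos in a row, as Carl typed them:
    for key in (CTRL_U, CTRL_U, CTRL_U, CTRL_U):
        handle_character(key)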

One final computer chess story, not involving Mac Hack VI. It is said that back when Professor Marvin Minsky and Professor John McCarthy were co-directors of the AI Lab (before McCarthy departed for Stanford), they were separately invited to play chess against student-written computer programs. One was invited to play a program running on MIT's TX-0 computer, and the other was invited, at the same time, to play against a program running on a PDP-1 computer. Each was unaware of the invitation that had been issued to the other.

When the games commenced, both professors noted that the programs on these small computers proved to be surprisingly strong. Eventually, they both started to smell a rat - that is, to suspect that some sort of trickery was involved. And indeed, that was the case: there were no chess-playing programs at all. The TX-0 and the PDP-1 were in adjacent rooms in MIT's Building 26, and they had recently been connected by a data communications channel. When a chess move was typed into the PDP-1, it merely passed the move on to the TX-0, and then started madly flashing its console lights, to look as if it were intensively computing. The TX-0 would type out the move as its own, and await a response. Then, when a move was typed into it, the reverse would happen - it would send the move to the PDP-1, and start its own computational show. Thus, the two professors were playing a chess game against each other, but to each, it appeared that he was playing against a computer.
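
Stripped of the theatrics, the trick as described is nothing more than a relay. Here is a hypothetical sketch of the arrangement - again, not real code from either machine:

    class FakeChessMachine:
        """Sketch of the relay trick described above: neither machine plays
        chess; each one just forwards the move typed at its console to the
        other, which then types it out as if it were its own reply."""

        def __init__(self, name):
            self.name = name
            self.peer = None                          # the other machine on the data channel

        def move_typed_at_console(self, move):
            self.peer.type_out_as_own_move(move)      # pass the move along the link
            print(self.name + ": console lights flashing madly...")   # the computational show

        def type_out_as_own_move(self, move):
            print(self.name + " plays: " + move)      # looks like this computer's reply

    tx0, pdp1 = FakeChessMachine("TX-0"), FakeChessMachine("PDP-1")
    tx0.peer, pdp1.peer = pdp1, tx0

    pdp1.move_typed_at_console("1. e4")      # one professor's move appears as the TX-0's
    tx0.move_typed_at_console("1... c5")     # the other's reply appears as the PDP-1's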

So it is said. The trouble with this wonderful hack is that the story appears to be apocryphal. Professor Minsky was my Ph.D. thesis advisor back in the late sixties, and I recently asked him about the incident. He replied in an e-mail message, "I don't think that the Minsky-McCarthy chess story actually happened. I did once try to fool Professor Herb Teager into believing that we had a working MACSYMA-like program that spoke English. He gave it a problem I couldn't solve, so I typed 'ACCUMULATOR OVERFLOW' to simulate a system crash. But I misspelled the first word, and he caught on."

These tales of chess were just incidental events I saw going by within the wonderful world that was the Artificial Intelligence Lab in the sixties. MIT's computer science laboratory is still part of the Electrical Engineering Department. Called CSAIL ("Computer Science and Artificial Intelligence Laboratory"), it is now housed in the quirky Ray and Marie Stata Center, designed by architect Frank Gehry.

#0055   *CHESS   *MIT   *TECHNOLOGY

© 2010 Lawrence J. Krakauer
Originally posted November 18, 2010

Footnotes

Note 1:   I've previously mentioned Frank in connection with his speaking and studying German, in entry #0009, Sehr gut, and entry #0026, Herr Bon. He has since become a butterfly photographer, and his beautiful butterfly pictures can be seen on Flickr.

Note 2:   Frank is modest in his recollection of his chess talents, saying, "It was all bluff. It's amazing how well that works with humans."

Note 3:   The late professor Weizenbaum was an excellent storyteller. Here are a couple of tales I heard him recount.

Attending a presentation at some computer conference, he sat with another professor of Computer Science who sported long, unruly hair, sort of like a lion's mane, and was dressed in dirty, wrinkled clothing (not entirely atypical for a computer programmer). Weizenbaum's companion stuck a finger up one nostril, and looked intently around the room. Then, with his finger still up his nose, he remarked, "You know, Joe, some pretty weird people come to these conferences."

Weizenbaum also once reported that while driving through Harvard Square one day, he spotted a prime parking space right in the Square! Whipping across several lanes of Massachusetts Avenue, he pulled cleanly into the space. He sat for a bit, extremely pleased with himself, as it's nearly impossible to find a parking space right in Harvard Square. But then it slowly dawned on him that he had had no intention of stopping in the Square at all. He had merely been passing through on the way home. Sometimes your reflexes just take over.

Note 4:   Weizenbaum, Joseph (1976), Computer power and human reason: from judgment to calculation, W. H. Freeman and Company, ISBN 0-7167-0463-3, p. 188.

Note 5:   Ibid., pp. 188-189.