Remember when Deep Blue beat world champion Garry Kasparov at chess back in 1997?
Although Deep Blue won that six-game match, many AI commentators pointed out that the accomplishment was focused on a very narrow cognitive area – understanding the rules of chess, and selecting the best move to play within a specified time control.
While the accomplishment was significant, it was still more about programming and CPU speed than about anything we would call “machine intelligence”.
Now, 14 years later, IBM is back. This time it’s all about Watson, a supercomputer that will compete on Jeopardy against two of the quiz show’s all-time champions.
“So what?”, I hear you say – but you should expect to see some mind-boggling technology at play here. Natural language semantic analysis, heuristic search algorithms, real-time confidence estimation… (Watson actually receives the clues as text, so speech recognition is one challenge it sidesteps.) Just thinking about how to program all of that – and get a response within two or three seconds – is enough to make anyone’s head hurt.
In order to win Jeopardy (or even participate), Watson needs to handle not only a vast range of subjects, but also the cognitive challenges involved in understanding and analyzing Jeopardy clues (not to mention Alex Trebek’s sense of humor). These usually require some level of linguistic intuition and cultural awareness, in addition to encyclopedic knowledge.
So although it is still about programming and CPU speed, this time around the computer’s performance is likely to cross the line into a domain that most of us would consider “intelligent” in some sense.
Skeptical? Then think about the original test of machine intelligence proposed by Alan Turing – “A human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.” [source: Wikipedia]
Of course, the Turing test raises a host of questions, among which my favorite is whether the ability to behave indistinguishably from a human is, in itself, a definition of intelligence…
But I digress. Regardless of how one chooses to define intelligence (or lack thereof), it will be interesting to see how Watson performs tonight and over the next two days. Make sure to watch these episodes of Jeopardy if you are even vaguely interested in how advanced computer science has really become…
Finally, after a “drought” of several months, I pulled off a solid win as black using the Dutch Defense.
I favor the Dutch – and the Bird (1. f4) when playing white – because these systems make for dynamic play with several centers of activity. However, I don’t get to use the Dutch very often, because not many people open with 1. d4.
In this game I feel I gained the upper hand early. White’s moves 5. e4 and 9. dxe4 gave me the idea of bracketing the white king via the open d and f files.
Once white exchanged his knight for my bishop (11. Nxe7+) I felt the game was going my way. I like to keep my knights as long as possible in the Dutch/Bird systems. The pawn structures tend to hang around for a long time, blocking the diagonals and cramping the bishops. So having a knight against a bishop often proves to be a significant advantage.
12. … Be6 and 13. … Na6 connect my rooks, and on the next move I take ownership of the d-file, preparing to mount my assault up the center.
White realizes it’s high time to castle, but some diversionary attacks on his queen give me tempo to bring up the artillery and block his king in the center of the board. The game is not won at this point, but I certainly have a strong positional advantage…
I could not have anticipated white’s blunder 18. Nc3?? – but even without that, I think the game would have gone my way had we played it out. For example, after 18. Qxd4 Nc2+ the knight forks king and queen: white’s forced king move definitively removes his castling option, and 19… Nxd4 wins the queen, leaving his king exposed in the center.
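For readers who want to convince themselves of the fork geometry in that line, here is a minimal sketch in plain Python (not from the actual game record – the coordinate model is my own): it confirms that a knight on c2 simultaneously attacks e1, where the white king sits, and d4, where the queen would land after 18. Qxd4.

```python
# Squares are modeled as (file, rank) pairs: files a-h -> 0-7, ranks 1-8 -> 0-7.
# A knight's eight possible moves, as (file_delta, rank_delta) offsets.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def square(name):
    """Convert algebraic notation like 'c2' to a (file, rank) pair."""
    return (ord(name[0]) - ord('a'), int(name[1]) - 1)

def knight_attacks(from_sq):
    """Return the set of on-board squares a knight attacks from from_sq."""
    f, r = square(from_sq)
    return {(f + df, r + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

attacked = knight_attacks("c2")
print(square("e1") in attacked)  # True: the knight gives check on e1
print(square("d4") in attacked)  # True: ...and attacks d4 at the same time
```

Since the king must answer the check, the queen on d4 is lost – the essence of any knight fork.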
It’s quite satisfying to win a game like this against a player rated 100 points higher!
According to this New Scientist article, MRI analysis shows that chess grandmasters’ brains work differently from those of chess novices, when deciding whether any of the pieces in a given position are in check.
“Bilalic [the researcher] had expected the expert players to use a faster version of the processing mechanism used by novices.”
I find this assumption surprising, especially because Merim Bilalic appears to be a FIDE Master ranked 76th in his country. Of course that could just be a coincidence…
It is widely known that grandmasters develop a holistic view of the board, instinctively recognizing patterns that they have seen thousands or millions of times before, and interpreting the chess position as a visual map of zones of influence and threat. By contrast, the novice will typically take a more analytical approach, leading to a laborious square-by-square evaluation of the position.
Basically it’s a classic left-brain / right-brain scenario, and the grandmasters are able to recruit both sides of the brain to process the problem intuitively and analytically at the same time. In much the same way, an experienced mathematician will “see” a proof intuitively, in parallel to (or even before) actually working it through step by step.
I’m looking forward to seeing the original research paper when it’s published in PLoS One, and finding out who the grandmasters were who participated in the experiment!
Also, it would be interesting to see whether there is a correlation between a chess player’s official rating and their ability to perform well in right-brain (holistic, intuitive) tasks unrelated to chess. Do any readers have pointers to such research? Feel free to comment…