Does supercomputer Watson stand a chance on Jeopardy?

On February 14, 2011, in Miscellaneous, by Robert Dallison

Remember when Deep Blue beat world champion Garry Kasparov at chess back in 1997?

Although Deep Blue won that six-game match, many AI commentators pointed out that the accomplishment was focused on a very narrow cognitive area – understanding the rules of chess, and selecting the best move to play within a specified time format.

While the accomplishment was significant, it was still more about programming and CPU speed than about anything we would call “machine intelligence”.

Now, 14 years later, IBM is back. This time it’s all about Watson, a supercomputer that will compete on Jeopardy against two of the quiz show’s all-time champions.

“So what?”, I hear you say, but actually you should expect to see some mind-boggling technology at play here. Voice recognition, natural language semantic analysis, heuristic search algorithms… Just thinking about how to program all of that – and get a response within two or three seconds – is enough to make anyone’s head hurt.
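To give a feel for what “natural language semantic analysis plus search” involves, here is a deliberately naive sketch of a question-answering loop: parse the clue into keywords, match them against a knowledge base, and score each candidate’s confidence. Every name and data structure here is my own invention for illustration; IBM’s actual architecture is vastly more sophisticated than this.

```python
# Toy QA pipeline: parse a clue, search a tiny "knowledge base" of
# answer -> associated-keywords facts, and score candidates by overlap.
# All data and function names are hypothetical stand-ins.

KNOWLEDGE_BASE = {
    "Garry Kasparov": {"chess", "world", "champion", "deep", "blue", "1997"},
    "Alan Turing": {"turing", "test", "machine", "intelligence"},
    "Watson": {"ibm", "jeopardy", "supercomputer", "quiz"},
}

def parse_clue(clue):
    """Crude 'semantic analysis': lowercase, strip punctuation, drop stop words."""
    stop_words = {"the", "a", "an", "of", "in", "this", "to", "who", "what", "was"}
    return {w.strip(".,?!") for w in clue.lower().split()} - stop_words

def answer(clue):
    """Return (best candidate, confidence), where confidence is the
    fraction of the candidate's known keywords matched by the clue."""
    keywords = parse_clue(clue)
    best, best_score = "", 0.0
    for candidate, facts in KNOWLEDGE_BASE.items():
        score = len(keywords & facts) / len(facts)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(answer("This world chess champion lost to Deep Blue in 1997"))
# -> ('Garry Kasparov', 1.0)
```

The hard part, of course, is that real clues rely on wordplay and indirect reference, which keyword overlap cannot capture at all; that is exactly the gap Watson has to close, and in seconds.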

In order to win Jeopardy (or even participate), Watson needs to handle not only a vast range of subjects, but also the cognitive challenges involved in understanding and analyzing Jeopardy clues (not to mention Alex Trebek’s sense of humor). These usually require some level of linguistic intuition and cultural awareness, in addition to encyclopedic knowledge.

So although it is still about programming and CPU speed, this time around the computer’s performance is likely to cross the line into a domain that most of us would consider “intelligent” in some sense.

Sceptical? Then think about the original test of machine intelligence proposed by Alan Turing – “A human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.” [source Wikipedia]

Of course, the Turing test raises a host of questions, among which my favorite is whether the ability to behave indistinguishably from a human is, in itself, a definition of intelligence…

But I digress. Regardless of how one chooses to define intelligence (or lack thereof), it will be interesting to see how Watson performs tonight and over the following two days. Make sure to watch these episodes of Jeopardy if you are even vaguely interested in how advanced computer science has really become…



Is Chess a Right-Brain Activity?

On January 11, 2011, in Chess, by Robert Dallison

According to this New Scientist article, MRI analysis shows that chess grandmasters’ brains work differently from those of chess novices, when deciding whether any of the pieces in a given position are in check.

“Bilalic [the researcher] had expected the expert players to use a faster version of the processing mechanism used by novices.”

I find this assumption surprising, especially because Merim Bilalic appears to be a FIDE Master ranked 76th in his country. Of course that could just be a coincidence…

It is widely known that grandmasters develop a holistic view of the board, instinctively recognizing patterns that they have seen thousands or millions of times before, and interpreting the chess position as a visual map of zones of influence and threat. By contrast, the novice will typically take a more analytical approach, leading to a laborious square-by-square evaluation of the position.
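To make the contrast concrete, here is what that laborious square-by-square evaluation looks like for the exact task in the study, deciding whether a king is in check: scan outward from the king along every line and knight jump, testing one square at a time. The board representation and piece codes are my own simplification (lowercase = Black attackers, checking the White king only), not taken from the study.

```python
# Novice-style check detection: mechanically scan each line from the king,
# square by square, for an attacking Black rook/bishop/queen/knight.
# Board is a dict mapping (row, col) -> piece code; this is a toy model.

ROOK_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
BISHOP_DIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def is_in_check(board, king_sq):
    """True if the White king on king_sq is attacked by a Black r/b/q/n."""
    kr, kc = king_sq
    # Slide along each rook and bishop direction, one square at a time.
    for dirs, attackers in [(ROOK_DIRS, "rq"), (BISHOP_DIRS, "bq")]:
        for dr, dc in dirs:
            r, c = kr + dr, kc + dc
            while 0 <= r < 8 and 0 <= c < 8:
                piece = board.get((r, c))
                if piece is not None:
                    if piece in attackers:
                        return True
                    break  # any other piece blocks this line
                r, c = r + dr, c + dc
    # Knight attacks are single jumps, not lines.
    return any(board.get((kr + dr, kc + dc)) == "n" for dr, dc in KNIGHT_JUMPS)

# White king on e1, Black rook on e8: check down the open file...
board = {(0, 4): "K", (7, 4): "r"}
print(is_in_check(board, (0, 4)))  # True
# ...but not once a White pawn blocks the file.
board[(3, 4)] = "P"
print(is_in_check(board, (0, 4)))  # False
```

The grandmaster, by contrast, never runs anything like this loop consciously: the pattern “rook on an open file bearing down on the king” simply registers as a single perceptual unit.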

Basically it’s a classic left-brain / right-brain scenario, and the grandmasters are able to recruit both sides of the brain to process the problem intuitively and analytically at the same time. In much the same way, an experienced mathematician will “see” a proof intuitively, in parallel to (or even before) actually working it through step by step.

I’m looking forward to seeing the original research paper when it’s published in PLoS One, and finding out which grandmasters participated in the experiment!

Also, it would be interesting to see whether there is a correlation between a chess player’s official rating and their ability to perform well in right-brain (holistic, intuitive) tasks unrelated to chess. Do any readers have pointers to such research? Feel free to comment…


Follow my training on DailyMile


  • Running PRs

    5K - 23:17 (Oct 16, 2010)
    5K - 23:37 (Mar 13, 2010)
    5K - 26:23 (May 23, 2009)
    10K - 49:09 (Oct 3, 2010)
    10K - 49:36 (Nov 26, 2009)
    10K - 51:36 (Oct 4, 2009)
    13.1 - 1:46:28 (Mar 07, 2010)
    13.1 - 1:52:23 (Nov 13, 2009)
    26.2 - 3:53:49 (Jan 30, 2011)
    26.2 - 4:39:14 (Jan 31, 2010)

    Running Goals

    5K - 22:00
    10K - 45:00
    13.1 - 1:40:00
    26.2 - 3:45:00
    Feb 2012, four marathons in 4 days
    May 2012, ten marathons in 10 days