The uncanny valley of intelligent software

December 23, 2008 at 1:52 pm (PT) in Usability

There’s a phenomenon in robotics called the uncanny valley. The concept is also commonly applied to computer graphics, where cartoonish characters (e.g. The Incredibles) are more acceptable than those that aim for realism but fall short (e.g. The Polar Express, Beowulf), and it’s been used in the context of user interface look-and-feel as well. I think there’s an uncanny valley for “intelligent software” too.

Software is becoming increasingly complex, and it’s not uncommon for programs to provide knobs to control their behavior. Providing too many knobs, however, can overwhelm users with choices. Programs can combat this by providing fewer knobs, picking default settings appropriate for common use cases, and doing more automatically on the user’s behalf. They can go too far, though: programs that try to make too many decisions on their own become mysterious, sometimes seeming unpredictable, out of control, and annoying.

The Microsoft Office Assistant (“Clippy”) seemed like a good idea at the time, but it was artificial intelligence gone awry. It tried to recognize when users needed help, but it wasn’t smart enough to know when it was unneeded or when its advice was off-base. This is also one of the reasons why I hate Facebook’s “News Feed” (née “Top Stories”): Facebook uses some unknown weighting algorithm to pick which items to show and in what order, but the result ends up seeming random; some items appear chronological, but others don’t.

This is something we deal with at VMware; we provide a number of knobs to allow users to tune virtual machines for their needs. Some settings don’t make sense when used together, though. At what point is the conflict obvious to the user? At what point do we take the easy way out and display a message explicitly explaining that enabling option X will disable option Y? And at what point does the message itself become an annoying obstacle?
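The trade-off can be sketched in a few lines. This is just an illustration, not anything resembling VMware’s actual settings code; the `Settings` class and the option names are invented:

```python
# Hypothetical sketch of two ways to handle mutually exclusive
# options. "X" and "Y" are placeholder option names.

class Settings:
    def __init__(self):
        self.options = {"X": False, "Y": False}
        self.messages = []  # warnings to surface to the user

    def enable_silently(self, name, conflicts_with):
        # "Intelligent" approach: quietly disable the conflicting
        # option. Simple for the user up front, but the program's
        # behavior may later seem mysterious.
        self.options[name] = True
        self.options[conflicts_with] = False

    def enable_with_warning(self, name, conflicts_with):
        # Explicit approach: same effect, but record a message
        # explaining the side effect. Clearer, at the risk of
        # the message becoming an annoying obstacle.
        if self.options[conflicts_with]:
            self.messages.append(
                f"Enabling {name} will disable {conflicts_with}.")
        self.options[name] = True
        self.options[conflicts_with] = False

s = Settings()
s.options["Y"] = True
s.enable_with_warning("X", "Y")
print(s.options)   # {'X': True, 'Y': False}
print(s.messages)  # ['Enabling X will disable Y.']
```

Either way the end state is the same; the question is only whether the user is told about it, and how often.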

I confess that I don’t know whether it’s actually a valley; it could just as easily be a cliff. Are computers that act perfectly like humans really what we want? Humans often don’t do a good job of understanding what other humans want either.
