Asimov’s Three Laws of Robotics Applied to User Experience
If you have seen the new Lincoln movie, hopefully you did not mistake Daniel Day-Lewis’ fantastic performance for a robot’s. And yet, the release of this movie did coincide with my thinking about a robotic president. Those familiar with Disney history will remember that one of the company’s first uses of robotics was the Great Moments with Mr. Lincoln attraction, in which a robotic version of Lincoln addresses the audience with excerpts from his famous speeches.
At the same time I was reading more about this robotic history, the io9 blog posted a video of science fiction author Isaac Asimov describing his famous Three Laws of Robotics – often used as plot devices in his stories. Upon hearing them again with fresh ears, a connection to product design – and a new article – was inevitable. As an introduction and inspiration, watch Asimov’s explanation (about 45 seconds):
A brief bit about Audio-Animatronics
According to Disneyland Inside Story, Walt Disney’s use of robotic performers began in the early 1950s. He wanted to create a traveling exhibit called “Disneylandia,” which would feature one-eighth-scale figures replaying American folklore and history on a small stage. The plan was scrapped when a cost analysis showed it would not repay the investment; however, the idea morphed into full-sized figures that could stay at a permanent location.
As time progressed, electronics and technology began to enable Walt’s vision. Instead of using cams and levers to control three-dimensional animations, hydraulics and pneumatics could be coupled with computer systems for more lifelike creations.
While the Lincoln attraction was the initial focus, early efforts fell short of Walt’s goal of realistic mouth movements. That work was halted in favor of creating robotic birds based on some toys Disney had found in his travels. The Enchanted Tiki Room was the result. At the time, this 1963 attraction had “225 Audio-Animatronic performers directed by a fourteen-channel magnetic tape feeding one hundred separate speakers and controlling 438 separate actions.”
It wasn’t long before Lincoln would make an appearance. For the 1964 World’s Fair, Disney developed the attraction with sponsorship from the state of Illinois. It was later installed in Disneyland and opened the door to enhanced attractions featuring everything from singing pirates to recreated scenes from famous movies.
Over time, the Audio-Animatronics in this attraction and others continued to advance. Imagineers dubbed the original model the A-1. Now the A-100 series is in use, providing much finer control over things like individual fingers and eyebrows for better facial expressions. In the past few years, they have created “Autonomatronics” with sensors that allow more interaction with guests. A recent project revealed by Disney Research even allows a robot to juggle with a human counterpart, throwing and catching a ball back and forth.
Three laws, reinterpreted
One significant limitation in Disney’s prior use of robotics is the lack of interactivity. The figures are simply actors on a stage. In contrast, the robots Asimov envisioned would be helpers that made people’s daily lives easier – a goal many designers share for their products.
Listed below are Asimov’s original Three Laws. While each originally referred to a “robot,” I am striking through that text. In its place, I am using “product” to refer to anything made and used by people. This could be anything from your camera to your favorite photo sharing web site.
FIRST LAW: A ~~robot~~ product may not injure a human being, or, through inaction, allow a human being to come to harm.
If I had to guess, my assumption is that most readers do not work in an industry where their products could bring direct physical harm to people. Notable exceptions might include those working in the transportation, military or medical fields. In those situations, there are often numerous government regulations that seek to “assist” designers and provide guidance.
While not in one of the fields listed above, the company I work for produces products that must adhere to the highest safety standards to keep both users *and* their belongings safe. This expands the discussion from keeping people physically safe to thinking about the other types of harm that may befall our users. Examples include:
- Property loss – Products should seek to not only protect physical property but virtual goods as well. For example, the text editor I am using to write these words automatically saves my work in the cloud every few seconds. Should some tragedy befall my computer, I am guaranteed to still have my document preserved.
- Financial loss – In this cashless era, our personal finances are more exposed than ever. Whether we are checking our online balances or buying a song off of iTunes, we are trusting the systems to be safe. Designers must continually find ways to protect people from phishing and other predators. Likewise, new payment systems (like NFC on mobile phones) will need to be robust against misuse.
- Privacy loss – What happens in ____, stays in ____. Las Vegas should not be the only acceptable answer to those blanks. Given the rise in identity theft, there is clearly more that can be done to ensure personal information is not accessible by others.
- Reputation damage – While closely related to privacy, I believe there is also benefit in protecting people from their own simple mistakes. For example, ever send an email and instantly regret it? Perhaps it was an embarrassing spelling mistake or the dreaded reply-to-all? I know I’ve wished Gmail’s Undo Send feature were available to me at work a few times!
- Time loss – For many people, time is their most precious possession, and they guard it closely. Inconveniencing them with tasks that are more difficult than necessary will result in great dissatisfaction. It should come as no surprise, then, that efficiency is a key component of product usability.
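The autosave behavior mentioned under property loss can be sketched simply: persist work on a timer so that at most a few seconds of effort can ever be lost. A minimal illustration in Python – the save callback and the five-second interval are hypothetical defaults, not any particular editor’s implementation:

```python
import time

class AutoSaver:
    """Periodically persists a document so work survives a crash."""

    def __init__(self, save_callback, interval_seconds=5.0):
        self.save_callback = save_callback   # e.g. an upload-to-cloud function
        self.interval = interval_seconds
        self.last_saved = 0.0                # monotonic timestamp of last save
        self.dirty = False                   # unsaved changes pending?

    def on_edit(self, document):
        """Call on every change; saves only when the interval has elapsed."""
        self.dirty = True
        now = time.monotonic()
        if now - self.last_saved >= self.interval:
            self.save_callback(document)
            self.last_saved = now
            self.dirty = False

saved = []
saver = AutoSaver(saved.append, interval_seconds=0.0)
saver.on_edit("draft v1")   # interval elapsed, so this edit is persisted
```

The point is not the timer mechanics but the contract: the user never has to think about saving, so a crash can no longer cause property loss.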
SECOND LAW: A ~~robot~~ product must obey the orders given it by human beings except where such orders would conflict with the First Law.
In many ways, this law parallels many usability heuristics that cite the need to:
- “Support internal locus of control.” Shneiderman
- “Keep users in control.” Porter
- “Stay out of people’s way.” Hess
- “User control and freedom.” Nielsen
- “The system provides more information to users when they ask for it.” Weinschenk
Ultimately, this is about matching the user’s expectations and providing the perception of control, even if the reality is different. Some people may want control over their car’s transmission; others will not. But hardly anyone wants control over the many subsystems that keep a car driving safely down the road. I perceive that I am in complete control of my car even though it is constantly making thousands of decisions for me.
Frustration arises when that control is taken away. If I am navigating through a web site and a pop-up or overlay message appears, my task is interrupted, and the site is not obeying my orders. No, I don’t want to take a survey (though I at least have sympathy for those). And heck no, I don’t want to hear about the latest sale on baubles (no sympathy there).
This frustration is somewhat linked to a phenomenon like learned helplessness. The theory states that when people repeatedly feel they are not in control of a situation or their lives, mental trauma such as depression may result. In user interfaces, Norman suggests this will also cause a person to blame themselves, instead of the poorly designed product, creating a negative emotional result.
There are many ways to enhance a sense of control or prevent frustration. For example, even if a system automatically makes a decision for a user, providing a simple way to undo or override that decision will be appreciated.
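That undo affordance often amounts to a simple value history: record each change – including automatic ones the system makes on the user’s behalf – so it can be reversed on request. A minimal sketch, not tied to any particular framework; the setting name is hypothetical:

```python
class UndoableSetting:
    """A value whose changes (including automatic ones) can be undone."""

    def __init__(self, value):
        self.value = value
        self._history = []          # previous values, most recent last

    def set(self, new_value):
        self._history.append(self.value)
        self.value = new_value

    def undo(self):
        """Restore the previous value; a no-op if there is nothing to undo."""
        if self._history:
            self.value = self._history.pop()

volume = UndoableSetting(30)
volume.set(80)       # suppose the system auto-adjusted this for the user
volume.undo()        # the user overrides the automatic decision
print(volume.value)  # → 30
```

Because `undo()` is cheap and always available, the system can make helpful decisions automatically without ever stripping the user of the final say.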
Natural user interfaces that promote direct manipulation or gestural widgets can also increase the perception of control. Holding an iPad and swiping through a tumbler control with your fingers provides a greater sense of control than using a mouse to step through values with up and down buttons. However, with that increased realism comes an increase in the computing power necessary to maintain the illusion. If the system lags behind the user by even 0.1 seconds, it will no longer feel responsive.
THIRD LAW: A ~~robot~~ product must protect its own existence as long as such protection does not conflict with the First or Second Law.
This final law does not seem to have as immediate a benefit to the user as the first two. However, if we reframe the premise and assume the user owns the product, they would very much want the product to protect itself.
At a basic level, the designer and developer should create a robust product that is not prone to breaking. This implies resilient code or durable parts that ensure longevity under a variety of circumstances and use cases. Likewise, the product should fail gracefully so that laws one and two are maintained.
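In software, failing gracefully often just means catching the unexpected and degrading to a safe default instead of crashing – the product keeps obeying the user (Second Law) without causing harm (First Law). A hedged sketch, assuming a hypothetical `fetch` callback that may fail:

```python
DEFAULT_PREFERENCES = {"theme": "light", "font_size": 12}

def load_preferences(fetch):
    """Return user preferences, degrading to safe defaults on any failure."""
    try:
        prefs = fetch()
        if not isinstance(prefs, dict):
            raise ValueError("malformed preferences")
        return prefs
    except Exception:
        # Fail gracefully: the product stays usable instead of breaking.
        return dict(DEFAULT_PREFERENCES)

def broken_fetch():
    raise ConnectionError("storage unavailable")

print(load_preferences(broken_fetch))  # → {'theme': 'light', 'font_size': 12}
```

The user may notice their preferences were reset, but the product survives – which is exactly what the Third Law asks of it.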
Other solutions may include placing protective systems in the product itself. For example, most bedside alarm clocks come with a battery backup in the event of a power failure. A higher-tech example is the Sudden Motion Sensor found in Apple laptops, which uses a built-in accelerometer to protect the hard drive: when it detects the laptop being dropped, it moves the read/write head away from the disk platter to prevent damage to sensitive parts.
Thanks for visiting
Before any purists complain, I am also aware that a fourth (actually, ‘zeroth’) law of robotics was added over time: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” In fact, this may be the highest aspiration for the design community – the ability to impact a whole society in a positive way. I wish you all the best in achieving that goal.