A UX “Envelope of Protection” – Insights from Disney’s Selfie Stick Ban
The object in the lead photo may look like a medieval torture device or weapon, but its real purpose is just the opposite. Disney uses it to test what they call an “envelope of protection” to ensure rider safety.
In recent days, the internet has been filled with articles about Disney’s ban on selfie sticks. On Reddit, a Disney employee explained the ban by referencing the “envelope of protection” and this device. Those spiky protrusions represent the extremes of a person’s reach as determined by ergonomic data. The rider should not be able to touch any exterior elements, which could be whizzing by at high speed, during the experience. A selfie stick artificially extends a rider’s reach. The danger is that the stick could strike surrounding elements, harming the rider, another guest, or the ride itself.
Human error (and selfie sticks)
Regardless of whether you feel that buying a “selfie stick” is, in itself, a tragic human error, using one on a thrill ride is clearly pushing the limits. In fact, Disney had repeatedly advised guests against the practice before the ban. And yet, riders did it anyway. What does this say about human behavior?
Human factors research traditionally distinguishes two categories of error.

Slips: Wanting to accomplish the right goal, but accidentally performing a task incorrectly. Like walking across a wet floor with every intention of staying upright, but unexpectedly losing footing, perhaps due to a lack of skill in that situation. An example from mobile UIs: trying to tap one of several small links and fat-fingering the wrong one.
Mistakes: Wanting to accomplish the right goal, but intentionally selecting the wrong action to do so. This can sometimes be attributed to a lack of knowledge, and the resulting impact may not be immediately obvious. Extending the analogy from before: perhaps the user hit the link they were aiming for, but misunderstood the information architecture, leading them down the wrong path.
And yet, I’m not sure either of these quite captures the “it won’t happen to me” mentality. The roller coaster rider is warned not to use the stick (they have the knowledge), and they successfully press record (they have the skill), yet unfortunate outcomes can still result.
Perhaps it could be argued that the rider did not have sufficient knowledge of the consequences and was not fully trained for that level of complexity. For my own amusement, I am going to coin a new term for this combination. Introducing a new type of error: Selfie-stick On a RollercoastEr … SORE. It describes both the cause and the effect of some poor decision making.
Preventing users from getting SORE
There are many good reasons for doing this. First, it’s simply the right thing to do – we want our customers to have the safest and most pleasant experience possible. Second, there is a financial motive: many Human Factors Ph.D.s have made decent money as expert witnesses in litigation.
Even if you aren’t creating amusement park rides, your product designs may make people susceptible to becoming SORE. Your email platform may be a conduit to inflammatory reply-to-all messages. Your social networking app may make it too easy to drunk message an ‘ex’ in the middle of the night.
Here are a few ideas for how we, as designers and developers, can create that envelope of protection:
- Test the extremes. This is essentially what Disney is doing with its device, and it is not a novel approach. Manufacturers often use test fixtures, like a choke tube, to evaluate products for potential child-safety concerns. In software, such tests might include checking stability under edge cases (like uploading thousands of photos when one or two is typical).
- Implement error recovery features, like undo or auto-save. These features are beneficial for all forms of error. Even Gmail will now let you ‘undo’ a recently sent email within a few seconds. Auto-save is another awesome feature that helps prevent unexpected situations. However, I often wonder why software developers don’t save an undo *history* more often. With storage space becoming plentiful, I would love to open a previously saved file and have a familiar method of fixing recent errors (without resorting to complex features like change-tracking).
- Artificial intelligence and action-analysis warnings. Perhaps more controversial, and relatively unexplored, is some form of automated, proactive protection. Cars and trucks now have systems that can detect drowsy driving or impending collisions. Building more knowledge about the user into other software systems may allow them to flag possible errors. For example, a calendar program could learn that I set most of my meetings between 9am and 4pm, and warn me if I create a new entry at 3am. A website could check my proposed password against an internet search of my data, and explain why it is a bad idea to use publicly accessible info like my dog’s name. And I’m sure there are danger signs a messaging app could watch for, quietly holding on to certain texts and confirming their send at a more sober moment.
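To make the calendar idea above a little more concrete, here is a minimal sketch of that kind of action-analysis warning. Everything here is hypothetical – the function names and the learned-hours heuristic are illustration-only assumptions, not any real calendar API:

```python
from datetime import datetime

def typical_hour_range(past_meetings, coverage=0.9):
    """Learn the hour range covering most of a user's past meeting start times.

    `past_meetings` is a list of datetimes; outliers on each end are trimmed
    so one odd 3am entry doesn't stretch the "normal" range.
    """
    hours = sorted(m.hour for m in past_meetings)
    trim = int(len(hours) * (1 - coverage) / 2)  # entries to drop per end
    core = hours[trim:len(hours) - trim] if trim else hours
    return core[0], core[-1]

def unusual_meeting_warning(new_meeting, past_meetings):
    """Return a gentle warning string if the new entry falls outside the norm,
    or None if it looks routine. A warning, not a block: the user may really
    mean it, so the envelope of protection should nudge, not forbid."""
    lo, hi = typical_hour_range(past_meetings)
    if not (lo <= new_meeting.hour <= hi):
        return (f"Heads up: you usually schedule meetings between "
                f"{lo}:00 and {hi}:00. Did you really mean {new_meeting:%H:%M}?")
    return None
```

For a user whose history clusters between 9am and 4pm, a 3am entry would trigger the warning while a 10am entry would pass silently. The same nudge-don’t-block pattern could back the password and messaging examples as well.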
Thanks for visiting
This article barely scratches the surface of human error. It does not cover even worse classic blunders like getting involved in a land war in Asia or going against a Sicilian when death is on the line.
For those interested in learning more, a fun and often scary read is the book *Set Phasers on Stun*. It was my first introduction to the topic. If you have a favorite story or link, feel free to share below.
Be careful out there!