“It has to be very intuitive for the users to use our app.”
I’ve heard that so many times when a group of people was designing an app. Most of them couldn’t explain how to make it intuitive; they just knew that ‘being intuitive’ is a good thing in user experience, since it’s somehow related to ease of use.
So what does intuitive actually mean?
If you said it’s when users know what to do with the interface the moment they use the app for the first time, then you’re completely wrong. Unless they have supernatural powers, users don’t know what to do; they simply assume something will work the way they expect it to. That expectation comes from their past experience with another interface that looks similar to the new one.
This is what intuitive really means: designed to be familiar. An intuitive app is a familiar one. That’s why Google+ looks similar to Facebook (see this side-by-side comparison), and why WhatsApp on iPhone looks similar to the native messaging app, among many more examples of apps that share a similar look and behavior.
I suggest that we replace the word “intuitive” with the word “familiar” (or sometimes “old hat”) in informal HCI discourse. HCI professionals might prefer another phrase:
Intuitive = uses readily transferred, existing skills.
If we go further into the human brain to explain why a familiar interface is good (and not just for the sake of being a copycat): when something happens as you expect it to, cells called dopamine neurons fire a little burst of enjoyment, which makes you happy and gives you a good feeling. But if something happens differently than you expected, your dopamine neurons decrease their firing rate, sending a prediction-error signal, and you feel upset because your prediction was wrong.
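The prediction-error idea above can be sketched in a couple of lines: the signal is simply “what happened” minus “what I expected”. This is a minimal illustration, not a neuroscience model; the function name and values are my own.

```python
# A minimal sketch of the prediction-error signal described above.
# The name and the 0/1 "reward" values are illustrative assumptions.
def prediction_error(expected, actual):
    """Positive when the outcome beats expectation, negative when it falls short."""
    return actual - expected

# Familiar interface: the button does exactly what past apps taught you to expect.
print(prediction_error(expected=1.0, actual=1.0))  # 0.0 -> no surprise, no upset
# Novel interface: the expected behavior never happens.
print(prediction_error(expected=1.0, actual=0.0))  # -1.0 -> negative signal
```

A familiar interface keeps this signal near zero; a novel one keeps pushing it negative until the user has relearned their expectations.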
Creating a new, innovative interface or interaction is good, but it might make users upset, especially if they have to learn your interface before they can interact with the core service you provide. Also remember that users have a lot of apps on their devices, and most of those apps have similar interfaces. Users are busy and don’t have time to learn a new one.
My suggestion: adopt a familiar interface and help users reach your core service. If users can hardly use your interface, chances are they’ll never reach your core service at all.
An interesting talk, especially the part where he shows the design of a subscription page (minute 12:30).
I read this on a train window:
I didn’t realize it was actually a series of posters until I had to move three steps to the right when more people boarded the train:
I assume the expected behavior from people who read these posters is to download the iPhone app so they can buy the insurance. But I doubt people would easily relate the two posters as a series. There are several reasons for that, and one important factor is how human eyes work.
There’s a good explanation in What Everyone Should Know About the Human Eye:
The eye has two basic states: it can be in a fixation or a saccade. A fixation lasts between 200 and 400 ms and is characterized by a relative lack of eye movement. A saccade is the brief, simultaneous movement of both eyes to a new fixation point. Saccades typically last less than 200 ms, in which time the eyes usually rotate less than 20 degrees. Very little information is retained or processed from the eye when in a saccade.
In short: people read a poster in a series of short glances separated by quick hops. We need to keep related content close together, or reading it becomes a chore.
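Using the fixation and saccade durations quoted above, we can do a rough back-of-envelope estimate of how long a scan takes. The midpoint values and the glance count below are illustrative assumptions, not measurements from the posters.

```python
# Rough scanning model based on the durations quoted above.
# FIXATION_MS is a midpoint of the 200-400 ms range; SACCADE_MS assumes a
# short hop well under the 200 ms ceiling. Both are illustrative guesses.
FIXATION_MS = 300
SACCADE_MS = 100

def scan_time_ms(num_fixations):
    """Total time for a series of fixations separated by saccades."""
    return num_fixations * FIXATION_MS + (num_fixations - 1) * SACCADE_MS

# Reading a short poster headline in, say, 5 glances:
print(scan_time_ms(5))  # 1900 -> under two seconds
```

The point is not the exact number but the structure: almost no information is taken in during the hops, so every extra hop between related content (like two posters three steps apart) is pure overhead.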
The call-to-action poster (the right one) is disconnected from the awareness poster (the left one). The ideal flow in people’s minds would be: “OK, I think I need travel insurance. So, what should I do next?”
You can see how far I had to go to find the second poster: three steps to the right!
Another important aspect of how human eyes work is that the brain comes pre-equipped with special processing centers for the detection, recognition, and processing of faces.
From the site:
Here’s where it gets cool: not only do people love to look at faces, but we often use them as clues as to where else to look. Following a person’s gaze is almost a reflex. James Breeze demonstrated this really well in a blog post called “You look where they look.”
His experiment was simple: about 100 people were shown a picture of an advertisement with a baby and some text. Half the time, the baby was facing the reader; the other half of the time, the baby was looking at the text. Breeze found that not only did the people shown the baby looking at the text pay more attention to the text, but they actually stopped looking at the baby sooner in order to follow its gaze.
So, there are two improvements that we can make to these posters:
- Add a face that is looking at the text, so people would pay more attention.
- Move them closer to each other, so people can easily read and relate the two posters, and then hopefully download the app to buy the insurance.
I did a quick wireframe showing the improvements:
What do you think?