Archive for the ‘UX’ Category
TV watching is mostly a ‘lean back’ activity: I want to sit back, relax and be entertained without having to do too much. This used to be easy when there were only a handful of free terrestrial channels plus VHS or DVD, but TV is now far more complicated and fragmented, with cable, satellite and on-demand services. When I sit down and want to find a show or a film there are quite a few places to look:
- Live TV
- Personal digital film library (from a PC)
- Recorded programmes on a PVR
- Catch-up TV: BBC iPlayer, ITV player, Channel 5 player, Channel 4 OD
- On-demand services e.g. Netflix that may be accessed from a variety of sources such as Apple TV, consoles etc.
When thinking about what to watch, the user (we’re now users rather than merely viewers) needs to remember all the different places to look. Each one has its own interface (and ‘Smart TV’ itself has its own interface) and sometimes its own remote control, all of which need to be learned and switched between, resulting in a high cognitive load. A solution that combines these services into a single, unified interface would hugely improve the user experience. This is not as difficult to achieve as it might seem; YouView has already taken a step in this direction, although it still falls short in a number of areas.
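At its core, a unified guide just means normalising listings from each source into one schema and merging them. A minimal sketch in Python of that idea (the provider names, fields and `unified_guide` function are illustrative assumptions, not any real service's API):

```python
from dataclasses import dataclass

# Hypothetical, simplified representation of one programme listing.
@dataclass
class Listing:
    title: str
    source: str       # e.g. "Live TV", "PVR", "iPlayer", "Netflix"
    genre: str
    available_now: bool

def unified_guide(*provider_feeds):
    """Merge listings from any number of sources into one browsable list,
    sorted by title so the user never needs to know where a show lives."""
    merged = [item for feed in provider_feeds for item in feed]
    return sorted(merged, key=lambda item: item.title.lower())

# Toy feeds standing in for the real providers listed above.
pvr = [Listing("Doctor Who", "PVR", "Drama", True)]
iplayer = [Listing("Sherlock", "iPlayer", "Drama", True)]
netflix = [Listing("Breaking Bad", "Netflix", "Drama", True)]

guide = unified_guide(pvr, iplayer, netflix)
```

The hard part in practice is of course commercial (getting providers to expose their catalogues), not technical; the data model itself is trivial.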
Another consequence of this is that there’s no centralised place to discover content. Although it’s true that people do choose to watch channels (“Channel 4 OD often has things I like so I’ll browse that”), they also like to choose specific programmes or genres, or want to be enticed with what’s hot or new, somewhat independently of the content provider. Current TV services do a poor job of suggesting the most relevant content, and they don’t talk to each other, or to your friends. Even BBC iPlayer doesn’t offer Facebook login or much opportunity to rate programmes that could be used to guide future recommendations.
Browsing web pages on the TV just doesn’t work: it’s awkward sitting back at that distance, and it takes us away from what we’re watching. It’s not surprising that few people use this functionality that comes bundled with Smart TVs; besides, we have phones and tablets for that. Some sites do make sense, such as YouTube, but these are better re-designed for the platform and presented as an app or a channel, as they have started doing. Other sites, such as BBC News, would do better as hybrid sites/apps/channels with an emphasis on browsable video content.
What is not the way forward, despite what some high-profile companies think, is a hugely complicated remote control with a physical keyboard. They don’t fit in our hands, they require too much effort for most people and we need to look at them to use them. That also rules out touch-screen remote controls, at least for the main functions such as volume, which require so little effort that we can use them without looking. A small touch screen might be useful for secondary, contextual functionality that would avoid the need for tons of buttons, for example to provide a keyboard for occasional use. There are various new input methods coming onto the market now, from voice to gesture, and these may have a place, but great care needs to be given to how they work and which functions they suit. Arm waving, for example, is never going to be practical, at least not for frequently used functions. There might be room for simpler flick gestures that don’t require much movement or effort, but these would have to offer a clear benefit over physical buttons, which require little effort and have no learning curve. Talking to technology still feels a little strange and requires more effort than you might think, but voice could have a place where the alternative would be to type.
Apps have a place on TVs, but as with any platform certain types will be more suited to it than others. As I’ve outlined above, TV or video apps mostly shouldn’t have separate interfaces; smaller production companies could still get space on the main interface, in a similar way to how the iTunes music store handles the long tail of artists. On the TV you might search for an independent service and add it to your main channel roster so that it becomes part of the main channel guide. Games are an obvious category that would work well: large numbers of people already use consoles with their TVs, and apps would mean many gamers could be happy without buying a console.
To improve the user experience of TV I would like to see:
- All content available through a single interface
- Great content recommendations based on my viewing habits, ratings and those of my friends drawn from all content providers
- Hardware that incorporates set top box functionality to reduce clutter
- Faster processing and loading of content (remember what Google said about speed being a feature)
- A simple remote with a few physical buttons augmented by voice, possibly gesture and a small touch screen for the longer tail of less used functions.
Most people’s knowledge of their parents’ lives before they were born is pretty sketchy. Human memory is inherently poor at retaining many of the things that happen over, say, 20 years. There are also things we don’t want to remember, and things that get recalled but not quite in the way they happened; memory is reconstructive (see below for some classic psychology papers on memory). So today’s children don’t have much to go on: a handful of photos, or anecdotes from their parents or maybe a longtime friend.
Use of social technologies such as email, text messaging and social websites like Facebook is increasingly pervasive. For those growing up with it in particular, it tracks and documents life with an accuracy and level of detail never seen before, and there is little digital decay. Everything you’ve bought online, things you’ve commented on, places or events you’ve been to, people you’ve been with, all your relationship statuses have been logged. If this information is retained and remains accessible over time, our children will have access to a source of primary information about us that would leave a historian salivating.
Of course, people can delete this history, and who knows what course these technologies will take in the coming decades. But I bet that people will keep it and that it will be accessible; in fact I think its accessibility will vastly increase. It will become much more structured and we will have far more control over how it can be manipulated in order to extract trends, signals and meaning from it (please excuse my use of Google Analytics-style language). Further, I don’t think people will delete it. Remember when your grandparents died and you or your parents were left with all the photos and maybe other documents and letters that were a record of their lives, part of your family history? I think the trails we leave on social technologies will become the modern replacement for that.
How will this be managed? Do we leave our passwords in our wills? Whether it is left lock, stock and barrel or in some edited form has yet to acquire established social norms, but these will certainly evolve. Debates have already begun, although, as is often the case, they lag behind the pace of change.
Some great articles on memory
I recently went on a beach holiday (Costa Rica, if you’re interested) where I used my iPad extensively as an ereader, using iBooks and the Kindle app. A problem that I’d encountered before became particularly pronounced as I read for longer periods of time – that of how to hold the device and how to interact with it in a comfortable way while reading.
Because the iPad has a touch screen it has to be held at the edges to avoid activating something, such as turning pages or selecting text. The ereader use case really highlights this limitation. In most other uses, say games or web browsing, the user is actively interacting with the screen, and for video the device is just propped up to be stared at. When reading in iBooks or the Kindle app, however, the user spends much of the time simply looking at the device while holding it, only occasionally needing to tap the screen to turn a page. It is in this scenario that having to hold it carefully at the edges becomes a chore. If you accidentally touch the screen it’s liable to flip pages, whereas an accidental tap on a web page may do nothing, or just scroll a little, so it’s easy to get back to where you were.
When using the iPad as an ereader it is awkward to hold up in the air with one hand, perhaps to block out the sun while lying on your back, or while sitting in a chair holding a drink, without accidentally activating something on the screen; given its weight, it’s also hard to hold with one hand right at the edge at certain angles. This really restricts the ways in which it can be held and runs counter to the benefits of a tablet over a more fixed device. Unlike desktops, and to a lesser extent laptops, tablets conform to the user’s physical position and location, making them much more comfortable and enjoyable to use; this is one of the least appreciated and least talked about reasons for their popularity (for more on this see the form factor section of the blog below).
One novel solution would be a setting that locks the screen (just for ereader apps) against any kind of input, perhaps even including the home button. You could exit this mode either from within the app or by pressing and holding the home button. To turn pages the user could use the volume control: up for the next page, down for the previous. A trade-off would be the loss of that function, but how many people would be listening to music and want to change the volume while reading? This would mean the user could hold the device in virtually any position but still easily carry out the most common task, turning pages, and without looking, since the volume control is a physical control. Alternatively there could be an option to use voice control while the screen is locked, so just murmuring ‘next’ or ‘previous’ would perform the action (the system could learn your voice and this narrow command set, and so process commands even when the signal-to-noise ratio was low).
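The interaction model described above is simple enough to sketch as a small state machine: while the reading lock is active, touch is ignored and the volume buttons page through the book. A minimal Python model of that behaviour (the event names and the `ReadingMode` class are hypothetical, purely to illustrate the idea, not any real iOS API):

```python
class ReadingMode:
    """Hypothetical model of a 'reading lock': touch input is ignored,
    volume buttons turn pages, and a long press on home exits the mode."""

    def __init__(self, page_count):
        self.page_count = page_count
        self.page = 0          # current page index
        self.locked = True     # reading lock starts engaged

    def handle(self, event):
        if not self.locked:
            return  # normal input handling would resume here
        if event == "volume_up":            # next page
            self.page = min(self.page + 1, self.page_count - 1)
        elif event == "volume_down":        # previous page
            self.page = max(self.page - 1, 0)
        elif event == "home_long_press":    # the only way out of the lock
            self.locked = False
        # any touch event falls through and is deliberately ignored

reader = ReadingMode(page_count=300)
for event in ["volume_up", "volume_up", "touch", "volume_down"]:
    reader.handle(event)
# The stray touch has no effect; the reader ends up on page index 1.
```

The point of the sketch is how little logic the idea needs: the whole benefit comes from remapping two physical buttons the device already has.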
And that’s it.
When I say mobile I’m really talking about post-PC devices in general, but that term sounds too pretentious to put in a title. The post-PC category is extremely broad and could include all sorts of ubiquitous or embedded devices; however, for this blog I’m focusing on mobiles and tablets and the landscape today, rather than making wild predictions about the future.
Obviously an important reason to pay attention to post-PC devices would be if they were prevalent among users and their number and usage were growing. This is clearly something we are seeing, and it has been a trend since mobile phones became mainstream some ten years ago, although it is only recently that handsets have become connected to the internet in a user-friendly way, which partly accounts for a step change in their usage. Data from the mobile industry is produced daily (Luke Wroblewski aggregates some of the highlights every week here). The importance of post-PC devices is clear: smartphones and tablets combined now outsell desktops and notebooks combined, and sites such as Facebook and Twitter now get 33% and 55% of their traffic respectively from mobile (source: slides 7 and 18), with those numbers growing strongly.
Post-PC devices are not only growing in popularity; technically they are improving rapidly, and much more quickly than desktops: think high-resolution multi-touch displays, voice activation and ambient light sensors. Some OS features, such as the app store, are leading the way for desktop OSs; other advanced features, such as the gyroscope, accelerometer, GPS, camera and digital compass, are really only made possible by the portable nature of mobile devices. What makes this especially interesting is that all these advances are becoming available to significant numbers of people far more quickly than if they were on desktop PCs. Although the very latest smartphones can be expensive (the iPhone starts at £499), the payment model (which varies between countries) means the cost is often subsidised and spread out, so it is more affordable and accessible than buying a new computer. This cycle and the significant advances made in the technology mean that people upgrade their mobile devices far more quickly than their desktops or laptops.
Control and comfort
Currently the way you interact with a computer is by sitting in a relatively fixed position in front of a static monitor and keyboard. You have to conform to the position of the computer, not the other way around. There is some increased flexibility with a laptop, because you can sit in bed or on the sofa with it and move it around to a wider variety of locations or positions. But even this isn’t ideal: laptops, including netbooks and ultrabooks, are pretty heavy, and because of their size, shape and keyboard you can’t easily hold them with one hand and move around with them. The iPad, however, weighs 601g, roughly a quarter of the weight of a typical laptop and about half that of even a netbook. Phones, of course, are lighter and more portable still. Tablets and phones are clearly more comfortable to use, especially for long periods of time, and the comfort gained from the control the user has over the position and location of use should not be underestimated. A sense of control is a significant psychological factor that most commentators in the media fail to report on when talking about the success of these products, focussing instead on visual appeal or a list of features.
The other factor which makes touch devices special is direct manipulation. They are controlled with your finger: there is no intermediary between your action and the response on the screen, no keyboard, mouse or stylus. They’re responsive, accurate and, because they use capacitive rather than resistive screens, they don’t require any pressure. This direct manipulation and the corresponding fast system response give the user a real sense of control, which is not to be underestimated for a positive user experience.
This perception of user control and convenience both in the micro-interaction on the screen and the macro-interaction regarding the position and location of use has contributed to the success of these devices in integrating with people’s daily lives. The phone in particular affords an intimate experience and it is really the only private computer people use and typically the only one people have with them at all times. When I worked at American Express a statistic that was often quoted was that it takes the average person about 10 hours before they notice they’ve lost their wallet compared to about 5 minutes for their phone. Given these factors it’s not surprising that the phone, and to an increasing extent the tablet, have for many people replaced the following:
- Camera (the iPhone is the most popular camera used on Flickr)
- Portable gaming device
And on the horizon: wallets, keys, PCs, bar code scanners, and who knows what else…
So it is the large sales, high usage, usage in a variety of settings, novel interaction methods, the opportunity to help define interaction and design standards, and cutting-edge software and hardware that set mobiles and tablets apart from desktops and make them more interesting products to work with as a UX designer.
Following on from Darian’s recent blog about registration creating a barrier to usage of apps, I’d like to add some of my own thoughts on the arduous process of logging onto websites or apps after registration.
Having to remember multiple passwords is a fact of life for most of us now: a study a few years ago(1) showed that users typically manage about 25 accounts requiring a password, and that the average user enters 8 passwords every day. Given the expanding reach of technology this number may have since increased. Any improvement, however small, that can be made to assist the user in logging on should be grasped.
I have a suggestion that will help some users remember their passwords for some websites, and it requires no technical development, new technology or shift in user behaviour, just a bit of user-centred thinking. When registering for a website it is common for the sign-up form to specify the password constraints, e.g. must contain 8 characters, a capital letter and a number; however, when it comes to logging back onto the site there is usually no such prompt to help steer the user towards the relevant password.
See below from Google on the sign-up page:
And then nothing on the sign-in page (below left), even though they give you a prompt about what the format of an email address is. There is still no prompt after clicking the link indicating you are having problems signing in (below right).
Another example here from Barclays Cycle Hire where you are prompted on the requirements of the password at the sign-up stage.
But then not reminded of this at the login stage:
In fact this pattern is widespread across websites.
Users prefer usability to security
Study after study(2, 3, 4) has shown that users prefer usability to security: they select the easiest passwords they can get away with and re-use a small number of passwords over and over. It has even been argued that, given the costs and benefits involved, favouring usability over security is actually rational(5). A user may have a six-letter password they use when they can, an eight-letter password for when that is required, and some capitalisation or number rules that can be applied when demanded. When a user visits a website where they aren’t sure what the password is, they are faced with some choices:
- Guess which of their passwords is needed
- Request a password reset
- Leave the website
Having to type in password after password is clearly not desirable. Neither is requesting a password reset, which may involve multiple steps and potentially having to think of a new password, thus adding to the overall password burden. And obviously just leaving the site benefits no-one.
Small changes can significantly improve usability
Giving the user a little help by indicating any password constraints at the login stage reduces the number of possible passwords they might need to try, increasing the chances they will attempt a login. It also conveys that you’ve thought about them and are trying to help, rather than leaving them staring at a blank field with no memory cues. A six-letter password with no other constraints is common practice, so this approach would mostly benefit those websites which impose greater constraints such as capitalisation or digits. Why don’t websites already do this? Who can tell: lack of thought, designers blindly following other sites, or perhaps the security department not wanting to help ‘hackers’ by providing more information. This last argument is a weak one, though, as any self-respecting hacker would just go to the registration page to see if there are any constraints on the password.
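Implementing the suggestion costs almost nothing: the constraints already exist as validation rules at registration, so the same rules can be rendered as a memory cue on the login form. A minimal sketch in Python (the `PASSWORD_RULES` fields and `login_hint` helper are illustrative assumptions; a real site would drive this from its own validation config):

```python
# Hypothetical constraint spec -- in practice this is the same data
# the registration form already validates against.
PASSWORD_RULES = {
    "min_length": 8,
    "require_uppercase": True,
    "require_digit": True,
}

def login_hint(rules):
    """Turn the registration-time constraints into a one-line
    memory cue to display next to the login password field."""
    parts = [f"at least {rules['min_length']} characters"]
    if rules.get("require_uppercase"):
        parts.append("a capital letter")
    if rules.get("require_digit"):
        parts.append("a number")
    return "Your password contains " + ", ".join(parts) + "."

hint = login_hint(PASSWORD_RULES)
```

As argued above, displaying this reveals nothing an attacker couldn’t already learn from the registration page itself.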
If you do find sites which help users with a prompt then be sure to submit it to LittleBigDetails.com.
(1) Florêncio, D. & Herley, C. (2007). A large-scale study of web password habits. In Proceedings of WWW 2007 (Security, Privacy, Reliability and Ethics track: Passwords and Phishing), 657-665.
(2) Adams, A. & Sasse, M.A. (1999). Users are not the enemy. Communications of the ACM, 42, 41-46.
(3) Imperva (n.d.). Consumer password worst practices. Retrieved 01 June 2010 from
(4) Sasse, M.A., Brostoff, S. & Weirich, D. (2001). Transforming the ‘weakest link’ – a human/computer interaction approach to usable and effective security. BT Technology Journal, 19, 122-131.
(5) Herley, C. (2009). So long, and no thanks for the externalities: the rational rejection of security advice by users. In New Security Paradigms Workshop, 133-144.
One of the reasons Apple is successful is its model of vertical integration. From hardware to software to distribution (both off- and online), Apple has sought to own and control as much of the user experience as possible, believing that it, and not any other company, is best placed to provide the optimum experience for the user.
Google’s purchase of mobile handset maker Motorola got me thinking about what Apple might do next.
The iPhone has clearly changed the mobile market; it’s not necessary to go over the details here, as they’ve been well documented elsewhere. But there is still a glaring weakness in the user experience of mobile devices: the mobile network.
The handset may look and feel great, have a good browser and offer lots of functionality, especially via apps, but all of this becomes useless if the phone cannot get a signal or the transfer rate is too slow. Mobile voice networks have been around for decades now, though data networks for fewer, and it is remarkable how patchy a solid 3G signal is. Even a full-strength 3G signal feels like dial-up compared to the 10Mb or 20Mb broadband lines that are common at home. When it comes to mobile on public transport (two domains that, if you think about it, should fit together very well) the situation is even worse: no signal at all on the Tube and a poor service on overland rail due to tunnels and the landscape. Some other countries have solved this, so it’s clearly not insurmountable.
I think Apple would do well to purchase a global mobile network operator and invest in the hardware necessary to ensure that the signal and bandwidth available matches the other aspects of the mobile user experience.
I received an email newsletter from Harper’s Bazaar, the upmarket fashion magazine, referring to the ‘twittersphere’. It made me cringe, and I started thinking about whether the word twittersphere is necessary, and even a bit old-fashioned.
I realise that the locus of some topics or dialogue is within Twitter, and so of course referring to this source is legitimate; however, the suffix –sphere (and this applies to others such as –verse, and for certain other domains, such as blogs) has the feel of something dated, like the term ‘information superhighway’. We don’t need a word like this to help us understand or feel comfortable with the concept of a community like Twitter.
In short, I think it has become a pleonasm, a redundant fixture on the word Twitter. Why not just say there has been ‘talk of X on Twitter’ instead of ‘talk of X in the twittersphere’? We know that anything newsworthy, anything one might use the term twittersphere for, is inherently a result of the community characteristics of Twitter (as opposed to a single person saying something). That is what defines Twitter as a medium, so let’s drop the –sphere.
This picture, taken this morning of a bus in Brighton, pretty much speaks for itself. One wonders what sort of development process led to a system so hard to learn, and so likely to cause significant delays, that signs like this needed to be made and stuck onto the side of all the buses.
I was at the UK UPA World Usability Day event a while ago, where the start-up BuddyBounce.com presented. At the time of writing the site isn’t live, but it seems to be some kind of social networking, event-based webcam service that seeks to connect strangers. The speaker was very positive about how video calling was about to take off, and it started me thinking about why it hasn’t been more popular.
I’ve been optimistically waiting for the widespread adoption of video calling (I use the term loosely, to include webcams etc.) for years, yet it is only used in some relatively niche scenarios, such as talking to relatives in other countries, despite the necessary technologies having been available for perhaps a decade. This article isn’t about whether video calling is generally a good or bad thing (I think there are many situations where it would positively contribute to communication) but about why it isn’t used in those contexts.
I see a range of reasons that can mostly account for its lack of use, outlined below.
You may not want someone to see you if you’re not properly dressed, made up or in a tidy house, or even to know your location. And if someone can see you, it might make it harder to do something else while talking to them (e.g. surfing the net, reading an email, sorting out washing), which people often find useful on the phone but might prefer the other person couldn’t see.
There is also the fact that people generally aren’t keen on seeing themselves on screen, which video calling software shows by default, and, let’s be honest, the images produced by most webcams aren’t that flattering.
There is something strange about the psychology of speaking to someone on a video call. In real life, if you watch two people having a conversation you’ll notice that for a significant amount of time there is no eye contact between them. People look at other things or people in the vicinity, at parts of the person other than their eyes, or just into space or at their feet. People make eye contact for a number of reasons during conversation, for example to receive visual feedback that the other person is listening, understanding or interested. These purposes are served without the need for constant eye contact; in fact there is an optimum range of eye contact, above which it would be considered staring. Which brings me on to video calling.
I have made many video calls, through Skype, MSN Messenger, Apple’s FaceTime and even Cisco’s absurdly expensive TelePresence system, and I’ve also watched others making video calls. Something I’ve noticed is that when taking part in a video call there seems to be a tendency to be overly focused on the person in the video window; usually you can just see their face, or perhaps down to their waist. I’m not sure of the reasons for this focus; perhaps it’s the novelty, or an expectation to keep your visual attention on the other person. It could be that in this mode of communication, looking away or moving out of the relatively fixed position you have to hold to remain in view is read as a lack of interest. Video calling is probably viewed in some respects as a type of phone call, and some of the conventions that apply there carry over: phone calls are typically short compared to face-to-face meetings and there is an expectation of relative exclusivity of attention during the call. Phone calls are generally initiated by a single person and, by the nature of the technology, are directed at a single other person (conference calls or speakerphone calls are niche cases). This opening of a private connection with a single other person, with a definite end point (which real-life conversations need not have), probably leads to more focused attention being given to them than if you were together in the same room.
There is also the issue of the difference in position between the monitor and the camera, which means it’s not really possible for two people to make actual eye contact, as opposed to merely looking at each other. If you look at the monitor rather than the camera you are looking at the other person, but because you’re not looking at the camera they think you are looking into the middle distance or elsewhere. This disconnect, caused by both people looking at the monitor rather than the camera, probably encourages more looking at the other person than one otherwise would, and contributes to a feeling of unnaturalness because of the disjointed perspectives.
Cost has certainly been a factor historically, at least for mobiles. Video calling has been a premium service, i.e. expensive, with often uncertain costs (mobile pricing used to be extremely complicated), which even the mobile networks have conceded has affected usage. However, this fails to account for why video calling hasn’t been widely used on PC instant-messenger clients such as MSN or Yahoo!, where it’s free.
It’s possible there is another, more mundane factor contributing to the lack of usage of video calls: the fact that it differs from the default of making a voice call. Although the technology has been around for a while, there are issues of interoperability. If you have a video phone or an instant-messenger client with a webcam, you need to ask the other person whether they also have the necessary equipment and whether they wish to initiate a video call (which they may not be technically proficient enough to set up). There can be awkwardness about this: because it’s new and not usual, and we’re not used to being seen while on the phone, the other person might think ‘why are they asking?’ There’s something vaguely voyeuristic or nosey about it. And of course there’s the hassle of figuring out how to use it.
Will Apple’s FaceTime change this?
FaceTime solves some of the problems of previous systems by building video calling into the device so that it feels more like a default. With the public’s love of everything Apple and the company’s slick marketing, it could help shift the perception of video calling (and there is always the contribution a new generation makes to pushing things forward). However, although I think this will lead to more video calling, most of the issues outlined above will remain drawbacks to making, or receiving, a video call, so I think take-up will still be slow and it will be many years before it is routinely used for calls.
I just returned from Geneva via Thomson Air, and a message on the wrapper of their delectable food caught my eye: ‘Use by end: 21/03/11’. It reminded me of the site littlebigdetails, as a sort of inverse of it, and I wondered what previous passengers might have ‘used’ it for.