Monday, April 16, 2012

Duality by ART+COM


“Duality” is a reactive environmental installation in the city center of Tokyo, created by the Berlin-based media designers at ART+COM. The artwork was realised in January 2007 and is a permanent installation. The boundary between a walkway and an adjacent artificial pond was chosen as the location for the work. This interface between “liquid” (water) and “solid” (land) was used thematically and augmented by the question of “real” (water ripples) versus “virtual” (artificial light waves). Passersby trigger the installation, which plays on the interplay between solid and liquid, virtual and real, light and water:



Their footsteps generate virtual waves that transform into real water waves in the pond. Intended as a playful moment to enrich the commute, or to surprise the unsuspecting, the installation proposes a different way of integrating media into public space. The installation is located outside an office building complex in central Tokyo, which is linked to a highly frequented subway station. The objective of the artwork was to evoke a stronger identification of commuters and accidental visitors with the place. By using a translucent glass floor to diffuse a monochromatic LED matrix, the ART+COM designers defined a unique aesthetic, different from standard displays. By making the installation interactive, reacting to passersby's footsteps, they challenged expectations of how public displays behave. They took a step further by extending the waves as physical motion in the adjacent pond. The original concept was inspired by the dual nature of light, the so-called "wave-particle duality," but through the development process, the immediate playfulness and the subversion of expectations became at least equally important to the final realisation.
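To make the footstep-to-wave idea concrete, here is a minimal sketch of how a single footstep event could seed an expanding ripple on a monochrome LED matrix. The grid size, wave speed and decay values are invented for illustration and have nothing to do with ART+COM's actual software.

```python
# A minimal sketch (not ART+COM's software) of a footstep seeding an
# expanding ripple on a monochrome LED matrix. All parameters are made up.
import math

WIDTH, HEIGHT = 40, 8          # hypothetical LED matrix dimensions
WAVE_SPEED = 3.0               # cells the ring expands per frame
RING_WIDTH = 1.5               # thickness of the lit ring
DECAY = 0.92                   # brightness lost per frame

def render_ripple(origin, age):
    """Return a WIDTH x HEIGHT grid of LED brightness (0..1) for one ripple."""
    ox, oy = origin
    radius = age * WAVE_SPEED
    amplitude = DECAY ** age
    frame = [[0.0] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            dist = math.hypot(x - ox, y - oy)
            if abs(dist - radius) < RING_WIDTH:   # light LEDs on the ring
                frame[y][x] = amplitude
    return frame

# A footstep sensor event at cell (10, 4); render a few frames of the wave.
for age in range(4):
    frame = render_ripple((10, 4), age)
    print("".join("#" if v > 0.3 else "." for v in frame[4]))
```

A real installation would blend many simultaneous ripples and hand the motion over to the pond, but the core idea is just an expanding, fading ring triggered by a sensor event.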


The installation aims at creating an identity for the space. Pedestrians become aware of a space that they would usually cross without paying much attention. It’s a beautiful example of how spaces can adapt to the people within them. Most such concepts so far have had an impact on the visceral and behavioral levels of processing (Emotional Design: Why We Love (or Hate) Everyday Things – Donald Norman). Think about changes in temperature or lighting to make someone feel more comfortable. Duality has an impact on the reflective level. It gives people a moment of contemplation when they don’t expect it.

New research on emotion and cognition has shown that attractive things really do work better. In recent years, the design community has focused on making products easier to use, but design experts have vastly underestimated the role of emotion in our experience of everyday objects. Emotional Design analyzes the profound influence of this deceptively simple idea.

Donald Norman lists the following 3 levels of design based on emotion:
  • Visceral Design (evolutionary responses)
  • Behavioral Design (bodily activity)
  • Reflective Design (mental activity)
Duality probably operates at the behavioral design level, detecting physiological changes through non-stylized body motion.

In the future, will inanimate objects respond to human emotions? Is it possible to create emotional robots?


Sunday, April 15, 2012

Gaming User Interface
In the past decade or so, most gamers would agree that the gaming experience has improved vastly. Not only in terms of graphics or gameplay: the gaming interface has seen big changes as well. I'm sure many have seen, or at least tried playing, one of these before:
Nintendo Game Boy
Nintendo Game Boy Color
Nintendo Game Boy Advance
Notice how much the Game Boy series changed over time: from a non-colour device to a coloured device, to a differently shaped device. The change in shape caters for a better grip, while the addition of colour is straightforward.
Sony Playstation
Sony Playstation 2
Sony Playstation 3
As you can see, the controller design remains largely the same, a sign of consistency. The difference is that the latest instalment of the PlayStation series has a wireless controller for the convenience of users. Speaking of convenience, the Xbox Kinect doesn't even require a controller.



It works through hand gestures and voice recognition, which makes interacting with the interface much easier for users. Slick? Absolutely. Judging from the changes so far, one would not be surprised if the next generation of gaming interfaces involved some form of brain-computer interface.

Sunday, April 8, 2012

Yet Another Bad Interface

On one of my recent visits to Tan Tock Seng Hospital, I noticed that there were electronic gates, like the ones we see at train stations, to restrict the number of visitors during visiting hours. Each patient is restricted to at most 4 visitors. Hence, to ensure that no more than 4 visitors are at a patient's ward at any time, visitors first have to register using a touch-screen computer.

Problems
The user will first have to scan their NRIC using the bar code scanner. Being a user myself, I noticed many people were having trouble scanning their NRIC. Firstly, it was not clear how and where we should position our identity cards under the scanner. Although there was a picture indicating how it should be done, the majority of first timers still failed the NRIC scanning process.

Next, the user has to input the ward number, bed number and patient's name. This was done using a touch screen keyboard on the screen as seen below:


The 3 input fields were clear and concise; there were even examples beside each text field. However, the main problem here is the keyboard. Notice that it is not the normal keyboard that we usually use at home: this touch-screen keyboard is arranged in alphabetical order. My family and I had trouble typing the patient's name into the text field. Coupled with the scanning problems, the time taken to register was simply not ideal. If every user stays at the machine that long, there would be unwanted long queues just to register.

Thoughts & Reflections
I was amazed that even hospitals are starting to use technology to tackle their problems. However, more can be done to improve the current human-computer interface. The scanning problem could be eased by playing a short video clip demonstrating how to scan the NRIC. The touch-screen keyboard just has to be replaced by the normal QWERTY layout most people already use, ideally defined once and shared by every kiosk, as sketched below.
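If the hospital wanted to act on that suggestion, the fix is mostly a matter of sharing one layout definition across kiosks. Here is a small, purely hypothetical sketch of what that shared definition could look like; the rendering code and the hospital's actual software are unknown and not shown.

```python
# A hypothetical shared on-screen keyboard layout in familiar QWERTY order.
# Every kiosk would load these same rows instead of inventing its own order.
QWERTY_ROWS = [
    list("QWERTYUIOP"),
    list("ASDFGHJKL"),
    list("ZXCVBNM"),
]

def render_keys(rows):
    """Return the key labels in display order for a touch-screen keyboard."""
    return [key for row in rows for key in row]

print(render_keys(QWERTY_ROWS))
```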

When I was in the ward, I noticed another touch-screen interface right next to the patient's bed. As it is meant for the nurses and doctors to use, I had no idea what it was for. However, something caught my eye: the keyboard on the screen. It was completely different from the one I saw earlier, as seen below:


The first thing that came to my mind was: where is the consistency? It makes no sense to have one keyboard on one machine and a different keyboard on another machine in the same hospital. Perhaps they were machines made by different companies? Perhaps the hospital feels that visitors who do not use computers will find it easier to key in names on a keyboard arranged in alphabetical order? The list of questions goes on and on...

Monday, April 2, 2012

CLI versus GUI

What is CLI?
CLI is short for Command Line Interface. It is an interface, or dialogue, between the user and a program, or between two programs, where a line of text (a command line) is passed between the two. Commands may also be stored for reuse, for example by the graphical shell or in files such as the Windows registry or the OS/2 os2user.ini file. A CLI is used whenever a large vocabulary of commands or queries, coupled with a wide (or arbitrary) range of options, can be entered more rapidly as text than with a pure GUI. This is typically the case with operating system command shells. CLIs are also used by systems with insufficient resources to support a graphical user interface.

Screenshot of a sample Bash session


What is GUI?
GUI is short for graphical user interface. It is a type of user interface that allows users to interact with electronic devices through images rather than text commands. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation (visual cues which are not part of the formal notation, such as position, indentation, colour and symmetry), as opposed to text-based interfaces, typed command labels or text navigation. GUIs have greatly benefited from the concept of direct manipulation.

Screenshot of a sample GUI system


CLI versus GUI
Many experts claim that the CLI is much faster and easier to use than the GUI, and just as many claim otherwise. Below is a comparison of the two types of interface:
1. Ease:
  • CLI - New users find it a lot more difficult due to the need for familiarity and memorization of the commands
  • GUI - Although new users may find it difficult to navigate using the mouse in the initial stages, it is found that the users pick this up a lot faster
2. Control:
  • CLI - Users have much more control over their file system and operating system
  • GUI - Advanced or experienced users who need to perform a specific task often have to resort to the CLI because the GUI offers only limited control
3. Multitasking:
  • CLI - Capable of multitasking, but do not offer the same ease and ability to view multiple things at once on one screen.
  • GUI - The concept of having windows allows users to easily view, control and manipulate multiple things at once and is usually much faster than CLI
4. Speed:
  • CLI - Due to the input being limited to only a keyboard and a minimal set of commands, an advanced CLI system will essentially get a specific task completed faster than an advanced GUI system
  • GUI - Using a mouse and keyboard to navigate through several steps to control the operating system for many things is going to be much slower
5. Resources:
  • CLI - A computer that uses only CLI takes up much less resources
  • GUI - Requires a lot more system resources because of each of the elements that need to be loaded such as icons, fonts etc. In addition, video drivers, mouse drivers and other drivers that need to be loaded will also take up system resources
6. Scripting:
  • CLI - The user can easily script a sequence of commands to perform a task or execute a program (see the sketch after this comparison)
  • GUI - Enables a user to create shortcuts, tasks or other similar actions to complete a task or run a program, but this does not come close to what the CLI offers.
7. Remote Access:
  • CLI - Often when accessing another computer or networking device over a network, a user will only be able to manipulate the device or its files using CLI or other text only manipulation
  • GUI - Although remote graphical access is becoming popular and is possible, not all computers and especially network equipment have this ability.
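To illustrate the scripting point above (item 6), here is a minimal sketch of automating a short command sequence from a script rather than clicking through a GUI. It assumes a Unix-like system where `tar` and `ls` exist; the file and folder names are placeholders.

```python
# A minimal sketch of scripting a command sequence: archive a folder, then
# list the result. Assumes a Unix-like system; paths are placeholders.
import subprocess

commands = [
    ["tar", "-czf", "backup.tar.gz", "notes/"],   # archive a folder
    ["ls", "-lh", "backup.tar.gz"],               # confirm the archive exists
]

for cmd in commands:
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout, end="")
    if result.returncode != 0:
        print(result.stderr, end="")
        break   # stop the sequence on the first failing command
```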

In spite of its many merits, the CLI has to this day been sidelined by the GUI. However, the CLI still has much to offer, and many of its benefits simply cannot be emulated or replaced by graphical equivalents. The CLI and GUI have come to co-exist and can be used together to complete tasks with varying requirements.


Sunday, April 1, 2012

Muscle Computer Interface

MUCI


A combined effort between Microsoft Research, the University of Washington and the University of Toronto has made interacting with computers using nothing but your muscles a reality. In 2008, the researchers unveiled their muscle-computer interface, abbreviated MUCI. The hardware component of MUCI consists of an armband that the user attaches to their forearm. The armband uses six electromyography (EMG) sensors and two ground electrodes arranged in a ring around the upper right forearm for sensing finger movement, and two sensors on the upper left forearm for recognizing hand squeezes.

Example of a gesture recognized by MUCI

MUCI allows users to interact with computers and other devices without requiring the use of their hands. Though there are alternative hands-free interaction systems such as voice control and camera-based systems, these are vulnerable to inaccuracy and raise privacy issues.

There are existing products, such as prosthetics, that rely on detecting muscle activity, but MUCI aims to bring this approach to general-purpose computer interaction. Unlike the electrodes used with prosthetics, MUCI's electrodes do not have to be placed at an exact position on the arm. After the user slips the armband on, MUCI's software runs a set of calibration exercises to recognize the position of the electrodes and to understand the user's movements. The calibration relies on machine learning algorithms that improve in accuracy over time. The algorithms use three main components of the data from the electrodes: the magnitude of muscle activity, the rate of muscle activity, and the wave-like patterns that occur across sensors. These three components provide sufficient data to discern the type of muscle movement that the user is exerting. Preliminary testing on 10 subjects revealed that, after calibration, the system recognizes movement of all 10 fingers with accuracy rates of up to 95%.
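The researchers' exact pipeline isn't described here, but the three signal components above map naturally onto a simple feature vector. The sketch below is a rough illustration with synthetic data and an off-the-shelf classifier, not the MUCI team's code.

```python
# A rough sketch of EMG features like those described above: per-channel
# magnitude, rate of change, and cross-channel structure, fed to a standard
# classifier. The 6-channel windows and finger labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def emg_features(window):
    """window: (n_samples, n_channels) array of EMG voltages."""
    magnitude = np.sqrt((window ** 2).mean(axis=0))          # RMS per channel
    rate = np.abs(np.diff(window, axis=0)).mean(axis=0)      # activity rate
    cross = np.corrcoef(window.T)[np.triu_indices(window.shape[1], k=1)]
    return np.concatenate([magnitude, rate, cross])          # one feature vector

rng = np.random.default_rng(0)
# Pretend calibration data: 40 windows of 6-channel EMG, labelled by finger.
X = np.array([emg_features(rng.normal(size=(64, 6))) for _ in range(40)])
y = rng.integers(0, 5, size=40)                              # 5 finger classes

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted finger:", clf.predict(X[:1]))
```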

Potential Applications
There are a number of potential applications for this technology, including the following:
  • Opening car trunk with groceries in hand - When holding grocery bags in both hands, it can be extremely difficult to access the car keys and open the trunk. MUCI can alleviate this problem by allowing the user to open their trunk by completing a simple gesture such as touching two fingers. 
  • Controlling an MP3 player while jogging - It can be awkward and time consuming for a user to take an MP3 player out of their pocket and change the song, increase the volume, etc. These actions can often force the user to stop jogging and stand stationary, something that is undesired. MUCI can allow the user to easily control their MP3 player while remaining in motion. 
  • Accepting/ending phone calls when driving - Having to reach for a phone and fumble for the small accept button is an inconvenient and dangerous task while driving. MUCI can allow the user to accept and end calls without lifting their hands off the steering wheel.
  • Playing video games such as Guitar Hero - As demonstrated in the video below, MUCI also has entertainment applications. Users could use MUCI as a controller in games such as Guitar Hero, where their actions on an imaginary air guitar would be interpreted by the system. 

It should be noted that muscle computer interfaces are still very much in the research phase. Researchers are testing how well they work in real world scenarios, such as when people walk and run while wearing it. Future plans include creating arm bands that are easier to wear and that can be camouflaged as jewelry or an article of clothing. 

Sources
http://www.newscientist.com/article/dn13770-hightech-armband-puts-your-fingers-in-control.html
http://www.technologyreview.com/computing/23813/page1/
http://www.popsci.com/technology/article/2009-10/muscle-based-interface-lets-you-literally-point-and-click-no-mouse-required

Thursday, March 29, 2012

Tangible User Interfaces

What are Tangible User Interfaces?
Tangible user interfaces are user interfaces that allow a user to interact with digital information using physical objects. They have four defining characteristics:

1. Physical representations are computationally coupled to underlying digital information

2. Physical representations embody mechanisms for interactive control

3. Physical representations are perceptually coupled to actively mediated digital representations

4. Physical state of tangibles embodies key aspects of the digital state of a system.

Examples of Tangible User Interfaces

Computer mouse
Although we use a computer mouse every day, many of us do not realise that this device is actually an example of a tangible user interface. The user drags the mouse on a flat surface to move the pointer on the computer screen. The direct relationship between the movement of the mouse and that of the pointer allows the user to operate the computer easily.



Microsoft Surface
Microsoft Surface is a system that is designed to look like a table and has a multi-touch display which allows many users to use it at the same time. It can detect objects placed on it and provides users with many functions to manipulate those objects, such as transferring photos between different devices. The video below shows how Microsoft Surface can be used.


Reactable
Reactable is a musical instrument designed for creating and performing music. It is a clear, glowing round table with pucks placed on its surface. Users can turn the pucks and connect them to other pucks to create music with different elements such as synthesizers, effects, sample loops and control elements. When a puck is placed on the surface, it lights up and interacts with the other pucks. Music becomes tangible with Reactable, as the user can see these interactions on the surface. The video below shows Reactable in use.



Tangible User Interface Alarm Clock (TUI-AC)
TUI-AC is an innovative alarm clock which consists of a ball and a pull-ring. The user sets the alarm by pulling the ring out of the ball and throwing the ball like a grenade. The pull-ring contains a sensor which measures the distance between the ball and itself, and the alarm is louder the further the ball is thrown from the pull-ring. When the alarm rings, the user needs to get out of bed, find the ball and insert the ring back into it in order to switch off the alarm. TUI-AC is very useful for people who have difficulty waking up in the morning.
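As a toy illustration of the behaviour just described, the sketch below maps throw distance to alarm volume and keeps the alarm ringing until the ring is back in the ball. The distances and the volume curve are invented, not taken from the actual product.

```python
# A toy sketch of the TUI-AC behaviour: volume scales with throw distance,
# and the alarm only stops once the ring is reinserted. Values are made up.
MAX_DISTANCE_M = 5.0

def alarm_volume(distance_m):
    """Louder the further the ball was thrown, capped at full volume."""
    return min(distance_m / MAX_DISTANCE_M, 1.0)

def alarm_active(ring_inserted):
    return not ring_inserted

print("volume:", alarm_volume(3.2))           # thrown across the room
print("still ringing?", alarm_active(False))  # user hasn't found the ball yet
print("still ringing?", alarm_active(True))   # ring back in the ball: silence
```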



Monday, March 26, 2012

Understanding Brain Computer Interface

Have you ever wanted to communicate with others or move objects using only your mind, just like characters in the movies? Wouldn't it be great to be able to do that? This is now becoming a reality with the development of the Brain Computer Interface (BCI). A BCI is defined as a system of interaction between the brain and a device.

How does BCI work?
A set of electrodes forming an electroencephalograph (EEG) is attached to the scalp, or implanted onto a specific brain surface to receive a stronger and more accurate signal. The EEG measures differences in voltage between brain cells. This signal is then amplified, filtered and read by software. A BCI can also work in reverse, providing input to the brain: a signal such as video is converted into voltages that are sent through the electrodes to activate brain cells, and the person then perceives the video.
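The "amplified, filtered and read by software" step can be illustrated with a few lines of signal processing. The sketch below band-pass filters a synthetic EEG-like signal with SciPy; the sampling rate, band edges and test signal are assumptions for illustration, not the specification of any real BCI.

```python
# A minimal sketch of the filtering step: keep only a band of interest from a
# noisy EEG-like channel. Sampling rate, band and signal are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256.0                       # assumed sampling rate in Hz
LOW, HIGH = 8.0, 12.0            # e.g. the alpha band

def bandpass(signal, low=LOW, high=HIGH, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

t = np.arange(0, 2, 1 / FS)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise
clean = bandpass(raw)
print("raw power:", np.var(raw), "filtered power:", np.var(clean))
```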

Application of BCI
One application of BCI is entertainment. A user's mind can act as a controller for a video game or replace a remote control for changing television channels. One example is a toy called the "Star Wars Force Trainer". It uses a headset to detect a concentration signal from the user's brain. When the user concentrates, the headset picks up the signal and transmits it to a microchip, which switches on a fan and lifts a ball inside a clear tube.
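The Force Trainer's behaviour essentially boils down to a threshold on the concentration score. Here is a toy sketch of that logic; the scores, threshold and fan mapping are invented, not the toy's real firmware.

```python
# A toy sketch: the fan runs only while the headset's "concentration" score
# stays above a threshold. All numbers here are made up for illustration.
THRESHOLD = 0.6

def fan_speed(concentration):
    """Map a 0..1 concentration score to a fan duty cycle."""
    if concentration < THRESHOLD:
        return 0.0                       # ball stays at the bottom of the tube
    return (concentration - THRESHOLD) / (1.0 - THRESHOLD)

for score in (0.2, 0.55, 0.7, 0.95):
    print(f"concentration={score:.2f} -> fan duty {fan_speed(score):.2f}")
```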



Another application of BCI is in devices that help disabled people live more independently. A disabled person can use his mind to control such a device and overcome his physical difficulties. He first visualises an action while wearing a headset so that the software can learn his brain signals. After a few tries, the user thinks about the action to transmit the brain signals to the device, which reads the signal and executes the action. Examples of such devices are a robotic arm and a mouse cursor.

Here is a video of Tan Le, co-founder and president of Emotiv Systems, showing how a user controls the computer with his mind using BCI.


Limitation of BCI
There are still challenges when implementing BCI.

1) Complexity of the brain
The electrical signals from the brain do not fully determine a person's thoughts and actions. There are also chemical processes that the EEG cannot read.

2) Weak signals received by the EEG
These brain signals are so weak that they are easily interfered with by signals generated by other actions.

3) Inconvenience of BCI equipment
Some BCIs need a wired connection to their equipment. Although there are wireless BCIs, they still require the user to carry a computer around. 

These challenges can be overcome with further research and development. The EEG can be improved to pick up better brain signals, and BCI equipment can become wireless and lighter. With these challenges overcome, I believe this technology will make our lives more convenient in future.


Saturday, March 17, 2012

User Reading Patterns

There are certain reading patterns that users follow when reading content on a web page - the most prominent of these is the "F shaped reading pattern". Research from Nielsen Norman Group's usability studies has revealed that the majority of users scan a web page in the shape of the letter F. The study involved 232 users, and this pattern holds across web pages and different content. There are three main components to this pattern:

  1. Users will read the top part of the page horizontally, forming the top bar of the F. 
  2. Users then skip some content and read horizontally again. This forms the middle bar of the F, and tends to be shorter than the top bar.
  3. Users then skim over the rest of the page vertically, creating the leg of the F. Depending on the user, this skimming can be either slow or fast.


Heatmaps of user viewing patterns. Red regions are where the user spent most of their time, followed by yellow and blue. 
It is easy to discern the F pattern in the above images. It is more discernible in the middle image than in the other two, since the F is a rough pattern that users tend to follow rather than a strict rule that everyone obeys. Nevertheless, it has important implications.
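For readers curious how such heatmaps are produced, the sketch below accumulates fixation durations into a coarse grid over the page. The fixation data here is a synthetic placeholder, not data from the Nielsen Norman study.

```python
# A small sketch of building a gaze heatmap: accumulate fixation durations
# into a coarse grid over the page. Page size and fixations are made up.
import numpy as np

PAGE_W, PAGE_H = 1024, 2048      # page size in pixels (assumed)
GRID_W, GRID_H = 32, 64          # heatmap resolution

# (x, y, duration_ms) fixations; real studies record thousands of these
fixations = [(100, 80, 400), (600, 90, 350), (120, 500, 300), (110, 900, 150)]

heat = np.zeros((GRID_H, GRID_W))
for x, y, dur in fixations:
    gx = min(int(x / PAGE_W * GRID_W), GRID_W - 1)
    gy = min(int(y / PAGE_H * GRID_H), GRID_H - 1)
    heat[gy, gx] += dur          # red regions = most accumulated time

print("hottest cell:", np.unravel_index(heat.argmax(), heat.shape))
```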

This study reveals that users do not read the majority of the content on a page. There are certain sections that they are most attentive to, and they merely skim the rest. Since they are most attentive to the content at the top of the page, the most important content should be placed there. If the page is a news article, the content at the top should be interesting and compelling enough to convince the reader to stay. On an e-commerce site, the most important content, such as payment information and pricing details, should be placed at the top. On a search site, the most relevant results should be placed at the top. Though users will likely skim content after the first two paragraphs, Nielsen suggests a way to draw users in while they are skimming: by starting paragraphs with information-carrying words, writers make users more likely to engage with the content.

Source: http://www.useit.com/alertbox/reading_pattern.html


Tuesday, March 6, 2012

Windows 8!

On 1st June 2011, at the D9 conference, Microsoft demonstrated the next generation of Windows for the first time, internally code-named Windows 8. And on 29th February 2012, Microsoft released the Consumer Preview of Windows 8. On the first day of its release, the Consumer Preview was allegedly downloaded more than one million times.

Here is a video of the process that the developers went through while building Windows 8.



It features a new Metro-style interface that is designed for touchscreen, keyboard, mouse and pen input. For the first time since Windows 95, the Windows Start button is no longer available, having been replaced by a sliding panel-based menu. The tile-based Start screen is similar to the Windows Phone operating system. Each tile on the screen represents one application and displays relevant information: for instance, an email app displays the number of unread messages and a weather app displays the temperature and humidity. The scalable, full-screen views of the apps are customisable.



Utmost care has been taken to ensure that the OS is now more flexible and customizable. This ensures a more personal experience for the user.
  • The inviting lock screen, which can be personalized to the user's heart's content
  • The log-in page has gotten a face lift
  • Simplified Control Panel, Task Manager, Windows Explorer and the onscreen Volume Bar
  • As the user begins to configure his Twitter, RSS feeds, Facebook, preferred weather location and other such things, the home screen grows more and more personal.


Due to the touchscreen capability introduced, the following features that are supported become more meaningful:
  • Fluid and natural switching between running apps
  • Ability to snap and re-size an app to fit the size of the screen. This facilitates multitasking using the capabilities of Windows.
  • Picture password, which allows users to log in by drawing three gestures in different places on a picture. This is in addition to a PIN login system that authenticates users with a four-digit PIN (a toy sketch of the gesture-matching idea follows this list)
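As a toy sketch, and certainly not Microsoft's actual algorithm, picture password can be thought of as checking whether each attempted gesture lands close enough to the corresponding enrolled one. The coordinates and tolerance below are invented for illustration.

```python
# A toy sketch of the picture-password idea: three enrolled tap points, and a
# login succeeds only if each attempt lands within a tolerance of its point.
import math

TOLERANCE = 25        # pixels (made-up value)

def matches(enrolled, attempt, tolerance=TOLERANCE):
    if len(enrolled) != len(attempt):
        return False
    return all(math.dist(a, b) <= tolerance for a, b in zip(enrolled, attempt))

enrolled_gestures = [(120, 340), (410, 95), (300, 520)]   # set up once
print(matches(enrolled_gestures, [(125, 332), (402, 101), (298, 530)]))  # True
print(matches(enrolled_gestures, [(10, 10), (402, 101), (298, 530)]))    # False
```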


The default font size for widgets such as the Control Panel has been increased compared to previous versions of Windows. Quite evidently, this is again to support the touchscreen functionality.

The Task Manager now looks a lot more personalized and is easier to understand than the one we currently see in other Windows versions.



Although the new interface is designed and optimized for touch, it works equally well with a mouse and keyboard. In adding new features, Microsoft has not compromised on existing ones. It also works on a wide range of screen sizes and pixel densities, from small slates to laptops, desktops, all-in-ones and even classroom-sized displays.

The apps use the power of HTML5 and standard JavaScript to tap into the capabilities of Windows. The apps can use a broad new set of libraries and controls designed for fluid interaction and connectivity. They can add new capabilities to Windows and to other apps, connecting to one another through the new interface.

There are quite a few limitations that seem to curb the usability:

  • The total revamp of the layout has left many users confused. It's not obvious how to shut down the PC or put it to sleep. The Escape key can no longer be used to exit programs; instead, it now serves to leave the Start screen and return to the app you were last using. To leave an app, you have to press the Start button, similar to tapping the home button on a mobile phone. 
  • The need to use a lot of shortcuts, despite them being limited and inconsistent. For example, Start+Tab can be pressed to toggle between open apps, but only between two of them

There's a lot more to come before the final version is released, probably in another half a year's time!

References:
1. "Previewing 'Windows 8'" by Julie Larson-Green
2. "That Windows 8 Experience? Confusing. Confusing as Hell" – The Guardian
3. "Windows 8 on a laptop: in-depth preview" by Dana Wollman


Monday, February 13, 2012

FoodX - Project Proposal

The Problem
  • The fridge often contains expired food, which can smell bad and worse, can cause health problems.
  • We buy food, but we have no good method to track its expiration date.
Our User
  • Our primary user is the person that does grocery shopping in the household. Generally, we expect our primary users to be mothers.
  • Secondary users are other people in the household: spouse, children, etc
Our Persona - Julie


Scenario
Julie is a mother of two children. She wants a way to track the expiry date of the items she purchases at the grocery store to prevent her children from getting food poisoning. She uses FoodX to automatically input expiration dates for the items she purchases at a grocery store. When any of the items expire, she is notified, so she can throw them out.

Existing Solutions
1. Expiry Tracker

  • The items on screen look cluttered.
  • The UI makes the process of entering an item look like a chore

2. ExpireTrack


  • We expect our user to use the app while in the kitchen, hence an online solution is not optimal
  • The interface is not mobile friendly

3. FridgePolice



  • This is by far the best existing solution
  • However, as seen from the reviews, the users are not quite happy with the UI

FoodX
FoodX is a mobile app that allows the users to easily input and track the expiration date of their grocery items.

Features of FoodX
  • Product recognition via barcode scan
  • Automatic reminders before expiry date
  • Items can be tagged with photos
  • Automatic food categorization and built in food expiry information
  • Search through existing items
  • Sort by expiry date and item type

Use Cases of FoodX
  • Scanning an item to automatically retrieve item type and expiration date
  • Manually inputting item
  • Manually inputting expiration date
  • Taking a picture of the item
  • Viewing expiration dates of existing items
  • Manually deleting items
  • Searching for items
  • Searching items by expiry date
  • Sorting items by item type (a minimal sketch of the underlying data model follows this list)
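As a minimal sketch of the data model these use cases imply, the code below gives each item an expiry date and supports reminders, search and sorting. The class and field names are our own placeholders, not a committed implementation.

```python
# A minimal sketch of the data model behind the use cases above. Names and
# thresholds are placeholders, not the app's actual design.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FoodItem:
    name: str
    category: str
    expiry: date

    def expires_within(self, days: int) -> bool:
        return self.expiry <= date.today() + timedelta(days=days)

pantry = [
    FoodItem("Milk", "Dairy", date.today() + timedelta(days=2)),
    FoodItem("Eggs", "Dairy", date.today() + timedelta(days=10)),
    FoodItem("Bread", "Bakery", date.today() + timedelta(days=1)),
]

# Use cases: reminders, search, and sorting by expiry date
reminders = [i.name for i in pantry if i.expires_within(3)]
found = [i.name for i in pantry if "milk" in i.name.lower()]
by_expiry = sorted(pantry, key=lambda i: i.expiry)

print("Expiring soon:", reminders)
print("Search 'milk':", found)
print("Soonest first:", [i.name for i in by_expiry])
```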

Prototype Wireframes: Design 1



Prototype Wireframes: Design 2


The Team
  • Kwa Jun Yong - Experience with Web Design and Coding. Enjoys UI
  • Aaaron Wong Jun Weng - Experience with Web Design
  • Sreshta Vijayaraghavan - Experience with game development and automation testing
  • Aravindh Dilli Dorai - Experience with creating wireframes and managing projects


Challenges
  • Creating a UI that is fun and friendly for mothers, yet functionally powerful.
  • Deciding between design prototypes
  • Steep learning curve for Flash5 and HTML (The team is not familiar with these technologies)

But we are excited to create an amazing app that lets mothers track the expiry dates of their foods!

Monday, February 6, 2012

Error Message Design

As everyday computer users, I'm sure each of us has received error messages at least once, if not many times already. An error message is essentially used to alert the user that a problem has occurred. A warning, on the other hand, alerts the user to a problem that is likely to occur in the near future. Error messages are often displayed using modal dialog boxes, in-place messages, notifications, or balloons. A good error message should entail the following (a small sketch of this structure follows the list):
  • Inform the user that a problem has occurred
  • Briefly describe the problem using terms understandable even to the IT noob
  • Provide the user with a solution 
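As promised above, here is a small sketch of the three-part structure in code form; the function name and the example wording are illustrative only, not taken from any real product.

```python
# A small sketch of the three-part structure: say what happened, explain it
# in plain language, and offer a way forward. Wording is illustrative.
def build_error_message(problem, plain_explanation, suggested_fix):
    return (
        f"{problem}\n"
        f"{plain_explanation}\n"
        f"What you can do: {suggested_fix}"
    )

print(build_error_message(
    problem="The file could not be saved.",
    plain_explanation="The disk this file lives on is full.",
    suggested_fix="Free up some space or save the file to another location.",
))
```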
Apart from the information presented, how we present it also plays a very important role. We may be presenting the user with all the useful information, but if the way we present it is not user-friendly, it can lead to discomfort and dissatisfaction. The 10 golden rules certainly have to be adhered to:
  • Visibility
  • Consistency
  • Familiarity
  • Affordance
  • Constraints
  • Navigation
  • Feedback
  • Recovery
  • Flexibility
  • Style
This aside, the following are the basic guidelines to follow. Examples have been provided wherever necessary. 
  • Avoid error messages if possible - Introduce constraints wherever possible, thereby reducing the chances of going wrong. If the error can be rectified automatically, refrain from involving or interrupting the user. Also consider whether the problem is relevant to what the user is currently working on.

  • Explicit indication that something has gone wrong - Do not use misleading icons. For instance, don't use the warning icon to display an error; it may water down the perceived severity, and the user might be left confused. Also, do not just tell the user that there is an error: try as much as possible to be specific, even if the exact cause is unknown.


Compare the above unknown-error dialogue to the one below. It clearly goes to show that even if the problem is unknown, the message can still point the user to other sources that may be of help.

  • Human-readable language - We need to keep in mind that non-IT professionals use the system as well. When we explain the reason behind an error, we need to be as non-technical as possible. Experts within the field may be given the option to view such details, but at the outset it's best we don't baffle the user.

  • Polite phrasing - without blaming the user or implying that the user is stupid or doing something wrong
Do not blame the user as in the example above. Use the passive voice instead, as in the one below.

  • Precise descriptions of exact problems - Do not over-communicate to the user; it results in nothing more than confusion.
The above error message is good, but it tries to provide the user with all the information at one go. Compare this with the one below, which essentially addresses the same problem.

  • Constructive advice on how to fix the problem. It's not enough to only highlight the problem. A workable solution is to be provided as well.

We've encountered this problem innumerable times! Sending or not sending an error report makes no difference: it closes the application anyway and sometimes leaves the system hanging. When the suggestion is of no use, it might as well not be provided.

  • Visible and highly noticeable - both in terms of message itself and how it indicates which dialogue element users must repair
  • Reduce the work of correcting the error (e.g. list of possibilities)
  • Hypertext links may be used to connect a concise error message to a page with additional background information or explanation of the problem.

  • Avoid sound alerts - It's best not to have jarring sounds accompanying the dialogue boxes. Again, this depends on the environment in which the user is working; in an already chaotic environment, this feature would be of no use. Only when the error is extremely critical and needs immediate user attention should we employ it.

Saturday, January 28, 2012

An example of a bad GUI

A graphical user interface is a very important aspect of computing these days. It can be found on smartphones, computers and many other electronic devices. Many users rely on the graphical user interface to use a product efficiently and conveniently. Hence, it can make or break a product. Take, for example, the following graphical user interface:


At first glance, one could not tell what it is used for. Moreover, don't you think that seeing this would make you feel like this:

What went wrong???

What went wrong exactly?
  • Overall presentation. It is too messy: the user might not know where to start or where to look. Basically, there is no proper layout at all.
  • The use of colours. Too many colours are being used. Colour can be a very useful tool in graphical user interface design, but in this case it has been overused.
  • Functionality over usability. This is a common flaw in many graphical user interfaces: it crams too many functions onto one page.

Friday, January 27, 2012

Principles of Designing Quality Navigation

Quality navigation is a very important criterion for a website to be successful and should not be underestimated. A website may have a lot of content, and a good navigation system allows the user to find the content they want quickly. However, a website with a bad navigation system will most likely leave the user lost or unable to find the content they want, and the user will not hesitate to abandon such a website.

Hence, I would like to share what I have learnt about the principles to consider when designing good navigation for a website. The principles are as follows:

Provide a variety of navigation options
Different users prefer to navigate around a website in different ways, so the website should provide a variety of navigation options. An example of a website that applies this principle is Google, which allows the user to search in many categories.


Let users know where they are
Good navigation should indicate clearly and unambiguously which page the user is on. The website should have a title on every page to show the user which page they are currently on, and the link for the current page should look different from the other links in the navigation. 

An example of a website that applies this principle is Threadless, which uses blue to indicate which category the user is currently in.


Let users know where they have been
Good navigation allows users to know which pages they have already visited on the website. This can be done by making the most of hypertext for navigation, as hypertext uses default colours such as blue for unvisited links and purple for visited links. 

Let users know where they are going
Good navigation should let users know where they are going. There are several ways to ensure this: 
  • Inform the user in advance if a link leads to a non-HTML page such as a PDF or Microsoft Word document, as they usually expect links to lead to HTML pages
  • Insert ALT text to tell the user when the navigation uses an image to link to the homepage
An example of a website that applies this principle is Wikipedia, which adds ALT text to its logo to tell the user that it links to the homepage.



Be consistent
Users depend on navigation when they are lost on a website. Inconsistent navigation will only make them more lost. A navigation design should be consistent in its classification, graphics and hypertext colours.

Follow web convention
Users prefer to apply the navigation skills they have picked up on one website to other websites, as this makes life easier for them. Good navigation should follow web conventions rather than avoid them in order to be unique, because the user is sure to get lost on a website that does not follow them. Some examples of web conventions are as follows:
  • "Home" is used for the name of the website homepage
  • "Contact" is used to navigate to the page that contain details such as address, email and telephone


Thursday, January 26, 2012

Iterating on the User Interface

Introduction
In the summer of 2011, I interned as a program manager within Windows at Microsoft. Program managers play an interesting and unique role at Microsoft: they are the backbone of a team that includes themselves, developers and testers. PMs define the product and, at the end of the day, are the individuals responsible for ensuring that the team carries it through. The most important role of the PM, in the context of this post, is to form a bridge between the users (customers) and the developers. As such, they play an important role in adding a human touch to the software and ensuring that its usability is optimal.

My project over the summer was to develop a modern app for Windows 8. As revealed by the announcements at the BUILD conference in September 2011, Windows 8 will have a built in, cross platform app store. Apps are compatible with laptops, desktops, and tablets; design guidelines recommend that apps are designed for a touch first environment. After a few days of brainstorming, my team decided to build a to-do list app. As the PM, I was responsible for designing the user interface for the app.

The Initial UI
The first iteration of the user interface was inefficient, to say the least. The interface was designed with a "features first" attitude, something that I soon learned was not the correct approach when designing for people. When putting features before usability, it's easy to forget the human factor. The features took control, and I found myself designing the UI to accommodate the various functionalities. The result was a UI that required four user actions before the user could begin entering a new task. This may not seem like much, but entering a new task is probably the single most important feature of the app, and the one that will occur most frequently. With that in mind, forcing the user to jump through four hoops before allowing them to start typing is a terrible idea from a usability and efficiency standpoint.

Gathering Feedback
Upon presenting the UI to our supervisors, I quickly realized my mistake of putting features before usability. No matter how powerful and numerous the features, they are useless if not presented in an interface that enables the user to leverage them efficiently. This forms the backbone of the belief that interfaces should be designed with a "user first" attitude. Every UI element should be designed around helping the user accomplish their goal.

Another key point of feedback was to be great at one or two things rather than offering a mediocre product with every possible functionality. I decided to narrow the focus of our app - it went from a general to-do list app to a location based tasks app. Users create tasks and associate them with a location, and the app uses GPS data to trigger reminders. By narrowing the focus, we were able to simplify the UI by removing all elements that didn't help with the goal of creating location based tasks.

When it comes to consumer applications, and particularly those that are touch based, minimalism is a winner. Users do not want to deal with complexity - the faster they can get in and out of the app, the more likely they are to be satisfied. Keeping the user scenario in mind helps weed out those elements that aren't necessary and those that help the user achieve their goal.

Gathering feedback is an interesting process when it comes to UI - everyone will have their own opinion, and strong reasons for it. It's impossible to make everyone happy. I find that the best strategy is to focus on the points that were repeated by multiple individuals. If a number of people found a problem with an aspect of the UI, there is likely a way to improve it.

The Final UI
The end result looked nothing like my original UI prototype. In total, I think I went through around eight iterations. With each iteration, the usability of the UI progressively improved. Feedback on the final product was overwhelmingly positive, and users were happy with the experience.

I'm confident that if I had spent more time, I could have improved it further. As a UI designer (amateur as I am), it's difficult to ever be completely satisfied with the work, knowing that there are still a thousand ways to make it better. But it's a time-quality tradeoff. The product has to be released at some point, and the ability to recognize when the usability of the UI is acceptable is essential.