Mobile Usability: How Nokia Changed the Face of the Mobile Phone

Service trials were run from March to July 1998. The setup was arduous: it included the installation of client/server systems, the selection and training of end users, the definition and implementation of the service provisioning chain, the issuing of equipment to end users, and the planning of questionnaires, statistical methods, and help desk services. Several company-internal pretrials were therefore conducted to fine-tune the process.

The trials were organized as a series of test periods, each about two weeks long. The total number of participating end users was 70. The number was to be as large as possible for statistical reliability, but it was limited in practice by how many people could be managed during the trials, and 70 was judged sufficient for our purpose. Since the number of users was limited, and segmentation by country was already a given, we agreed to focus on one user profile instead of delving into user segmentation by gender, age, profession, and other criteria. In line with the business context of the services to be tested, the following target profile of an end user was defined:

Internet experience and GSM usage were also part of the selection criteria. Thus, the whole end-user view was very business-focused, reflecting the developers’ expectations of the likely order of adoption for mobile multimedia services.
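As a rough illustration of what a panel of this size means statistically (a sketch, not a calculation from the original study), the standard normal-approximation margin of error for a survey proportion shows why 70 users supports only broad-strokes conclusions:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a proportion
    estimated from n respondents (z = 1.96 gives ~95% confidence)."""
    return z * math.sqrt(p * (1.0 - p) / n)

# With 70 participants, a questionnaire result near 50% carries
# roughly a +/-12 percentage-point margin of error at 95% confidence.
print(f"+/-{margin_of_error(70):.1%}")  # ~ +/-11.7%
```

At that sample size only fairly coarse differences in acceptability ratings are detectable, which is consistent with the project’s aim of assessing services in broad strokes rather than in deep detail.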

Typically, the first series of precommercial equipment and services has not gone through field testing, and its instability is aggravated by the varying and demanding conditions of the mobile environment. A friendly-user test approach was therefore chosen to start with: users were selected from people already experienced with the Internet, mobile phones, and service usage. They were thus better prepared to cope with technically demanding situations (e.g., by contacting the help desk) rather than quitting the trial in despair. In the first phases of the trials, the subjects were personnel of the participating mobile operators; company-external users were included as test subjects in later phases.

We applied user-centered design approaches to improve the system quality as experienced by the end user. A number of design methods were chosen to support the development in various design, evaluation, and user support tasks. Focus group discussions were conducted in the early development phases to provide an indication of users’ service preferences. Around 40 to 50 rough service ideas were then generated in several brainstorming sessions. Storyboards were used for more detailed service concept design after the core ideas had been identified. Content providers developed use cases to help us understand the users’ tasks on a step-by-step level. Low-fidelity prototypes were constructed to evaluate early UI versions for service access, and usability tests were carried out to assess the solutions.

In addition to designing user interface solutions, we developed usability evaluation criteria to suit the specific needs of the project. Examples of these criteria were user interface self-descriptiveness and feedback, controllability, and flexibility; efficiency, safety (mental, physical, and property), error-freeness, and error recovery; clear exits; and functional consistency. However, the field tests in MOMENTS were not intended primarily for optimizing the usability of a new device, but for assessing the overall acceptability of novel technologies and the new services provided through them.

The user’s perception of the quality and acceptability of a mobile service is always influenced by a multitude of factors. First, there are the mobile client, the applications running on it, the wireless connection to a server, the service presentation, and the content. There are also the actual performance of the system and the expectations the users have of it.

Expectations about brand-new technologies are influenced by another set of issues. Personal experience with similar or comparable technologies forms an obvious base of reference. (In the case of wireless services, the user’s experience with the wired Internet was a natural and relevant comparison.) Personal needs, whether professional or private, vary between individuals, setting up different criteria for acceptability. The type of service itself has some effect on how it will be evaluated; information services, for example, are assessed on a different basis from entertainment services. Finally, there are market-driven expectations about services, which are conditioned by their pricing and the reputation of the provider. We needed a conceptual model to link all these factors affecting service quality (see Figure 10.6).

Figure 10.6: Quality evaluation model of MOMENTS services.9

At this stage we had no way of knowing which quality dimensions would be the most influential, so the project needed to address a wide scope of customer experiences. The following list of evaluation criteria provides some idea of the complexity of assessing mobile service quality.

Our challenge was made all the more difficult because we didn’t know whether quality criteria would vary across different services and test sites.

The evaluation pack for MOMENTS was designed to give us a good idea of the acceptability of the services with reference to these criteria in broad strokes, rather than going into deep detail on any individual issue. Usability data in all three trial countries (Italy, Germany, and the United Kingdom) were gathered using pre-trial and post-trial questionnaires, face-to-face interviews, and questionnaires filled out by help desk personnel. The effective use of these approaches called for commitment from all the partners in the project. Since MOMENTS was a large international project with multiple partners at different sites, the responsibility for usability assessment was spread over several locations and development teams. Nokia prepared the usability and quality evaluation guidelines for the project, and the operator partners carried out the fieldwork, applying the guidelines to suit local conditions (e.g., operator-specific requirements). The original objective was to gather comparable results from all participating countries, but it turned out that the evaluation approaches had to be localized just like the services themselves.
