Hypertext / Hypermedia
Cliff McKnight, Andrew Dillon and John Richardson
This item is not the definitive copy. Please use the following citation when referencing this material: McKnight, C., Dillon, A. and Richardson, J. (1992) Hypermedia. In A. Kent (Ed.) Encyclopedia of Library and Information Science, Vol. 50, New York: Marcel Dekker, 226-255.
The field of hypertext/hypermedia has mushroomed so much in the last five years that an article such as this cannot hope to be all-embracing. Rather, what we will do is provide a perspective on hypertext/hypermedia while offering guidance to the published literature. The perspective we give is essentially user-centred since we believe that ultimately it is user issues which will determine the success or failure of any technology.
We begin with a brief introduction and history then draw together some of the relevant research which has a bearing on hypertext/hypermedia usability. Some of this research has been conducted specifically in the field of hypertext but some general human-computer interaction research also needs to be considered. We look briefly at some of the issues involved in creating hypertexts and also at some of the claims made for hypertext. Finally, we attempt to see what the future holds for hypertext and offer a list of further reading.
What is hypertext, what is hypermedia?
Whatever hypertext is, one gets the impression that it is an idea whose time has come. [Conklin, 1987a]
In simple terms, hypertext consists of nodes or chunks of information and links between them. Stated in this way, it might seem easy to find examples of hypertext: any text which references another can be seen as two nodes of information with the reference forming the link; any text which uses footnotes can be seen as containing nodes of information (the text and the footnote) with the footnote marker providing the link or pointer from one node to the other. As we shall see later, the idea of a node is very general, and there are no rules about how big a node should be or what it should contain. Similarly, there are no rules governing what gets linked to what.
What makes hypertext different, what sets it apart from the most conceptually inter-linked paper document, is that in hypertext the links are active, they are hot spots that, when touched, cause something to happen. When the reader selects a hypertext link, the movement between the two nodes takes place automatically.
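As an illustration only (none of the systems discussed in this article is implemented this way), the node-and-link model can be sketched as a simple data structure: nodes hold content of any kind, and each link is an active hot spot which, when selected, moves the reader automatically to the destination node. All names and contents below are invented for the purpose of the sketch.

```python
# A minimal, hypothetical sketch of the node-and-link model.

class Node:
    def __init__(self, content):
        self.content = content      # text, an image, a sound clip...
        self.links = {}             # anchor (hot spot) -> destination Node

    def link(self, anchor, destination):
        """Make 'anchor' an active hot spot leading to another node."""
        self.links[anchor] = destination

    def follow(self, anchor):
        """Selecting a hot spot moves the reader automatically."""
        return self.links[anchor]

text = Node("Mozart was born in Salzburg.")
footnote = Node("Salzburg: a city in Austria.")
text.link("Salzburg", footnote)

assert text.follow("Salzburg") is footnote
```

Note that nothing in the sketch constrains what a node contains or which nodes may be linked, reflecting the generality of the concepts described above.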
Conceptually at least, hypermedia is no different from hypertext. The term hypermedia is a more general term than hypertext and suggests that links exist to information held on different media, e.g., video, CD, and so forth. Both terms refer to a system of linked information. We will therefore use the term hypertext to refer to any document with such properties irrespective of the media which contain it. In the same way that the general term "a text" refers to documents which may also contain graphics, so we shall use "a hypertext" as a generic term to denote a document which may in fact be distributed across several media.
Like most overnight sensations, hypertext hasn't suddenly arrived. Several research groups in academic institutions and industrial concerns have been developing and using hypertext systems since the 1960s. What has developed rapidly in recent years is the ready availability of the enabling technology: the computer. The advent of hypertext has had to wait for the combination of processing power and visual display embodied in the modern computer. While it is certainly true that the ideas underlying hypertext have been around for many years, it is the vastly increased availability of computing power that has allowed the implementation, elaboration and exploration of these ideas.
As recently as 1964, the computing effort of the entire British academic community was handled by a single Ferranti Atlas machine housed in the Science and Engineering Research Council's Rutherford Appleton Laboratory. These days, all academics have ready access to mainframe computers and many also make extensive use of minicomputers and personal microcomputers; a recent small-scale survey by Shackel (1990) suggested that over 90% of UK academics have a microcomputer in their office. Hypertext has made the same journey, from mainframe to micro, and in the process has gone from academic research area to commercial software venture. Hypertext packages are now potentially big business and for this reason forces other than research results or practical experience are being exerted on the field.
A picture is worth a thousand words
Imagine yourself seated in front of the screen shown in Figure 1 displaying a hypertext concerning music. The top level (obscured in this view by overlapping windows) offers you a view of classical music organised by instrument, composer, historical time-line or geographical location. In this case, you have chosen composer and decided that it is Wolfgang Amadeus Mozart you want to investigate further. From here you can access films about Mozart, display and print out musical scores, listen to complete works, read a biography and so forth. Of course, all these are interlinked so that from listening to music you can move to the score and vice versa. In the biography, when a particular piece of music is mentioned, selecting the (bold) text plays an extract of the piece. When the text says that Mozart was born in Salzburg, a map can be called up showing Austria and surrounding countries and marked with the birthplaces of other composers, any of which you could then select directly from the map. All this movement around the information is achieved by selecting items on screen with a mouse or other pointing device. What previously would have entailed visits to various libraries, sound libraries and cinemas can now be achieved from the desktop.
Figure 1: A music hypertext, with the life and work of Mozart being explored.
This example shows how flexible the concepts of node and link are. A node of information can be a fragment of music, a piece of text, a map, a complete film: anything which the author thinks can sensibly be presented as a unit. Even if a particular hypertext system always displays one screenful of information at a time, a node can consist of several consecutive screens. Similarly, a link is arbitrary in the sense that there are no rules to say where a link shall be made. A link can be made between any two nodes which the author (or often the reader, as we shall see later) considers to be connected in some way.
In some systems, the links are typed in a manner analogous to the typing of variables in some computer languages: integers, reals, strings and so forth. Similarly, in some hypertext systems there are several types of link and the author must specify which type he would like to make at any one time. For example, the system might limit links to those which connect information offering support for an argument, refutation of an argument, an example and so forth. However, many systems use untyped links. The flexibility of the size of nodes and the positioning of links places a burden on the author and reader which many paper documents do not.
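The analogy with typed variables can be made concrete in a brief sketch: a typed-link system accepts only links whose type is drawn from a fixed vocabulary, whereas an untyped system would simply omit the check. The type names used here ("supports", "refutes", "example") are merely illustrative, not drawn from any particular system.

```python
# Hypothetical sketch of typed links: the system accepts only links
# whose type belongs to a fixed vocabulary, analogous to a language
# that restricts a variable to its declared type.

LINK_TYPES = {"supports", "refutes", "example"}

class TypedLink:
    def __init__(self, source, target, link_type):
        if link_type not in LINK_TYPES:
            raise ValueError(f"unknown link type: {link_type}")
        self.source, self.target, self.type = source, target, link_type

claim = "Hypertext aids browsing"
evidence = "A study found faster fact location with hypertext"

link = TypedLink(claim, evidence, "supports")
assert link.type == "supports"
```

An untyped system corresponds to dropping the `LINK_TYPES` check, leaving the author free (and obliged) to decide what each link means.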
Various hypertext systems have implemented the simple ideas in different ways and hence superficially they might look quite different. They might even feel different to use. For example, selecting a link may cause an overlapping window containing the linked text to open, or it may replace the node with the linked node. Similarly, replacing a node with a linked node may be couched in terms of unfolding or of physical jumping between nodes. However, there is sufficient similarity between the different systems to allow their grouping under the heading of non-linear text, dynamic documentation, or hypertext.
The fact that links are supported electronically is insufficient to define a system as hypertext. For example, the inverted file common in information management systems could be seen as a set of links allowing any word to be accessed. However, in such systems a word is simply an alphanumeric string, the basis for a search operation rather than a unit of meaning. A sophisticated system will allow the user (or, more typically, the database administrator) to define synonyms in terms of links between equivalent terms in the inverted file, and a thesaurus of words and phrases arranged according to their meaning could be constructed with the inverted file terms as the base level. However, the operation is still essentially one of searching through the syntax rather than linking on the basis of semantics.
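The distinction drawn above can be illustrated with a small sketch: an inverted file maps each word, treated purely as a string, to the places it occurs, and a synonym table layered on top still connects terms by their spelling rather than their meaning. The documents and synonym entries below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical sketch of an inverted file: each word maps to the
# documents in which it occurs. Matching is purely syntactic.

docs = {1: "the cat sat", 2: "a feline slept"}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# A synonym table links equivalent *strings* in the index...
synonyms = {"cat": {"cat", "feline"}}

def search(term):
    hits = set()
    for t in synonyms.get(term, {term}):
        hits |= index.get(t, set())
    return hits

# ...but the operation remains a search through syntax: the system
# has no notion that the two words share a meaning.
assert search("cat") == {1, 2}
```

Even with the synonym table, the system is expanding one string into other strings; a hypertext link, by contrast, is placed by an author (or reader) on the basis of the semantics of the connected nodes.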
The article most often cited as the birthplace of hypertext is Vannevar Bush's "As We May Think" (Bush, 1945). Bush was appointed the first director of the Office of Scientific Research and Development by President Roosevelt in 1941. He saw clearly the problems associated with ever-increasing volumes of information:
There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers, conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.
Reading this, it surprises many people to discover that the author was writing nearly 50 years ago, since many contemporary writers have made exactly the same point about the information explosion.
To cope with this plethora of information, Bush designed the memex, a device in which "an individual stores his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility". More than a simple repository, the memex was based on associative indexing, the basic idea of which is "a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing."
For Bush, tying two items together was important because it seemed to him to follow the workings of the mind, which operates by association: "With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain." In view of the way we described the essential features of hypertext earlier, it is not difficult to see why Bush is often regarded as its founding father.
In conception, the memex was a remarkable scholar's workstation and Bush thought that it would allow a new form of publishing, with documents ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. Unfortunately for a visionary like Bush, the technology of the day was not up to the task of instantiating the memex. He assumed that microfilm would cope with the bulk of the storage problem, which might have been true. However, the level and complexity of indexing and retrieval required by the memex was certainly beyond microfilm-based technology. For this reason Bush's ideas lay dormant, waiting for technology to catch up with them.
The one thing which Bush did not do was to put a name to this new field of endeavour. The terms hypertext and hypermedia are attributed to Theodor (Ted) Nelson, a character who 25 years after coining the term can still hold an audience's attention with his vision of how the future of literature might look. Nelson's Xanadu project (characteristically named after the site of Kubla Khan's pleasure dome in Coleridge's poem) is aimed at the creation of a "docuverse", a structure in which the entire literature of the world is linked, "a universal instantaneous hypertext publishing network" (Nelson, 1988).
In Xanadu, nothing ever needs to be written twice. A document is built up of original (or "native") bytes and bytes which are "inclusions" from other documents in which they are themselves native. By the summer of 1989, Nelson had moved from speaking of inclusions to speaking of "transclusions", a term which implies the transfer and inclusion of part of one document into another. However, an important aspect of Xanadu is that the transclusion is virtual, with each document containing links to the original document rather than copies of its parts.
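The virtual nature of transclusion can be sketched as follows: a document stores not copies of included bytes but references (source document, span) that are resolved only when the document is displayed. This is a loose illustration of the idea, not Xanadu's actual mechanism, and all document names and contents are invented.

```python
# Hypothetical sketch of virtual transclusion: a document is a list
# of native strings and (source, start, end) references; transcluded
# text is fetched from the original at display time, never copied.

originals = {"mozart_bio": "Mozart was born in Salzburg in 1756."}

document = [
    "As the biography notes: ",
    ("mozart_bio", 0, 27),   # transclude "Mozart was born in Salzburg"
]

def render(parts):
    out = []
    for part in parts:
        if isinstance(part, str):
            out.append(part)        # native bytes
        else:
            src, start, end = part  # a link back to the original
            out.append(originals[src][start:end])
    return "".join(out)

assert render(document) == "As the biography notes: Mozart was born in Salzburg"
```

Because the document holds only a reference, a correction to the original text would automatically appear in every document that transcludes it, which is precisely what storing copies would prevent.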
It could be argued that someone who speaks of a "docuverse", "xanalogical structure" and "transclusions" should not be surprised that his project is not well understood. However, Nelson continues to work towards his vision, publishing details of the "humber" system for keeping track of the large number of documents in the docuverse (Nelson, 1988) and distributing flysheets announcing the imminent availability of the Xanadu Hypermedia Information Server software.
Although Nelson is seen as one of the progenitors of hypertext, the idea of much of the world's literature being connected had been suggested many years earlier. At a talk given in 1936 (and subsequently published in 1938), almost ten years before even Bush's article, the British writer and visionary H G Wells had described his idea of a World Encyclopædia, the organisation of which would:
spread like a nervous network ... knitting all the intellectual workers of the world through a common interest and a common medium of expression into a more and more conscious co-operating unity.
In a world which was about to be embroiled in the greatest war ever, Wells's article can be seen as a plea for thinking people to work together in peace. Modern political theorists might now judge the article to be naïve, but the practicalities (including issues like copyright) which Wells foresaw are still being worked on today in the field of hypertext.
Nelson may have given hypertext its name but he was by no means the only person working on the ideas. Although perhaps better known as the inventor of the mouse pointing device and the five-key chording keyboard, Doug Engelbart has been pursuing his vision of hypertext since the early 1960s. Engelbart's emphasis has always been on augmenting or amplifying human intellect (cf. Engelbart and English, 1968), a fact now reflected in the naming of his system as Augment. His original proposal was for a system he called H-LAM/T (Human using Language, Artifacts and Methodology, in which he is Trained) although the first implementation had the simpler title of NLS (oN Line System). NLS was meant as an environment to serve the working needs of Engelbart's Augmented Human Intellect Research Centre at Stanford Research Institute, a computer-based environment containing all the documents, memos, notes, reports and so forth but also supporting planning, debugging and communication. As such, NLS can be seen as one of the earliest attempts to provide a hypertext environment in which computer-supported collaborative work could take place.
We can see Bush, Nelson and Engelbart as representing three different views of hypertext which continue to attract adherents today. The Bush view sees hypertext as somehow natural, reflecting the mind or (in the strongest form of this position) modelling the mind; from this perspective, hypertext should feel easy to use. The Engelbart view of hypertext is as an augmentation environment; the user of hypertext should be able to achieve more than would be possible without it. Although Nelson's vision is perhaps the most ambitious, his view of hypertext is as a storage and access mechanism; the user of hypertext should be able to access any document, and such ease of access should work to break down subject boundaries.
These views are not mutually exclusive; it is possible to advocate a hypertext system which provides ready access to all information and therefore allows users to perform new tasks. Indeed, there is a fine line between these idealised positions and it is not always possible to describe any particular system (or system designer's viewpoint) in terms of one or any of them. However, the fact that different views can proliferate illustrates the point that hypertext is not a unitary concept, not a single thing which can be defined any more precisely than in terms of nodes and links. It is for this reason that hypertext software packages with completely different "look and feel" can be produced and still claim to embody the concept of hypertext.
The corollary to this is that we should not expect any particular hypertext system to be ideal in all task situations. For this reason, it is not surprising that Conklin's (1987b) historical survey of hypertext systems groups them in a largely task-based way. He uses four categories: macro literary systems, which centre on the integration and ready accessibility of large volumes of information; problem exploration systems, which are designed to allow the interactive manipulation of information; systems for structured reading/browsing/reference; and finally systems which might have been applied to a specific application but whose real purpose in construction had been the experimental investigation of hypertext technology itself.
For the student of hypertext history, Conklin's survey provides an excellent summary and it would serve no purpose to reproduce it here. However, since it was written just as commercial microcomputer-based hypertext software was beginning to appear, we will end this historical section by describing briefly the development of two such packages: HyperCard and Guide.
Randall Trigg is credited with the first PhD thesis on hypertext (Trigg, 1983), in which he describes a system called Textnet. However, Trigg has had a greater impact on the field of hypertext since he moved to the Xerox Palo Alto Research Centre where he was one of the developers of the NoteCards system (Halasz, Moran and Trigg, 1987). NoteCards was designed as an information analyst's support tool, one which would aid the analyst in forming better conceptual models and analyses. As its name suggests, the system is a computer-based form of the well-known index or note card much used by information workers, and extends the metaphor to include the FileBox (traditionally a shoe-box in the paper domain).
The main factor which has limited the use of NoteCards is that it requires the use of an expensive Xerox Lisp computer. Even so, it has been used as a research tool in several application areas, with Trigg also exploring its use as an environment in which to perform computer supported collaborative working (Trigg and Suchman, 1989). However, possibly the biggest impact which NoteCards has had has been indirect; like several other aspects of the work at Xerox PARC, it has influenced Apple Computer and can be seen as the model for Apple's HyperCard. Initially given free with all new Macintosh computers, today HyperCard is the most widely distributed hypertext system and consequently the best known and most used. For an introduction to using HyperCard, see Goodman (1987).
Unfortunately, while HyperCard has served to introduce the word hypertext into many people's vocabulary, it has also been responsible for giving many of these people a wrong impression. HyperCard is a powerful tool, one which can do more than produce hypertext documents. It has an important use as a rapid prototyping tool and even an application generator; it has been called "the programming environment for the rest of us", echoing Steve Jobs's description of the Macintosh as "the computer for the rest of us". Hence, it is a mistake to think that everything produced using HyperCard is hypertext or, conversely, that all hypertext systems have the same properties as HyperCard.
The movement from research laboratory to commercial software venture is one which has not been made very often. However, the Guide hypertext system, currently available commercially for the Apple Macintosh and IBM PC, can claim to have made the transition with some success. The system has been the subject of research and development by Peter Brown and colleagues at the University of Kent since 1982, but in 1984 Office Workstations Ltd (OWL) became interested and have since developed the microcomputer versions. The original system, still the focus of research at Kent, runs under the Unix operating system.
Unlike many systems which use the card metaphor or present information a screen at a time (like pages), Guide uses a scrolling method similar to word processors. Indeed, a Guide document is presented as a single scrolling field. Within the document are buttons which, when selected, are replaced by the associated text or graphics, or may produce a pop-up note window, or may link to another Guide document. For example, Figure 2 shows a screen from a journal article held in Guide format; when the word Introduction is selected, the text is unfolded from behind the button as in Figure 3. Clicking on the text causes it to be re-folded behind the button. A read-only version of Guide has been used to distribute at least two books to date: the proceedings of the First UK Hypertext conference (McAleese, 1989) and Ted Nelson's (1981) Literary Machines. To the best of our knowledge, a book distributed only in hypertext form has yet to arrive. However, the concept of hyperfiction has been discussed (Howell, 1990), and short stories have been distributed in hypertext format (e.g., Engst's (1989) Descent into the Maelstrom), so it may be only a matter of time before an original hyperbook appears. We describe elsewhere the development of a hypertext version of the journal Behaviour and Information Technology (McKnight et al., 1991) and the journal Hypermedia will be publishing a special issue in 1991 which will only appear in hypertext form.
Figure 2: The top level of the article, where each heading is a button.
Figure 3: Selecting the Introduction button causes the relevant text to be displayed.
Of course, the history of hypertext is being constantly updated and many more hypertext systems will continue to be developed in various centres around the world, including the USSR where the Hyperlog system has been developed.
The long term significance of hypertext, like that of any new technology, depends on its ability to allow users to perform tasks at least as easily as they could without it, and preferably to gain some added value. Early word processors gave such added value over the typewriter that they were used despite their often appalling interfaces. However, it was improvements to their usability rather than their functionality which established them firmly in the market-place. Since usability is of paramount importance, we turn now to a consideration of work which has a bearing on it.
Research into Hypertext Usability
Users, Tasks and Information Spaces
To understand the usability issues underlying hypertext one needs to conceptualise the system in terms of three factors: the users, their tasks and the information space in which the task is being performed.
Users have skills, habits, intentions and myriad other attributes that they bring to the computer when using hypertext. The rapid development of information technology over the last decade or so means that to some extent we are all users, and contemporary thinking rightly stresses that technology should be designed with users' needs in mind. In the hypertext domain it is likely that potential users will come from all walks of life and age groups. Hypertext applications will not exist just in libraries, schools or offices but will also be found in museums (e.g. HyperTIES; Shneiderman, 1987), tourist information centres (e.g. Glasgow On-Line; Baird and Percival, 1989) and eventually, in the home. Thus when we talk of hypertext and its uses it is important that we try to place our discussion in a specific context by looking at the first element of our triumvirate, i.e., who are the target users?
The tasks that can be performed with documents are extremely variable. People read texts for pleasure, to solve problems, to stimulate ideas or to monitor developments, for example. Yet such interactions vary so tremendously in terms of the time, effort and skill involved that when we consider the development of a new information presentation medium we must determine the nature of the tasks it is intended to support if we are to avoid catch-all phrases or claims such as "hypermedia is better than paper".
The first two terms are relatively self-explanatory, but information space is a vaguer term, by which is meant the document, database, texts, etc. on which the user works. This information space varies tremendously and can be shown to interact with both users and tasks, i.e., people utilise information differently depending on its type, their experience and what they want from it (Dillon et al., 1989).
The information that users deal with when interacting with contemporary computer systems varies tremendously. With hypertext such variation is equally apparent. Hypertext systems can be used to manipulate and present lengthy texts such as journal articles (McKnight et al., 1990), encyclopedias (Shneiderman, 1987), computer programs (Monk et al., 1988) or English literature (Landow, 1987) to name but a few current applications. In fact, there is no reason why any information could not be presented in hypertext form the question is, looking at the third element of our triumvirate, what sort of information would benefit from such presentation?
There is a glaring lack of applicable knowledge on how people satisfy their information needs. By applicable, we mean knowledge that can be used reliably to constrain the number of design choices that must be made when considering the usability of a hypertext implementation. This may seem surprising given the long history of a variety of disciplines in studying such phenomena, such as information science (how people access stored documents), psychology (how people process textual information) and typography (how characteristics of presentation structure and lay-out affect reading), but it is a real problem for designers who are not skilled in these disciplines and want unambiguous answers to questions.
By understanding this context of use it is easy to appreciate the importance of sound research in the area of interface design to hypertext applications if the technology is to emerge as a serious challenger to the dominant medium of paper. Researchers have responded to this need by examining the human factors issues underlying information usage. However, this research has tended to be piecemeal, and given the extraordinary breadth of issues involved in understanding reading, writing and so forth it is not surprising that a coherent picture has yet to emerge. After all, issues concerning attitude to information technology in general will affect the usability of any product like hypertext which relies on computers as a delivery mechanism as much as the supposedly more relevant issues of information organisation and layout. In the following section we provide a basic framework within which to consider the relevant issues.
Research on Electronic Documents
Historically we can trace the development of relevant research on the subject of interface design for information access in electronic domains back several decades. However, much of the work in this area in the last 15 years has concentrated on the broad theme of likely differences between reading from screens as opposed to paper (for a detailed review of this work see Dillon et al., 1988). Empirical investigations of the area have suggested five possible outcome differences between the media:
Speed [Wright and Lickorish, 1983]
Accuracy [Creed et al., 1987]
Comprehension [Kak, 1981]
Fatigue [Wilkinson and Robinshaw, 1987]
Preference [Cakir et al., 1980]
In general, people read 20-30% slower from typical screens (see, e.g., Muter et al., 1982; Gould and Grischkowsky, 1984). However, the emphasis in hypertext on easy selectability of links, multiple windows and so forth has meant that such packages are often implemented on systems with large, high-resolution screens with black characters on a white background. Under such conditions (with the addition of anti-aliased characters) Gould et al. (1987b) reported no significant difference in reading speed between screen and paper, a finding we discuss later. As technology improves and screen quality approaches that of paper, reading speed decrements may cease to be an issue for the hypertext user.
Accuracy of reading usually refers to performance in some form of proof-reading task, although there is debate in the literature about what constitutes a valid task. Typically, the number of errors located in an experimental text has been used as a measure (Gould and Grischkowsky, 1984). While it is probably true to say that few users of hypertext will be performing such routine spelling checks, many more users are likely to be searching for specific information, scanning a section of text and so forth. In a study by the present authors (McKnight et al., 1989) subjects located information in a text using either paper, a word processor document or one of two hypertext versions (Guide and HyperCard). Results showed an accuracy effect favouring paper and the linear-format word processor version, suggesting an effect of structural familiarity. Obviously, more experimental work comparing hypertext and paper on a range of texts and tasks is needed.
With both speed and accuracy, a performance deficit may not be immediately apparent to the user. However, the same cannot usually be said of fatigue effects for which there is a popular belief that reading from screens leads to greater fatigue. Gould and Grischkowsky (1984) used a questionnaire and measures of visual acuity and phoria to test fatigue in this domain and failed to show a significant effect for presentation medium, leading them to conclude that good-quality VDUs in themselves do not produce fatiguing effects, although the findings have been disputed by Wilkinson and Robinshaw (1987).
The effect of presentation medium on comprehension is particularly difficult to assess because of the lack of agreement about how comprehension can best be measured. If the validity of such methods as post-task questions or standardised reading tests is accepted, it appears that comprehension is not affected by presentation medium (e.g., Kak, 1981). However, such results typically involve the use of an electronic copy of a single paper document. The hypertext context differs significantly in terms of both document structure and size. It is widely accepted that with hypertext, the departure from linear surface structure makes it difficult for the user to build a mental model of the text and increases the potential for getting lost (though the extent to which this is also true for complex and extensive paper document systems remains unanswered). It appears that the cognitive and manipulative demands of hypertext could lead to a comprehension deficit. If there is no time pressure on the user, this deficit may simply appear as a speed deficit: the user takes longer to achieve the same level of comprehension, a point of obvious relevance to educational applications.
No matter what the experimental findings are, a user's preference is likely to be a determining feature in the success or failure of any technology. Several studies have reported a preference for paper over screen (e.g., Cakir et al., 1980), although some of these may now be discounted on the grounds that they date from a time when screen technology was far below current standards. Experience is likely to play a large role; however, users who dislike technology are unlikely to gain sufficient experience to alter their attitude, and therefore the onus is on developers to design good hypertext systems using high quality screens to overcome such users' reticence. What seems to have been overlooked as far as formal investigation is concerned is the natural flexibility of books and paper over VDUs, e.g., books are portable, cheap, apparently "natural" in our culture, personal and easy to use. The extent to which such "common-sense" variables influence user preferences should not be underestimated.
Explaining the differences
For convenience, we will present a schematic representation of the human factors issues involved in using hypertext based on a review by Dillon (1990). This is a three-tier framework based on the size of information space as shown in figure 4.
Figure 4: User issues in hypertext design.
Though generic, this representation gives an indication of what human factors issues need addressing when designing hypertext. According to this classification, the important issues change as a function of document size. At the lowest level, for very short pieces of text such as notes or memos, the major research issues concern visual ergonomics. As the documents become larger and the reader must use physical manipulations to access parts of it, facilities supporting searching or paging become more pertinent to usability. Finally, for very large documents, research on such issues as navigation through electronic space is now gaining much attention.
Positively correlated with the size variable is the breadth of the issues involved. At the lowest level, visual ergonomics addresses narrower and more specifiable constructs, such as image quality, than does work on information structure, which invokes such abstract psychological concepts as schemata and mental models. Negatively correlated with size is the specificity of prediction we can make about reading from screens on the basis of current knowledge. Thus for short texts requiring no manipulation it is possible, on the basis of published work, to predict likely outcomes for electronically presented text, though for large hypertexts research provides much less guidance.
Obviously most hypertext applications will consist of relatively large documents (i.e., the paper equivalents would be multi-page documents) thus it might seem as if we need only consider issues pertaining to information structuring or navigation when discussing this medium. Unfortunately the picture is not that simple and as we will attempt to show in the present review, issues at the lower level of the hierarchy often present conditions that must be met before we can even begin to address the supposedly more important ones that will afford usability.
Basic ergonomic issues
There is an established literature on the visual ergonomic issues of screen technology. Surprisingly, most work on hypertext overlooks this even though, for the foreseeable future, users of such systems are likely to be reading from VDUs and will suffer the consequences of poor screen design as much if not more than users of standard applications like word processors or spreadsheets. The following variables have all been experimentally manipulated in an attempt to identify the underlying causes of the differences between reading electronic and paper documents.
Many of these reflect the popular myths about the inherent weaknesses of contemporary electronic text. Yet when tested independently by researchers at IBM in what is surely the most determined research effort yet seen in this domain (see Gould et al., 1987a, 1987b), many of these variables were shown to have little or no effect on reading performance. In fact, of the nine variables listed above as likely or assumed influences on poor reading performance, only image polarity, certain display characteristics and anti-aliasing were found to make a noticeable difference and even then, none of these on their own accounted for the 20-30% speed differences noted earlier.
Gould et al. (1987b), however, did manage to demonstrate empirically that under the right conditions some of the differences between the two presentation media disappear. In a study employing sixteen subjects, an attempt was made to produce a screen image that closely resembled the paper image, i.e., similar font, size, colouring, polarity and layout were used. No significant differences were observed between paper and screen reading. This study was replicated with twelve further subjects and again no significant differences were observed, though several subjects still reported some perception of flicker.
They concluded that any explanation of these results must be based on the interactive effects of several of the variables outlined in the previous sections and suggested that the performance deficit was the product of an interaction between a number of individually non-significant effects. Specifically, they identified display polarity (dark characters on a light background), improved display resolution and anti-aliasing as the major contributors to the elimination of the paper/screen reading rate difference.
Therefore the better the image quality is, the more reading from screen resembles reading from paper and hence the performance differences disappear. Obviously these findings are only part of the story. The studies at IBM only addressed limited outcome variables: speed and accuracy. Speed is not always a relevant criterion in assessing the output of an information usage task. A second shortcoming results from the task employed. Invariably it was proof-reading or some similar task, which hardly constitutes normal reading for most people. Thus the ecological validity of these studies is low. However, to those who would advocate hypertext as the most natural information presentation medium these findings are a salutary warning against over-enthusiasm.
As information spaces increase in size issues other than those covered above come into play and need to be considered. With paper, users have acquired a range of physical and cognitive skills for manipulating text and the relatively standard format of texts allows for easy transfer of these skills across the spectrum of paper documents. It is less clear how hypertext should be designed to facilitate equivalent ease of use. Numerous issues affecting manipulation have been investigated and the most relevant ones in terms of usability are outlined here.
Scrolling versus paging
Readers establish a visual memory for the location of items within a printed text based on their spatial location both on the page and within the document (Rothkopf, 1971) which is supported by the fixed relationship between an item and its position on a given page. A scrolling facility is therefore liable to weaken these relationships. Schwartz et al. (1983) found that novices tend to prefer paging (probably based on its close adherence to the book metaphor) and Dillon et al. (1990a) report that a scrolling mechanism was the most frequently cited improvement suggested by subjects assessing their reading interface which incorporated paging.
Popular wisdom suggests that bigger is better but empirical support for this edict is sparse. Deliberately examining this for reading scenarios, Richardson et al. (1989) observed no performance differences between 20 and 40 line screens but they did report a significant preference effect favouring the larger display. Similarly Dillon et al. (1990a) investigated screen sizes of 20 and 60 lines for reading an electronic version of an academic article. Interestingly they found a manipulation effect for screen size that could not be explained merely by the fact that to read a complete text on a small screen necessitates more manipulations than seeing it on a large one. As in the Richardson et al. study, the authors report a preference effect favouring the larger display. As with many variables, the task being performed is likely to be a deciding factor.
It has become increasingly common to present information on computer screens via windows, i.e., sections of screen devoted to specific groupings of material, and current technology supports the provision of independent processes within windows or the linking of inputs in one window with the subsequent display in another. The use of such techniques is now commonplace in hypertext applications. Research suggests caution, however. Tombaugh et al. (1987) found that readers need to acquire familiarity with a system before the benefits of such facilities can really be tested, and Simpson (1989) concluded that for information location tasks, the ability to see a window's contents is not as important as being able to identify a permanent location for a section of text.
Such studies highlight the impact of display format on readers' performance of a standard task: information location in a body of text. Spatial memory seems important and paper texts are good at supporting its use. Windowing, if deployed so as to retain order, can be a useful means of overcoming this inherent weakness of electronic text. However, studies examining the problems of windowing very long texts, where more than five or six stacked windows are required, must be carried out before any firm conclusions about the benefits of this technique can be drawn.
Electronic text supports word or term searches at rapid speed and with total accuracy and this is clearly an advantage for users in many information usage scenarios, e.g., checking references, seeking relevant sections etc. Indeed it is possible for such facilities to support tasks that would place unreasonable demands on users of paper texts, e.g., searching a large book for a non-indexed term or several volumes of journals for references to a concept. Unfortunately most end-users of computer systems are not trained in their use and while the facilities may appear intuitive, they are often difficult to employ successfully. Richardson et al. (1988) reported that subjects in their experiment displayed a tendency to respond to unsuccessful searches by increasing the specificity of the search string rather than lessening it. While it is likely that such behaviour is reduced with increased experience of computerised searching, a study by McKnight et al. (1990b) of information location within text found other problems. Here, when searching for the term "wormwood" in an article, two subjects input the search term "woodworm", displaying the intrusion of a common-sense term for an unusual word of similar sound and shape.
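The "wormwood"/"woodworm" confusion illustrates why purely exact matching can fail users. A search facility could supplement exact matching with an edit-distance "did you mean" suggestion; the sketch below is purely illustrative (the function names, vocabulary and threshold are our own assumptions, not features of any system cited above).

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to turn string a into string b (standard dynamic programme)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def search(term: str, vocabulary: list[str], max_distance: int = 4):
    """Return an exact match if one exists; otherwise suggest the
    nearest vocabulary term within max_distance edits."""
    if term in vocabulary:
        return term, None
    dist, best = min((levenshtein(term, w), w) for w in vocabulary)
    return None, (best if dist <= max_distance else None)
```

With this sketch, an exact search for "woodworm" in a text containing only "wormwood" fails, but the suggestion mechanism recovers the intended term, since the two words are only four edits apart.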
Being language independent, icons convey information by pictographic means and should thus support use by individuals unfamiliar with the terminology of operating systems and command languages. However, icons can be confusing if their form provides no immediate clue to their action.
In manipulating documents electronically, icons have become popular in many hypertext applications; Guide, for example, uses such forms as boxes, arrows and circles when the cursor moves over an actionable area of the document. Used in conjunction with a mouse, such facilities can support rapid, easy manipulations of the text and allow the user to access the document through numerous routes.
Stammers et al. (1989) reported that icons are most useful when they represent concrete rather than abstract actions, which, while intuitively sensible, suggests ultimate limitations on their use since many computer functions are highly abstract in nature. Brems and Whitten (1987) found that icons were more appropriate for experienced than novice users, which is ironic given the stated benefits of icons.
Generalising such findings to the electronic text domain is difficult at present. A reasonable conclusion seems to be that icons have a role, particularly for simple or repetitive actions, but are less applicable for conveying information about abstract actions.
Over the last 15 years numerous input devices have been designed, e.g., trackerball, mouse, function keyboard, joystick, light pen etc. Since Card et al.'s (1978) claim that the speed of text selection via a mouse was constrained only by the limits of human information processing, this device has assumed the dominant position in the market.
It has since become clear that, depending on the task, other input devices can significantly outperform the mouse (Milner, 1988). For example, when less than ten targets are displayed on screen and the cursor can be made to jump directly from one to the next, cursor keys are faster than a mouse (Shneiderman, 1987). In the electronic text domain, Ewing et al. (1986) found this to be the case with the HyperTIES application, though there is reason to doubt their findings as the mouse seems to have been used on less than optimum surface conditions.
It is important to realise that the whole issue of input device cannot be separated from other manipulation variables such as scrolling or paging. For example, a mouse that must be used in conjunction with a menu for paging text will lead to different performance characteristics than one used with a scroll bar. For the moment however the mouse appears dominant and as the point and click concept becomes integrated with the look and feel of hypertext it will prove difficult to replace, even if convincing experimental evidence against its use, or an innovative and credible alternative, should emerge. One has only to consider the dominance of the far from optimum QWERTY keyboard to understand how powerful convention is. This keyboard was actually designed to slow human operators down by increasing finger travel distances, thereby lessening key jamming in early mechanical designs. Despite evidence that other layouts can provide faster typing speeds with fewer errors (Martin, 1972), the QWERTY format retains its dominant position.
Structuring the Information Space
Issues relating to re-structuring of information, together with the nature and amount of linking involved with hypertext, have become central questions. It is almost taken on faith by many that traditional documents consist of a linear format which demands serial reading while hypertext allows non-linear formats which offer more flexible and natural methods of use. Such a distinction is dubious. It assumes for example that paper text can only be read in one way and that readers will prefer to have texts re-structured into linked nodes. There is little evidence to support either assumption.
However, there is a striking consensus among many of the experts in the field that navigation is the single greatest difficulty for users of hypertext. Frequent reference is made to "getting lost in hyperspace" (e.g., Conklin, 1987; McAleese, 1989), and Hammond and Allinson (1989) speak for many when they say:
Experience with using hypertext systems has revealed a number of problems for users... First, users get lost... Second, users may find it difficult to gain an overview of the material... Third, even if users know specific information is present they may have difficulty finding it (p. 294).
We have presented detailed critiques of this approach elsewhere (see e.g., Dillon et al., 1990b, McKnight et al., 1991) and here we will only outline the major issues.
Readers' models of information spaces
It is clear that readers form mental representations of a paper document's structure in terms of spatial location (Lovelace and Southall, 1983) and overall organisation. Dillon (1991), for example, has shown that experienced academic journal readers can predict the location within an article of isolated selections of material from it with a high degree of accuracy, even when they cannot read the target material in detail. Such representations or models are derived from years of exposure to the information type or, in the case of spatial recall, can be formed from a quick scan of the material. Such models or superstructural representations (van Dijk and Kintsch, 1983) are useful in terms of predicting where information will be found or even what type of information is available. Consideration of existing models is vital in the design of new versions so as to avoid designing against the intended users' views of how the information should be organised.
The qualitative differences between the models readers possess for paper and electronic documents can easily be appreciated by considering what you can tell about either at first glance. A paper text is extremely informative. When we open a hypertext document, however, we do not have the same amount of information available to us. We are likely to be faced with a welcoming screen which might give us a rough idea of the contents (i.e., subject matter) and information about the authors/developers of the document but little else.
Performing the hypertext equivalent of opening up the text or turning the page offers no assurance that expectations will be met. Many hypertext documents offer unique structures (intentionally or otherwise) and their overall sizes are often impossible to assess in a meaningful manner (McKnight et al., 1990b). At their current stage of development it is likely that users/readers familiar with hypertext will have a schema that includes such attributes as linked nodes of information, non-serial structures, and perhaps, potential navigational difficulties! The manipulation facilities and access mechanisms available in hypertext will probably occupy a more prominent role in their schema for hypertext documents than they will for readers' schemata of paper texts. As yet, empirical evidence for such schemata is lacking.
Browsers, maps and structural cues
A graphical browser is a schematic representation of the structure of the database aimed at providing the user with an easy to understand map of what information is located where. According to Conklin (1987), graphical browsers are a feature of "a somewhat idealized hypertext system", recognising that not all existing systems utilise browsers but suggesting that they are desirable.
It is not difficult to see why this might be useful. Like a map of a physical environment it shows the user what the overall information space is like, how it is linked together and consequently offers a means of moving from one information node to another. Indeed, Monk et al. (1988) have shown that even a static, non-interactive graphical representation is useful. However, for richly interconnected material or documents of a reasonable size and complexity, it is not possible to include everything in a single browser without presenting visual spaghetti to the user.
Some simple variations in the form of maps or browsers have been investigated empirically. Simpson (1989) found that as accuracy of performance increased so did subjects' ability to construct accurate post-task maps of the information space, suggesting that the provision of a map in a system may be a good way of encouraging accurate performance.
The provision of metaphors
A metaphor provides a way of conceptualising an object or environment and in the information technology domain is frequently discussed as a means of aiding novices' comprehension of a system or application. The most common metaphor in use is the desk-top metaphor familiar to users of the Apple Macintosh amongst others. Prior to this metaphor, the word processor was often conceptualised by first-time users as a typewriter.
The logic behind metaphors is that they enable users to draw on existing world knowledge to act on the electronic domain. As Carroll and Thomas (1982) point out:
If people employ metaphors in learning about computing systems, the designers of those systems should anticipate and support likely metaphorical constructions to increase the ease of learning and using the system.
However, rather than anticipate likely metaphorical constructions, the general approach in the domain of hypertext has been to provide a metaphor and hope (or examine the extent to which) the user can employ it. As the term navigation suggests, the most commonly provided metaphor is that of travel.
Of course, one could simply make the hypertext look as similar to the paper document as possible. This is the approach offered by people such as Benest (1990) with his book emulator and as such seems to offer a simple conceptual aid to novice users. Two pages are displayed at a time and relative position within the text can be assessed by the thickness of pages either side which are splayed out rather like a paper document would be. Page turning can be done with a single mouse press which results in two new pages appearing or by holding the mouse button down to simulate flicking through the text.
If that was all such a system offered it would be unlikely to succeed. It would just be a second-rate book. However, according to Benest, his book emulator provides added-value that exploits the technology underlying it. For example, although references in the text are listed fully at the back of the book, they can be individually accessed by pointing at them when they occur on screen. Page numbers in contents and index sections are also selectable, thereby offering immediate access to particular portions of the text.
It is hard to see any other metaphors being employed in this domain. Navigation is firmly entrenched as a metaphor for discussing hypertext use and book comparisons are unavoidable in a technology aimed at supporting many of the tasks performed with paper documents. Whether there are other metaphors that can be usefully employed is debatable. Limited metaphors for explaining computer use to the novice user are bound to exist and where such users find themselves working with hypertext new metaphors might find their way into the domain. However, for now at least, it seems that navigation and book emulation are here to stay.
In summary, most work on hypertext has been discursive rather than experimental, but from the studies that have been carried out it is obvious that hypertext will not be a universal panacea for problems of information presentation. It comes as a surprise to many that most comparative experimental work in the area has shown that users perform better, or at least as well, with paper (Monk et al., 1988; McKnight et al., 1990b). This is not always so: Egan et al. (1989) provide evidence for improved performance by students using SuperBook over a paper text on statistics, though their evidence is open to interpretation and such results remain the exception. However, such findings must be interpreted in terms of the users, tasks and information spaces being investigated, as there will be many instantiations of the hypertext concept available in many different task domains to many different users. To assume that all hypertexts are better (or worse) than paper on the basis of several limited empirical examinations is clearly short-sighted.
There are two basic routes to creating hypertext: conversion of an existing text and direct origination of a new hypertext. Developments like Glasgow On-Line (Baird and Percival, 1989) represent an intermediate approach of taking existing but previously unconnected material and placing it in a hypertext. Extensive experimental hypertexts have been developed in research and development departments in both the educational (Intermedia (Yankelovich et al., 1985), Writing Environment (Smith et al., 1987)) and private sector (KMS (Akscyn et al., 1988), Document Examiner (Walker, 1987), Thoth-II (Collier, 1987), NoteCards (Halasz et al., 1987), gIBIS (Conklin and Begeman, 1989)), but the market for published hypertext has yet to take off. The vast majority of current hypertexts are small, with restricted functionality, and of experimental interest rather than practical significance. A notable exception is the hypertext version of the Engineering Data Compendium, a reference encyclopaedia of human factors knowledge written for designers and engineers (Glushko, 1989). For the publishing community some significant problems will need to be addressed before electronic products are commonplace. For the publisher the potential market is still relatively small, there are problems in terms of incompatible hardware and software systems and, as yet, no proven techniques for protecting electronic texts from unauthorised reproduction.
The considerations involved in the creation of original hypertexts rely to a large extent on the characteristics of extended prose arguments. We have discussed these in detail elsewhere (McKnight et al., 1991) and we will therefore give more attention to text conversion here.
The conversion of existing text into hypertext
The possible reasons why a text may be considered suitable for conversion to hypertext format include all those which apply to the creation of basic electronic texts. The advantages of electronic text formats are most clearly seen in the improved access that they offer to texts. Thus, for example, many readers can access the same text immediately and simultaneously via a network; lengthy texts can be readily searched, edited and incorporated into new documents if desired; and version control can be managed with greater efficiency so that all readers can be confident that they are reading the most recent version of the text. A hypertext implementation not only enjoys all the above advantages but also offers the increased convenience afforded by the dynamic linking of the constituent elements and a greatly increased flexibility of design.
A major criticism of Nelson's Xanadu vision has concerned the cost of the effort required to create hypertext versions of existing texts on a major scale. This view assumes that each text would need to be individually transformed, with each hypertext link uniquely specified. Although the effort required to convert texts into hypertext on any scale should not be underestimated (Alschuler (1989) describes the problems encountered in the conversion of six conference papers into three hypertext formats, with each conversion taking two or three experts approximately two months; Collier (1987) describes the conversion of 17 pages of a printed text into Thoth-II format as taking 40 hours), there are a number of reasons why the task may not be as great as it first appears. These include the differing frequencies with which texts are accessed by readers, the role of machine-readable texts in the printing process, the nature of the transformation that will be appropriate, and the increasing use of generic text mark-up languages.
Text access frequencies
It is an accepted fact that for practical purposes much scientific and technical information has a limited shelf life, after which its importance gradually declines. The demand for scientific journal information clearly demonstrates this factor. The ADONIS project was a full-scale evaluation study of the parallel publishing of bio-medical journals on paper and CD-ROM (see Campbell and Stern, 1987). A pilot study described by Clarke (1981) showed that, in the chosen subject area, readers were primarily interested in material less than three years old. Thus, for certain areas there may be little point in actually capturing archive material and this could effectively remove 100 years' production of books and journals from the electronic queue for well established disciplines. However, this is not to say that electronic bibliographic data should not be available.
It is now standard practice for printed text to be processed electronically at some stage, although there is enormous variation in the precise form which such processing takes. Many authors create documents on word processors or microcomputers. An increasing number of publishers are prepared to accept electronic versions of texts or camera-ready copy instead of manuscripts or typed drafts, and the majority of publishers/printers produce a final electronic version as input for a typesetting machine. Thus, for the majority of texts published today, an electronic version of some kind will have been created, from which a hypertext could be fashioned.
A notable development in this direction is the SuperBook system, developed at the Bell Communications Research laboratory. It is a browsing system for electronic texts which have been prepared for paper publication using proprietary systems such as Interleaf, Scribe or troff (Remde et al., 1987). SuperBook's aim is to enhance the retrieval of information from existing electronic texts without the added demand of converting them into a specific hypertext format. When the text is viewed using SuperBook, a multi-window display is created which shows the text, a fully selectable contents page and a window for conducting sophisticated string searching.
The nature of the transformation
There are sound reasons for suggesting that the content and structure of many documents may be largely maintained following conversion from text to hypertext, and preservation of these aspects would certainly reduce the labour costs of the conversion. There are many types of text which have a strongly regulated content, and conversion to electronic format would be no reason to make amendments. Examples include industrial standards, guidelines, codes of practice, legal documents, technical documentation, historical records and religious documents.
In terms of a text's structure, alterations can obviously vary from merely rearranging the sequence of the original macro-units (sections/chapters) to completely reorganising the material into a new structure (hierarchy, flat alphabetical sequence or net). Again, there are grounds for suggesting that some texts may be converted with relatively little restructuring. Some electronic texts such as computer operating system documentation (e.g., Symbolics Genera) are published in parallel forms. There is an obvious need for both forms to have equivalent contents, but there is also a considerable advantage in maintaining a consistency of structure for the reader. Users may also need to use both versions as the situation demands, and this could be under conditions of extreme stress. Consider, for example, operating procedures for an industrial plant which are normally accessed electronically but which also exist as a printed document in case of a total power failure. Many recent technical texts have benefitted from the increased importance of document design as an area of professional activity and are consequently well structured with regard to the users' requirements.
Recent advances in electronic text processing and in particular the use of text mark-up represent another form of assistance to the creation of hypertexts. In its broadest sense mark-up refers to any method used to distinguish equivalent units of text such as words, sentences or paragraphs, or of indicating the various structural features of text such as headings, quotations, references or abstracts. A generic descriptive markup system called Standard Generalised Mark-up Language (SGML) has been accepted as an ISO standard (ISO 8879) and is likely to become even more widely used in the future. (For an introduction to SGML, see Holloway, 1987.)
The generic coding of the structural units of documents via SGML, or some similar system, is likely to be of considerable significance to the future development of hypertext. It would enable the automatic generation of basic hypertexts which are based on document structure (i.e., the creation of nested hierarchies and the direct linkage of text elements) with a minimum of human involvement. Niblett and van Hoff (1989) describe a program (TOLK) that allows the user to convert SGML documents into a variety of hypertext and text forms for display or printing. Rahtz et al. (1990) describe a hypertext interface (LACE) for electronic documents prepared using LaTeX.
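The kind of automatic, structure-based hypertext generation described above can be sketched in miniature. The marked-up sample and the node/link representation below are our own illustrative assumptions (an XML-style subset standing in for generically coded SGML; real SGML permits tag minimisation that this sketch ignores, and no actual TOLK or LACE format is implied): each structural unit becomes a node, and links reproduce the document hierarchy.

```python
import xml.etree.ElementTree as ET

# Hypothetical generically marked-up document.
SOURCE = """
<article>
  <title>Reading from screens</title>
  <section><heading>Image quality</heading>
    <para>Polarity and anti-aliasing matter.</para>
  </section>
  <section><heading>Manipulation</heading>
    <para>Scrolling versus paging.</para>
  </section>
</article>
"""

def to_hypertext(markup: str):
    """Turn structural mark-up into numbered nodes plus parent-child links."""
    nodes, links = {}, []

    def walk(element, parent_id=None):
        node_id = len(nodes)
        nodes[node_id] = (element.tag, (element.text or "").strip())
        if parent_id is not None:
            links.append((parent_id, node_id))   # container -> contents link
        for child in element:
            walk(child, node_id)

    walk(ET.fromstring(markup))
    return nodes, links
```

Applied to the sample above, every structural element (article, titles, sections, headings, paragraphs) becomes a node, and the nested hierarchy emerges as links with no human link-specification at all, which is precisely the labour saving that generic mark-up promises.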
Perhaps of greater significance is the US Department of Defence Computer-aided Acquisition and Logistic Support (CALS) programme. CALS has the aim of converting all the significant documentation supporting defence systems from paper to electronic forms via internationally agreed standards, including SGML. Although CALS will initially concern only the armed forces and their contractors, the size of the defence industry in America means the programme will soon have a major impact far beyond this sector.
The authoring of original hypertexts
Hypertext certainly places new burdens on the author. No matter how badly it is written, every part of a book is accessible to the reader simply by turning the pages. A badly authored hypertext is far more restrictive, since the reader can only follow the links provided by the author. Even if readers are given facilities to add new links, they must still be able to access all parts, otherwise how are they to know that they want to add a link? Given the lack of existing authoring tools, it is still relatively easy for the author to leave sections completely unlinked (information islands) or to leave links dangling. Nor is it sufficient to link everything to everything else: the resultant spaghetti does not help the reader in any way.
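Both authoring faults mentioned above, information islands and dangling links, are mechanically detectable, which suggests that authoring tools could check for them automatically. A minimal sketch, assuming a simple representation of our own devising (a set of node identifiers and a list of directed links, not the format of any system discussed here):

```python
def check_links(nodes, links, start):
    """Report dangling links (an endpoint names no existing node) and
    information islands (nodes unreachable from the start node)."""
    dangling = [(a, b) for a, b in links
                if a not in nodes or b not in nodes]
    # Traverse the link graph outwards from the start node.
    reachable, frontier = {start}, [start]
    while frontier:
        current = frontier.pop()
        for a, b in links:
            if a == current and b in nodes and b not in reachable:
                reachable.add(b)
                frontier.append(b)
    islands = set(nodes) - reachable
    return dangling, islands
```

For example, given nodes {"home", "intro", "refs", "orphan"} and links home-to-intro, intro-to-refs and refs-to-missing, the check reports the link to "missing" as dangling and "orphan" as an island, exactly the faults an author would otherwise have to find by exhaustive reading.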
One role of the author is to simplify material for the reader, to structure material so that it can be grasped. On paper, several familiar structures exist which aid both reader and author. For example, the author of a journal article has a structure prescribed by the particular journal, and this removes the need for the author to make many structural decisions. The hypertext author does not have such traditions on which to rely and consequently must give much more thought to the structure of the hypertext if it is to support the reader's tasks.
Fountain et al. (1990) discuss some of the problems of authoring with current hypertext systems, including the large effort required. Brøndmo and Davenport (1990) also discuss the authoring of a hypermedia document, including the problem of linking in temporal media.
Streitz (1990) suggests that hypertexts can actually represent the author's internal knowledge structure, which can then be directly apprehended by the reader without the need for an intermediate stage. However, given the general inadequacy of methods for eliciting and representing knowledge structures, it seems unlikely that hypertext will succeed in capturing adequately a knowledge structure of any complexity. Furthermore, even if hypertext did succeed, there is little to suggest that such structures would be directly apprehended by the reader.
The claims made for hypertext

The suggestion made by Streitz, discussed above, is not untypical. As with any new technology, many claims have been made about the potential of hypertext and the effects it will have. For example, it has been variously claimed that hypertext will change the way we read, learn and think. Can such claims be taken seriously?
The way we read
Paper is usually portrayed as being essentially linear, and proponents of this claim usually assume that reading follows this linearity; for example, that readers access most text in serial order (Jonassen, 1986). However, as we noted at the very outset, the structure of paper documents, with their footnotes, references, signposts and so forth, is often far from linear. Similarly, if we observe reading behaviour, it is only at the obvious low level that it is tied to the serial nature of the print on the page. Skilled readers have developed reading strategies which allow them to interact with a paper document in a rapid and flexible way. Furthermore, they have developed these strategies through experience with structured documents. (See Dillon et al., 1989, for a description of the reading strategies associated with academic journal use.)
To the extent that hypertext supports existing strategies, then, it will not change the way we read. However, to the extent that new document structures are developed, readers will need to develop new reading strategies. Such strategies will develop as a result of experience with the new medium in the same way that the existing strategies developed with the paper medium. In this sense, then, hypertext may change the way we read, but only if novel structures are found to be useful in supporting certain tasks.
The way we learn
Evaluating learning in hypertext is dogged by difficulties, many of which stem from the general problems involved in evaluating learning. Despite the topic having occupied psychologists for decades, little of any consequence can be transferred from the psychology literature to the evaluation of learning in the average classroom. However, the fact that the task is difficult does not mean that it should be ignored and several people have attempted to demonstrate increased learning in a hypertext environment.
For example, Beeman et al. (1987) report on an ambitious project which attempted to introduce hypertext into two different university courses and to measure the effect of this in a variety of ways ranging from grades to subjective preference. Unfortunately, the biggest learning effect seemed to be on the people who prepared the hypertext course material! Jones (1989) suggested that more incidental learning (the kind that takes place along the way rather than on purpose) would occur in a hypertext environment. However, her experiment failed to support the suggestion.
Stanton and Stammers (1990) suggest that a hypertext environment may be superior because it allows for different levels of prior knowledge, encourages exploration, enables subjects to see a sub-task as part of the whole task, and allows subjects to adapt material to their own learning style. In their experiments (1989, 1990), one set of subjects was given the freedom to access a set of training modules in any order, while another set of subjects was presented with the modules in a fixed order. They reported that performance was significantly improved when subjects trained in the non-linear condition. Although such comparisons may provide valid experimental designs, extrapolating the results to realistic learning situations is difficult, particularly in higher education where students are rarely forced to access material in a rigid, predetermined order. Hence, the results may reflect the advantage not so much of non-linear environments but rather of giving the learner some degree of control over the learning environment.
Although the notion of control is an important one in education, it is far from clear that hypertext provides the learner with more control than traditional media. While Duchastel (1988) states that computers promote interaction through a manipulative style of learning where the student reacts to the information presented, the fact that the learner is using a mouse to select items and move through the information space does not make the process any more active than consulting an index, turning the pages of a book, underlining passages and writing notes in the margin. In this sense, hypertext might be merely the latest in a long line of computer solutions in education and any apparent benefits may be due to little more than novelty value.
The way we think
The study by Beeman et al. mentioned earlier drew attention to what they saw as a paradox: that education attempts to produce a pluralistic, non-lineal style of thinking by using lineal modes of communication, presentation and instruction. They suggest that hypertext offers a solution to this paradox insofar as it can be used as an educational tool for promoting non-lineal thought. However, as the authors themselves point out, the small changes observed in their studies may not be changes in thinking so much as course-specific adaptation to what the students perceive is desired from them by the instructors.
Whether or not hypertext changes the way we think remains to be seen. Duffy and Knuth (1990) have argued convincingly that the promotion of non-linear thinking rests primarily in the pedagogy of the professor rather than in the database. Their paper represents an excellent example of the need to consider the pedagogical principles on which the hypertext usage is to be based before considering the technology through which it will be instantiated, and it should be considered required reading for all who would design hypertexts for use in learning and education.
Taken together, the results of these studies suggest that rather more has been claimed for hypertext than is supported by hard evidence. It could be argued that strong claims need to be made about any innovative technology in order for it to be taken seriously enough to merit evaluation. However, those not involved in promoting such technologies would be well advised to view such claims with a modicum of scepticism.
What does the future hold?

Predicting the future is a dangerous occupation. Approximately ten years ago, Jonassen said that "in ten years or so the book as we know it will be as obsolete as is movable type today" (Jonassen, 1982, p.379). Clearly, the book is not yet obsolete and will be with us for the foreseeable future. The future of hypertext depends on a variety of factors, several of which we have touched on earlier in this article. For example, if hypertexts are to become widely distributed we need to understand the tasks they support and the way in which the information space can best be structured to support such tasks.
Screen technology appears to be improving sufficiently for us to predict that screens of the quality required to support extended reading are not far away. The increasing use of good quality equipment may also lead to increased experience of computer use and therefore to a decreased preference for paper. However, for hypertext to become widely used it must be easy to use, and this ease of use needs to extend to the tools provided for authoring as well as to the interface provided for reading and browsing.
The time demands involved in creating hypertexts suggest that more attention needs to be given to the automatic generation of hypertext. As we noted earlier, some work is being done in this area, and the spread of standards such as SGML should work to hypertext's advantage.
The recent past has also seen hypertext developing into other areas. For example, hypertext has been suggested (e.g., Halasz, 1987; Brown, 1988) and investigated (e.g., Trigg and Suchman, 1989) as an environment for computer supported collaborative work (CSCW). The field of CSCW is much wider than hypertext and incorporates such technologies as video conferencing and groupware in general (Grudin, 1989). Hypertext has also been suggested as a front end to expert systems, giving the concept of "expertext" (Diaper and Rada, 1989). Both these developments suggest that while we currently think of "hypertext the product", the future may lead to "hypertext the approach".
Above all, hypertext is an access mechanism and as such has a potential role to play in all situations where information is accessed and used. It is a way of thinking about structuring information and as such will probably cease to be called hypertext in the near future, as the ideas get absorbed into many informational contexts and designed into the interfaces to a variety of computer-based working environments. The success of the ideas will be judged on their ability to help people achieve their aims.
Further reading

Proceedings of Hypertext '87, University of North Carolina, Chapel Hill. New York: ACM Press.
McAleese, R. (1989b)(ed.) Hypertext: Theory into Practice. Oxford: Intellect.
McAleese, R. and Green, C. (1990)(eds.) Hypertext: State of the Art. Oxford: Intellect.
McKnight, C., Dillon, A. and Richardson, J. (1991) Hypertext in Context. Cambridge: Cambridge University Press.
Nielsen, J. (1990) Hypertext and Hypermedia. San Diego, CA: Academic Press.
Jonassen, D.H. and Mandl, H. (1990)(eds.) Designing Hypermedia for Learning. Heidelberg: Springer-Verlag.
Shneiderman, B. and Kearsley, G. (1989) Hypertext Hands-On! Reading, MA: Addison-Wesley.
References

Akscyn, R.M., McCracken, D.N. and Yoder, E.A. (1988) KMS: a distributed hypermedia system for managing knowledge in organizations. Communications of the ACM, 31(7), 820–835.
Alschuler, L. (1989) Hand-crafted hypertext: lessons from the ACM experiment. In E. Barrett (ed.) The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information. Cambridge, MA: MIT Press.
Baird, P. and Percival, M. (1989) Glasgow On-Line: database development using Apple's HyperCard. In R. McAleese (ed.) Hypertext: Theory into Practice. Oxford: Intellect.
Beeman, W.O., Anderson, K.T., Bader, G., Larkin, J., McClard, A.P., McQuilian, P. and Shields, M. (1987) Hypertext and pluralism: from lineal to non-lineal thinking. Proceedings of Hypertext '87. University of North Carolina, Chapel Hill. 67–88.
Benest, I.D. (1990) A hypertext system with controlled hype. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
Brems, D. and Whitten, W. (1987) Learning and preference for icon-based interfaces. In Proceedings of the 31st Annual Meeting of the Human Factors Society. Santa Monica, CA: Human Factors Society. 125–129.
Brøndmo, H.P. and Davenport, G. (1990) Creating and viewing the Elastic Charles, a hypermedia journal. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
Brown, P. (1988) Hypertext: the way forward. In J.C. van Vliet (ed.) Document Manipulation and Typography. Cambridge: Cambridge University Press. 183–191.
Bush, V. (1945) As we may think. Atlantic Monthly, 176(1), July, 101–108.
Cakir, A., Hart, D.J. and Stewart, T.F.M. (1980) Visual Display Terminals. Chichester: John Wiley and Sons.
Campbell, R. and Stern, B. (1987) ADONIS: a new approach to document delivery. Microcomputers for Information Management, 4(2), 87–107.
Card, S., English, W. and Burr, B. (1978) Evaluation of mouse, rate-controlled isometric joystick, step keys and text keys for text selection on a CRT. Ergonomics, 21, 601–613.
Carroll, J. and Thomas, J. (1982) Metaphor and cognitive representation of computing systems. IEEE Transactions on Systems, Man and Cybernetics, SMC-12(2), 107–116.
Clarke, A. (1981) The use of serials at the British Library Lending Division. Interlending Review, 9, 111–117.
Collier, G.H. (1987) Thoth-II: hypertext with explicit semantics. Proceedings of Hypertext '87, University of North Carolina, Chapel Hill. 269–289.
Conklin, J. (1987a) A survey of hypertext. MCC Technical Report # STP-356-86, MCC, Austin Texas.
Conklin, J. (1987b) Hypertext: an introduction and survey. Computer, September, 17–41.
Conklin, J. and Begeman, M.L. (1989) gIBIS: a tool for all reasons. Journal of the American Society for Information Science, 40(3), 200–213.
Creed, A., Dennis, I. and Newstead, S. (1987) Proof-reading on VDUs. Behaviour and Information Technology, 6(1), 3–13.
Diaper, D. and Rada, R. (1989) Expertext: hyperising expert systems and expertising hypertext. In Proceedings of the Conference on Hypermedia/Hypertext and Object Oriented Databases. Uxbridge: Unicom.
van Dijk, T.A. and Kintsch, W. (1983) Strategies of Discourse Comprehension. New York: Academic Press.
Dillon, A. (1990) The human factors of hypertext. International Forum on Information and Documentation, 15(4)...
Dillon, A. (1991) Readers' models of text structures: the case of academic articles. International Journal of Man-Machine Studies, in press.
Dillon, A., McKnight, C. and Richardson, J. (1988) Reading from paper versus reading from screens. The Computer Journal, 31(5), 457–464.
Dillon, A., Richardson, J. and McKnight, C. (1989) The human factors of journal usage and the design of electronic text. Interacting with Computers, 1(2), 183–189.
Dillon, A., Richardson, J. and McKnight, C. (1990a) The effect of display size and text splitting on reading lengthy text from screen. Behaviour and Information Technology, 9(3), 225–237.
Dillon, A., Richardson, J. and McKnight, C. (1990b) Navigation in hypertext: a critical review of the concept. In D. Diaper (ed.) INTERACT 90. Amsterdam: North Holland.
Duchastel, P. (1988) Display and interaction features of instructional texts and computers. British Journal of Educational Technology, 19(1), 58–65.
Duchnicky, R.L. and Kolers, P.A. (1983) Readability of text scrolled on a visual display terminal as a function of window size. Human Factors, 25(6), 683–692.
Duffy, T.M. and Knuth, R.A. (1990) Hypermedia and instruction: where is the match? In D.H. Jonassen and H. Mandl (eds.) Designing Hypermedia for Learning. Heidelberg: Springer-Verlag.
Egan, D., Remde, J., Landauer, T., Lochbaum, C. and Gomez, L. (1989) Behavioural evaluation and analysis of a hypertext browser. In Proceedings of CHI '89. New York: Association for Computing Machinery.
Elkerton, J. and Williges, R. (1984) Information retrieval strategies in a file search environment. Human Factors, 26(2), 171–184.
Engelbart, D.C. and English, W.K. (1968) A research center for augmenting human intellect. Proceedings of the AFIPS Fall Joint Computer Conference. Montvale, NJ: AFIPS Press.
Engst, A.C. (1989) Descent into the Maelstrom. Hyperfiction available from the author, RD #1 Box 53, Richford, NY 13835.
Ewing, J., Mehrabanzad, S., Sheck, S., Ostroff, D. and Shneiderman, B. (1986) An experimental comparison of a mouse and arrow-jump keys for an interactive encyclopedia. International Journal of Man-Machine Studies, 24, 29–45.
Fountain, A.M., Hall, W., Heath, I. and Davis, H.C. (1990) MICROCOSM: an open model for hypermedia with dynamic linking. In A. Rizk, N. Streitz and J. André (eds.) Hypertext: Concepts, Systems and Applications. Cambridge: Cambridge University Press.
Glushko, R.J. (1989) Transforming text into hypertext for a compact disc encyclopedia. In Proceedings of CHI '89. New York: Association for Computing Machinery.
Goodman, D. (1987) The Complete HyperCard Handbook. New York: Bantam Books.
Gould, J.D. and Grischkowsky, N. (1984) Doing the same work with hard copy and cathode-ray tube (CRT) computer terminals. Human Factors, 26(3), 323–337.
Gould, J.D., Alfaro, L., Barnes, V., Finn, R., Grischkowsky, N. and Minuto, A. (1987a) Reading is slower from CRT displays than from paper: attempts to isolate a single variable explanation. Human Factors, 29(3), 269–299.
Gould, J.D., Alfaro, L., Finn, R., Haupt, B. and Minuto, A. (1987b) Reading from CRT displays can be as fast as reading from paper. Human Factors, 29(5), 497–517.
Grudin, J. (1989) Why groupware applications fail: problems in design and evaluation. Office: Technology and People, 4(3), 245–264.
Halasz, F.G. (1987) Reflections on NoteCards: seven issues for the next generation of hypermedia systems. In Proceedings of Hypertext '87. University of North Carolina, Chapel Hill. 345–365.
Halasz, F.G., Moran, T.P. and Trigg, R.H. (1987) NoteCards in a nutshell. In Proceedings of the ACM CHI+GI Conference, Toronto. 45–52.
Hammond, N. and Allinson, L. (1989) Extending hypertext for learning: an investigation of access and guidance tools. In A. Sutcliffe and L. Macaulay (eds.) People and Computers V. Cambridge: Cambridge University Press.
Holloway, H.L. (1987) An Introduction to Generic Coding and SGML. British Library Research Paper 27. London: The British Library.
Howell, G. (1990) Hypertext meets interactive fiction: new vistas in creative writing. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
Jonassen, D.H. (1982) The Technology of Text. Englewood Cliffs, NJ: Educational Technology Publications.
Jonassen, D.H. (1986) Hypertext principles for text and courseware design. Educational Psychologist, 21(4), 269–292.
Jones, T. (1989) Incidental learning during information retrieval: a hypertext experiment. In H. Maurer (ed.) Computer Assisted Learning. Berlin: Springer-Verlag.
Kak, A.V. (1981) Relationships between readability of printed and CRT-displayed text. In Proceedings of the Human Factors Society 25th Annual Meeting, 137–140.
Landow, G. (1987) Context32: using hypermedia to teach literature. Proceedings of the 1987 IBM Academic Information Systems University AEP Conference. Milford, CT: IBM Academic Information Systems.
Martin, A. (1972) A new keyboard layout. Applied Ergonomics, 3(1), 42–51.
McAleese, R. (1989) Navigation and browsing in hypertext. In R. McAleese (ed.) Hypertext: Theory into Practice. Oxford: Intellect.
McKnight, C., Richardson, J. and Dillon, A. (1990a) Journal articles as learning resource: what can hypertext offer? In D.H. Jonassen and H. Mandl (eds.) Designing Hypermedia for Learning. Heidelberg: Springer-Verlag.
McKnight, C., Dillon, A. and Richardson, J. (1990b) A comparison of linear and hypertext formats in information retrieval. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
McKnight, C., Dillon, A. and Richardson, J. (1991) Hypertext in Context. Cambridge: Cambridge University Press.
Milner, N. (1988) A review of human performance and preference with different input devices to computer systems. In D. Jones and R. Winder (eds.) People and Computers IV. Cambridge: Cambridge University Press.
Monk, A., Walsh, P. and Dix, A. (1988) A comparison of hypertext, scrolling, and folding as mechanisms for program browsing. In D. Jones and R. Winder (eds.) People and Computers IV. Cambridge: Cambridge University Press.
Muter, P., Latrémouille, S.A., Treurniet, W.C. and Beam, P. (1982) Extended reading of continuous text on television screens. Human Factors, 24(5), 501–508.
Nelson, T.H. (1981) Literary Machines. Paper version available from the author, 8480 Fredericksburg #138, San Antonio, TX 78229; also to be published in the UK by Intellect. Electronic version available from Office Workstations Ltd, Rosebank House, 144 Broughton Road, Edinburgh EH7 4LE, Scotland.
Nelson, T.H. (1988) Managing immense storage. Byte, January, 225–238.
Niblett, T. and van Hoff, A. (1989) Structured hypertext documents via SGML. Poster presentation at Hypertext II conference, University of York, June.
Rahtz, S., Carr, L. and Hall, W. (1990) Creating multimedia documents: hypertext processing. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
Remde, J.R., Gomez, L.M. and Landauer, T.K. (1987) SuperBook: an automatic tool for information exploration – hypertext? In Proceedings of Hypertext '87. University of North Carolina, Chapel Hill. 175–188.
Richardson, J., Dillon, A., McKnight, C. and Saadat-Sarmadi, M. (1988) The manipulation of screen-presented text: experimental investigation of an interface incorporating a movement grammar. HUSAT Memo # 431, HUSAT Research Institute, Loughborough University.
Richardson, J., Dillon, A. and McKnight, C. (1989) The effect of window size on reading and manipulating electronic text. In E. Megaw (ed.) Contemporary Ergonomics 1989. London: Taylor and Francis.
Rothkopf, E.Z. (1971) Incidental memory for location of information in text. Journal of Verbal Learning and Verbal Behavior, 10, 608–613.
Schwartz, E., Beldie, I. and Pastoor, S. (1983) A comparison of paging and scrolling for changing screen contents by inexperienced users. Human Factors, 25, 279–282.
Shackel, B. (1990) Information exchange within the research community. In M. Feeney and K. Merry (eds.) Information Technology and the Research Process. Sevenoaks: Bowker-Saur.
Shneiderman, B. (1987) User interface design and evaluation for an electronic encyclopedia. In G. Salvendy (ed.) Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems. Amsterdam: Elsevier. 207–223.
Simpson, A. (1989) Navigation in hypertext: design issues. Paper presented at the International OnLine '89 Conference, London, December.
Smith, J.B., Weiss, S.F. and Ferguson, G.J. (1987) A hypertext writing environment and its cognitive basis. In Proceedings of Hypertext '87. University of North Carolina, Chapel Hill. 195–214.
Stammers, R., George, D. and Carey, M. (1989) An evaluation of abstract and concrete icons for a CAD package. In Contemporary Ergonomics 1989. London: Taylor and Francis. 416–421.
Stanton, N.A. and Stammers, R.B. (1989) A comparison of structured and unstructured navigation through a computer based training package for a simulated industrial task. Paper presented to the Symposium on Computer Assisted Learning, CAL '89, University of Surrey.
Stanton, N.A. and Stammers, R.B. (1990) Learning styles in a non-linear training environment. In R. McAleese and C. Green (eds.) Hypertext: State of the Art. Oxford: Intellect.
Streitz, N.A. (1990) A cognitive approach for the design of authoring tools in hypertext environments. In D.H. Jonassen and H. Mandl (eds.) Designing Hypermedia for Learning. Heidelberg: Springer-Verlag.
Tombaugh, J., Lickorish, A. and Wright, P. (1987) Multi-window displays for readers of lengthy texts. International Journal of Man-Machine Studies, 26, 597–616.
Trigg, R.H. (1983) A network-based approach to text handling for the online scientific community. PhD thesis, University of Maryland.
Trigg, R.H. and Suchman, L.A. (1989) Collaborative writing in NoteCards. In R. McAleese (ed.) Hypertext: Theory into Practice. Oxford: Intellect.
Walker, J.H. (1987) Document Examiner: delivery interface for hypertext documents. Proceedings of Hypertext '87. University of North Carolina, Chapel Hill. 307–323.
Wells, H.G. (1938) World Encyclopaedia. In World Brain. New York: Doubleday, Doran. Originally presented at the Royal Institution of Great Britain weekly evening meeting, Friday November 20, 1936.
Wilkinson, R.T. and Robinshaw, H.M. (1987) Proof-reading: VDU and paper text compared for speed, accuracy and fatigue. Behaviour and Information Technology, 6(2), 125–133.
Wright, P. and Lickorish, A. (1983) Proof-reading texts on screen and paper. Behaviour and Information Technology, 2(3), 227–235.
Yankelovich, N., Haan, B.J., Meyrowitz, N. and Drucker, S.M. (1985) Intermedia: the concept and the construction of a seamless information environment. Computer, January, 81–96.