LIS 397.1
R. E. Wyllys

Evaluating Reports of Research


"If it's in print, it must be true." Or must it?

We all know, if we stop to think about it, that the fact that something has been printed does not necessarily mean that it is true. But we do not always stop to think about this fact when we should. Unfortunately, caution and skepticism are necessary whether you are reading a tabloid targeted at the gullible, such as the National Enquirer, a newspaper such as the New York Times, a professional journal such as the Library Quarterly, or even the Encyclopedia Britannica (see Endnote 1). In particular, most problem-solving efforts or investigations could have been somewhat better performed, and most reports on such researches could have been somewhat better written.

When you read such reports, therefore, you should read critically. That is, you should constantly bear in mind the possibility that the author or authors made mistakes in the investigation and that he or she or they have failed to express with absolute clarity the totality of what was done and what has, supposedly, been learned. You should ask yourself questions along the lines of: "Does this step in the investigation seem appropriate or reasonable?", "Does this conclusion really follow from the data presented?", "Do the authors appear to have been clear in their own minds about what they were actually doing?", and so on.

To be somewhat more specific, we can say that research reports can be evaluated on the basis of two questions: How well was the research done? and, How well has it been reported? The first question is the more important one; but even the best research will have been done in vain if it is not reported well enough for other people to be able to understand it, use it, and build on it. This reading discusses these two questions.

One point needs to be made before we proceed. Evaluation or criticism should be positive as well as negative. It is usually easier to be negative about a research report, to say what strikes you as wrong with it, than to be positive about the report, to make clear what its good aspects are. But it is important for you, in your critiques, to explain what you find good in a research report as well as what you find bad. This is not to say that in critiquing a particular report, you must aim to set forth a balance between its good points and its bad points (after all, there may not be such a balance to be found); it is only to say that you should look for, and comment on, the good things as well as the bad.

Evaluating the Research Itself

Unfortunately, the matter of evaluating how well the research was done is more difficult to describe in detail than is the matter of evaluating how well the research was reported. Research can deal with so many topics and employ so many approaches that discussions of it must be either rather general or quite specific and lengthy. This discussion attempts to be general. One consequence is that this section is briefer than the following section. You should not misinterpret this brevity as suggesting that the quality of the research itself is less important than the quality of the reporting.

In examining a report of research with a view toward evaluating how well the research was done, you will need to consider at least the following questions:

Was there a well-defined and understandable purpose? In the report, the purpose should be stated explicitly; ideally, it should be expressed in the form of a testable hypothesis.

Were the data that were used truly pertinent to the purpose? Were they the best data that could have been used for the purpose?

Were the data gathered properly? For example, if sampling was involved, were the data obtained by an adequately random method? If observations of people were part of the data-gathering process, were the observations as unobtrusive as possible?

Were the data suitably studied and analyzed? For example, if the analysis was statistical in nature, were suitable statistical procedures chosen and were they properly applied to the data? Did the investigator overlook analytic procedures that could have been applied?

Were the reported results based strictly on the outcome of the study or analysis of the data? Or are there findings that seem unsupported by the analysis?

Does the investigator's interpretation of the results make sense? Does the interpretation appear to reflect in a reasonable way both the strict results and the overall situation or problem?

You will simply have to think hard and carefully about the foregoing questions as you review the research report.

Evaluating the Reporting of the Research

As already noted, the reporting of a research effort is generally easier to evaluate than the research itself, for all of us (at any rate, all professionals) are used to reading and to having reactions that we can verbalize as "That was clear and easy to read" or "That was muddy, hard to understand." An author's prime duty in the writing of a research report is to communicate to the reader, and all of us can tell when an author has failed to communicate clearly.

But in criticizing a research report, you must also consider the question of whether the author has said all that he or she ought to have said about the research. A checklist of what a research report should include would obviously help you do this, and one such checklist can be found in the following outline for a research report.

Please note that it is possible for a research report to be a good report without adhering to this outline. A research report that does not do so should nevertheless include, in some order, all the items contained in the outline. If an author does follow this outline, he or she will be helped to include all the desirable items and to present them in a logical and easily understandable sequence. In other words, when you set out to write up the results of some problem-solving effort (as you are practically guaranteed to do during your career as an information professional), you can very profitably make use of this outline.

Outline of a Research Report

TITLE [in a memorandum, the subject]

ABSTRACT [in a memorandum, the first paragraph]

1. INTRODUCTION

1.1 Background

1.2 Outline of the problem and its context

1.3 Previous related work

2. PURPOSE

2.1 Hypothesis or hypotheses

2.2 Definitions

2.3 Assumptions

3. METHODS

3.1 What are the data that were used?

3.2 How were they collected?

3.3 How were they analyzed?

4. RESULTS

5. CONCLUSIONS

6. RECOMMENDATIONS

7. SUMMARY

APPENDICES

We shall discuss each of the components of this outline in turn.

Title

Since a research report is simply a description of an investigation or a problem-solving effort, such reports can take forms other than those of articles in professional journals, even though most people probably think first of such articles when they hear the words "research report." Actually, there are undoubtedly far more research reports that take the form of technical reports and internal memoranda than ever appear as journal articles. Books also can serve as reports on research. Books and technical reports share with journal articles the characteristic of having titles, but memoranda lack titles. Instead, they usually have concise statements of their subjects, typically in the form of a line labelled "SUBJECT:", and these statements serve as titles.

For the sake of brevity in what follows, we shall use the word "title" to refer not only to titles of articles, books, and technical reports but also to subjects of memoranda, and we shall use "article" to refer not only to journal articles but also to books, technical reports, and memoranda.

A title provides the reader with his or her first idea of what the article or memorandum is about. Often the reader will decide on the basis of the title alone whether to examine the article further; this is especially likely to be true if the only information about the article available at the time is the title, perhaps with the author's name also. This situation obtains when the reader is looking at a table of contents, or a titles-only index, or is skimming through a journal. Clearly, an author who wants his or her article to be read should provide potential readers with enough information in the article's title to whet their interest.

The preceding sentence may seem too obvious to be worth giving space to here; but, unfortunately, some authors favor "cute" titles--titles that are funny or clever (at least in the authors' opinion) but fail to communicate information about the content of the article. Such titles are a disservice to the reader, and a self-wounding fatuity by the author. An especially egregious example is:

Mason, Ellsworth. The Great Gas Bubble Prick't or, Computers Revealed by a Gentleman of Quality. College & Research Libraries. 1971 May; 32(3):183-196. ISSN:0010-0870.

Someone encountering this title in an index would have a very difficult time connecting the article with the subject of library automation, which is what Mason splenetically--and myopically--damned in the article. Even a reader who encountered a more complete citation than a title listing and thus knew also that the article was published in College & Research Libraries could easily misconstrue the reference.

To demonstrate that the spread of modern retrieval techniques has, unfortunately, not led all authors to eschew cute titles, here are two more recent examples:

Epstein, Susan Baerg. It Can't Happen Here--Or Can It? Library Journal. 1986 March 15; 111(5):50-51. ISSN:0363-0277.

Even if a citation to this item included the fact that it appeared in the "Libraries & Systems" column of the Library Journal, the reader would still have no clue that the item is a discussion of parallels between library-automation problems and certain major problems that the Internal Revenue Service encountered with its new computer systems in 1985 when it started processing tax returns for 1984.

Cobb-Walgren, Cathy J. Why Teenagers Do Not "Read All About It." Journalism Quarterly. 1990 Summer; 67(3):340-347.

That this article concerns the already low and still declining percentage of teenagers who read newspapers is not readily apparent from the title, especially if one is too young to remember the days when newspapers were sold on street corners by youths shouting "Read all about it!". Nowadays, a mention of teenagers reading about "it" is likely to be construed as referring to their reading about sex, and in this context the title might suggest that the article is about why teenagers engage in sex rather than reading about it. It is a pity that the editor of a journalism journal showed so little awareness of the desirability of communicative, rather than cute, titles.

Abstract

An abstract is a brief note about the content of an article, technical report, or book. Abstracts typically are around 100-500 words in length and are written in the form of complete sentences. In a well prepared memorandum, the first paragraph should state the purpose of the memorandum as a whole, thus serving the same purpose as an abstract.

Abstracts come in two forms. One type is usually referred to as "informative" ("informational," "direct"); it provides the reader with the basic informational content of the source, i.e., the abstracted article, technical report, or book. The other is the "descriptive" ("indicative," "alerting") abstract, whose function is to indicate to the reader whether the source would be of enough interest to him or her to be worth the effort of examining it. To illustrate the difference, here is a pair of abstracts (see Endnote 2) for the same metallurgical article:

Informative: Brinell and scratch hardness tests were made on single ice crystals with a modified Olsen Baby Brinell Hardness Tester and a Spencer Microcharacter, respectively. Hardness increased with a decrease in temperature; Brinell hardness numbers ranged from about 4 at -5° C to 17 at -50° C. A similar temperature dependence for scratch hardness was noted. The single ice crystal was harder parallel to the c-axis than in the normal direction.

Descriptive: The experimental procedures and results of Brinell and scratch hardness tests on single ice crystals are given. The effects of temperature and c-axis orientation on hardness are discussed, and the experimental data are tabulated and graphed.

There cannot be much doubt about which type of abstract is preferable.

Introduction

In the introduction to a research report, the author should prepare the reader for what the latter is going to find in the report. You have probably heard the story about the old-time preacher who was asked why his sermons were so favorably received by the members of his congregation. The preacher is supposed to have replied, "Well, I start by telling 'em what I'm going to tell 'em; then I tell 'em; and then I tell 'em what I've told 'em."  (See Endnote 3)  The introduction to a research report is the "what I'm going to tell 'em" portion of the report.

In the background part of the introduction, the author should explain what led up to the problem-solving effort or investigation being reported in the article (or technical report or book).

In the outline of the problem and its context part, the author should succinctly explain the problem itself. The explanation should include a description of as much of the setting of the problem as will help the reader to understand the problem; e.g., the author might explain that the problem came to light in the public-services department of an academic library of a stated size serving a college of a stated size and nature.

In the previous related work part, the author should discuss earlier research efforts that looked into problems similar, but not identical, to the author's problem. The author should explain why the results of the earlier researches failed to provide a satisfactory solution to the problem about which he or she is now writing; i.e., why the author's research needed to be done even though similar problems had already been investigated.

Purpose

The purpose section of a research report should clearly state the goal and the objectives of the investigation. In addition to any informal discussions of the goal that the author may think helpful, there should be a carefully worded explicit statement of each major objective.

One of the best ways of providing such statements is to express each major objective in the form of a testable hypothesis. While the absence of a testable hypothesis does not necessarily mean that the author failed to really understand what he or she was doing, the presence of such a hypothesis should encourage the reader to expect to find that the investigation was carried out well.

At the very minimum, the report should contain a statement along the lines of: "The purpose of this investigation was. . .", or "This project was intended to establish that. . .", or "What we attempted to do was. . .", etc. If the report contains no such statement, the reader is entitled to infer that the author never had a clear purpose in mind, either during the investigation or in writing about it.

In addition to an explicit statement about the hypothesis or hypotheses that were tested, there should be explicit definitions of all terms that were coined for use in the report and/or are used in the report in an unusual way and/or are used in the report in a specific, restricted sense.

Similarly, there should be statements of the assumptions made in the investigation; i.e., there should be explicit discussions of those aspects of the problem, and the situation in which it arose, (a) that were not under the control of the investigator, thus forcing the investigator to accept them as they were, and (b) that have, in the investigator's judgment, a reasonable likelihood of having an effect on the problem, and hence on the investigation--an effect that cannot be known exactly.

Methods

In the methods section of a research report, the author should explain carefully and in detail just what data were used in the investigation. That is, the author should explain just what were the objects or concepts (e.g., relationships, behaviors) that were observed and/or examined and/or studied in the investigation. In particular, the author should clarify the question of why these data were pertinent to the purpose of the investigation; i.e., why they could be expected to shed light on the problem.

Furthermore, the author should discuss the methods of collection of the data: just how were they identified and observed. In a statistically oriented investigation, if the data were identified by means of random sampling, the author should say so and should outline the randomization process; if random sampling would have been preferable but was impossible (or thought to be) for some reason, the author should discuss the rationale for replacing randomized collection by his or her other method of data collection. If the investigation was not statistically oriented, the author should explain whatever methods were used to identify the objects or concepts to be further processed in the investigation, and should indicate the reasons for choosing them.

Finally, the author should discuss the methods of analysis of the data: how they were examined, studied, counted, etc. In a statistically oriented investigation, the statistical procedures used should be identified and the rationale for choosing them discussed. If the investigation was not statistically oriented, the author should explain whatever procedures were used for analyzing the data, and should indicate the reasons for choosing them.

Results, Conclusions, and Recommendations

Properly used, the word result should be restricted to only those findings of the investigation that can be inferred by rigorous procedures from the data collected. Used in this careful fashion, the word result does not mean the same thing as the word conclusion. A conclusion is an interpretation, by the investigator, of the results in the light of the whole problem and its context.

For example, in an article on reference service (Hernon and McClure) the section headed "Findings & conclusions" included, under a sub-heading of "SUMMARY OF FINDINGS," such statements about results as: "Participants in the study answered 62 percent of the questions correctly and 38 percent of the questions incorrectly"; "Approximately two-thirds of the 390 questions were answered (correctly or incorrectly) within three minutes after contact with library staff"; and "In no instance when library staff members indicated that they 'did not know' the answer did they also refer the [questioner] to another member of the staff or another information provider."

Unfortunately, also under the SUMMARY sub-heading in this article appeared such statements about conclusions as: "Patrons have a greater probability of obtaining a correct answer from the documents department than from general reference"; and "Patrons have the responsibility for finding alternative reference providers; reference and documents personnel do not offer suggestions for alternative information sources and providers."

The goal of clarity in communication is best served by authors who carefully distinguish between results and conclusions, and between both of these and recommendations. Unfortunately, too many authors mix them up. Preferably, each of the three types of statements should appear in a section of its own in an article.

In the recommendations section of a research report, the author discusses his or her judgments about what should be done next. Recommendations can range from specific actions to be taken (e.g., "On the basis of this investigation, I believe we should plan to open a new branch library at . . .") to statements about problems that remain to be investigated, especially problems that were recognized in the course of the investigation (e.g., "During the investigation it became clear that there remains an important, unanswered question concerning . . .").

An important part of any research effort should be the recognition of questions that need further study. Investigators have a professional responsibility to record such questions and to communicate them to the fellow members of their profession, in order to increase the chances of someone's undertaking the needed study. The recommendations section of a research report is an excellent means of passing such questions along. (Sometimes, especially when the author is someone to whom the "publish or perish" dictum applies, he or she may list a problem or two needing study and then add a statement along the lines of "Preliminary investigation of this problem by the author is already under way"; such a statement warns others that the author has a head start on looking into this particular problem, and thus tends to discourage others from competing with the author for the chance to publish the first article on it.)

For example, in an article (Winger) on interlibrary lending the section headed "Conclusion" included the following statements, all of which are recommendations, some for action and some for further study:

Nearly 15 percent of the requested materials that were not available for lending were no longer held in the collection. . . .However, this problem could be eased if more attempts were made to update the union lists. The primary responsibility for this updating should rest with the libraries themselves. . . .

The study results point to a potential problem in the ILLINET system. Why were almost 40 percent of the requests [to the Regenstein Library] unfilled due to non-ownership of the material? . . .It would be interesting to know whether . . . this is a problem experienced by all lending libraries in the ILLINET system.

In order to understand the interlibrary loan system as it exists today, it would be desirable to study the records and procedures of a major online user to see if a system such as OCLC brings a higher rate of success in borrowing materials from other institutions. The study could also be repeated in more specialized settings, such as in business and in scientific research libraries.

The author of this article could have communicated more clearly if he had labelled these statements as recommendations.

Summary

In the summary section of a report the author, following the old-time preacher's practice, tells the readers what they have just been told. The summary provides the opportunity for the author to state succinctly what the whole investigation was intended to find out, how the investigation tried to find that out, and what it did find out. By reviewing the whole process briefly in the summary, the author can help the reader grasp the connections among the many details about which the reader has just been reading, and can thus help the reader see the forest instead of merely the trees.

Acknowledgements of assistance are almost always appropriate in a report on a problem-solving effort or investigation, and the summary section is a conventional place for an author to express his or her thanks for help received.

Appendices

In writing expository prose, an author can serve the cause of clarity of communication well by keeping the main flow of the exposition as simple and direct as possible. One good way in which an author can simplify the main flow of his or her writing is to put into an appendix (or appendices) the highly detailed discussions of subordinate portions of what he or she wants to say. The author should summarize briefly, in the main expository flow, what has been appended and should refer the reader from the main flow to the appendix as appropriate; e.g., with statements along the lines of: "The observations made are presented in full in Appendix A"; or "Appendix B contains the questions asked in the questionnaire sent to catalogers"; or "The full database structure will be found in Appendix C."

The appendix is not used as much in journal articles as it might (and, in my opinion, should) be.  This may be partly due to authors' fears that editors would often be tempted to excise appendices in order to reduce easily the length of articles as published. In technical reports and memoranda, however, such editorial concerns are unlikely, and when you write in these forms you should certainly take advantage of the appendix as a good expository device.

Concluding Remarks

We have looked at the evaluation of research in terms of the quality of the investigation itself, and in terms of the quality of the reporting of the research. I hope it is clear to you that though these ideas are offered here for use in the task of critiquing the research of others, most of the ideas can also be used as guidelines for your own researches and your reporting on them.

Endnotes

1. An interesting discussion of the failings of this encyclopedia (in a previous edition), and by implication, the possible failings of other encyclopedias, can be found in: Einbinder, Harvey. The Myth of the Britannica. New York, NY: Grove Press; 1964. 390p. LC:63-16997.  [It should be noted that the major revision of the Britannica published in 1974 responded to many of the faults noted by Einbinder. A good review of the 1974 version was written by C. D. Needham.  See: Needham, C. D. Britannica Revisited. Library Association Record. 1975 July; 77(7):153-168.]

2. These abstracts are quoted from: Abstractors' Manual of the ASM Review of Metal Literature. Novelty, OH: American Society for Metals. [no date]

3. Actually, this tripartite structure for good expository prose is a modern version of what Aristotle recommended some 2,300 years ago in his Rhetoric. As expressed by the Roman rhetorician Quintilian (c. 95 A.D.) in his De institutione oratoria, Aristotle's structure consists of: 1. exordium (opening); 2.1 narratio (statement of facts), 2.2 confirmatio (proof), 2.3 refutatio (denial); 3. peroratio (closing). Parts 2.1, 2.2, and 2.3, which do not have a fixed sequence among themselves, are what we think of today as the main body of the exposition.

Items Quoted

Hernon, Peter; McClure, Charles R. Unobtrusive Testing: The 55 Percent Rule. Library Journal. 1986 April 15; 111(7):37-41. ISSN:0363-0277.

Winger, Robert B. Characteristics of Interlibrary Loan Requests Received by the Joseph Regenstein Library, 1980-1981. Library Quarterly. 1985 October; 55(4):424-445. ISSN:0024-2519.


Last revised 2002 Nov 8