Analysis of evaluation results may be aided by statistical procedures. However, in the real world, where some of us are forced to reside, time for crunching numbers is limited. Therefore, in this manual we concentrate on a few basic data comparison and test evaluation concepts, accomplishing analysis without resorting to complex calculations.
One of the most useful analysis techniques involves the "Response Distribution Table." Although it is most easily constructed with the aid of a computer, it can be, and often is, assembled by hand. The results easily warrant the effort.
The basic task involves tallying the number of individuals responding to each item choice. A table is constructed displaying the tally results. Each cell contains the number of persons responding to that item choice and, ideally, the percentage that this number represents. In our sample table, an asterisk appears in the cell with the correct response choice.
Note: The Testing Center's Zeus class test scoring service for faculty also provides this data.
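For readers who prefer to automate the tally, a short script can produce the same table. The function below is only a sketch of the idea described above; the responses and answer key in the example are hypothetical:

```python
from collections import Counter

def response_distribution(responses, key, choices="ABCDE"):
    """Tally how many examinees picked each choice for one item.

    responses: list of single-letter answers to one item
    key: the correct choice letter (marked with an asterisk, as in the manual's sample table)
    Returns a dict mapping each choice label to (count, percent).
    """
    counts = Counter(responses)
    n = len(responses)
    table = {}
    for c in choices:
        count = counts.get(c, 0)
        pct = 100.0 * count / n if n else 0.0
        label = c + ("*" if c == key else "")  # asterisk flags the correct response
        table[label] = (count, round(pct, 1))
    return table

# Hypothetical responses from ten examinees to one five-option item keyed "D"
item1 = ["D", "D", "B", "A", "D", "C", "D", "E", "D", "D"]
print(response_distribution(item1, "D"))
# {'A': (1, 10.0), 'B': (1, 10.0), 'C': (1, 10.0), 'D*': (6, 60.0), 'E': (1, 10.0)}
```

With real data, one such table per item gives the full response distribution at a glance.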
Maximum test reliability requires that questions fall in about the 50% to 80% difficulty range; that is, between 50% and 80% of the examinees should respond correctly to each item. The optimal percentage of correct responses for a five-option multiple-choice item is 60%.
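The 50%-80% guideline can be checked mechanically once the percentage correct is known. The thresholds below come from the guideline above; the item labels and Item #2's percentage are hypothetical (Item #1's 56% is taken from the sample table discussed next):

```python
def difficulty_flag(pct_correct, low=50.0, high=80.0):
    """Classify an item by the percentage of examinees answering correctly.

    Items below `low` may be too hard (or flawed); items above `high` are
    too easy to discriminate well. The 50-80 band follows the manual's guideline.
    """
    if pct_correct < low:
        return "too hard - review item"
    if pct_correct > high:
        return "too easy"
    return "acceptable"

# Item #1 at 56% is from the sample table; Item #2's value is hypothetical
for item, pct in [("Item #1", 56.0), ("Item #2", 22.0)]:
    print(item, pct, difficulty_flag(pct))
# Item #1 56.0 acceptable
# Item #2 22.0 too hard - review item
```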
We can easily observe the items' difficulty levels from the table. For example, Item #1 in our sample table has a response distribution that should be considered quite satisfactory: fifty-six percent of the examinees chose response choice D, the correct response to that question, and the remainder of the respondents were spread over the other options. For Item #2, however, we note that a relatively small percentage chose the correct response, A. At this point, the value of such an analysis becomes obvious. We can see that distracter B lured a large number of examinees, which suggests that close examination of that distracter is warranted. Perhaps its wording misled a number of students who actually knew the material. It should be noted that even if the distribution of responses had been spread more evenly over the distracters, such a light response to choice A suggests that the item be considered for revision.
Another useful procedure is to encourage student feedback about the fairness of items. Although some students need little encouragement, others are reluctant to risk the penalties they believe to be associated with challenging authority. Of course, a willingness to consider the possibility of a flawed item implies a willingness to adjust grades as well.
More advanced statistical analysis tools may be found in the Zeus System
of classroom test scoring offered through the Testing Center.
Although this guide to test construction and analysis is far from exhaustive, we hope that it was helpful. If you have any questions, comments, or suggestions, please send them to Fred Gillette of the Testing Center.