PITFALLS OF CONVENTIONAL COURSE EVALUATION FORMATS
By
Alan Veasey, M.A.Ed., M.P.H.
University of Alabama at Birmingham
Center for Labor Education and Research
I believe all labor educators will agree that feedback from trainees is important for training process evaluation. We at UAB/CLEAR have conducted NIEHS-funded Hazardous Waste Operations and Emergency Response (HAZWOPER) courses for most of the last nine years. During most of that time, we used conventional course evaluation forms to get feedback from trainees. In this chapter, I will use our program as a case history to describe some of the pitfalls we encountered. I will also describe recent changes we have made in an attempt at improvement.
What I am calling the "conventional approach" uses standardized course evaluation forms such as the one shown in Appendix 1. In this approach, each trainee completes and turns in a course evaluation at the end of each course. These evaluations are intended to provide an assessment of training from the trainee's point of view, and the resulting insights are supposed to guide improvements to training. This method of process evaluation is probably familiar to us all; it has been in common use, with minor variations, for years.
PITFALLS OF THE PROCESS
Nine years ago, an initial group of instructors parachuted into UAB/CLEAR and began HAZWOPER training. At that time, we were given the course evaluation format shown in Appendix 1 as a standard. I don't know where the format came from. However, we dutifully used it without question for years, sort of like trainer zombies in a George ("Night of the Living Dead") Romero movie. Over several years of training, though, we gradually became aware of major problems in our course evaluation process. These problems were rooted in the basic concept and format of our course evaluation form.
In the following paragraphs, I will work through the evaluation format shown in Appendix 1 section by section. As I do so, I will describe some of the problems we encountered in using it.
Section 1: Rating Overall Effectiveness of Training
This section simply asks, "After taking this course will you be able to perform your job better?" This is obviously an important question. However, wouldn't a better question be, "After taking this course will you be able to perform your job more safely?"
I realize this may seem like a minor point. However, "better" and "safer" may not be synonyms for everyone. This calls into question the amount of thought put into developing the form.
Section 2: Rating Instructor Presentations
In our program, this section functioned mainly as an instructor popularity contest. I became aware of this after realizing that our worst instructor consistently received superior presentation ratings.
The instructor in question often used incorrect terms, provided incorrect information, and gave disorganized presentations. Also, he constantly told war stories, which often strayed from the topic and sometimes consisted of obvious lies. Needless to say, he is no longer employed in our program.
How, you might ask, did such an instructor get superior ratings? Simply stated, he was very well liked by our trainees. He joked with them during class, smoked with them during breaks, and organized recreational events after hours. As a result, he was popular with our students and consistently received superior student evaluations.
Section 3: Rating Coverage of Topics
Results from this section often were not consistent with results from section 2. For instance, a given trainee might give my presentations in a course low scores in section 2, yet rate coverage of the very topics I presented with high scores in section 3. While I am admittedly multi-talented, I always found it baffling that I could make presentations that were simultaneously seen as bad and good.
In some cases, topics that had not actually been covered would appear in section 3. When that happened, we would remind trainees not to evaluate those topics. Even so, almost all trainees would evaluate the topics that had not been covered. This caused us to doubt the validity of the student feedback we were receiving.
Section 4: Rating Course Interest Level, Materials, and Quality
Our major complaint about this section was that the results were too general. For example, do poor ratings on audiovisual materials apply to the course as a whole? If not, which topics do the negative ratings refer to? Evaluations that fail to answer questions such as these provide little guidance for improvement.
Section 5: Rating Time Spent on Topics
Like section 4, the feedback provided by this section was generalized across the entire course. Students sometimes indicated that time spent on topics was too short or too long, but they rarely specified which topics they were referring to. Thus, this feedback was not very helpful to us in making time adjustments.
Occasionally, students would indicate that the time devoted to course topics was both too short and too long. While such replies are useful for philosophical contemplation, they are not very helpful for course improvement.
Sections 6 and 7: Course Likes and Dislikes
Sections 6 and 7 allowed trainees to let us know what they liked and disliked about a course. These sections of the evaluation consistently provided helpful feedback for course improvement.
However, even these sections occasionally produced results that were not very helpful. For example, we have received complaints about bad weather, students not being able to leave early on payday, and other things beyond our control.
Also, trainees' personal comments were sometimes not very helpful. This reminds me of a friend who teaches Introductory Anthropology at a small college. He enjoys sharing his students' observations from course evaluations. One student wrote, "This teacher is like Hannibal Lecter from the movie The Silence of the Lambs." Another wrote, "The instructor reminds me of the Mr. Hyde phase of the Dr. Jekyll/Mr. Hyde character from the Bugs Bunny cartoons." I believe that students should have a chance to make any comments they like on course evaluations. However, some observations, although very interesting to read, don't provide a lot of guidance for improvement.
CONSIDERATIONS FOR IMPROVEMENT
At UAB/CLEAR we recently began an effort to improve our course evaluation process. We had one major advantage in this effort: we knew what bad evaluation was. We were intimately acquainted with bad evaluation because we had been doing it for years. In our case, bad evaluation produced results which were overly subjective, confusing, occasionally bizarre, and sometimes bad for instructor morale. Moreover, it provided very little guidance for improvement of training.
As a starting point for improvement, we threw out our old evaluation form. We did retain the few questions which sometimes provided useful information. We then identified additional questions that we wanted our evaluations to answer. We rewrote the questions several times to make them easier to read and understand. These questions served as the basis for our new evaluation format.
A NEW AND IMPROVED EVALUATION FORMAT
Our initial efforts at improvement resulted in a new evaluation form, as shown in Appendix 2. It is designed to provide answers to the questions we had about how students experience our courses. The new evaluation format consists of three parts, as described below.
Part 1: Trainee Education and Work Experience
This section provides information on each respondent's educational level and work experience. We plan to correlate this information with trainee responses on the rest of the evaluation.
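To give a concrete (and purely hypothetical) picture of what such a correlation might look like, the Python sketch below shows one way the analysis could be run once responses are keyed into a spreadsheet. The file name, column names, and education coding are my own illustration, not part of our actual form.

    # A minimal sketch, assuming responses have been keyed into a CSV file
    # with one row per trainee. The file and column names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("course_evaluations.csv")

    # Code education as an ordered numeric scale so it can be correlated.
    education_order = {
        "some high school": 1,
        "high school diploma": 2,
        "some college": 3,
        "college degree": 4,
    }
    responses["education_rank"] = responses["education_level"].map(education_order)

    # Spearman rank correlation suits ordinal scales like education level.
    print(responses[["education_rank", "years_experience",
                     "modules_rated_helpful"]].corr(method="spearman"))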
Part 2: Evaluation of Course Modules
This section allows trainees to provide basic feedback on all modules of the course. We ask that they do this as the course progresses rather than waiting until the end. Otherwise, they tend to forget topics over a long course.
This section is completed by answering "Yes" or "No" to the following set of questions for each module; a sketch of how such answers can be tallied follows the list.
• Was this part of the course interesting?
• Did you have the chance to really take part?
• Were you able to follow what was taught?
• Did you learn things that can help you stay safe and healthy on the job?
If No, why not? Check One.
- Couldn't understand the material
- Already knew all I needed to know about the topic
- Information did not pertain to my job
• Should we take more time to cover this material?
• Could we have covered this material in a shorter time?
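Because every module gets the same short battery of questions, the answers can be tallied module by module, which is exactly the topic-level detail our old form lacked. The Python sketch below illustrates the idea; the sample records and the tally_module helper are hypothetical, not part of our evaluation system.

    # A minimal sketch of a per-module tally. The sample records and the
    # tally_module helper are hypothetical illustrations of the idea.

    # One record per trainee per question: (module, question, answer).
    responses = [
        ("Toxicology", "more_time", "Yes"),
        ("Toxicology", "more_time", "No"),
        ("Respirators", "more_time", "Yes"),
        ("Respirators", "more_time", "Yes"),
    ]

    def tally_module(records, module, question):
        """Return the fraction of "Yes" answers for one question in one module."""
        answers = [a for m, q, a in records if m == module and q == question]
        return answers.count("Yes") / len(answers) if answers else None

    # Flag any module where most trainees asked for more time on the material.
    for module in sorted({m for m, _, _ in responses}):
        share = tally_module(responses, module, "more_time")
        if share is not None and share > 0.5:
            print(f"{module}: {share:.0%} of trainees wanted more time")

Run on the sample records above, this would flag the Respirators module (100% wanted more time) but not Toxicology (50%), giving us a direct, per-module answer to the "which topics?" question that sections 4 and 5 of the old form could never provide.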
Part 3: Overall Impressions of Course and Comments
This section is intended to provide feedback on the course as a whole. Was it worthwhile? Did the modules fit together well? Was the learning environment comfortable? What parts were especially liked or disliked, and why?
Also, what evaluation form would be complete without a "comments" space? This serves as a place for lingering comments which didn't quite fit anywhere else.
CONCLUSION
We've only recently begun using our new evaluation format. We don't know yet if it will solve all the problems that I've described. However, early indications have been good. We remain optimistic.
I believe that course evaluation is problematic by nature. Thus, as glaring problems are corrected, more subtle or insidious difficulties may become apparent. I strongly suspect that improvement of evaluation will be an ongoing evolutionary process. However, I do feel that we have taken a valuable first step in that process. I just wish we had done it years ago.