ENHANCING TEACHING EFFECTIVENESS FOR PERSONAL SATISFACTION AND PROFESSIONAL GAIN: CASE EXAMPLE OF A “RAPID-RESPONSE,” CONTINUOUS IMPROVEMENT-ORIENTED EVALUATION PROCESS AND TOOL TO SUPPLEMENT AND ENHANCE NATIONALLY-NORMED STUDENT FEEDBACK SYSTEMS
D.K. (Skip) Smith, Southeast Missouri State University
David Kunz, Southeast Missouri State University
Business professors are under pressure from numerous stakeholders (AACSB, their own
institutions, administrators, colleagues, and students) to improve performance and to
practice continuous improvement. Student evaluations are already important to many of
the above stakeholders, and becoming more so. Based on suggestions from their
university’s Center for Research in Scholarship and Learning, the authors developed a
rapid-response, student feedback-based process for continuous improvement. The
process used to create the data collection instrument, the data actually collected, and
the implications flowing from the study are discussed.
I. INTRODUCTION
Institutions of higher learning are being asked by state legislatures and other governing
bodies to provide proof of the quality of instruction and curricula (Ewell 1991; Terenzini 1989).
Responding to these requests, universities and colleges have established assessment programs,
that is (DeMong, et al. 1994), “processes used to determine the impact or effectiveness of an
activity, session, class or program.” “Assessment” measures program quality and demonstrates
accountability (Chamberlain and Seay 1990). Benefits provided by assessment include
(Hutchings, et al. 1991): (1) the ability to compare student learning with program objectives; (2)
agreement on student learning expectations, that is, clearly defined objectives; and (3) the
gathering of information that allows ongoing program improvement.
Given the above comments, it is no surprise that AACSB is a vigorous supporter of
continuous improvement in the quality of programs and courses. AACSB and other proponents
of continuous improvement indicate that systematic collection and analysis of student feedback
is an extremely important and useful action. For this reason, and because we were
disappointed by the end-of-term IDEA evaluations of the first iteration of our new team-taught
BA650 Strategic Decision-Making course, we decided to conduct an in-depth mid-semester
assessment of student reactions during the second iteration of the course.
II. QUESTIONNAIRE DESIGN
What data to collect, and how to collect it: these are interesting and important questions. Discussions
with the head of our university’s Center for Research in Scholarship and Learning led us to
believe that our rapid-response continuous improvement-oriented form and process should
accomplish at least the following five objectives:
1. Remind students of the objectives of the course.
2. For each major course activity (that is, readings, video tapes, guest speakers, case studies,
projects, and business simulation), ask students to assess the contribution of that activity
to each course objective, using the following three response categories:
L = This activity contributes very little to the achievement of this objective.
M = This activity contributes somewhat to the achievement of this objective.
H = This activity contributes a great deal to the achievement of this objective.
3. Solicit student input on ways each individual activity could be made more useful.
4. Solicit any additional student suggestions for improving the course (that is, suggestions
which might not be tied to a specific activity or objective).
5. On a seven-point scale (7 = high), ask students to indicate the extent to which the course
is meeting their expectations.
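The grid implied by objectives 1 and 2 above (every activity rated L/M/H against every course objective) can be sketched as follows. This is only an illustrative sketch: the activity names come from the paper, but the objective labels are placeholders for a reader's own course objectives.

```python
# Sketch of the activity-by-objective rating grid described in objectives 1 and 2.
# ACTIVITIES are taken from the paper; OBJECTIVES are placeholder labels.

ACTIVITIES = ["readings", "video tapes", "guest speakers",
              "case studies", "projects", "business simulation"]
OBJECTIVES = ["Objective 1", "Objective 2", "Objective 3"]  # substitute your own
SCALE = {
    "L": "contributes very little",
    "M": "contributes somewhat",
    "H": "contributes a great deal",
}

def grid_rows():
    """One row per (objective, activity) pair to be rated L/M/H."""
    return [(obj, act) for obj in OBJECTIVES for act in ACTIVITIES]

def render_questionnaire():
    """Plain-text rendering of the rating section of the form."""
    lines = ["Rate each activity's contribution to each objective:"]
    for code, meaning in SCALE.items():
        lines.append(f"  {code} = This activity {meaning} to the achievement of this objective.")
    current = None
    for obj, act in grid_rows():
        if obj != current:          # start a new block for each objective
            lines.append(f"\n{obj}")
            current = obj
        lines.append(f"  {act:<20} [ L / M / H ]")
    return "\n".join(lines)
```

Substituting one's own objectives and activities into the two lists is all that is required to regenerate the instrument, which is the point made in the paragraph that follows.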
Based on the above, we created a questionnaire. By substituting their class objectives and
their classroom activities, readers can easily create their own data collection instrument. As to
data collection, we distributed the forms at the beginning of a mid-semester class session.
Students filled out the form anonymously and returned it one week later. For confidentiality’s
sake, all students turned in their completed forms to a class member. That individual monitored
the collection process, and delivered the packet of completed questionnaires to the authors. Since
students could earn points by participating, the authors were not surprised that all 24 students participated.
III. CASE EXAMPLE: THE QUANTITATIVE DATA
Student perceptions on the extent to which the listed activities contributed to achievement of
our objectives (Low, Medium, or High) were converted to a five-point scale. Key findings
flowing from analysis of the quantitative data include the following:
1. Students perceive that four activities (readings, speakers, cases, and projects) each
contribute substantially to the achievement of two or more of the objectives
established by the authors for this course.
2. Tapes and the simulation contributed far less to the achievement of our objectives.
Furthermore, on objectives for which tapes and the simulation are perceived to contribute
relatively substantially, other activities contribute to the achievement of those objectives
at higher levels. A review of our tape and simulation-based activities is needed.
3. None of our activities were perceived to contribute importantly to achievement of our
ethics objective. We must re-think our approach.
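The conversion and summary steps above can be sketched in a few lines. The paper states only that L/M/H responses were mapped onto a five-point scale, so the mapping L=1, M=3, H=5 is an assumption, and the response data below is hypothetical, shown solely to illustrate how findings like the ethics gap could be flagged.

```python
# Sketch of the L/M/H-to-numeric conversion and per-activity summary.
# Assumption: L=1, M=3, H=5 on the five-point scale (not specified in the paper).
from statistics import mean

SCORE = {"L": 1, "M": 3, "H": 5}

# Hypothetical responses: responses[activity][objective] = list of L/M/H codes.
responses = {
    "readings":   {"strategy": ["H", "H", "M"], "ethics": ["L", "L", "M"]},
    "simulation": {"strategy": ["M", "L", "L"], "ethics": ["L", "L", "L"]},
}

def mean_contribution(activity, objective):
    """Average numeric contribution score for one activity-objective pair."""
    return mean(SCORE[r] for r in responses[activity][objective])

def weak_objectives(threshold=4.0):
    """Objectives to which no activity contributes strongly (all means < threshold)."""
    objectives = {obj for by_obj in responses.values() for obj in by_obj}
    return sorted(obj for obj in objectives
                  if all(mean_contribution(act, obj) < threshold
                         for act in responses if obj in responses[act]))
```

With these (hypothetical) numbers, `weak_objectives()` would surface "ethics" as an objective no activity addresses strongly, which is the kind of gap finding 3 reports.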
IV. CASE EXAMPLE: QUALITATIVE DATA
We solicited two levels of qualitative data. Students provided us with comments and
suggestions for making each individual classroom activity (readings, tapes, guest speakers, case
studies, research projects, and business simulation) more useful. In addition, students were
provided an opportunity to share with us any other suggestions for improving the course. Key
findings included the following: (1) Perceived benefits of our simulation game were very low;
(2) Key take-aways of readings and/or tapes need to be highlighted; (3) The amount of time and
energy spent on various activities needed to be reviewed and rebalanced.
V. DISCUSSION AND IMPLICATIONS
The authors’ institution requires that at least once a year, all instructors conduct student
evaluations of learning using the IDEA system developed at Kansas State University. Based on
the results of this “rapid-response/continuous improvement-oriented” effort to improve student
evaluations for our Strategic Decision Making course, the authors have newfound reservations
regarding the usefulness of the IDEA system, not only as a tool for coaching and improvement
but also as an indicator for tenure and promotion. Our concerns include the following points:
1. We believe professors need to know whether each objective is being effectively
addressed by at least one activity, and whether each activity is powerfully addressing at
least one objective. Neither the IDEA system nor any other nationally-normed student
feedback system known to the authors does so.
2. None of the nationally-normed student assessment systems is useful in rapid-response
situations. Furthermore, the costly and time-consuming IDEA, SIRS, and other
nationally-normed systems do not lend themselves to the multiple uses required for
ongoing continuous improvement.
3. Systematic use of the sort of feedback scheme described in this paper can be very useful
not only in facilitating but also in documenting efforts to improve teaching. For a professor
facing the tenure decision, a paper trail documenting ongoing use over time of this sort of
feedback scheme is tangible evidence of efforts toward continuous improvement and, by
implication, of a desire to continue improving his or her teaching effectiveness into
the future (that is, after the tenure decision).
Collection and analysis of this activity-specific mid-semester student feedback did provide
the insights we needed to dramatically improve our IDEA evaluations. Our “overall evaluation,
similar courses” percentile evaluations increased from below the 30th percentile (first iteration of
the course) to above the 75th percentile (second iteration). For now and the foreseeable future,
we believe the process described in this paper will be our #1 tool for continuous improvement.
REFERENCES
Chamberlain, Don and Robert Seay (1990). “Outcomes Assessment: A New Challenge for
Business Educators,” Journal of Education for Business, (February).
DeMong, Richard F., John H. Lindgren, Jr., and Susan E. Perry (1994). “Designing an
Assessment Program for Accounting,” Issues in Accounting Education, (Spring), 11-26.
Ewell, P.T. (1991). “Assessment and Public Policy: Shifting Sands, Uncertain Future,”
Assessment Update, (September/October), 1-7.
Hutchings, P., T. Marchese, and B. Wright (1991). Using Assessment to Strengthen General
Education. Washington, DC: AAHE.
Terenzini, P.T. (1989). “Assessment With Open Eyes: Pitfalls in Studying Student Outcomes,”
Journal of Higher Education, (November/December), 644-665.