A Proposal for Formative Assessment of Teaching
Center for Teaching and Learning
Task Force on Formative Assessment
Nancy Lapp, Government; Ted Lascher, Public Policy and Administration; Tom Matthews,
Electrical Engineering; Rosemary Papalewis, Educational Leadership and Policy Studies; Mark
Stoner, Communication Studies, Chair
“The measure of one’s power is the ability to control one’s future.”
INTRODUCTION: INTERESTS IN EVALUATION
Evaluation is a significant part of our professional lives. We have achieved our positions
by passing a series of rigorous evaluations; we continue to be evaluated by colleagues sitting on
retention and promotion committees, by administrators, by students and ourselves. But the
evaluation matrix is even more complex than that. Take a look at Figure 1 below, and notice
the number of parties who have either a direct or indirect interest in faculty evaluation. The most
direct sources of interest are the University, the department(s) within which we work, and our
students. As a result, these entities often represent seats of power to us because their evaluations
can significantly affect our careers and lives. Our responses to their evaluations may then,
depending on how we attribute power to these evaluators, “force” us to behave in ways that meet
their perceived power demands, but are ill-fitting to our personal values, goals or styles of
teaching.
[Figure 1. Parties with a direct or indirect interest in faculty evaluation: Community, University,
Government, Disciplines, and the individual Faculty Member.]
Also, the indirectly interested parties, at times, can exert significant influence on
decisions about faculty recruitment, retention and promotion. For example, federal and state
governments are threatening to take a greater role in monitoring or modifying faculty workloads,
or evaluating faculty productivity for purposes of merit pay. In spite of that, we argue that the
single most important entity in Figure 1 is the individual faculty member. Although numerous
interested parties endeavor to evaluate our work, significant, meaningful, and long-term
positive change will be achieved only when it comes as a decision from within individual
faculty members based on self-evaluation. The fact is, if faculty need to make changes in their
teaching, but they are not committed to proposed solutions, no lasting change will occur.
Demands or suggestions from even well-intentioned evaluators fail to account for the
complexities of the context in which individual instructors are working. They know more about
their classes than anyone else; the richness of the “data” that they can gather with each class
meeting gives them a unique perspective that is more complex than any observer could develop.
Faculty have a clearer understanding of the primary and secondary goals of the course than
anyone else could have; their knowledge of available, relevant resources or barriers to student
learning is unparalleled.
SOURCES AND KINDS OF DATA
Those truths, however, do not make outside evaluation go away, nor do they provide a
compelling argument to ignore it. What we need to remember is that there are different kinds of
data used for different evaluation or assessment purposes. Figure 2 provides a matrix showing
the kinds of evaluation we face and their interrelationships. All six cells of the evaluation
matrix provide some useful information, but for very different purposes.
Figure 2. The evaluation matrix:
              Formative    Summative
Self              1            4
Peers             2            5
Students          3            6
Summative evaluation is probably what we think of first when the word “evaluation” is
mentioned in the context of work. It is conducted at the conclusion of some specified period of
time; it often, although not always, employs previously specified criteria and is used to
facilitate organizational decision-making or ranking. It is “static” in
the sense that it provides a snapshot of whatever is being evaluated, and it generally does not
provide developmental data (Sell 25). So, summative peer evaluation (cell 5), for most faculty
in the CSU system, is a periodic evaluation, conducted by peers representing their departments
and the University as a “direct” interest (probably incorporating student evaluations and probably
not including self-evaluation) against some criteria, the conclusions of which are used to decide
whether a colleague should be retained or promoted. Students typically provide
summative data on end-of-semester evaluation forms (cell 6). These data most commonly go
into the mix along with summative peer evaluations for decision-making purposes. Rarely, it
seems, is summative self-evaluation (cell 4) included in faculty personnel decisions. The
function of summative evaluation is primarily to facilitate organizational decisions. As a
result, it provides sporadic feedback that is not designed to facilitate or provide direction for
positive change, and the hard thinking about an instructor’s behavior is done by someone else!
However, Hutchings argues that, “Faculty whose teaching is being evaluated should see
themselves not as objects, but as active, central actors in the process, generating, assembling, and
putting forward for review their work as scholar-teachers” (105). That requires a formative approach.
An alternative, formative evaluation, is conducted for purposes of development. That is,
formative evaluation has as its goal the assessment of progress toward stated objectives,
providing information for use in making changes in behavior, or corrections to one’s
“trajectory” toward achievement of those objectives. Whereas summative evaluation
functions as a static decision point, formative evaluation serves as a mechanism for change
by which the hard thinking about teaching is done by the individual instructor (Davis 8-9).
In most retention and promotion systems, the summative peer evaluations and summative student
evaluations carry the greatest weight with decision-makers. However, the questions that faculty
evaluators don’t often ask are: What exactly is the intended purpose of evaluation? Are the
evaluations only for benchmark decision purposes, or are they intended to function
developmentally? Given the fact that the typical tenure cycle lasts six years, and post-tenure
review occurs every five years, it would be logical to expect that formative evaluation would be
used frequently so as to facilitate constant growth over those periods of time. It has been our
experience, though, that the organizational and, consequently, our personal orientations have
been toward the use of infrequent summative peer evaluations that provide little useful feedback.
Given the significant differences between summative and formative processes, we would like to
formally distinguish the terms from here on as summative evaluation and formative assessment.
We argue that, within the fluid context of teaching, where the work of teaching unfolds over
the course of a semester or academic year, formative assessment is the most useful and,
ultimately, most powerful form of intervention for growth. That is, if faculty are
engaging in continuous formative self-assessment, coupled with peer coaching or some form of
dialogue with colleagues, they will be constantly aware of their progress toward the goals by
which they will ultimately be judged in a summative fashion. So, conducting formative
self-assessment is to “take the evaluative bull by the horns” and positively control outcomes. If
faculty are encouraged to constantly appraise their goal achievement, and document it, they no
longer have to wait for someone else to pronounce them a success or failure. We argue that
rigorous self-assessment and constant work at self- development reconstructs the overall
evaluation process, increasing opportunity and rationale for improvement and, in the end, wrests
significant power from the parties noted in Figure 1 and gives it back to the individual faculty member.
EFFECTS OF EVALUATION
Summative evaluation, as noted above, has a function of providing specific data to be
compared against more-or-less defined benchmarks for decision-making purposes. Also, as
suggested above, those being evaluated often wait extended periods of time for summative
decisions to be made. Those circumstances produce significant and long-term effects on the
behavior of those being evaluated in a summative fashion, effects we can experience for
ourselves at this moment. Below, we’ve reproduced some samples of summative statements faculty have actually
received from evaluators. Admittedly, the statements are out of context, but pretend for a
moment that these statements were directed at you (you may wish to add your own to the list).
Read the statements carefully and slowly, keeping track of your internal responses to them.
1. “As you may already be aware, I am extremely dissatisfied with your work.”
2. “She has been teaching long enough to discontinue use of student group
activities/presentations and lecture full time.”
3. “You take things too seriously.”
5. “She is a good teacher but she has some negative comments from students such as,
‘She is hard to please; somewhat strict.’ ”
6. “You are a great teacher.”
Now, pause and recall what you were doing inside your head when you read those statements. If
you were the recipient, to what degree did the evaluative statements facilitate your thinking
about how you can get better? Most faculty report that even positive responses such as number
six do nothing to facilitate growth. In fact, when such statements are made, motivation for
thinking and positive change are diminished--why change something that obviously works? Such
evaluations feel good and are welcome, but don’t move the recipient ahead. Responses such as
number one may create paralysis of thought and action rather than motivate positive change;
understandably, that was the testimony of the receiver of the first statement! Response number
two caused the recipient to argue against lecturing rather than think about expanding his
repertoire by looking for ways to employ lecture as a teaching method when appropriate.
Responses three, four and five are so ambiguous that they elicited confusion rather than
purposeful thinking about teaching.
When we ask teachers what kind of feedback they most often receive, it is either positive
(number six) or negative (number one), with the negative evaluations being most memorable and
having the most significant impact on them. Even if negative statements are uncommon (which
they are not), and even if they are well-intentioned, the impact can be powerfully inhibiting of
change in recipients.
Interestingly, the effect of a summative, evaluative approach is not motivation for faculty
to improve. Pat Hutchings found that, “Too often, the kind of teaching that’s institutionally
valued (although no one says this outright) is teaching with no visible defects–where students are
satisfied, parents do not call the dean’s office with complaints, and, in general, instruction is
‘pulled off’ without apparent hitch or glitch” (Hutchings 104). In other words, faculty are
encouraged implicitly to do what is safe, but not necessarily what is best for student learning.
Sometimes students don’t like a particular teaching device or approach, but learn a great deal
from it (Sokol). Innovation is risky for faculty. Therefore, Hutchings recommends that “before
leaping into ‘summative’ (i.e. high stakes evaluation) contexts it [is] important to experiment
with strategies for being more public about teaching in ways that would serve the ‘formative’
purpose of improvement” (Hutchings 101). This actually suggests that formative, developmental
assessment is something prior to summative evaluation rather than a post hoc “fix” when
summative evaluation is negative.
FORMATIVE ASSESSMENT AND GROWTH
Research by Chris Argyris on high-performing business consultants has indicated that
“smart people” often fail to learn from their performances when others point out their
mistakes (35). What Argyris observed was that highly intelligent, energetic, successful
professionals, when faced with evaluative comments by their peers about their work, spent most
of their time in project debriefing sessions justifying their actions, or attempting to lay blame for
problems or failures on others. Successful people, such as the subjects of Argyris’s study, have
powerful egos that act as a two-edged sword in the context just described--a strong ego allows
such people to move into complex situations and accomplish significant work, but it can inhibit
performance development if attacked. The same descriptors can be applied to Ph.D.s teaching in
universities--highly intelligent, energetic, and successful. Faculty get their positions by being
“high performers.” What the consultants needed, Argyris discovered, was a way to think and
talk about their work that avoided traditional evaluation and facilitated purposeful, self-directed
assessment to improve their work. We believe the same is true of faculty at CSUS, especially in
the present climate of uncertainty and change. We must adapt to newly emerging realities of
university life and we must install mechanisms for facilitating self-directed and meaningful
change (i.e., help people take risks) if adaptation is to occur. Otherwise, folks tend to keep
doing what they have always done, only harder; and if what they have been doing hasn’t been
working, they end up investing more energy into failed systems of behavior.
We have argued that reliance on summative peer and student evaluations provides static
descriptions of performance and that such evaluations tend to inhibit change; such a
structure invests power in persons outside of a faculty member’s context for choosing that
person’s future. The issue of power is significant, and exercising power to effect change in one’s
own work requires “extra current effort” or effort beyond that required for accomplishment of a
normal workload (Bereiter and Scardamalia). We must recognize this significant fact and keep it
at the forefront of our thinking about formative assessment. Nevertheless, formative self-
assessment is powerful when individuals engage in intentional learning about themselves and
their work. This necessitates engagement in meta-cognition, goal-setting and implementation of
change while doing their regular work. A program of teaching development which has a strong
self-assessment component breaks people loose to adapt, develop, and flex to meet the
exigencies of their work. (It also entails thinking of ways to provide resources, particularly
time, to do this purposefully and productively.)
The more specific we can be in our study of teaching, the more appropriate and effective
change will be. As Russell Edgerton noted, “Teaching is highly context specific and its true
richness can be fully appreciated only by looking at how we teach a particular subject to a
particular set of students” (2). Lee Shulman argues that traditional educational research, in an
effort to make generalizations about teaching, has served to decontextualize our analysis of
teaching and has led us to emphasize technological prescriptions about teaching which diverted
attention from analysis of student (and teacher) cognitive processes.
In an effort to take Edgerton seriously regarding contexts of teaching and learning, the
CTL conducted a series of focus groups with faculty in order to discover faculty perceptions of
their situations. We were primarily interested in faculty talking about their perceptions of the
nature and impact of the present evaluation system on their teaching. They had interesting
insights about that topic as well as other related topics. We present a summary of the themes of
the focus groups' responses. Substantial transcriptions of faculty comments related to each
theme are reproduced in an appendix to this proposal.
WHAT OUR FACULTY HAVE TO SAY: FOCUS GROUP RESULTS
Between October 1 and November 3, 2001, CSU, Sacramento conducted four
focus groups. The groups, composed of CSUS faculty, ranged in size from two to seven
members. Each group lasted between 75 and 90 minutes. Audio and video recordings of the groups were
preserved for future study. A copy of the moderator’s guide is attached to this summary.
Faculty volunteered hundreds of specific comments and suggestions. Readers interested in
hearing and seeing a full list of faculty answers to focus group questions should consult the audio
and video recordings. This summary will attempt to capture the recurring patterns and themes
evident from faculty comments.
1. Quality Teachers: Focus group members were somewhat reluctant to conclude that CSUS
faculty were uniformly talented, but complimented most of their colleagues.
2. Course-Instructor Evaluations: The consensus of focus group participants was that course-
instructor evaluations were inadequate as a stand-alone evaluation tool. Most believed that
such evaluations often compromise the goal of effective instruction.
3. Students: Consensus emerged that clearly some students do not belong at CSUS. The
prevailing belief that every California student deserves a college education hurts the effort to
provide quality education at CSUS.
4. Assessment Tools: CSUS faculty use a wide variety of assessment tools in their classrooms.
5. Risk & Innovation: Some faculty members believed CSUS does not reward risk taking by
   faculty.
6. Institutional Barriers to Effective Teaching: Faculty members identified a wide variety of
   what they believed were barriers to effective teaching. The most common complaint
   regarded the inappropriate use of a "corporate" model that emphasizes quantity of students
   over quality of education.
7. Repairs: Focus group members suggested a variety of possible repairs that they believed
would improve the quality of instruction at CSUS.
WHAT IS APPROPRIATE FOR CSUS
Given the diversity of departments’ sizes, histories, structures and resources, and given their
varying interests in teaching, research and service, rather than recommending a one-size-fits-all
policy, we recommend a procedure for implementing formative self-assessment that can be
modified for each department. While we encourage all departments to engage in and facilitate
formative self-assessment among their faculty, we realize that each department needs to do that
in a way that best fits their own needs.
FORMATIVE ASSESSMENT PROCEDURE
Below, we have outlined a procedure that provides some minimal structure for planning,
conducting and reporting formative self-assessment projects. It is intended as a guide that
departments can use as a template or modify to meet the departments’ needs.
Recommended Formative Self-Assessment Procedure, with Explanations

1) Faculty may request suspension of official student evaluations for at least one class section
annually.
Explanation: Departments that presently require standardized student evaluations for all classes
every semester may consider one exemption per semester. Institutionalizing the exemption will
promote attention to teaching and diminish inferences that faculty who declare an exemption are
having difficulties with a course.

2) Prior to submitting a proposal, faculty doing formative assessment should contact one of the
college’s faculty mentors or enlist the collaboration of a colleague in another department to act
as an occasional observer and coach.
Explanation: Discussion of the teaching process with a coach helps instructors articulate
questions, methods and findings; it also lessens feelings of isolation. A faculty colleague could
be asked to do anything from simply meeting with the instructor to talk about how the
assessment process is going, to interviewing the students in the exempted section to collect
information about the impact of the intervention or innovation during the semester and at the
end. We recommend collaborations outside departments to avoid conflicts regarding RTP.

3) The faculty member will submit a brief proposal (we recommend <1 page) stating
   • which class and section is to be exempted,
   • a description of the intervention/s or innovation/s that will be implemented,
   • the means by which the faculty member intends to assess outcomes,
   • the proposed format for the report of findings (e.g., brief written report to chair;
     presentation to department faculty; posting of ideas or insights on a web site),
   • the name of a collaborator (mentor, coach).
Explanation: Departments will decide who receives the proposal. We recommend that chairs
receive the proposals in order to provide support and resources where necessary.

4) Neither the proposal nor any written report or other documents associated with the assessment
project will be included in any WPAF or RTP file without the faculty member’s specific request
for inclusion.

5) The faculty member is encouraged to report insights developed as a result of the formative
assessment. The nature of the report is to be expressly non-evaluative.
Explanation: The purpose of the report is to provide opportunity to articulate what has been
learned from the process, closure to the process, and a simple record of completion of the
project. The report may treat such things as: conclusions drawn about what methods may be
effective for the instructor and other instructors of the same course; insights about the teaching
process; the effects of focused reflection on teaching practices; further innovations suggested by
the experience; ideas for research in teaching, etc.
SOME WAYS OF DOING FORMATIVE ASSESSMENT
So, how can faculty engage in effective formative assessment and increase (or in some
cases, recover) control of their professional development? There are at least six different
mechanisms by which faculty can engage in formative assessment:
1. Teacher Narratives: According to Cochran-Smith and Lytle, “what is missing from
the knowledge base of teaching. . . are the voices of the teachers themselves, the questions
teachers ask, the ways teachers use writing and intentional talk in their work lives, and the
interpretive frames teachers use to understand and improve their own classroom practices" ( qtd
in Sparks-Langer and Colton 41). Such writing emphasizes the “teacher’s own interpretations of
the context in which professional decisions are made" (41). Teacher narratives document the
instructor’s reflective work in a form that, if the faculty member wished, could be shared with colleagues.
2. Classroom research: Cross and Angelo, beyond articulating a rationale for classroom
research, have developed numerous means for accomplishing it. As is the case with a journal or
narrative, classroom research may be done in isolation, if desired, or conducted and prepared to
be shared with colleagues.
3. Faculty Growth Contracts: Peter Seldin describes how growth contracts function as a
means of documenting the plans and accomplishments of a teacher (1982, 73). According to
Seldin, “it turns the teacher inward to reflect with more than casual interest on his [or her]
professional strengths and weaknesses, and often suggests further improvement steps for the next
academic year” (73). However, as Seldin correctly notes, “the growth contract rests on the
assumption that the teacher is aware of his or her professional shortcomings and is genuinely
interested in overcoming them.” The professional growth contract has the particular strength of
facilitating discussion between faculty and department chairs and/or administration, depending
on the interests and concerns of the faculty member.
4. Video recording provides a possible means of self-assessment (Kipper and Ginot).
Video equipment is widely available and easy to use for purposes of creating a detailed record of
one’s teaching for analysis. Nieves-Squires argues that faculty should be trained in how to
assess videotaped records of teaching. Even granting this need for training, videotape provides
an unparalleled means for assessment of many dimensions of teaching not accessible via
instructor recall.
5. Teaching portfolios provide a means by which an instructor can specifically and
precisely document and interpret his or her teaching. This device is gaining increasing attention
among those who specialize in faculty evaluation (Braskamp and Ory; Centra; Seldin 1989).
Braskamp and Ory note that whereas many institutions first adopted portfolios for the purpose of
summative evaluation, many are now concluding that their greatest potential is in formative
assessment (231). Centra argues that portfolios:
should be reflective and explain the teachers’ thoughts and hopes as they make
instructional decisions. As Schon discussed in The Reflective Practitioner: How
Professionals Think in Action (1983), professionals should not simply depend on
established theory or technique but should react to particular situations. Thinking
and doing should not be separate; people who reflect-in-action, Schon argues,
become researchers in the context of their jobs. (101)
The portfolio provides a means of collecting, presenting, and interpreting self-generated data in a
systematic and coherent fashion. The task of assembling a portfolio facilitates self-reflection and
initiates and maintains an attitude of instructor as classroom researcher.
6. Cognitive peer coaching has been used in kindergarten through university settings for
over a decade as a means for facilitating self-assessment and continuous improvement among
faculty. Upon examination, it is clear that the coaching process creates the kind of interaction
that Argyris called for among professionals: thinking and talking about work in ways that
promote self-directed, purposeful and constant improvement. Within the coaching process,
“[s]killful cognitive coaches apply specific strategies to enhance another person’s perceptions,
decisions, and intellectual functions. Changing these inner thought processes is prerequisite to
improving the overt behaviors that, in turn, enhance student learning” (Costa and Garmston 2).
Three objectives of faculty coaching at the university level are:
1) to build on existing abilities [of faculty] to think and act thoughtfully as a teacher;
2) to stimulate participants’ abilities to modify themselves through reflection and
dialogue with a skilled coach, and
3) to help participants recognize the decisions they make in their teaching and constantly
refine ways of making decisions that are effective in maximizing student learning.
Cognitive peer coaching, with its focus on facilitating self-assessment of teachers’
thinking, decision-making, and behaving, provides a mechanism for classroom research based on
reliable and valid data which are highly specific and well-contextualized. Coaching is useful for
faculty no matter what pedagogical method a teacher favors, for coaching neither assumes nor
requires any particular approach to teaching. It is formative in its goals and, due to its design for
assisting self-modification and proscribing all external evaluation, coaching facilitates deep
learning among faculty about their teaching. Finally, coaching is adaptable for use in a variety of
ways. It can facilitate the use of classroom research techniques and interpretation of the data;
teacher narratives would become significantly more developmental if the narratives were used in
a coaching context, and the construction, presentation, and use of teaching portfolios would be
enhanced for those faculty working with a skilled coach.
In sum, the literature suggests that an environment of evaluation serves to inhibit innovation,
growth and development. Self-directed change, by contrast, in an environment that supports
thoughtful reflection, promotes achievement of the goals of high-performing professionals.
At CSUS, the faculty indicate a need to reshape the environment to one that supports
risk-taking and innovation in teaching. Specifically, the faculty envision a more flexible
assessment/evaluation process that values efforts to innovate in course design and delivery and
values a variety of data by which the outcomes of innovations are assessed and interpreted.
Numerous tools exist for conducting formative self-assessment, as outlined above. Variations
on those tools as well as others not listed provide a rich array of possible means for planning,
conducting and assessing teaching innovations. The task force is optimistic that the CSUS
community will explore the opportunities for growth that this project has uncovered.
WORKS CITED
Angelo, Thomas, and K. Patricia Cross. Classroom Assessment Techniques. 2nd ed. San
Francisco: Jossey-Bass, 1993.
Argyris, Chris. On Organizational Learning. Cambridge, MA: Blackwell, 1992.
Bereiter, Carl, and Marlene Scardamalia. “Intentional Learning as a Goal of Instruction,” in
Lauren Resnick, ed. Knowing, Learning and Instruction: Essays in Honor of Robert
Glaser. Hillsdale, NJ: Lawrence Erlbaum, 1989, pp. 361-92.
Braskamp, Larry A., and John C. Ory. Assessing Faculty Work: Enhancing Individual and
Institutional Performance. San Francisco: Jossey-Bass, 1994.
Centra, John A. Reflective Faculty Evaluation: Enhancing Teaching and Determining
Faculty Effectiveness. San Francisco: Jossey-Bass, 1993.
Costa, Arthur, and Robert Garmston. Cognitive Coaching: A Foundation for Renaissance
Schools. Norwood, MA: Christopher-Gordon, 1994.
Davis, Barbara Gross. “Demystifying Assessment: Learning from the Field of Evaluation,” in
P. J. Gray, ed. Achieving Assessment Goals Using Evaluation Techniques. New
Directions for Higher Education, no. 67. San Francisco: Jossey-Bass, 1989, pp. 8-9.
Edgerton, Russell, qtd. in Seldin, Peter. The Teaching Portfolio: A Practical Guide to
Improved Performance and Promotion/Tenure Decisions. Bolton, MA: Anker
Publishing, 1991, p. 2.
Geis, George L. “Formative Feedback: the Receiving Side.” Performance and Instruction
25 (June/July 1986), p.4.
Hutchings, Pat. Making Teaching Community Property: A Menu for Peer Collaboration and
Peer Review. Washington, D.C.: AAHE.
Kipper, David A., and Efrat Ginot. “Accuracy of Evaluating Videotape Feedback and
Defense Mechanisms.” Journal of Consulting and Clinical Psychology 47 (1979).
Nieves-Squires, Leslie C. “Teacher Theory/Practice: Disciplined Inquiry and Self-
Improvement.” Improving College and University Teaching 26 (Fall 1978).
Shulman, Lee. “Paradigms and Research Programs in the Study of Teaching: A
Contemporary Perspective,” in Merlin C. Wittrock, ed. Handbook of Research on
Teaching, 3rd ed. New York: Macmillan, 1986.
Seldin, Peter. “Self-Assessment of College Teaching.” Improving College and University
Teaching 30 (Spring 1982):73-4.
Seldin, Peter. The Teaching Portfolio: A Practical Guide to Improved Performance and
Promotion/Tenure Decisions. Bolton, MA: Anker Publishing, 1991.
Sell, G. Roger. “An Organizational Perspective for Effective Practice of Assessment,” in
P. J. Gray, ed. Achieving Assessment Goals Using Evaluation Techniques. New
Directions for Higher Education, no. 67. San Francisco: Jossey-Bass, 1989.
Sokol, P. E. “Improvements in Introductory Physics Courses” in Deborah J. Teeter and
G. Gregory Lozier. Pursuit of Quality in Higher Education: Case Studies in Total
Quality Management. New Directions for Institutional Research No. 78. San
Francisco: Jossey-Bass Publishers, 1993, pp. 41-43.
Sparks-Langer, Georgea Mohlman, and Amy Berstein Colton. “Synthesis of Research on
Teachers’ Reflective Thinking.” Educational Leadership 48 (March 1991), p. 41.