Scholarly Critique


McLeod, Susan, Heather Horn, and Richard H. Haswell. “Accelerated Classes and
the Writers at the Bottom: A Local Assessment Story.” College Composition and
Communication 56 (2005): 556-80.


        Authors Susan McLeod, Heather Horn, and Richard H. Haswell present a two-part
argument in their article "Accelerated Classes and the Writers at the Bottom: A Local
Assessment Story." The first part of their argument is that accelerated composition courses
are beneficial to a specific group of students within a given local context. The second and
more salient part of their argument is that assessment efforts must consider local contexts if
they are to be valid and useful within a specific field. The authors conducted a classroom
research study in which they examined the commonly held claim that accelerated versions of
freshman composition are not as successful as longer versions; the data they collected
suggest otherwise within the context of their study.
        McLeod, Horn and Haswell begin their argument by providing a backdrop for their
research question. Assessment, they aptly observe, is something most writing programs try to
avoid because they struggle to know how to apply its results. They explain that assessment's
negative reputation is the result of standardized tests created by testing companies that know
little about current trends in literacy and writing pedagogy. They argue that "many of these
tests are used as ways of reifying the social order, of 'proving' that people in certain groups
are less intelligent, or write less well, or are less analytical in their thinking than others"
(557). As a result of the data they collect and their interpretation of it,
they suggest that it is useful to think about assessment as a means to understand local
contexts, which will in turn inform teachers and students—whether or not they are
mainstream—as they participate in the learning process.
        The authors' research question, which asks whether or not "the level of student
achievement in the accelerated six-week version of the summer course [is] comparable to
that in the ten-week summer version of the course" (561), developed as a result of a local
issue, but is similar to research questions asked by earlier researchers. McLeod, Horn, and
Haswell report on research published in 1978 by M. Beverly Swan, in 1984 by Carol David
and Donna Stine, and in 1992 by Richard Jenson. Swan and Jenson both reported that the
length of a writing class mattered more to student success than the intensity of the class. In
other words, students did better in classes that spanned a regular semester than in accelerated
classes. David and Stine compared two groups of students, one in a five-hour on-the-job class
and one in a regular semester class, and found that students in the college class gained more
confidence in their writing than those in the accelerated
on-the-job class. Currently, many institutions offer accelerated courses in the summer. The
model requires that an instructor fit the course content from a regular term—10 weeks in a
quarter system and 15 weeks in a semester system—into a shortened session during the
summer, usually 6 to 8 weeks. Writing programs across the country have long been
concerned with the effectiveness of this model for writing courses, specifically freshman
composition and/or developmental writing courses. The concern is that when a writing class is
condensed into 6 to 8 weeks, students do not have enough time outside of class to process,
think, draft, and revise, processes that are crucial to a student's success.
        To answer their question, McLeod, Horn and Haswell framed their research within
what Cindy Johanek (2000) might call a contextualist research paradigm. In other words,
they did not favor qualitative research over quantitative research or vice versa, but instead
employed multiple research strategies. McLeod, Horn and Haswell explain that they began
their study with "an expanded notion of validity" (559) based on work by Messick, whose
definition of validity argues for both theoretical and empirical rationales. Therefore, within
their study, they considered multiple measures, both objective and subjective. Specifically,
they gathered data by administering a pre-course questionnaire and a pre- and post-course
writing attitude survey and by conducting focus group sessions. They also examined student
grades, scores on the SAT II Writing Test and a local writing placement exam, and two
assignments scored in a blind reading using a common rubric, and they analyzed those same
assignments for length (words, sentences, paragraphs, and the essay as a whole). In addition,
the teachers' reflections were used to help analyze the data and draw conclusions.
        A total of 112 students in five sections of first-year college composition were
studied. Three of the sections were 6 weeks long and two of the sections were 10 weeks
long. The courses were offered during a summer and fall semester of the same year. Three
sections were taught by the same instructor. Two different instructors taught the fourth and
fifth sections. By studying classes taught by different instructors, for various lengths of time,
and within different semesters, the authors were able to draw conclusions about how context
affected the students' writing, particularly those students in the accelerated 6-week classes.
        Had these researchers not gathered both qualitative and quantitative data, they would
not have been able to ferret out the nuances of how context affects assessment results. In
fact, the context surrounding the course studied was foundational in understanding the
results, a context that the authors were able to define and understand as a result of the
information gathered from the pre-course questionnaire, the pre- and post-course writing
attitude survey and the post-course focus group sessions. What they discovered was that a
majority of students who took first-year college composition during the fall semester were
placed directly into the course. Students who took the course in the summer, however, did
so for one of two reasons: they either lacked confidence in their writing skills and so waited
until summer when they could take the class by itself, or they were reacting to life
circumstances and economics.
        Gains in writing skills and levels of writing confidence by the end of the semester,
the authors argue, were directly related to this local context. An analysis of various features
on two separate writing assignments during the course of the semester showed very little
difference numerically in writing skills from section to section. During the focus group
sessions, students in the accelerated courses reported that they felt they performed better in
the shorter, summer class. One student summed it up by saying: "During the school year it's
hard to concentrate on just one thing because I'm taking other courses that interest me a lot
more; during the summer I can think about my papers for this class and nothing else" (571).
The authors conclude that these students seem to be strategic as to when and how they take
courses, specifically those that may be problematic for them. Therefore, offering accelerated
courses during the summer to this specific group of students is a valid and thoughtful
response to a specific need.
        The methodology used in this study argues for a holistic approach to conducting
both assessment and research. Because the authors looked at both qualitative and
quantitative data, the results can be read and interpreted by a broader audience. As a writing
teacher at a community college, I benefit from their explaining both how and why they
looked at test scores, grades and the mean length of different parts of each student writing
assignment (i.e., sentence length, paragraph length, introduction length and word size), in
addition to contextual factors, such as living circumstances, attitudes toward writing and
levels of self-confidence as writers. I can immediately imagine how I might use this same
framework to design studies that will answer specific assessment questions I have in my own
classroom.
        The call to action in this article is for educators to disrupt the negative reputation
that surrounds assessment and reposition it as a tool that can help make significant changes
in education. This can only happen if specific conclusions are not generalized to populations
and circumstances that do not approximate those within the assessment study or project.
The authors' argument that results must take into account contextual variables is profound.
For example, the students in this study, the authors claim, are "highly motivated, savvy, and
articulate about what they want with regard to their own education" (561). The average SAT
score for students at the researchers' institution in fall 2003 was 1186 compared to the
national average of 1026. The students I teach at Doña Ana Branch Community College
(DABCC) do not as a group have high SAT scores and many would not be described as
particularly articulate about their educational goals and aspirations. Therefore, as a
composition instructor, I cannot take the results of the study and apply them wholesale to
the students I teach. What I can do, however, is argue that because DABCC students are
also highly motivated (for different reasons, to be sure, but still highly motivated) and
because many are savvy and strategic in choosing a slate of courses each semester, DABCC
should consider offering accelerated writing classes during regular semesters, as well as
during the summer, and then assess the results to see if they are positive within each specific
context.
        This study and the authors' conclusions add to the body of literature on assessment.
Many educators are openly suspicious of educational assessment, identifying it with activities
that compromise or threaten academic freedom. This study offers both a non-threatening
method and a reason to assess students in the composition classroom. Furthermore, the
results of their study speak to a growing concern about
accelerated writing classes in general. Whether or not institutions of higher education offer
accelerated writing classes will become more of an issue as the traditional college student
becomes the exception rather than the rule. Students who are older, who work full- or
part-time jobs, who have families, or who have limited transportation, to name a few
circumstances, need flexible, creative
scheduling. The results of this study suggest that one option is to offer accelerated writing
classes in order to serve the needs of a well-defined group of students for whom an
accelerated class would be beneficial.
         While the strengths discussed so far have to do with the methodology and
conclusions of this study specific to the field of composition studies, the broader conclusion
can speak to academia as a whole. The notion that assessment can be a
tool to increase self-awareness, to answer practical questions about serving an increasingly
diverse student population, and to challenge long-standing assumptions is
powerful and intensely important in the twenty-first century climate of higher education, a
climate that includes top-down approaches to assessment that can be polarizing at best, and
create distrust and disintegration at worst. The implication from the results of this study is
that for assessment to be relevant and useful in higher education, and not vilified as it has
been, assessment practices must be situated contextually and the results of assessment
projects must be interpreted within that context. In other words, the specific results of one
study cannot be applied to a different context with any degree of reliability.
        McLeod, Horn and Haswell's readers—composition instructors at various
institutions of higher education, myself included—can use the results of this study
to argue that assessment can be a useful tool to find out what the students on their campuses
might need to be more successful. Not only does it suggest to me as a writing instructor that
accelerated courses might serve a specific group of students, it also moves me to assess other
writing strategies and/or modes to uncover and discover how best to help students learn to
write.


Works Cited
Johanek, Cindy. Composing Research: A Contextualist Paradigm for Rhetoric and Composition.
         Logan: Utah State UP, 2000.



