Assessment & Evaluation in Higher Education
Vol. 31, No. 1, February 2006, pp. 71–90




Complex skills and academic writing: a
review of evidence about the types of
learning required to meet core
assessment criteria
James Elandera*, Katherine Harringtonb, Lin Nortonc, Hannah
Robinsonc and Pete Reddyd
aThames Valley University, UK; bLondon Metropolitan University, UK;
cLiverpool Hope University College, UK; dAston University, Birmingham, UK




Assessment criteria are increasingly incorporated into teaching, making it important to clarify the
pedagogic status of the qualities to which they refer. We reviewed theory and evidence about the
extent to which four core criteria for student writing—critical thinking, use of language, structuring,
and argument—refer to the outcomes of three types of learning: generic skills learning, a deep
approach to learning, and complex learning. The analysis showed that all four of the core criteria
describe to some extent properties of text resulting from using skills, but none qualify fully as
descriptions of the outcomes of applying generic skills. Most also describe certain aspects of the
outcomes of taking a deep approach to learning. Critical thinking and argument correspond most
closely to the outcomes of complex learning. At lower levels of performance, use of language and
structuring describe the outcomes of applying transferable skills. At higher levels of performance,
they describe the outcomes of taking a deep approach to learning. We propose that the type of learn-
ing required to meet the core criteria is most usefully and accurately conceptualized as the learning
of complex skills, and that this provides a conceptual framework for maximizing the benefits of using
assessment criteria as part of teaching.


Introduction
Assessment criteria, sometimes called marking criteria, play an increasingly impor-
tant role in higher education. One reason for this is the priority attached to improv-
ing the reliability and validity of marking. Another is the trend towards transparency
and explicitness in all aspects of student assessment. Potentially the most significant

*Corresponding author. Psychology Group, Thames Valley University, St Mary’s Road, Ealing,
London W5 5RF, UK. Email: james.elander@tvu.ac.uk

ISSN 0260-2938 (print)/ISSN 1469-297X (online)/06/010071–20
© 2006 Taylor & Francis
DOI: 10.1080/02602930500262379

reason, however, is the increasing use of assessment criteria in teaching, usually in
the form of marking exercises where students themselves actively engage with the
criteria.
   Essays and related written work provide opportunities for students to demonstrate
some of the most demanding learning outcomes, but students are often more
confused about what constitutes a good essay than they are about the criteria for other
types of assignment. One survey of first year students taking a study skills module, for
example, showed that essay writing was the most commonly requested topic for
advice and guidance (Elander, 2003a). Confusion among students is understandable
considering that professional academics also often struggle to specify what constitutes
a good essay; Sadler (1989), for example, identified over 50 published criteria for
written composition. Studies have reported as many as 10 (Branthwaite et al., 1980) or 18
(Norton, 1990) criteria that tutors considered important, and a recent report listed
12 potential criteria for essays (Andrews, 2003). Across disciplines and institutions,
however, there are a few core criteria that have a central role in the shared perception
of what is important in good student writing. Four such core criteria, identified from
an analysis of published assessment criteria in psychology, business studies and geog-
raphy, are critical thinking, use of language, structuring, and argument (Elander et al.,
2004). Table 1 gives examples of each of those core criteria.
   The purpose of using assessment criteria in teaching has been to improve students’
understanding of the criteria, thereby improving their performance in assessments.
The rationale has usually been to provide opportunities for students to develop tacit
rather than explicit knowledge about the meaning of the criteria and how they are
applied. This, it is argued, should lead students to develop understandings of assess-
ment that correspond more closely to those of their tutors, whose ability to assess
students’ work is usually the result of shared practical experience rather than internal-
isation of explicit rules or standards. ‘It follows that inviting students into the shared
experience of marking and moderating should also enable more effective knowledge
transfer of assessment processes and standards’ (Rust et al., 2003, p. 152). The case
for this type of teaching was made previously by Sadler (1989) in the context of
formative assessment. ‘Providing guided but direct and authentic evaluative experi-
ence for students enables them to develop their evaluative knowledge, thereby bring-
ing them within the guild of people who are able to determine quality using multiple
criteria’ (p. 135).
   One such initiative was part of a first year geography module in which students
discussed the criteria and undertook marking exercises. Student feedback was gener-
ally positive but there was no attempt to estimate the effects on student performance
(Pain & Mowl, 1996). In another, marking exercises using assessment criteria were
included in a first year psychology skills module, along with other study skills exer-
cises including IT, note-taking and library use. Student feedback was generally
positive, but there was no clear improvement in student achievement (Elander,
2003a). In the largest, most systematic, and most rigorously evaluated programme of
this kind, workshops with marking exercises improved student performance in a first
year business module, with signs that the benefits might even endure over time and

                        Table 1.    Core assessment criteria for student writing

Core criterion                               Example

Critical thinking/critical evaluation        Does the author present material in a critical manner?
                                             (Pain & Mowl, 1996).
                                             Clear application of theory through critical analysis/critical
                                             thought of the topic area (O’Donovan et al., 2000).
                                             Evaluation includes conceptual/methodological critique
                                             and an appreciation of alternative perspectives and current
                                             controversies (Elander, 2002).
Use of language/writing style                Is it generally clear, readable and well presented? Does it
                                             make the reader want to read it? Correct use of spelling
                                             and grammar? (Pain & Mowl, 1996).
                                             Language fluent, grammar and spelling accurate (Price &
                                             Rust, 1999).
                                             Material and arguments presented clearly and coherently
                                             (Elander, 2002).
Structuring                                  Does the essay have a clear, logical and well-defined
                                             structure? (e.g. is there an introduction, middle and
                                             conclusion?) (Pain & Mowl, 1996).
                                             Good essay structure; sections obvious (Oates, 2002).
                                             Clear structure, material organised well (Elander, 2002).
                                             Does the conclusion draw together the various important
                                             points made in the main body of the essay? (Pain & Mowl,
                                             1996).
                                             Extremely well organised answers whose structure reflects
                                             the development of argument (Elander, 2003b).
Argument                                     Does the author sustain a well-reasoned and supported
                                             argument? (Pain & Mowl, 1996).
                                             Logical argument clearly present throughout (Oates,
                                             2002).
                                             Good development shown in summary of arguments based
                                             in theory/literature (O’Donovan et al., 2000).

Note: For uniformity’s sake most examples are criteria for satisfactory or good performance.



transfer to some extent to other modules (Rust et al., 2003). A self-assessment initia-
tive for Masters level students was based on a similar model of extending communi-
ties of practice about assessment to include students (Elwood & Klenowski, 2002).
   Initiatives like those described above are likely to continue to be developed in the
future and their success will depend on the type of learning that is identified as
required to demonstrate the criteria concerned. Note that assessment criteria them-
selves refer to qualities of the outcomes of student learning: they are ‘properties of the
text’, consistent with the definition of an assessment criterion as ‘a distinguishing
property or characteristic of any thing, by which its quality can be judged or esti-
mated, or by which a decision or classification can be made’ (Sadler, 1987, p. 194).
Types of learning, on the other hand, are the activities in which students engage as
they go about producing the work to which the assessment criteria will be applied.
Confusion between the two is easy because the same forms of words are often
employed in descriptions of the activities of students and the properties of the work
they produce. For example, a student may engage in critical thinking, and subse-
quently produce an essay that may (or may not) have certain textual properties that
the marker recognizes as a demonstration of the outcome of critical thinking. The
same goes for use of language, structuring, and argument, which can all be used to
refer to activities undertaken by students and the properties of the work they submit
for assessment.
   Notwithstanding the distinction between the meanings of terms that refer to prop-
erties of the text (assessment criteria) and those that refer to activities of students
(types of learning), it is important to know about the types of learning that can most
helpfully be associated with core assessment criteria. That is, although there is no
identity between assessment criteria and types of learning, any given criterion may be
a property of text that is more likely to result from one type of learning than another.
Certain types of learning may therefore be more likely than others to increase
students’ ability to produce work that demonstrates the relevant criteria, which has
important implications for the way assessment criteria are used in teaching. For
example, certain assessment criteria may describe properties of work that are the
result of students learning skills that will transfer from one context to another. Or they
may be properties of work that are the result of a deep approach to learning that reflects
engagement with a specific area of knowledge. Or they may be properties of work that
are the result of complex learning, where subject knowledge is combined with personal
qualities and social practices. Each of these possibilities has different implications for
how to achieve the best results from using assessment criteria in teaching.
   Assessment criteria that describe properties of work resulting from learning skills
could contribute to changing the way skills are regarded in higher education. The
debate about skills in higher education has included critical analysis of skills provision
in universities (Kemp & Seagraves, 1995), analysis of the conceptual problems inher-
ent in the notion of transferability (Hinchliffe, 2002), and analysis of the problem of
separating ‘thinking skills’ from subject knowledge (Bridges, 1993). An analysis of the
extent to which transferable skills can be appropriately assessed by core assessment
criteria for student writing would inform those debates, and would also be informa-
tive about the extent to which teaching based on assessment criteria should be skills-
oriented.
   Assessment criteria that describe properties of work resulting from a deep approach
to learning would have different implications, for whereas skills are amenable to train-
ing, deep approaches to learning are associated with motivational factors and active
student engagement in the discipline. Since a deep approach to learning is desirable,
and since assessment criteria codify desirable qualities of students’ work, it is perhaps
natural to assume that the criteria represent the expected outcomes of a deep
approach to learning. This is especially true for essays, which are often assumed to be
the ideal form of assessment of the outcomes of taking a deep approach to learning.
One study showed that students were more likely to adopt a deep approach for essay
assignments than for multiple-choice examinations, and that adopting a deep rather
than a surface approach was associated with better essays (Scouller, 1998). However,
the extent to which a deep approach to learning can be promoted by teaching
innovations is uncertain, and interventions using assessment criteria might lead
instead to students merely simulating the outcomes of a deep approach. Deep
approaches to learning were increased by an intervention among very able and highly
motivated students (Biggs & Rihn, 1984), but interventions in other student groups
have resulted in increases in surface, or instrumental, approaches to learning (e.g.,
Maguire et al., 2001).
   Complex learning has been described as the ‘construction of new knowledge’
(King, 1997) and is a recent alternative conceptualization of desirable outcomes of
education, especially in relation to employability. Complex learning aims at the ‘inte-
gration of knowledge, skills and attitudes; the coordination of qualitatively different
constituent skills; and the transfer of what is learnt to daily life or work settings’ (van
Merrienboer et al., 2003, p. 5). It is distinct from learning generic skills and taking
deep approaches to learning in that academic learning is associated with psychologi-
cal characteristics similar to those underpinning ‘graduate identity’ (Holmes, 2001)
and ‘social practices’ (Knight & Yorke, 2003). Analysis of the ways and extent to
which assessment criteria represent the properties of work resulting from complex
learning could help to promote employability through teaching innovations linked
directly to subject learning, in line with the increasingly close links between pedagogy
and employability (Tynjälä et al., 2003). For example, identifying possible links
between the qualities represented in core assessment criteria and complex learning
could help develop interventions to improve students’ meta-awareness of what assess-
ment criteria represent, and promote more autonomous approaches to learning that
may be more likely to transfer to other contexts, including employment settings.
   Generic skills, deep approaches to learning, and complex learning therefore repre-
sent broad categories or types of learning that differ in a number of ways. Table 2
summarizes the distinctions between them, as discussed above, in terms of whether
they can be separated from subject knowledge, whether they can be improved by
specific training (training in study methods, for example, rather than the training
provided by higher education more generally), and whether they are related to
personal development and employability. This is a very approximate classification
that is based partly on evidence from research, but also on the ways that the terms


                         Table 2.   Features of three types of learning

                                                             Type of learning

                                         Generic skills     Deep learning       Complex learning

Separable from subject knowledge         Yes                No                  No
Improvable by specific training          Yes                Maybe               Maybe
Related to personal development          No                 Maybe               Yes
Related to employability                 Maybe              Maybe               Yes

‘skills’, ‘deep approaches to learning’ and ‘complex learning’ have been defined, and
on the assumptions that are often made about them.
   Identifying the type of learning that is most likely to lead to the successful demon-
stration of core assessment criteria would provide a first step in the more systematic
organization of teaching based on assessment criteria, and this is the main aim of the
present paper. Taking each criterion in turn, we consider how they have been concep-
tualized, the extent to which the qualities they represent are capable of being sepa-
rated from subject knowledge, the extent to which the activities required to
demonstrate them can be promoted by specific and non-specific training, and the
range of benefits that have been obtained by such training. The analysis is based on
a comprehensive review of theory, practice, and research evidence identified by elec-
tronic searches in the science, social science, and arts and humanities databases for
English language reports published since 1981, using search terms from the following
sets: (1) assessment, criteria, marking, marking scheme, grade descriptor, level
descriptor; (2) critical thinking, critical evaluation, critical analysis, academic writing,
essay writing, writing style, academic voice, academic language, academic English,
register, structure, argument; and (3) skills, generic skills, transferable skills, key
skills, deep learning, approaches to learning, learning styles, complex learning, social
practices, employability, learning objectives, learning aims, learning outcomes.
Reports were also identified from the bibliographies of reports found by the electronic
searches.


Critical thinking/critical evaluation
Garside (1996) reviewed definitions of critical thinking and concluded that it is
usually defined in terms of a skill component and an attitude component. McPeck
(1981, p. 8), for example, suggested that ‘the core meaning of critical thinking is the
propensity and skill to engage in an activity with reflective scepticism.’ Four defining
features of critical thinking have been suggested: (a) clear, precise, accurate, relevant,
logical and consistent thinking; (b) a controlled sense of scepticism or disbelief about
claims, assertions and conclusions; (c) taking stock of existing information and iden-
tifying holes and weaknesses; and (d) freedom from bias and prejudice (Garside,
1996). Many give critical thinking a central role in learning (e.g., McPeck, 1981;
Beyer, 1987), and suggest that essays are well suited to the assessment of critical
thought: ‘the answer format for a critical thinking test should permit more than one
justifiable answer and good answers should not be predicated on being right, in the
sense of true, but on the quality of the justification given for a response’ (Tynjälä,
1998, p. 175).
   Let us first consider the skills elements of critical thinking, which was often
included in the lists of skills that proliferated during the 1980s and 1990s, usually as
a thinking or cognitive skill. It was rated eighth in importance out of 20 ‘transferable
employment skills’ by graduates who were asked about the importance of skills in
their current jobs (Smith et al., 1989). Beyer (1985) identified 10 specific critical
thinking skills: (1) distinguishing between verifiable facts and value claims, (2)
determining the reliability of a source, (3) determining the factual accuracy of a state-
ment, (4) distinguishing relevant from irrelevant information, (5) detecting bias, (6)
identifying unstated assumptions, (7) identifying ambiguous or equivocal claims or
arguments, (8) recognising logical inconsistencies or fallacies in a line of reasoning,
(9) distinguishing between warranted and unwarranted claims, and (10) determining
the strength of an argument. Defining very specific, discrete critical thinking skills
should open the door to skills-based interventions to promote critical thinking. One
programme, for example, focused on questioning and showed that students asked
more questions, and especially more critically evaluative questions, after training in
questioning (Keeley et al., 1998).
   Garside argued that ‘critical thinking involves a set of skills that are most effectively
taught within the context of a subject area. Since it is impossible to think critically
about something of which one knows nothing, critical thinking is dependent on a
sufficient base of knowledge’ (Garside, 1996, p. 215). That view is supported by eval-
uations of interventions to promote critical thinking, which have been more success-
ful when integrated in subject teaching than when delivered through study skills
programmes. In one qualitative evaluation of nursing education, a study skills
programme had some benefits for the development of critical thinking, but more
significant gains were obtained by relating critical thinking to students’ clinical prac-
tice and academic learning (Girot, 1995). Another study of nursing education
showed that discipline-specific teaching rather than more general academic ability
had a bigger influence on the development of critical thinking (Miller, 1992). In biol-
ogy, critical thinking instruction and subject content were successfully integrated,
with improvements in both reasoning skills and content knowledge from pre-course
to post-course tests (Chapman, 2001).
   Recommendations about specific classroom activities and teaching methods to
promote critical thinking include active student participation, meaningful interaction,
and opportunities for students to challenge and question (Garside, 1996). Psycholog-
ical research indicates that cooperative learning should be more effective than
competitive learning (Johnson et al., 1981), but in a direct comparison between
students taught in lectures and those taught in group discussions, there was no differ-
ence in outcome (Garside, 1996). However, critical thinking was measured with a
multiple-choice test, which may not be the most appropriate measure.
   Let us turn now to the attitude or propensity element of critical thinking, for there
is evidence that critical thinking is related to learning styles. For example, one study
showed that critical thinking measured with the Watson-Glaser test was correlated with
a deep processing approach to learning (Gadzella & Masten, 1998). There is also
evidence that critical thinking is influenced by non-academic as well as academic expe-
riences, making critical thinking resemble complex learning, in which personal char-
acteristics influence outcomes. Terenzini et al. (1995), for example, found that both
instructional and out of class experiences made unique contributions to gains in critical
thinking, over and above pre-college levels of critical thinking and other characteristics.
   Critical thinking therefore involves elements that resemble skills and can be
improved by specific training, but these are not generic skills, and they can be
improved more effectively by integration with subject teaching. However, critical
thinking is not just a matter of knowing more about one’s discipline, for it is affected
by learning styles and out of class experiences, suggesting a complex learning process.
Bailin et al. (1999b) described critical thinking as comprising a set of intellectual
resources including knowledge of concepts, standards and procedures, plus certain
habits of mind. Perhaps most importantly, critical thinking includes the application
of standards and criteria: ‘critical thinking is not promoted simply through the repe-
tition of “skills” of thinking, but rather by developing the relevant knowledge,
commitments and strategies and, above all, by coming to understand what criteria
and standards are relevant’ (Bailin et al., 1999a, p. 280). Sadler (1989) argued that
the key skill required for students to improve their academic performance was evalu-
ation: ‘For an important class of learning outcomes, the instructional system must
make explicit provision for students themselves to acquire evaluative expertise. It is
argued that providing direct and authentic evaluative experience is a necessary
(instrumental) condition for the development of evaluative expertise’ (p. 143). Of all
the core criteria therefore, critical evaluation has a special status in the context of
improving student performance, and familiarising students with the criteria that are
applied to their own work, and providing opportunities for them to apply those crite-
ria themselves, may be an especially effective method to promote critical and evalua-
tive thinking more generally.


Use of language
Many universities offer generic courses on academic writing, and student guides often
treat writing as a generic skill that can be developed independently of what is being
written about. In the analysis below we identify three levels of writing: correctness,
register, and academic literacy. Correctness involves grammar, punctuation, spelling,
and referencing, and Ivanič (2004) associated the ‘skills discourse’ of writing with
assessment practices that focus mainly on accuracy. This is consistent with a view of
writing as a teachable generic skill that will transfer to almost every setting where writ-
ing is used.
   Register is much more a matter of writing style, including the length and complex-
ity of sentences, the way in which values are expressed, the ways in which writers can
distance themselves from other people’s terms, and the use of active and passive
sentences. ‘Our choice of register when we write displays our attitude towards our
reader and towards the subject matter we are writing about. … One of the main char-
acteristics of the register appropriate for academic writing is that it does not resemble
the register of conventional speech’ (Fabb & Durant, 1993, pp. 72, 74–75). Though
not divorced from acts of understanding and knowledge creation, the concept of
register emphasises the role of conventions in the construction of knowledge and
resembles to some extent the ‘genre discourse’ of writing (Ivanič, 2004).

   Biber et al. (2002) described a multidimensional analysis of ‘register variation’ in
the written and spoken language to which students were exposed, and exemplars from
published academic writing can be used to identify elements of the academic register
that students might be encouraged to adopt. Studies of published academic writing
have focused on sentence length and structure (Harnon, 1992), use of audience
engagement by addressing readers directly in the text (Hyland, 2001), use of antici-
patory (it) clauses (Hewings & Hewings, 2002), directives (Hyland, 2002), and inclu-
sion in the text of a voice intended to be attributable to the reader (Thompson, 2001).
Studies of student writing have also helped to understand the significance of register.
For example, one of the factors that differentiated students at different levels was
assertiveness versus cautiousness in writing, which included confidence and the feel-
ing of being in control when writing and discussing essay plans with tutors and other
students, as opposed to being pessimistic, unenterprising and externally constrained
(Branthwaite et al., 1980). Cautiousness is nevertheless an important part of
academic writing because of the need to balance different views and avoid absolute
judgements, leading to constructions such as ‘it can be suggested that’ or ‘it could be
argued that’. In one analysis, for example, ‘frequent use of a “bold” or “very bold”
style in students’ essays by no means guaranteed a high grade’ (Francis et al., 2002,
p. 174).
   Academic literacy subsumes all the other assessment criteria for essays and
proceeds from the assumption that understanding and creating knowledge in a disci-
pline takes place through language, not independently from it, so that writing is part
of the formulation as well as the presentation of thoughts (Lea & Street, 1998;
Warren, 2003). ‘The student has to learn to speak our language, to speak as we do,
to try on the peculiar ways of knowing, selecting, evaluating, reporting, concluding
and arguing that define the discourse of the community’ (Bartholomae, 1985,
p. 134).
   Lavelle (1993) identified five student writing processes that span the three levels of
writing we have identified. The ‘low self-efficacy’ process was characterized by low
confidence in writing abilities and little concern for surface aspects of composition,
grammar and punctuation. The ‘spontaneous-impulsive’ process involved an off-the-
cuff, impromptu approach to writing. The ‘procedural’ process was a methodical
approach aiming to meet requirements. The ‘elaborationist’ process involved
personal engagement in writing. The ‘reflective-revisionist’ process involved active
reworking of writing and the emergence of meaning. Low self-efficacy, procedural,
and spontaneous-impulsive writing processes were associated with surface
approaches to learning, and reflective-revisionist and elaborationist writing processes
were associated with deep approaches to learning (Lavelle & Zuercher, 2001).
   However, a deep approach to learning does not appear to be the only type of
learning involved in using language well in essays. Adherence to grammatical and
other conventions is probably rightly treated as a transferable skill, and at least some
aspects of register are often treated as skills: ‘… academic register is a convention you
learn to adopt so that your essays “sound right”’ (Fabb & Durant, 1993, p. 74).
Whitehead (2002) concluded that the development of an academic writing style is a
skill that students must be willing to learn. There is also evidence that workshops
focusing on writing can be effective, at least in changing the writing processes adopted
by students. For example, workshops for graduate students led to reductions in
procedural and spontaneous-impulsive processes and increases in elaborationist
processes (Biggs et al., 1999). At lower levels of ability and performance, therefore,
where the emphasis is mainly on correctness, use of language is a criterion for judging
the outcome of using generic skills, and can be differentiated quite clearly from other
core criteria. At higher levels, however, where the emphasis is on register and
academic literacy, it subsumes other core criteria, and describes the outcomes of a
complex process in which subject learning is central, qualifying therefore as a criterion
for judging properties of work that results from taking a deep approach to learning.


Structuring
Peck and Coyle (1999) distinguished between form-driven and content-driven
approaches to writing. They associated form-driven approaches with aspects of struc-
turing that resemble generic skills, and offered advice about structuring techniques
that can be learned, such as ‘how to build an essay, including how to shape a para-
graph and a sentence’: ‘One of the most useful rules in writing an essay … is the rule
of three, which is that the core of an essay should be divided into the three stages of
setting the issue up, pushing the issue along, and then seeing where we arrive. This
kind of division allows one to start to impose a shape on the raw material of the essay.
… this same structure … will work for virtually any essay set on any subject at univer-
sity’ (pp. 97–100).
   That advice would address many of the structuring criteria shown in Table 1, espe-
cially those proposed by Pain and Mowl (1996), but structure goes beyond generic
skills, as demonstrated by the advice offered by authors with content-driven
approaches to student writing. These approaches highlight the dependence of struc-
ture on content, consistent with research showing that higher grade student essays
had structures that were more closely integrated with content (Prosser & Webb,
1994). For example, Pirie (1985) has argued that ‘each paragraph must be recogniz-
able as a logical next step in a coherently developing argument that directly answers
the set question … the most effective order will almost always emerge through
thought about the particular problems which have occurred to you during your
research on each essay’s specific topic’ (p. 58). Similarly, Creme and Lea (1997)
proposed that: ‘By “structure” we mean both the way a piece of writing is organised
and—more importantly—what work it is doing: its function in the assignment. We are
particularly interested in how the structure constructs the relationships between
different ideas’ (pp. 84–88).
   The content-driven perspective aligns structuring with a deep approach to learning,
consistent with the rationale for the SOLO taxonomy (Structure of Observed Learn-
ing Outcomes), which is often used to assess the structural complexity of essays
(Biggs & Collis, 1982; see also Atherton, 2005). ‘SOLO … is based on the assump-
tion that the learning quality is reflected in the level of complexity with which the
learning outcome is structured, regardless of whether the item learnt is a skill, a
concept, or a problem’ (Biggs, 1988, p. 197). That assumption is supported by stud-
ies showing that more complex essay structures are related to deeper approaches to
learning (e.g., Biggs, 1987). Campbell et al. (1998) found that essays with more
complex conceptual structures were more likely to have been written by students who
reported active, reconstructive note-taking, and who reported building arguments
rather than building information in their essay construction. Not all of the evidence
is positive, however: Lavelle (1997) found no significant relationships between the
structural complexity of student essays and measures of students’ composition
processes.
   However, even content-driven approaches acknowledge that there are common or
repeated elements or principles to structuring essays: ‘whatever piece of academic
writing you are attempting, whatever subject or course you are doing, you will be
putting together all the components into a structured whole’ (Creme & Lea, 1997,
p. 39). Structuring an essay can be thought of as a skill, therefore, in that it will
improve with practice, but its relationship with content means that it should be
treated as a complex rather than a generic skill, and is probably best taught in disci-
pline-based settings. The SOLO taxonomy can be used to help with teaching as well
as assessment of student writing (e.g., Boulton-Lewis, 1995), and there is evidence
that students who have been taught in ways that combine subject knowledge with
complex skills produce essays with more complex structures. Students taught using
writing tasks in which they had to transform knowledge by applying it and criticizing
it, for example, produced essays with more complex structures than those who
attended lectures on the same topics (Tynjälä, 1998).
   There are therefore aspects of structuring that qualify as generic skills, and struc-
turing can be improved by specific training. At higher levels of performance, however,
structuring is more closely linked to content and has been shown to be related to deep
approaches to learning. This does not preclude certain complex transferable skills
playing a role. One study examined essay structuring in relation to students’ under-
standing of the assessment criteria, and found that students with a better understand-
ing of ‘organisation’, ‘synthesis’, and ‘critical evaluation’ produced essays with more
complex structures (Campbell et al., 1998). This suggests that instruction based on
core assessment criteria could help improve students’ ability to structure their writing.


Argument
Argument is arguably the defining feature of the essay: ‘Your essay is your argument;
everything else makes sense because of it’ (Bonnett, 2001, pp. 50–51). What exactly
constitutes a good argument is something of a moot point, however, and the concept
of argument may contribute significantly to the ‘connoisseur’ model of student
assessment: ‘I can recognize a good piece of student writing when I see it. I know
when it is well structured and has a well-developed argument but it is difficult to say
exactly what I am looking for, let alone describe a good argument more fully’ (a
lecturer quoted in Creme & Lea, 1997, pp. 36–37).
   Students may also recognize the importance of argument but have difficulty in
articulating what arguments consist of. In one study where students were interviewed
about essay writing, 44% mentioned the importance of presenting their own views or
opinions, making this the most frequently mentioned factor (Read et al., 2001). Argu-
ment has to go further than presenting one’s own view, of course. Branthwaite et al.
(1980) found that students were much more likely than lecturers to emphasize the
need for ‘original’ thought in essays, and students who believed in presenting their
own opinions obtained lower grades for their essays than those who did not.
   A common view is that argument is the antithesis of a transferable skill, being
rather the process by which personal and academic fulfilment is achieved: ‘Argument
is not simply a “transferable skill”. It isn’t something to be ranked alongside
computer literacy and time management. The ability to argue … is the core attribute
of all forms of advanced level education. But it’s more than that too. For argument
goes to the heart of who we are and what we want to do with our lives’ (Bonnett,
2001, p. 3). Some descriptions of argument resemble complex learning in that they
relate academic knowledge to personal transformation and enablement: ‘The ability
to engage in argument is what makes learning exciting. To feel comfortable with
debate changes your relationship with education and just about everything else. It
transforms you from a passive and bored receptacle of another’s wisdom into a
participant; into someone who is neither scared by, nor indifferent to, the society
around them but actively involved in its interpretation and transformation’ (Bonnett,
2001, p. 1).
   More analytical approaches to understanding argument are possible, however: ‘an
argument can be divided into several components of which the main ones are the
claim and the grounds. The claim can either be presented in the form of a conclusion,
or it may (or may not) include a conclusion. As both the claim and the conclusion
indicate the opinion of the writer, the purpose of the grounds is to provide evidence
for them’ (Marttunen & Laurinen, 2001, p. 139). Having defined argument in that
more systematic way, it is possible to identify skills associated with each element. For
example, analytical skills have been associated with identifying the components of an
argument; evaluative skills have been associated with determining whether the claims
are valid, whether the grounds support the claim, and whether the conclusions are
balanced; and constructive skills have been associated with presenting the arguments
(Marttunen, 1992). Several factors have also been associated with the ability to argue,
including intelligence (Perkins, 1985), age (McCann, 1989), and level of education
(King et al., 1990).
   The identification of specific skills, or elements, involved in argument should open
the door to interventions to improve students’ ability to develop arguments in their
essays, and there is some evidence about what form those interventions should take.
Perhaps the first question is the extent to which argument can be separated from
subject discipline. Andrews (1997) suggested that there are generic elements of argu-
ment at undergraduate level that can be separated from discipline-specific elements.
Regardless of the discipline, all academic arguing involves negotiating a new position
or defending an existing one in relation to others. Argument is ‘an arrangement of
linguistic, visual and/or physical propositions in engagement with one or more other
points of reference in order to change or assert a position’ (Andrews, 1997, p. 267).
Mitchell and Riddle (2000) developed a generic undergraduate argument module,
and suggested that ‘quality of argument needs to find a place on institutional agendas
in the same way that, for instance, key skills have done’ (p. 13).
   Andrews (1997, p. 266) also stressed, however, that disciplinary constraints play a
major part in shaping the nature of argument. Different ways of seeing and arguing
pertain in different disciplinary fields. One limitation of generic aspects of argument
is that they tend to be expressed in very abstract, academic language, as some of the
definitions given in this section illustrate, which is not useful for helping students
understand what is involved. Also, in generic argument teaching, the topics and source
materials must refer to general knowledge because students do not have a shared body
of disciplinary knowledge: ‘recourse to a general topic is limited in that it bypasses the
problem of how to help students improve within specialised disciplinary fields whose
arguments deploy particular theoretical perspectives, types of evidence and authority
… we felt that the greatest benefit to students would come from guidance and practice
within their subject disciplines’ (Mitchell & Riddle, 2000, p. 18).
   There is some evidence that argument can be improved within disciplines. In one
study, education students produced written texts on topics encountered in the course
books and lectures, then used email to express their opinions, present the grounds for
those opinions, criticize the views of other students, and defend their own opinions if
criticized by others. Those students performed better than a control group when the
commentaries they wrote on specimen arguments were assessed for argumentation
skills (Marttunen, 1992). In another study, the argument and counter-argument in
the email messages of education students in tutor-led and student-led email discus-
sion groups were assessed. The level of argument improved over time and was higher
in messages that included counter-arguments targeted at others’ standpoints.
Students in the student-led groups produced more and better counter-argument than
those in tutor-led groups (Marttunen, 1997). A third study of this type suggested that
different learning environments may promote improvements in different aspects of
argument: students who took an argumentation course delivered face-to-face
improved at putting forward counter-arguments, whereas those who took the same
course delivered by email improved at identifying and judging grounds. The overall
impact of the course was not impressive, however, and outcomes were not demon-
strably better than in a control group who were not taught to argue (Marttunen &
Laurinen, 2001). In another study, essays written by education students who had
attended constructionist, writing-to-learn discussion groups were compared with
those written by students attending lectures on the same topics. The essays were anal-
ysed by looking at the numbers of sentences belonging to epistemic categories, which
showed that the constructionist group had developed more elements of argument,
with fewer descriptions and more generalizations, classifications and comparisons
(Tynjälä, 1998).
   Argument therefore appears on first consideration to be a vague concept that is
related very much to personal development and self-efficacy, so that it could be
treated as a form of complex learning. On closer analysis, however, it comprises a
number of more specific complex skills that can be improved by training in the
context of subject learning.

Discussion

Skills are involved to some extent in the demonstration of all the criteria we examined:
the activities required improve with experience, can be applied to a range of material,
and have been shown to respond to specific instruction. However, none of the criteria
describe the outcomes of truly generic or transferable skills, for in every case the skills
involved can be separated only in a limited way from subject knowledge. For example,
use of language and structuring are criteria for judging outcomes that require trans-
ferable skills only at lower levels of performance; at higher levels, meeting the criteria
requires a deep approach to learning because integration with subject knowledge is
needed. Critical thinking is a criterion for judging outcomes of learning that are more
separable from subject knowledge, and the activities involved resemble complex
learning because they are related to personal development, but demonstrating critical
thinking also involves a range of complex cognitive skills. The activities involved in
meeting the criteria for argument also resemble complex learning, especially when the
nature of argument is considered superficially, but argument, like critical thinking,
may most usefully be construed as a criterion for judging the demonstration of
complex cognitive skills.
   Each type of learning we have considered has different implications for the use of
assessment criteria in teaching situations. For example, treating the criteria as
descriptions of the outcomes of using generic skills would imply working with the
criteria one by one, away from the disciplinary context, and targeting novice and
poorly performing students. Treating the criteria as descriptions of the outcomes of
taking a deep approach to learning would imply closer links with subject knowledge,
more recognition of the inter-relationships among core criteria, and delivery to
students across a wider range of ability and experience. Treating them as descriptions
of the outcomes of complex learning would imply linkage with personal development
and performance outside the academic setting. In fact, most initiatives to use assess-
ment criteria in teaching have been discipline-based and delivered to first year
students (Pain & Mowl, 1996; Elander, 2003a; Rust et al., 2003). Our analysis
suggests that core assessment criteria for essays could underpin a wider range of
learning support, including interventions for more experienced and advanced
students.
   We propose that the type of learning required to demonstrate the core criteria for
written work is the learning of ‘complex skills’, and that the concept of complex skills
can help to inform the design and delivery of teaching interventions that use assess-
ment criteria. We propose complex skills because the abilities required to demon-
strate core criteria cannot be considered in isolation from one another, are closely
intertwined with subject knowledge, and, as they develop, help to promote a deep
approach to learning and personal attributes like self-efficacy that are associated with
complex learning. The reason for identifying core criteria with the manifestations of
skills is that they describe the outcomes of things that students do in the process of
understanding and producing knowledge. They are descriptions of abilities that are
needed in order to understand and produce knowledge, and those abilities can be
acquired, practised, refined, and even go rusty without use. The reason for identifying
the relevant skills as ‘complex’ rather than ‘generic’ is that they embody disciplinary
practices of knowledge creation, and that understanding and acquiring each skill
involves understanding and acquiring all of the others.
   Complex means firstly, ‘consisting of or comprehending various parts united or
connected together’, and secondly, ‘complicated, involved, intricate; not easily anal-
ysed or disentangled’ (Simpson & Weiner, 1989). Core assessment criteria for essays
are complex in the sense that they are connected together and not easily analysed or
disentangled. Assessment criteria tend to overlap conceptually with one another, and,
‘many are operationally correlated together, so that whenever an attempt is made to
change a piece of writing according to one dimension, other properties are inevitably
affected at the same time’ (Sadler, 1989, p. 131). In an essay, therefore, the qualities
referred to in the assessment criteria work together to produce the kind of work
expected. Analytically, the skills required may be identifiable as separate, but under-
standing, using, and learning each skill happens in relation to understanding, using,
and learning all of the others. This is not to say that specific criteria cannot provide
an individual focus in teaching sessions. However, not only does an understanding of
each criterion come into focus only through an understanding of all the others, but
also, learning to employ each of the skills required to demonstrate the criteria is facil-
itated by learning to employ all of the others.
   The skills required to meet core assessment criteria are complex also in the way
they are related to subject knowledge. At the lower end of the performance range,
criteria like using language and structuring can be specified, and the skills and abilities
required to meet those criteria can be promoted, with little reference to a specific
discipline. As those skills become more highly developed, however, they become
much more closely linked with the content of the writing, and more specific to disci-
plinary knowledge. The concept of complex skills therefore puts the knowledge back
into skills. The concept is similar to academic literacy, which was defined as, ‘the
complex of linguistic, conceptual and skills resources for analysing, constructing and
communicating knowledge in the subject area’ (Warren, 2003, p. 109), and which
also makes very strong links between subject knowledge and writing skills. Complex
skills and academic literacy are both concerned with the construction of disciplinary
knowledge, but complex skills can be more explicit about the nature of the skills
required, especially when those skills can be mapped onto assessment criteria.
   The concept of complex skills is consistent with the development of study skills
interventions that emphasize integration with disciplinary content rather than trans-
ferability. Hattie et al. (1996), for example, concluded from a meta-analysis that study
skills interventions should take place in context, using tasks from the target domain,
and aim to promote learner activity and metacognitive awareness. Complex skills are
also consistent with the concept of ‘cognitive apprenticeships’ (Collins et al., 1989).
Hattie et al. (1996) even classified study skills interventions according to the SOLO
taxonomy. Unistructural programmes involved direct teaching of mostly mnemonic
devices, whereas relational programmes were delivered in relation to particular
content and were used for ‘near transfer’. Interventions using assessment criteria and
adopting a complex skills approach would be expected to qualify as relational
programmes in Hattie et al.’s taxonomy.
   One possible risk associated with raising the profile of assessment criteria, and
especially with characterizing the criteria as describing the successful execution of
skills, is that what will be promoted is strategic or instrumental approaches rather
than deep approaches to learning. In a study of university students’ goals, those
with ‘performance goals’ and ‘multiple goals’ took the evaluation criteria more into
account when deciding what learning strategies to use, compared with those with
‘learning goals’ (Valle et al., 2003). It is not surprising that assessment criteria
should be related to performance goals, but performance and learning goals are not
necessarily mutually exclusive. One of the aims of using assessment criteria to
support learning should be to extend the benefits of understanding assessment
criteria to students with learning goals, and to encourage those with performance
goals to use the assessment criteria in ways that facilitate learning. Norton (2004)
proposed that assessment criteria be operationalized as ‘learning criteria’, shifting
the emphasis from outcome to process in a way that is intended to promote a deep
approach to learning and avoid encouraging students to adopt a strategic approach.
   In conclusion, assessment criteria have provided the focus for innovative efforts to
support students’ learning and achievement, which are likely to be further developed
in the future. The impact of those interventions will depend on educators arriving at
a clearer understanding in pedagogic terms of the meaning of assessment criteria and
the types of learning in which students must engage to produce written work that
meets the criteria. Our analysis suggests that the type of learning required to demon-
strate core criteria for essays and other academic writing is the learning of complex
skills that are transferable from one task to another within disciplines and are amena-
ble to improvement with practice and instruction. This provides a pedagogic frame-
work for future initiatives to promote student learning and achievement by using
assessment criteria in teaching. The concept of complex skills also enables teaching
about assessment criteria to support subject learning and knowledge acquisition, by
integrating the activities required to meet the criteria with those required to learn the
subject matter of the discipline concerned.

Acknowledgements
We are grateful to a referee for helpful comments on a previous draft. Work on
the article was supported by HEFCE funding through an FDTL4 grant for Assess-
ment Plus, a consortium project to develop methods and materials to support
student learning using assessment criteria. For details, see http://www.assessment-
plus.net.

Notes on contributors
James Elander is Professor of Psychology at Thames Valley University.
Katherine Harrington is Director of the Centre for Scientific Literacy at London
    Metropolitan University.
Lin Norton is Professor of Pedagogical Research at Liverpool Hope University College.
Hannah Robinson was a Research Assistant at Liverpool Hope University College at
    the time this work was conducted.
Pete Reddy is a Learning and Teaching Fellow at Aston University.


References
Allen, R. (Ed.) (2002) The Penguin English dictionary (London, Penguin).
Andrews, R. (1997) Reconceiving argument, Educational Review, 49(3), 259–269.
Andrews, R. (2003) The end of the essay? Teaching in Higher Education, 8(1), 117–128.
Atherton, J. S. (2005) Learning and teaching: SOLO taxonomy [On-line]: UK. Available online at:
     http://www.learningandteaching.info/learning/solo.htm (accessed 26 September 2005).
Bailin, S., Case, R., Coombs, J. R. & Daniels, L. B. (1999a) Common misconceptions of critical
     thinking, Journal of Curriculum Studies, 31, 269–283.
Bailin, S., Case R., Coombs, J. R. & Daniels, L. B. (1999b) Conceptualising critical thinking,
     Journal of Curriculum Studies, 31, 285–302.
Bartholomae, D. (1985) Inventing the university, in: M. Rose (Ed.) When a writer can’t write (New
     York and London, Guilford).
Beyer, B. (1985) Critical thinking: what is it? Social Education, 49, 270–276.
Beyer, B. (1987) Practical strategies for the teaching of thinking (Boston, MA, Allyn and Bacon).
Biber, D., Conrad, S., Reppen, R., Byrd, P. & Helt, M. (2002) Speaking and writing in the univer-
     sity: a multidimensional comparison, TESOL Quarterly, 36(1), 9–48.
Biggs, J. (1987) Process and outcome in essay writing, Research and Development in Higher Educa-
     tion, 9, 114–125.
Biggs, J. (1988) Approaches to learning and to essay writing, in: R. R. Schmeck (Ed.) Learning
     strategies and learning styles (New York and London, Plenum Press).
Biggs, J. & Collis, K. (1982) Evaluating the quality of learning: the SOLO taxonomy (New York,
     Academic Press).
Biggs, J., Lai, P., Tang, C. & Lavelle, E. (1999) Teaching writing to ESL graduate students: a
     model and an illustration, British Journal of Educational Psychology, 69, 293–306.
Biggs, J. B. & Rihn, B. (1984) The effects of intervention on deep and surface approaches to
     learning, in: J. Kirby (Ed.) Cognitive strategies and educational performance (New York,
     Academic Press).
Bonnett, A. (2001) How to argue: a student’s guide (Harlow, Pearson Education).
Boulton-Lewis, G. M. (1995) The SOLO taxonomy as a means of shaping and assessing learning
     in higher education, Higher Education Research and Development, 14(2), 143–154.
Branthwaite, A., Trueman, M. & Hartley, J. (1980) Writing essays: the actions and strategies of
     students, in: J. Hartley (Ed.) The psychology of written communication (London, Nichols and
     Kogan Page).
Bridges, D. (1993) Transferable skills—a philosophical perspective, Studies in Higher Education,
     18, 43–51.
Campbell, J., Smith, D. & Brooker, R. (1998) From conception to performance: how undergradu-
     ate students conceptualise and construct essays, Higher Education, 36, 449–469.
Chapman, B. S. (2001) Emphasising concepts and reasoning skills in introductory college molecu-
     lar cell biology, International Journal of Science Education, 23, 1157–1176.
Collins, A., Brown, J. S. & Newman, S. E. (1989) Cognitive apprenticeship: teaching the craft of
     reading, writing, and mathematics, in: L. B. Resnick (Ed.) Knowing, learning and instruction:
     essays in honour of Robert Glaser (Hillsdale, NJ, Lawrence Erlbaum Associates), 453–493.
Creme, P. & Lea, M. R. (1997) Writing at university: a guide for students (Buckingham, Open
     University Press).
Elander, J. (2002) Developing aspect-specific assessment criteria for essays and examination
    answers in psychology, Psychology Teaching Review, 10, 31–51.
Elander, J. (2003a) A discipline-based undergraduate skills module, Psychology Learning and
    Teaching, 3, 48–55.
Elander, J. (2003b) The BPS Stage 1 qualification in health psychology, Health Psychology Update,
    12(4), 40–48.
Elander, J., Harrington, K., Norton, L., Robinson, H., Reddy, P. & Stevens, D. (2004) Core
    assessment criteria for student writing and their implications for supporting student learning,
    in: C. Rust (Ed.) Improving student learning 11. Theory, research and scholarship (Oxford,
    Oxford Centre for Staff and Learning Development), 200–212.
Elwood, J. & Klenowski, V. (2002) Creating communities of shared practice: the challenges of
    assessment use in learning and teaching, Assessment and Evaluation in Higher Education, 27,
    243–256.
Fabb, N. & Durant, A. (1993) How to write essays, dissertations & theses in literary studies (London,
    Longman).
Francis, B., Robson, J. & Read, B. (2002) Gendered patterns of writing and degree award, in: G.
    Howie & A. Tauchert (Eds) Gender, teaching and research in higher education (Aldershot, Ashgate).
Gadzella, B. M. & Masten, W. G. (1998) Relation between measures of critical thinking and
    learning styles, Psychological Reports, 83, 1248–1250.
Garside, C. (1996) Look who’s talking: a comparison of lecture and group discussion teaching
    strategies in developing critical thinking skills, Communication Education, 45, 212–227.
Girot, E. A. (1995) Preparing the practitioner for advanced study—the development of critical
    thinking, Journal of Advanced Nursing, 21, 387–394.
Harmon, J. E. (1992) An analysis of 50 citation superstars from the scientific literature, Journal of
    Technical Writing and Communication, 22(1), 17–37.
Hattie, J., Biggs, J. & Purdie, N. (1996) Effects of learning skills interventions on student learning:
    a meta-analysis, Review of Educational Research, 66, 99–136.
Hewings, M. & Hewings, A. (2002) ‘It is interesting to note that …’: a comparative study of antic-
    ipatory ‘it’ in student and published writing, English for Specific Purposes, 21, 367–383.
Hinchliffe, G. (2002) Situating skills, Journal of Philosophy of Education, 36, 187–205.
Holmes, L. (2001) Reconsidering graduate employability: the ‘graduate identity’ approach,
    Quality in Higher Education, 7, 111–119.
Hyland, K. (2001) Bringing in the reader—addressee features in academic articles, Written
    Communication, 18, 549–574.
Hyland, K. (2002) Directives: argument and engagement in academic writing, Applied Linguistics,
    23(2), 215–239.
Ivanič, R. (2004) Discourses of writing and learning to write, Language and Education, 18, 220–245.
Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D. & Skon, L. (1981) Effects of cooperative,
    competitive and individualistic goal structures on achievement: a meta-analysis, Psychological
    Bulletin, 89, 47–62.
Keeley, S. M., Ali, R. & Gebing, T. (1998) Beyond the sponge model: encouraging students’
    questioning skills in abnormal psychology, Teaching of Psychology, 25, 270–274.
Kemp, I. J. & Seagraves, L. (1995) Transferable skills—can higher education deliver? Studies in
    Higher Education, 20, 315–328.
King, A. (1997) ASK to THINK—TEL WHY: a model of transactive peer tutoring for scaffolding
    higher level complex learning, Educational Psychologist, 32, 221–235.
Knight, P. T. & Yorke, M. (2003) Employability and good learning in higher education, Teaching
    in Higher Education, 8(1), 3–16.
Lavelle, E. (1993) Development and validation of an inventory to assess processes in college
    composition, British Journal of Educational Psychology, 63, 489–499.
Lavelle, E. (1997) Writing style and the narrative essay, British Journal of Educational Psychology,
    67, 475–482.
Lavelle, E. & Zuercher, N. (2001) The writing approaches of university students, Higher Educa-
     tion, 42, 373–391.
Lea, M. R. & Street, B. V. (1998) Student writing in higher education: an academic literacies
     approach, Studies in Higher Education, 23, 157–172.
Maguire, S., Evans, S. E. & Dyas, L. (2001) Approaches to learning: a study of first-year geogra-
     phy undergraduates, Journal of Geography in Higher Education, 25(1), 95–107.
Marttunen, M. (1992) Commenting on written arguments as part of argumentation skills—
     comparison between students engaged in traditional versus on-line study, Scandinavian
     Journal of Educational Research, 36, 289–302.
Marttunen, M. (1997) Electronic mail as a pedagogical delivery system: an analysis of the learning
     of argumentation, Research in Higher Education, 38, 345–363.
Marttunen, M. & Laurinen, L. (2001) Learning of argumentation skills in networked and face-to-
     face environments, Instructional Science, 29, 127–153.
McCann, T. M. (1989) Student argumentative writing knowledge and ability at three grade levels,
     Research in the Teaching of English, 23, 62–76.
McPeck, J. E. (1981) Critical thinking and education (New York, St Martin’s Press).
Miller, M. A. (1992) Outcomes evaluation—measuring critical thinking, Journal of Advanced
     Nursing, 17, 1401–1402.
Mitchell, S. & Riddle, M. (2000) Improving the quality of argument in higher education: final report
     (London, School of Lifelong Learning and Education, Middlesex University).
Norton, L. S. (1990) Essay writing: what really counts? Higher Education, 20, 411–442.
Norton, L. (2004) Using assessment criteria as learning criteria: a case study in psychology, Assess-
     ment and Evaluation in Higher Education, 29, 687–702.
Oates, J. (2002) Essay TMA scoring/feedback criteria, Personal communication.
O’Donovan, B., Price, M. & Rust, C. (2000) The student experience of criterion-referenced
     assessment (through the introduction of a common criteria assessment grid), Innovations in
     Education and Teaching International, 38, 74–85.
Pain, R. & Mowl, G. (1996) Improving geography essay writing using innovative assessment,
     Journal of Geography in Higher Education, 20(1), 19–31.
Peck, J. & Coyle, M. (1999) The student’s guide to writing: grammar, punctuation and spelling
     (Houndmills, Basingstoke and London, Macmillan).
Perkins, D. N. (1985) Postprimary education has little impact on informal reasoning, Journal of
     Educational Psychology, 77, 562–571.
Pirie, D. B. (1985) How to write critical essays: a guide for students of literature (London, Methuen).
Price, M. & Rust, C. (1999) Business assessment criteria grid, Personal communication.
Prosser, M. & Webb, C. (1994) Relating the process of undergraduate essay writing to the finished
     product, Studies in Higher Education, 19, 125–138.
Read, B., Francis, B. & Robson, J. (2001) ‘Playing safe’: undergraduate essay writing and the
     presentation of the student ‘voice’, British Journal of Sociology of Education, 22, 387–399.
Rust, C., Price, M. & O’Donovan, B. (2003) Improving students’ learning by developing their
     understanding of assessment criteria and processes, Assessment and Evaluation in Higher
     Education, 28, 147–164.
Sadler, D. R. (1987) Specifying and promulgating achievement standards, Oxford Review of Educa-
     tion, 13, 191–209.
Sadler, D. R. (1989) Formative assessment and the design of instructional systems, Instructional
     Science, 18, 119–144.
Scouller, K. (1998) The influence of assessment method on students’ learning approaches: multi-
     ple choice question examination versus assignment essay, Higher Education, 35, 453–472.
Simpson, J. A. & Weiner, E. S. C. (Eds) (1989) The Oxford English dictionary (2nd edn) (Oxford,
     Clarendon Press).
Smith, D., Wolstencroft, T. & Southern, J. (1989) Transferable personal skills and the job
     demands on graduates, Journal of European Industrial Training, 13(8), 25–31.
Terenzini, P. T., Springer, L., Pascarella, E. T. & Nora, A. (1995) Influences affecting the devel-
     opment of students’ critical thinking skills, Research in Higher Education, 36(1), 23–39.
Thompson, G. (2001) Interaction in academic writing: learning to argue with the reader, Applied
     Linguistics, 22(1), 58–78.
Tynjälä, P. (1998) Traditional studying for examination versus constructivist learning tasks: do
     learning outcomes differ? Studies in Higher Education, 23, 173–189.
Tynjälä, P., Välimaa, J. & Sarja, A. (2003) Pedagogical perspectives on the relationships between
     higher education and working life, Higher Education, 46, 147–166.
Valle, A., Cabanach, R. G., Núñez, J. C., González-Pienda, J., Rodríguez, S. & Piñeiro, I. (2003)
     Multiple goals, motivation and academic learning, British Journal of Educational Psychology,
     73, 71–87.
van Merriënboer, J. J. G., Kirschner, P. A. & Kester, L. (2003) Taking the load off a learner’s
     mind: instructional design for complex learning, Educational Psychologist, 38, 5–13.
Warren, D. (2003) A discipline-based approach to developing academic literacy, in: D. Gosling &
     V. D’Andrea (Eds) International conference on the scholarship of teaching and learning: proceedings
     2001–2002 (London, Educational Development Centre, City University), 109–117.
Whitehead, D. (2002) The academic writing experiences of a group of student nurses: a phenome-
     nological study, Journal of Advanced Nursing, 38, 498–506.
