
CONCEPT MAPS AND ASSESSMENT:
A PROPOSAL FOR EXTENDING COURSE-LEVEL MEASUREMENTS
TO DISCIPLINE- AND CURRICULUM-LEVEL LEARNING
                                 Lee A. Freeman
                             School of Management
                      The University of Michigan – Dearborn
                           lefreema@umd.umich.edu

                              Andrew Urbaczewski
                             School of Management
                      The University of Michigan – Dearborn
                           aurbacze@umd.umich.edu

                                   ABSTRACT

Assessment of student learning objectives is often accomplished via in-class
projects and/or examinations. These techniques, and others like exit interviews,
provide faculty and administrators with a means of measuring what the students
have learned in a particular course, discipline (MIS, Marketing, etc.), or an entire
degree program or curriculum (BBA, MBA, etc.). Faculty are accustomed to
course-level assessment, as this is the predominant method for determining
student grades. They are presumably much less comfortable assessing the
discipline (i.e., the major or concentration) and, to a greater extent, the
entire degree or curriculum.
However, various accreditation organizations now require such assessment at
various levels within schools of business. An alternative method of measuring
the knowledge of students is to use mental models, and specifically concept
maps. Concept maps provide a visual representation of conceptual and
relationship knowledge within a particular domain. This paper outlines the
potential uses of concept maps for assessment requirements at the course,
discipline, and curriculum levels through a description of the assessment




Journal of Informatics Education Research                                        75
Freeman and Urbaczewski
requirements, an introduction to concept mapping theory and practice, and an
outline of possible directions for use.

Keywords: IA – IS Education, IA01 – IS Curriculum


                               I. INTRODUCTION
        Student learning assessment often takes the form of quizzes, exams,
homework, projects, and other assignments that allow students to demonstrate
their mastery of a domain. Some business schools require their students to
complete an exit interview upon graduation, or they may require some
comprehensive exam in one of the capstone courses, but these instances are (at
least anecdotally) not very common. As a result, most assessment occurs within
a particular course and has relevance to that particular course.

        Regarding these assessment measures, Freeman and Urbaczewski
[2003] stated “most of these measures are very structured and limiting to the
students … [as] these traditional techniques do not allow the students to
demonstrate knowledge and mastery beyond the assessment technique.” While
instructors create these measures of knowledge with the goal of measuring a
specific type of knowledge (declarative vs. procedural, for example) for a specific
content area (creation of Entity Relationship Diagrams or estimation of network
traffic patterns, for example), the structure of the question(s) and the limitations
of response format requirements do not allow students to fully express what they
know.

        In addition, these techniques are often instructor-centric, with the goal of
easy grading as the focus. This is not to say that these techniques are bad or
incorrect, but merely that they have limitations. Nor is this to say that instructors
should not be using these techniques. They are traditional and prevalent for a
reason – they serve a purpose and do it well. What this does say, however, is
that these traditional assessment techniques should be supplemented with other



techniques that allow for the student to show greater mastery of the material
being assessed.

       The driving forces behind the need for assessment are: 1) the need to
assign student grades (during and at the end of the course) by measuring
student knowledge and ability; 2) the need to provide concrete performance data
with regard to the course, the discipline or major, and/or the entire curriculum or
degree (oftentimes in the context of accreditation); and 3) the need to improve
any of the above. In addition, some of these assessments are completed
because they are required (student grades, some curriculum assessment), and
some are completed on a voluntary basis by faculty, departments, and/or schools
(self-improvement and performance data assessments). In general, assessment
has begun to move from a voluntary basis to a required (or at least a
recommended) basis, though assessment at the course level has always been
“required” in order to assign grades. In terms of improvement, assessment
seems to be moving from the course to the discipline to the curriculum, as the
concerns faculty have typically confronted and handled within a particular course
are now being faced at other levels of teaching. With regard to accreditation,
assessment is moving from the curriculum level (e.g., the Association to
Advance Collegiate Schools of Business (AACSB)) down to the discipline level via
accrediting organizations and IT societies (e.g., the Computing Accreditation
Commission of the Accreditation Board for Engineering and Technology (ABET)).

       However, over the last few years, many organizations outside of business
schools themselves have begun the process of requiring assessment of student
learning at various levels within the curriculum. Some of this assessment may
already be in place, but much has required the development of new assessment
measures. Due to tradition, past experiences, and time constraints, it is no
wonder that a majority of these new measures rely on the traditional techniques
already used in most classrooms.




      The purpose of this paper is not to argue that these traditional techniques
should no longer be used for assessment of learning at the various levels. On
the contrary, this paper will argue that an additional technique can and should be
used to supplement these other techniques in order to gain a more robust and
meaningful understanding of what students know, what they do not know, and
what they feel is important. While there are numerous potential techniques that
could be utilized, this paper will argue that concept maps (a form of mental
model) are an appropriate and proven technique. The visually represented
concept map of a student’s mental conceptualization of the domain at hand
provides the readers of that map an alternate view of what the student is thinking
and what the student knows [Suen et al., 1997]. Concept mapping is not a new
technique, and it has been utilized in fields as diverse as education, history,
communications, biology, mathematics, engineering, and computer science
[Cliburn, 1986; Gaines and Shaw, 1995; Wallace and Mintzes, 1990; Williams,
1995]. More importantly, concept maps can be utilized for the assessment of
knowledge at a conceptual level [Fisher, 1990; Fisher et al., 1990; Gaines and
Shaw, 1995; O’Neil and Klein, 1997].

      In order to make an effective argument in favor of concept maps, this
paper will first review the assessment guidelines from AACSB and from the
Association for Information Systems (AIS), as well as discuss course-level
assessment in greater detail. The theory behind concept maps and examples
of their use in research and the classroom will then be discussed. Finally, a
recommended proposal for the implementation of concept maps into the
assessment toolbox at the course, discipline, and curriculum levels will be
provided.
                     II. ASSESSMENT GUIDELINES
AACSB Curriculum/Degree Assessment
      In Eligibility Procedures and Accreditation Standards for Business
Accreditation, the Association to Advance Collegiate Schools of Business



(AACSB) devotes eleven pages to the topic of “Assurance of Learning” [AACSB,
2004]. Updated in early 2004, this document shows AACSB’s concern with
assurance of learning to demonstrate accountability and goal attainment of the
curriculum for external constituents, such as “(current and) potential students,
trustees, public officials, supporters, and accreditors” [AACSB, 2004, p. 56].
Assurance of learning is also important as a means to help faculty improve the
programs and courses being offered by measuring student success, planning
improvement, and perhaps providing feedback to individual students.

         The section on “Assurance of Learning” discusses in great detail the
identification of learning goals, on which the assessments are based. A listing of
three potential approaches to Assurance of Learning is given that includes pre-
selection criteria for admission (e.g., pre-existing proficiency in a second
language, written communication, or mathematics/statistics), course-embedded
measurement (e.g., term paper or capstone project), and stand-alone testing
(e.g., specific ability/skill test, qualifying examination, or senior thesis). However,
AACSB makes explicit that, “by no means does this imply that these approaches
exhaust the ways schools can demonstrate that learning goals are met” [AACSB,
2004, p. 60]. The document goes on to say that no particular approach is
required and that “schools are encouraged to choose, create, and innovate
learning measures that fit with the goals of the degree programs, pedagogies in
use, and the schools’ circumstances” [AACSB, 2004, p. 60].

         In the end, AACSB makes it quite clear that the purpose of this
assessment is not to test/measure students for the sake of doing so. The
purpose is to make a difference in the school by using the results to “generate
changes in curricula, pedagogy, and teaching and learning materials” [AACSB,
2004, p. 65].

AIS Discipline/Major Assessment
         While AACSB is primarily concerned with the entire curriculum for
accreditation purposes, and not individual disciplines or majors within the


curriculum, they do point out that assessment at the level of the major can be a
valuable tool internally for the school. At the discipline level, the primary goal of
assessment is often improvement of the overall major (or minor) being offered to
the students in that particular discipline. For information systems, the
Association for Information Systems (AIS) provides its most current guidelines at
the undergraduate level in “IS 2002: Model Curriculum and Guidelines for
Undergraduate Degree Programs in Information Systems” [AIS, 2002].
Unfortunately, this ‘model curriculum’ does not provide any specific guidelines for
assessing or measuring the knowledge of the students who complete the
program of courses for a major in Information Systems. There is guidance with
regard to learning objectives for the specific courses, and even a breakdown of
the types of learning according to a modified version of Bloom’s Taxonomy
[Bloom, 1956], but nothing in the way of specific assessment measures.

       Still, Information Systems faculty should feel the need to keep their
particular major current. To do this properly requires some form of assessment
of student learning. This type of assessment can also be seen as a learning tool
for faculty that will inform them about the design of the major, the content of the
courses in total, and the relative strength of the students coming out of the major.
Unfortunately, as with most non-traditional techniques (assessment or
otherwise), there is a learning curve for the faculty (the individuals who are likely
to be implementing such assessments) in terms of determining whether
assessment should occur, determining what to assess, determining how to
assess, and actually implementing the plan. This “newness” in combination with
the time constraints already in existence can lead to stagnation of the
assessment process or to no assessment at all.

Faculty Course/Classroom Assessment
       The most common form of student learning assessment occurs as a
regular and embedded part of academic courses. In order to give students a
grade for the course, instructors assess students’ learning via examinations,


quizzes, homework, group work, projects, papers, etc. These techniques provide
feedback to the students as to their progress in the course and their
comprehension and retention of the course content. In most cases, very little
guidance is given to instructors with regard to what must be assessed, how it
must be assessed, and under what circumstances. These assessments are not
done because of the need or impetus from another organization (e.g., for
accreditation).

       While faculty will often utilize some of the information gained from these
techniques to modify their courses (e.g., when a majority of the students make
similar mistakes in their response to an exam question, the instructor may re-
teach that material), this is usually done at a very micro level. Such instances
involve a specific facet of knowledge, a specific terminology, or a specific task.
Rarely is there information available to the instructor regarding the course at a
more macro level, whether on an entire unit or even the entire course. Such
information, if available, would provide the instructors with a great tool to help
improve course content, pedagogy, and student learning.

       Even with all of the current assessment conducted at the course level, this
information does not provide a complete picture of the entire discipline or degree
program. In other words, if an IS major requires eight specific IS courses beyond
the core requirements for all majors, the course-level assessment in these eight
courses is done on a course-by-course basis by the appropriate instructor.
Combining these eight courses’ assessment knowledge does not yield an
appropriate assessment picture for the major. Rather, it yields eight individual,
course-level assessments that have been combined in a single document or
summative analysis. Similarly, if all of the course-level assessments from the
entire degree program (a BBA degree, for example) are combined in some way,
it does not yield a proper tool for curriculum-level assessment. There is more to
a major in a particular discipline than just a set number of courses (taken
individually with no connections between them), and there is more to an entire


degree program than just the collection of core courses, electives, and courses
within a major. This is the reason for the call for assessment at these three
levels, regardless of the techniques used for the assessment measurement.

                         III. CONCEPT MAPPING
Mental Models

      Mental models are external, physical models of someone’s internal,
cognitive representation of her structural and conceptual understanding of a
static situation, a process, a problem, or some combination [Craik, 1943]. The
mental model literature spans many disciplines, and mental models are used for
a variety of purposes, including knowledge assessment [Goldsmith et al., 1991].
As a result, there is not always a consistent use of the terminology. Besides the
concept map, other representations of mental models include influence diagrams
and cause-effect diagrams, in addition to related terminology such as causal
maps, network maps, knowledge maps, cognitive maps, cognitive models, and
schemas [Shavelson et al., 1994].

      While not exactly alike, a number of these terms describe similar models
and modeling techniques. Knowledge maps, network maps, and schemas are
often synonymous with concept maps, and they do not require causality, though
they do support the representation of causality. Cognitive maps are often
synonymous with causal maps and cause-effect diagrams. These last three
forms of mental models involve similar concepts and relationships as in concept
maps, but they require causality between the concepts. Influence diagrams are
similar to cognitive maps, causal maps, and cause-effect diagrams, though the
notation is different and the diagram is acyclic [Massey and Wallace, 1996].
Finally, cognitive models are another term for mental models in general and do
not refer to any specific technique. For a more complete review of the mental
model literature, see Gentner and Stevens [1983] or Wilson and Rutherford
[1989].



Concept Mapping Theory
       Concept mapping is a powerful technique precisely because it
       allows the individual to construct a visual representation of
       conceptual interrelationships, that is, a representation of the
       individual’s understanding of her knowledge.          The visual
       representation of the conceptual interrelationships is only one
       aspect of the power of this technique. Through the actual process
       of constructing a concept map the individual can also make new
       connections and recognize concepts which should be added.
       [Fraser, 1993, p. 33]

       Two cognitive theories of memory have been used to support concept
mapping: Ausubel’s [1968] Assimilation Theory and Deese’s [1965]
Associationist Theory. Assimilation Theory states that memory is hierarchical,
and new information is processed and stored as either a more general or more
specific concept to other, related concepts, i.e., assimilated into the existing
structure [Fraser, 1993]. For example, if someone already knows the concepts of
dog, bird, cat, and human, when the concept of animal is learned, it is put into the
hierarchy “above” these others already present. Also, if this same person were
to learn the concepts of eagle and canary, they would both be placed “under” bird
as new branches of the hierarchy.
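The hierarchical assimilation described above can be sketched as a small data structure. The following Python fragment is purely illustrative: the `Concept` class and its methods are our own invention for exposition, not part of Ausubel's theory.

```python
class Concept:
    """A node in a simple hierarchical memory structure."""

    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, child):
        self.children.append(child)


# Existing, as-yet-unrelated concepts.
memory = Concept("memory")
for name in ["dog", "bird", "cat", "human"]:
    memory.add_child(Concept(name))

# Learning "animal": a more general concept, so it is assimilated
# "above" the existing concepts in the hierarchy.
animal = Concept("animal")
animal.children = memory.children
memory.children = [animal]

# Learning "eagle" and "canary": more specific than "bird", so they
# are assimilated "under" bird as new branches.
bird = next(c for c in animal.children if c.name == "bird")
bird.add_child(Concept("eagle"))
bird.add_child(Concept("canary"))
```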

       Associationist Theory states that memory consists of a network of
concepts that is not necessarily hierarchical. Relationships between concepts
are formed naturally when two concepts overlap on some dimension. This is
akin to word association games, though in these games the relationships are not
labeled. As learning occurs, this network of concepts and relationships becomes
more and more elaborate and complex. In the end, the memory structure in
Associationist Theory is extremely similar to that of Assimilation Theory, except
that hierarchies are not required.




         Concept mapping was originally developed as a research technique in
1974 to make sense of data gathered in clinical interviews [Novak and Musonda,
1991]. Since then, concept mapping has been used in numerous ways in
education, psychology, and organizational settings [Fraser, 1993; Novak, 1995].
Concept mapping enables one to visualize the specific relationships among
concepts as well as the hierarchical structure and organization of these
relationships. It can assist an individual in structuring her understanding of a
topic, in creating personal meaning, and in “making externally explicit the
individual’s understanding of her cognitive structure” [Fraser, 1993, p. 40].
Understanding the relationships among concepts within a discipline is the basis
for much of our knowledge [Goldsmith and Johnson, 1990], and structured
representations such as concept maps capture this configural property better
than other techniques [Markham et al., 1994].

         A concept map is a pictorial representation of a domain that consists of
concepts represented as nodes that are connected to each other by arcs (see
Figure 1). The concepts are words or ideas that represent events, objects, or
even emotions and feelings. The connecting arcs represent the conceptual links
– stating that the concepts are conceptually and logically related in some manner
– between two or more concepts within the concept map [Dorough and Rye,
1997]. Fraser [1993] provides the following “rules” to govern the construction of
concept maps, which were used as a basis for previous works (discussed below)
and are intended as the basis of this proposed extension of concept map usage
for assessment. These rules are supported by Novak and Gowin [1984] and
Shavelson et al. [1994], and are based on Deese’s [1965] Associationist Theory
(meaning that hierarchies are supported, but not required):




   [Figure 1 appears here as an image: a concept map about concept maps,
   with nodes such as CONCEPTS, PROPOSITIONS, LINEAR TEXT, CLASSROOM
   TEACHERS, RESEARCH & ASSESSMENT TOOL, MISCONCEPTIONS, AFFECTIVE
   OBJECTIVES, FEELINGS & VALUES, METACOGNITION, LEARNING, LEARNING
   PROCESSES, LEARNING EFFECTIVENESS, ENJOYMENT, INTEREST, and MOTIVATION,
   joined by labeled links.]
   A concept map showing the mapping of “concept maps” with emphasis added by the authors.
   Note that the links are displayed as arrows showing the direction of the link (not causation), a
   convention not always used.

                             Figure 1. Sample Concept Map [Taber, 1994]

         1. Concepts are located in rectangles or other geometric forms. Lines
              connect the concepts. Linking words are written on the lines that
              describe the relationship between the two concepts.

         2. The linking words should specifically explicate the relationship
            between the two concepts. Together with the two concepts, the
            linking words form a “proposition” – such as “the grass is green”
            from the concept ‘grass’, the concept ‘green’, and the linking
            word ‘is’.

         3. There is no “right” map as all maps are idiosyncratic to each
              individual. Different people may produce very different maps for the
              same conceptual domain. A concept map can be wrong, however, if



              there are propositions that are incorrect, such as “the bear speaks
              English” or “802.11 is a systems model.”

         4. The interconnections between concepts give rise to the power of
            the concept map. More interconnections and cross-linkages are an
            indication of greater complexity and sophistication of
            understanding.
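Rules #1, #2, and #4 can be made concrete with a minimal data structure: concepts as nodes, and labeled links that read as propositions. The sketch below is illustrative only; the representation and function names are ours, not from the cited sources.

```python
# A concept map reduced to its essentials: concepts as nodes (rule #1)
# and labeled links that read as propositions (rule #2).
concepts = {"grass", "green"}
links = [("grass", "is", "green")]  # (concept, linking words, concept)


def propositions(links):
    """Render each labeled link as a readable proposition."""
    return [f"{a} {label} {b}" for a, label, b in links]


def interconnectedness(concepts, links):
    """Links per concept -- a crude proxy for rule #4's complexity."""
    return len(links) / len(concepts) if concepts else 0.0


print(propositions(links))  # → ['grass is green']
```

Because maps are idiosyncratic (rule #3), a representation like this is evaluated not against a single "right" answer but by checking whether individual propositions are sensible.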

       Describing an individual’s cognitive structure through other techniques
such as “a spoken narrative, an outline, a written summary, formal and informal
conversation, a flowchart, etc.” is limited in that these techniques are linear and
unable to depict the complexity of the relationships between concepts and ideas
[Fraser, 1993, pp. 40-41].

       By providing this visual representation of an individual’s conceptual
knowledge of the main concepts and sub-concepts within a particular domain,
concept mapping acts as an assessment of the Knowledge, Analysis, and
Synthesis competencies from Bloom’s taxonomy [Bloom, 1956]. As an
alternative to linear text, concept maps are able to help reveal misconceptions of
the domain (one of the main drivers of assessment). As a student-driven tool,
the students are not solely constrained to answering questions or completing
assignments on a particular topic chosen by the instructor or the entire faculty.
Using the main topic, students are able to express their knowledge graphically
and unconstrained, thereby providing a more accurate assessment of what they
know, do not know, do not understand, or perhaps do not find important enough
to include.

Creating Concept Maps
       The mental modeling and concept mapping theories provide the
background and support for the use of concept maps for student learning
assessment. However, their actual creation and evaluation must be explained in
order to use them properly and effectively.


       There are five basic steps for creating a concept map:

        1. Determine the main topic or the domain to be mapped.
        2. Write that term or phrase (the main concept) at the top or in the
           middle of a sheet of paper.
        3. Determine the related terms or phrases (additional concepts) to the
           main concept, and write them down on the paper near the main
           concept.
        4. Connect the new concepts to the main concept and/or other new
           concepts with a line. Near this line, write the appropriate linking word
           or phrase that creates the proposition in rule #2 above.
        5. Continue to add more concepts and the appropriate connections and
           linking words to build the map.
       While concept maps are often drawn by hand, especially when utilized
within a course for note-taking or planning purposes, there are a number of
concept mapping software applications available, and their use has been shown
to be as effective as manually drawn maps [Anderson-Inman and Horney, 1997].
One such tool is CmapTools, available for free download from the Institute for
Human and Machine Cognition (IHMC) at http://cmap.ihmc.us/, which has been
used for the following example.

       If the main topic or domain from step #1 is “telecommunications,” then
some of the related terms from step #3 could be “Internet,” “e-mail,” “digital
pictures,” “chat rooms,” and “cell phone.” After completing steps #1-4, the
concept map might look like Figure 2.

       After determining additional concepts in step #5, adding them to the initial
concept map, and re-organizing the layout and the final content, the resulting
concept map might look like Figure 3. It is important to keep in mind that there is
the potential for each and every concept to be related to each and every other
concept. At some level, one could make the argument that a relationship exists



between each and every concept. However, such a cluttered and noisy concept
map would be extremely hard to read, and it would lose its value as a
communication tool.




               Figure 2. Initial Concept Map of Telecommunications

      This is a concept map of “telecommunications” according to its creator.
This map can be compared to the maps of others, or this map can be used as a
communication tool so that others can gain an understanding of the topic
according to the map’s creator. Of course, if the map were not complete,
additional concepts would be added and linked as appropriate. There is no size
restriction on concept mapping, provided the map remains on the topic at hand.
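As a sketch, the five construction steps can also be followed programmatically. The fragment below builds a small telecommunications map; the representation, helper name, and linking words are our own illustrations, not a transcription of the CmapTools maps in Figures 2 and 3.

```python
# Step 1-2: choose and record the main concept.
main = "telecommunications"
concepts = {main}
links = []  # (concept, linking words, concept) triples


def connect(a, label, b):
    """Step 4: connect two concepts with a labeled line."""
    concepts.update({a, b})
    links.append((a, label, b))


# Step 3-4: related concepts, linked to the main concept.
connect(main, "includes", "Internet")
connect(main, "includes", "cell phone")

# Step 5: continue adding concepts and connections.
connect("Internet", "delivers", "e-mail")
connect("Internet", "hosts", "chat rooms")
connect("e-mail", "can carry", "digital pictures")
```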

Evaluating/Grading Concept Maps
      With any assessment technique, it is critical that there exists a scoring
key, a grading rubric, or some other way of knowing that the students have met
the learning objectives. This is no different with concept maps. However, as rule
#3 above points out, there is no absolute “right” map, thereby requiring that


concept maps be analyzed differently. To this end, Ruiz-Primo [2004] has done
significant work in formalizing the evaluation of concept maps as assessment
tools, including the option of providing some or all of the concepts and/or linking
words in advance. This “seeding” of the map’s content creates a different form of
assessment with different evaluation aspects, but it is a viable alternative.




            Figure 3. Completed Concept Map of Telecommunications

       Typically, students’ concept maps are compared to an expert’s map in
both quantitative and qualitative ways. This expert’s map may be from the
instructor for course-level maps, from the discipline faculty for discipline-level
maps, or from a group of faculty representing the entire curriculum for curriculum-
level maps. The expert map may be from a third party as well, such as from a
textbook or from an assessment organization.            Common nodes, common
relationships, and measures of size (nodes, relationships, and interconnections)
and scope (topic coverage and depth) can all be incorporated into the analysis to
determine if the students have met the learning objectives.




       For more details on the quantitative analyses, with specific instructions,
see Croasdell et al. [2003] and Freeman and Urbaczewski [2003].
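As a concrete illustration of the quantitative side of such an analysis, the overlap measures can be sketched as simple set comparisons between a student map and an expert map, each stored as (concept, link, concept) triples. The maps and the proportion-based scores below are hypothetical assumptions for illustration, not the exact procedures of the papers cited above:

```python
# Hypothetical student and expert maps (illustrative content only).
expert = {
    ("Telecommunications", "uses", "LAN"),
    ("LAN", "has topology", "star"),
    ("LAN", "uses", "Ethernet"),
    ("Telecommunications", "includes", "protocols"),
}
student = {
    ("Telecommunications", "uses", "LAN"),
    ("LAN", "has topology", "star"),
    ("LAN", "connects", "computers"),
}

def nodes(cmap):
    """All distinct concepts (nodes) in a map."""
    return {c for (a, _, b) in cmap for c in (a, b)}

# Common nodes: proportion of the expert's concepts the student also used.
node_overlap = len(nodes(student) & nodes(expert)) / len(nodes(expert))

# Common relationships: proportion of the expert's propositions reproduced.
relation_overlap = len(student & expert) / len(expert)

# Size: raw counts of nodes and relationships in each map.
sizes = (len(nodes(student)), len(student), len(nodes(expert)), len(expert))

print(f"node overlap: {node_overlap:.2f}, relation overlap: {relation_overlap:.2f}")
```

In practice, the weighting of these measures, and any partial credit for nearly correct propositions, would follow a rubric such as those discussed in Ruiz-Primo [2004].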


          IV. PRIOR CONCEPT MAP ASSESSMENT USAGE

       Concept maps have been utilized for assessment of learning across many
fields, including engineering [Turns et al., 2000], chemistry [Gouveia and
Valadares, 2004], biology [Rocha et al., 2004], and general science [Ruiz-Primo
and Shavelson, 1996], though this is by no means an exhaustive list. Concept
mapping has only recently been utilized within the field of Information Systems.

       Freeman and Urbaczewski [1999] asked students to create concept maps
of their entire IS knowledge (i.e., at the discipline level). A difficulty with
this level of assessment was that students needed much more time to create an
adequately representative map of their discipline-level knowledge. The authors
went on to assert that it was unlikely that this use of concept maps for
curriculum-level assessment would take hold at many institutions, due to the
pedagogical and cultural changes required for assessment at this level (by any
means). Hindsight tells us that this late-1990s assertion was shortsighted, in
that organizations such as AACSB now require curriculum-level assessment;
nevertheless, such assessment does indeed require pedagogical and cultural
shifts to be effective.
Once institutions decide to assess at the curriculum level, concept maps can be
utilized along with other assessment techniques.

       In a subsequent study, the same authors reduced their focus to a single
course within the IS curriculum [Freeman and Urbaczewski, 2003]. Students
created concept maps of their course knowledge at the beginning, middle, and
end of a single semester. Comparisons were then conducted within time
periods, across time periods, and to an expert’s map. The overlap between the
students’ concepts and the expert’s concepts increased significantly over time.



They concluded that this technique was a valid and valuable assessment
method, and they went on to state that “student maps could be compared to a
‘departmental’ or ‘institutional’ map or even to the map at a discipline level from
the IS 1997 or IS 2002 requirements” [Freeman and Urbaczewski, 2003, p. 52].
Other studies at the course level have found concept maps to be valuable in
identifying misconceptions and in restructuring teaching content [e.g., Gouveia
and Valadares, 2004]. Looking back at Figure 3, some misconceptions are
evident in the propositions “star is a bus” and “ISP provides LAN.” These
misconceptions are valuable teaching tools in that they
may lead to individual work with the student or perhaps re-teaching the material
to the entire class. Other linear relationships might have been better expressed
as branch relationships, such as the Napster and eBay links from the
Telecommunications term in Figure 3. This may indicate that the student needed
further training in the appropriate use of concept maps or was simply writing
terms in a “helter-skelter” fashion.

       One aspect of student assessment at all of these levels that is rarely
discussed is the implicit assumption that students do not enjoy the process –
they do not enjoy taking tests and examinations, and they do not like many of the
traditional forms of assessment either due to the time involved or the relationship
to actual grades. While concept mapping is not a fast technique, especially
considering that students need some training in what concept maps are and how
to create them, the results are not always incorporated into actual grades
within a course. Instructors typically utilize maps for other purposes such as
note-taking, course review, or lecture outlines [Croasdell et al., 2003], so
from the perspective of students’ overall course performance, the maps are not
associated with traditional grading. As a result, students are much more likely
to accept their use and incorporation into a course. In fact, several studies
have shown that students have actually




found concept maps to be fun [Freeman and Urbaczewski, 1999; Taber, 1994],
an emotion not generally associated with most forms of assessment.

                     V. PROGRAM OF ASSESSMENT

      The previous sections of this paper have provided the background on the
current state of assessment within business schools at the curriculum level, the
discipline level, and the course level. Concept mapping was then introduced,
explained, and shown to be a valuable tool for such assessments, with emphasis
at the course level.    This section will now outline a potential Program of
Assessment that utilizes concept mapping by discussing the major issues,
several sample uses, and the main limitations.

Issues
       Before using concept maps for assessment, there are a number of key
issues that must be resolved for the particular situation and its learning goals and
objectives.   These issues apply across the board to all assessment uses of
concept maps.

      •   Grading: Will the concept maps be graded in a traditional manner with
          letter grades, will they be graded on a pass/fail or
          satisfactory/unsatisfactory basis, or will they not be graded at all? As
           previously mentioned, concept maps are often not graded in a
           traditional manner, but that does not mean they cannot or should
           not be. This decision would depend on the underlying objective of the
          assessment and the reason for using concept maps in addition to, or
          instead of, other techniques. If the maps are to be graded, the next
          two questions take on even greater importance.
      •   Evaluation: Will the concept maps be evaluated on a quantitative or a
          qualitative level, or both? Again, this is dependent on the assessment
          objective and how faculty will know that the objective has been met.
          The more formal the assessment goals, the more likely a quantitative
          evaluation will be needed. As with other assessment techniques, a
          grading rubric [see Ruiz-Primo, 2004] would be required.


       •   Expert Map: Will an expert map (created by a single faculty member, a
           group of faculty within or across disciplines, or an outside entity) be
           utilized for evaluating the concept maps? For quantitative evaluations,
           expert maps are typically used. Expert maps are valuable tools for
           determining whether the student concept maps match the expectations
           of learning from others.
       •   Faculty Training: How much training, and on what topics, should
           faculty receive? Faculty must be trained in concept mapping, including
            the creation and evaluation of maps. Faculty must also be trained in and
           convinced of the efficacy of using concept maps for assessment. This
           may require a champion within the school or discipline.
       •   Student Training: How much training, and on what topics, should
           students receive? Students must be trained in concept mapping,
            including map creation. This training can be fairly short (20-30 minutes
           has been shown to be effective [Freeman and Urbaczewski, 2003]),
           but longer training will create better “mappers” and maps that better
           reflect student learning.
       •   Scope: How much or how little should the concept maps include?
           Scope will be inherently determined by the level of the assessment
           (i.e., a curriculum-level assessment will have a much broader scope
           than at the discipline-level and especially at the course-level).
           However, within each level there is still the need to determine the
           relative scope.       This includes whether specific examples of
           technologies, products, organizations, websites, etc. are allowed, as
           some would argue they provide little more than “bulk” to the concept
            map. This also includes setting a “boundary” to exclude concepts
            that, while relevant and important, are simply too “far” from the
            desired content area.
       •   Time: How much time should be allowed for the completion of the
           concept maps? As with any task, the more time that is given for
           completion, the better the resulting product or outcome will be. The
           more time someone is given for creating a concept map, the larger and
           more complex it will likely be (at least until the person feels there is


           nothing more to add). However, determining the appropriate time (as
           with any assessment technique) can be very difficult and may require
           several iterations. Or, perhaps the amount of time remains constant
           and the requirements in terms of scope increase or decrease to fit the
           assessment objectives.
       •   Computer-assisted Creation: Should the maps be drawn by hand or
           with the assistance of a computer? There are effective computer-
           based concept mapping applications, such as the previously
           mentioned CmapTools, that enable students to create very clean and
           legible maps. Some of the applications also assist with inter-map
           comparisons. However, if using such an application, training in the use
           of the application is needed for both the faculty and students.
           Additionally, using hand-drawn maps provides more flexibility (fewer
           constraints) in terms of where and when the maps are drawn.

Sample Assessments
       The following examples borrow the approach utilized by AACSB [AACSB,
2004, pp. 62-64] for providing examples of assurance of learning.                      These
examples are not meant to represent the full spectrum of possibilities for the
incorporation of concept mapping into assessment, but attempt to illustrate a few
possible uses and provide some guidance.

Course-Level Assessment
Situation/Context
A telecommunications instructor wants to measure students’ knowledge of small LANs.

Learning Goal
“Students should be able to build and administer small LANs.”

Demonstration of Achievement
A concept map that shows the various parts of the computer, media, network devices,
protocols, and how they are all connected allows the instructor to measure student mastery of
the domain without the need to actually build the LAN – a time-consuming, expensive, and often
difficult process in an academic setting.



       This course-level example is likely to be the most common example of
assessment with concept maps. Assessment at this level is determined by the



course’s instructor, and requires no involvement of additional faculty or
administrators. For situations like this example, employing a concept map as the
assessment technique may be the optimal choice (over other techniques) for
faculty. For topics where there is a great amount of terminology and conceptual
knowledge (e.g., telecommunications, hardware/software, system modeling,
etc.), concept maps can provide a means for students to show what they know to
the instructor. These maps can be used for grade-based assessment, course
improvements, or general reviews of the material.

Discipline-Level Assessment
Situation/Context
The MIS faculty wants to measure students’ knowledge of outsourcing and the MIS discipline
before students complete the capstone MIS course.

Learning Goal
“Students should be able to identify and assess opportunities where outsourcing would benefit or
harm a particular organization.”

Demonstration of Achievement
A concept map that shows the different subjects in the MIS discipline correctly related to the
concept of outsourcing and the factors which affect the outsourcing decision (globally or locally)
can complement or supplement a term paper or essay-based examination.


        This discipline-level example illustrates one of many approaches for using
concept maps at this level.           Whether as part of a capstone course, an exit
examination, a skills survey, or some other format, the faculty within the MIS
discipline at any given institution can make use of concept maps to help them
determine what the MIS majors know and understand about MIS as a discipline
or field. Unlike the course-level example above, a concept map at this level of
assessment can examine the major MIS issues across multiple courses and
topics (i.e., not limited to a single course). This is where the discipline-level use
achieves its greatest potential as many of the individual concepts from the
various subtopics (courses) within MIS overlap. This overlap allows students to
create a single map or picture of the entire discipline in a clean and uncluttered
manner. If the question or requirements for the map are specific enough, as they
would be in the above example, the scope and size of the map would be
sufficiently constrained to keep the map useful and readable. While potentially
created as part of a single course (within the capstone course), the concept map
would not focus on the topics of a single course, but rather the discipline-level
issue of outsourcing.

 Curriculum-Level Assessment
 Situation/Context
 The business school wants to measure students’ knowledge of the connections and inter-
 relatedness of corporate strategy and IS strategy in a curriculum-level assessment.

 Learning Goal
 “Students should be able to explain the importance of properly aligning corporate and IS
 strategies.”

 Demonstration of Achievement
 A concept map that shows the various components of corporate strategy, the many business
 units within an organization, the key issues of an IS strategy, and the links and relationships
 between all of these allows the evaluators to view this issue graphically instead of textually.



       This curriculum-level example may in fact be the most difficult to
implement, because curriculum-level assessment requires the coordination of
many faculty and the measurement of many disciplines and subject areas. This is
true regardless of the assessment technique. However, when such assessment is
needed, the above example illustrates the potential use of concept maps to
supplement the overall assessment plan. In other words, the entire assessment
does not need to occur in a single format, but should include multiple formats
in order to assess students’ knowledge most appropriately. When there are
concepts or issues that are inherently multi-disciplinary (e.g., organizational
strategy or even supply chain management), concept maps can be valuable in
enabling the students to more easily convey the inter-relationships within the
larger topic area and show the natural connections and links across the different
disciplines or subtopics.




Implementation
       This paper has argued that concept mapping can be effectively and
successfully used at any or all of these measurement levels (course, discipline,
and curriculum). However, the greatest potential for the use of concept maps for
assessment will likely be at the discipline level, i.e., the Information Systems
major. While Croasdell et al. [2003] make strong arguments for, and provide
numerous examples of, incorporating concept mapping into a variety of
activities within a course, the course is also the area where faculty feel most
comfortable with their current assessment-of-learning techniques, as such
assessment is seen as a regular part of teaching. Additionally, while nothing
precludes the use of concept mapping at the curriculum level, practical issues
such as creation time, evaluation time, and the size/scope of the actual map
mean that curriculum-level concept maps may be too onerous (large size/scope)
or too abstract (not enough time) to be valuable for assessment purposes.
Additional research at the curriculum level is warranted.

       It is at the discipline level, between the individual courses on one end and
the entire curriculum on the other, where assessment is still in its early stages.
As previously mentioned, course-level assessment is not a new idea, and even
curriculum-level assessment, though a much newer idea, has existed on a formal
basis for many years due to accreditation needs.
There are, however, few guidelines for assessing student learning within a
particular discipline. In most cases, if a student achieves a minimum GPA in the
appropriate courses (either on a course-by-course basis and/or across all
courses combined), he/she will be deemed to have mastered the material. For
transcript and graduation purposes, this may be sufficient. However, with regard
to determining how much a student knows at the end of the coursework or
assisting faculty with improving the overall content of the major, very little is done
in most schools. Concept mapping has a great deal of flexibility in terms of the
mapped content, but also possesses structured rules for creation. The focus and



scope of the domain can vary according to need. Therefore, to assist faculty
within a department and/or a discipline in determining the effectiveness of the
courses as a whole (the major or minor) and measuring the overall performance
of the students, concept mapping can be a very powerful tool.

Limitations
       With any assessment technique (including written examinations, term
papers, or group projects), there are always limitations. Concept maps are no
different. They are less suited to showing procedural knowledge, but they are
excellent for showing declarative knowledge. The amount of training provided
to both faculty and students can have a significant impact on the quality of the
resulting concept maps and, therefore, their usefulness as an assessment
technique. Concept maps, especially when first introduced, can often take more
time than other assessment techniques in terms of their creation and their
evaluation. Faculty must be aware of this prior to their use. Finally, as a new
technique that many faculty and students have likely never seen nor utilized
previously, there may be some resistance and therefore a need to “manage” its
introduction. Since many faculty may not incorporate concept mapping on their
own, someone in an administrative capacity (e.g., an Associate Dean or even a
Department Chair) may need to champion the efforts and take on a leadership
role within the school or discipline. A possible solution to this issue may be to
gradually incorporate concept mapping into an institution’s assessment of
learning techniques by starting with their use at the course level. Over time, as
faculty become comfortable with and supportive of the technique, assessment via
concept mapping can be introduced at the discipline and then curriculum levels.


                              VI. CONCLUSION

       The need for assessment of student learning continues to grow as
business schools and the Information Systems programs within them attempt to
stay competitive and provide the best and most relevant content to their students.


Whether due to AACSB accreditation requirements, university-wide or school-
wide requirements, or even as an individual desire to be a better teacher, more
emphasis is placed on assessment than ever before. Assessment may be for
student grades; for course, discipline, or curriculum improvement; for
performance measurement; or for accreditation. No matter the driver, meaningful
and useful assessment results are the goal. Concept mapping has been shown
to be an effective technique for learning assessment, and specific guidelines
for the creation and use of concept maps have been offered. Additionally, a
program of assessment has been described so that concept maps can be
incorporated into course-, discipline-, and curriculum-level assessment
strategies. The use of concept maps allows students to convey their mastery of
the assessment domain in ways where traditional assessment techniques fall
short.


                              VII. REFERENCES

AACSB (2004), Eligibility Procedures and Accreditation Standards for Business
    Accreditation, St. Louis, MO: The Association to Advance Collegiate
    Schools of Business.
AIS (2002) IS 2002: Model Curriculum and Guidelines for Undergraduate Degree
      Programs in Information Systems, Atlanta, GA: The Association for
      Information Systems.
Anderson-Inman, L. and M. Horney (1997) "Computer-Based Concept Mapping:
      Enhancing Literacy with Tools for Visual Thinking", Journal of Adolescent
      and Adult Literacy, (40), pp. 302-306.
Ausubel, D. (1968) Educational Psychology: A Cognitive View, New York: Holt,
     Rinehart and Winston.
Bloom, B. (ed.) (1956) Taxonomy of Educational Objectives: The Classification of
      Educational Goals: Handbook I, Cognitive Domain, New York: Longmans,
      Green.
Cliburn, J., Jr. (1986) "Using Concept Maps To Sequence Instructional
       Materials", Journal of College Science Teaching, (15), pp. 377-379.
Craik, K. (1943) The Nature of Explanation, Cambridge: Cambridge University
       Press.



Croasdell, D. et al. (2003) "Creating, Assessing, and Understanding the Use of
      Concept Maps as a Teaching and Assessment Technique",
      Communications of the AIS, (12), pp. 396-405.
Deese, J. (1965) The Structure of Associations in Language and Thought,
     Baltimore: The Johns Hopkins Press.
Dorough, D. and J. Rye (1997) "Mapping for Understanding", Science Teacher,
     (64), pp. 36-41.
Fisher, K. (1990) "Semantic Networking: The New Kid on the Block", Journal of
       Research in Science Teaching, (27), pp. 1001-1018.
Fisher, K. et al. (1990) "Computer-based Concept Mapping", Journal of College
       Science Teaching, (19), pp. 347-352.
Fraser, K. (1993) "Theory Based Use of Concept Mapping in Organization
      Development: Creating Shared Understanding as a Basis for the
      Cooperative Design of Work Changes and Changes in Working
      Relationships", Unpublished Doctoral Dissertation, Cornell University.
Freeman, L. and A. Urbaczewski (1999) "Concept Maps And Information
     Systems: An Investigation Into The Assessment Of Students’
     Understanding Of IS", Proceedings of the Fifth Americas Conference on
     Information Systems, Milwaukee, WI.
Freeman, L. and A. Urbaczewski (2003) "Concept Maps as an Alternative
      Technique for Assessing Students’ Understanding of
      Telecommunications", Journal of Informatics Education Research, (5)2,
      pp. 41-54.
Gaines, B. and M. Shaw (1995) "Collaboration Through Concept Maps",
      Proceedings of CSCL95: Computer Supported Cooperative Learning,
      Bloomington, IN.
Gentner, D. and A. Stevens (eds.) (1983) Mental Models, Hillsdale, NJ:
     Lawrence Erlbaum Associates, Inc.
Goldsmith, T. and P. Johnson (1990) "A Structural Assessment of Classroom
     Learning" in Schvaneveldt, R. (ed.) Pathfinder Associative Networks:
     Studies in Knowledge Organization, Norwood, NJ: Ablex Publishing
     Corporation, pp. 241-254.
Goldsmith, T. et al. (1991) "Assessing Structural Knowledge", Journal of
     Educational Psychology, (83), pp. 88-96.




Gouveia, V. and J. Valadares (2004) "Concept Maps and the Didactic Role of
      Assessment", Proceedings of the First International Conference on
     Concept Mapping, Pamplona, Spain.
Markham, K. et al. (1994) "The Concept Map As A Research And Evaluation
     Tool: Further Evidence Of Validity", Journal of Research in Science
     Teaching, (31), pp. 91-101.
Massey, A. and W. Wallace (1996) "Understanding and Facilitating Group
     Problem Structuring and Formulation: Mental Representations, Interaction,
     and Representation Aids", Decision Support Systems, (17), pp. 253-274.
Novak, J. (1995) "Concept Mapping: A Strategy for Organizing Knowledge" in
      Glynn, S. and R. Duit (eds.) Learning Science in the Schools: Research
      Reforming Practice, Mahwah, NJ: Lawrence Erlbaum Associates, Inc., pp.
      229-245.
Novak, J. and D. Gowin (1984) Learning How to Learn, New York: Cambridge
      University Press.
Novak, J. and D. Musonda (1991) "A Twelve-Year Longitudinal Study of Science
      Concept Learning", American Educational Research Journal, (28), pp.
      117-153.
O’Neil, H. et al. (1997) "Feasibility of Machine Scoring of Concept Maps", Los
       Angeles: National Center for Research on Evaluation, Standards, and
       Student Testing.
Rocha, F. et al. (2004) "A New Approach to Meaningful Learning Assessment
     Using Concept Maps: Ontologies and Genetic Algorithms", Proceedings of
     the First International Conference on Concept Mapping, Pamplona, Spain.
Ruiz-Primo, M. (2004) "Examining Concept Maps as an Assessment Tool",
      Proceedings of the First International Conference on Concept Mapping,
      Pamplona, Spain.
Ruiz-Primo, M. and R. Shavelson (1996) "Problems and Issues in the Use of
      Concept Maps in Science Assessment", Journal of Research in Science
      Teaching, (33), pp. 569-600.
Shavelson, R. et al. (1994) On Concept Maps as Potential “Authentic”
      Assessments in Science: Indirect Approaches to Knowledge
      Representation of High School Science, Los Angeles: National Center for
      Research on Evaluation, Standards, and Student Testing.
Suen, H. et al. (1997) "Concept Map as Scaffolding for Authentic Assessment",
      Psychological Reports, (81), p. 734.



Taber, K. (1994) "Student Reaction On Being Introduced To Concept Mapping",
      Physics Education, (29), pp. 276-281.
Turns, J. et al. (2000) "Concept Maps for Engineering Education: A Cognitively
      Motivated Tool Supporting Varied Assessment Functions", IEEE
      Transactions on Education, (43), pp. 164-173.
Wallace, J. and J. Mintzes (1990) "The Concept Map As A Research Tool:
      Exploring Conceptual Change In Biology", Journal of Research in Science
      Teaching, (27), pp. 1033-1052.
Williams, C. (1995) "Concept Maps As Research Tools In Mathematics", Annual
       Meeting of the American Educational Research Association, San
       Francisco.
Wilson, J. and A. Rutherford (1989) "Mental Models: Theory And Application In
      Human Factors", Human Factors, (31)6, pp. 617-634.


                          Authors' Biographies
Lee A. Freeman is an Associate Professor of MIS and the Director of
Technology and Online Education in the School of Management at The
University of Michigan – Dearborn.    He has a B.A. from The University of
Chicago, and he received both his M.B.A. and Ph.D. in Information Systems from
Indiana University. His teaching interests include systems analysis and design,
information security, and human-computer interaction; and his primary research
interests include the conceptualization and use of information systems
knowledge, systems analysis and design, information ethics, and information
security.   He has published over 25 refereed manuscripts in journals and
conferences, including MIS Quarterly, the Communications of the ACM,
Information Systems Frontiers, the Journal of IS Education, and Communications
of the AIS, among others. He can be reached at lefreema@umd.umich.edu.

Andrew Urbaczewski is an Associate Professor of Management Information
Systems at the University of Michigan - Dearborn. He received a Ph.D. in
Information Systems from Indiana University, and also holds an MBA from West
Virginia University and a BS in Finance (with honors) from the University of



Tennessee. His research interests include wireless mobile collaboration,
electronic commerce, and electronic monitoring of employees. He has published
over 25 refereed manuscripts in several prestigious journals and conferences,
including the Journal of Management Information Systems, Communications of
the ACM, Journal of Organizational Computing and Electronic Commerce, and
Communications of the AIS.

Andrew has much international experience in research and teaching. He has
delivered several executive education and traditional student programs
throughout Europe, focusing mainly on Finland through his rich relationship with
the Helsinki School of Economics, HSE-International Center Mikkeli, and Mikkeli
Polytechnic Institute.

Prior to returning to academia, Andrew was a consultant to several waste
management and pollution prevention programs for the United States
Department of Energy in Washington, DC and at the DOE sites at the Oak Ridge
National Laboratory in Oak Ridge, TN.



