         From Assessment to Action: Using the Delphi Technique to Encourage Faculty Buy-In




                                          Conference Paper
                   Association for Institutional Research 2007 Annual Conference
                                           Kansas City, MO
                                              June 2007




                                        Sean A. McKitrick
                                      Binghamton University

                                                    Abstract

This paper introduces a case in which the Delphi method was used to assess critical thinking. It discusses
the reasoning behind using the method, introducing additional validity issues (consequential and
pedagogical validity in the eyes of regional accrediting organizations, such as the Middle States Association
of Colleges and Schools). It provides the results of the study and then considers why the approach can be
used for assessment, especially in areas in which student learning outcomes are difficult to define, such as
critical thinking, aesthetics, and interdisciplinary studies. The paper then discusses why the approach is
especially effective at enabling faculty discourse, and concludes with some advantages and limitations of
the approach.


          With regard to assessment of student learning, no one can doubt that we live in interesting times.

The United States Department of Education, specifically the Secretary’s Commission on the Future of

Higher Education, has strongly encouraged regional and national accrediting organizations to require their

member institutions to provide proof that students are learning.i Even higher education trade associations

such as the American Association of State Colleges and Universities have issued statements strongly

encouraging colleges and universities to use standardized tests that move toward proving the “value added”

of a university or college education.ii And recently, the Department of Education has sanctioned accrediting

organizations for not requiring their member institutions to provide appropriate evidence of student

learning.iii

          Undoubtedly, much of the pressure to demonstrate value added is welcomed by institutional

researchers and college and university administrators struggling to move the academy along a path toward

accountability. Indeed, a popular view among regional and national accreditors is that the Department of

Education is pushing toward a value added approach because fewer tax dollars are available to higher

education; thus, in order for colleges and universities to receive future funding, they will have to

demonstrate that they are using current funding effectively.iv

          Accrediting organizations are also heavily encouraging institutions to demonstrate that they are

doing something with the data they collect. The problem for institutional researchers and assessment

professionals is that, given sincere efforts to collect, aggregate, and summarize assessment and other

student learning data, there is still little guarantee that program faculty will discuss the data and manage

student learning using the data submitted to them. In sum, measures of student learning might have
high degrees of predictive validity, but little consequential validity.v

         In this paper, I address the question of buy-in. Can an assessment method both measure
student learning and maximize the chance that faculty will use the resulting information for program
improvement? In what follows, I describe a case study of how critical thinking was assessed using a

qualitative method of assessment, the Delphi method. I then discuss the applicability of the method in

assessing student learning, especially with regard to the pressures described above.


                               Case Study: Assessing Critical Thinking

Background

         At a Carnegie doctoral-extensive university in the northeastern United States, the faculty are

charged by the state government to assess critical thinking, which is defined as follows: “Students will

identify, analyze, and evaluate arguments as they occur in their own or other’s [sic] work; and develop well-

reasoned arguments.” No additional wording or specifications were provided, leaving it up to the liberal

arts faculty in composition and oral communications courses, as well as the division of academic affairs, to

define what these terms meant. Assessment of critical thinking was therefore difficult to achieve because few

knew how to proceed beyond the challenges associated with unpacking vague terms such as “analyze” and

“evaluate.” In addition, the institution had a strong cultural predisposition toward qualitative, locally
managed assessments, and against state-mandated standardized tests or other ways of assessing student
learning that did not include faculty as the primary assessors or some degree of qualitative information,
especially in the liberal arts and general education programs of the university. It

was therefore clear that one way of initially conducting assessment of critical thinking as part of the

university’s general education program was to use a discourse-oriented, qualitative technique such as the

Delphi method, described below.



Conceptual Framework

         I indicated above that educational researchers and regional accrediting organizations prefer to see

student learning assessments that demonstrate value added, such as in the form of standardized tests and

other quantitative instruments. Yet also important is the use of assessment information by program and

general education faculty, many of whom are not trained in the area of quantitative analysis.vi And while

discounting quantitative, value-added approaches is ill-advised given the above-described political and

institutional pressures, focus must also be upon the effective and purposive triangulation of assessment

information to (1) reduce error due to over-reliance on one instrument or indicator; and, (2) enhance

opportunities for face validity of assessment results in the eyes of the core audience, in this case program

and general education faculty.

           Suskie’s vision of pedagogical and consequential validity, in which the quality of assessment

information is judged by its effect on teaching and learning based on faculty input, therefore serves as the

conceptual framework for this study.vii This conceptual framework asserts that we must be cognizant of

positivist concerns about validity, reliability, and value added, but must also be concerned about the

legitimacy of the information in the eyes of its core audience. Including qualitative information, and
considering what it has to say about student learning outcomes, is just as important as reporting more
succinct, quantitative information, because doing so provides substantive, contextualized information for
use by faculty.viii



                                                 Methodology

Rationale

           I chose a qualitative method to understand student performance in respect to critical thinking.

Since the purpose of the assessment was, first, to get faculty to discuss the quality of student performance

in respect to critical thinking as defined above and, second, to enable them to compare the findings with
those of a direct, rubric-based observation of student writing on critical thinking, I felt that use of the
Delphi method was most appropriate for our purposes.

           The Delphi method was developed in the 1950s to forecast future developments, based on the

opinions of experts consulted for this express purpose.ix Since then, it has been used in the medical field

to produce generalizations about patient treatment options and outcomes, in teacher education to determine

appropriate pedagogies and curricula, and in several other fields of interest.x The advantage of the method

is that it involves experts who can be relied on for informed opinions about the quality of various areas of

focus. With regard to assessment, the method is a way of gathering faculty feedback after a key event (such

as the submission of final grades, the grading of key assignments such as master’s theses and doctoral

dissertations, etc.), at a distance or in person, and gaining informed opinions about strengths and

weaknesses in student performance, especially when learning outcomes are particularly vague. It is the last

of these reasons—the vague, fairly open-ended critical thinking student learning objectives as provided to

us by the state government—that led us to use this method.

         This was the best approach for us because (1) it included faculty as the primary source of

information about specific assignments and projects students have completed; (2) it generated numerical,

aggregate information enabling us to reduce information so we could understand the comparative weight of

consensus (or non-consensus) about the quality of student performance, using the above-stated student

learning outcomes as specific reference; and, (3) it contained specific information and feedback to be used

for discussion and reflection by specific parties, including the Center for Learning and Teaching, the

Institute for Student Centered Learning, the Faculty Senate and its associated committees, and the

University Libraries. These groups could then act on any information they found significant, thereby

“closing the loop” from assessment, to discussion, to recommendations for action, to the actions

themselves.xi



Procedure

         The Delphi method of evaluation involves at least four steps.xii First, “experts” are identified and
asked to answer a number of open-ended, evaluative questions. Second, these open-ended responses are
listed on a second, closed-ended survey, with an indication of how many respondents stated each. Third,
respondents rate their level of agreement with each statement on a five-point Likert scale, with (1) meaning
“strongly agree” and (5) meaning “strongly disagree.” Fourth, responses with means of 2.0 or lower or
medians of 2.0 or lower, and standard deviations of less than 1.0, are designated items of consensus and
reported as such in an assessment report.xiii (A sketch of these tabulation rules, in code, follows the
numbered steps below.) I explain each of the above in turn:

         1.     Twenty-eight faculty and graduate teaching assistants who taught upper division composition
                and combination composition/oral communication courses were contacted. I explained that
                the purpose of the research was to gather their thoughts on strengths and weaknesses in
                student performance in respect to critical thinking. Thirteen agreed to participate.
         2.     I sent an open-ended survey to each respondent, making sure that respondents were not known
                to one another. The survey read as follows:



                                              Open-Ended Survey Questions

                Based upon the assignments you have graded in student work in the course you taught
                during this last session, what were some specific strengths or weaknesses you have
                observed in student performance in respect to the following?

                •    Developing well-reasoned arguments (i.e., distinguishing fact from opinion,
                     identifying assumptions and reasons in an argument, recognizing the need for
                     additional information of particular types, etc.)
                •    Identifying, analyzing, and evaluating arguments as they occur in their own or
                     others’ work (i.e., finding flaws or gaps in an argument, recognizing separate
                     components of a reasoned argument, identifying assumptions and reasons in an
                     argument, etc.)
                •    Performing the basic operations of personal computer use
                •    Understanding and using basic research techniques (i.e., employing methods of
                     information collection and manipulation, locating and evaluating information from a
                     variety of sources, designing and implementing data-oriented studies for answering
                     research questions, etc.)
                •    Locating, evaluating, and synthesizing information from a variety of sources

               Thirteen instructors responded to this first open-ended survey and supplied numerous
               comments. The comments on each question were then read several times in order to identify
               common themes. To accomplish this task, the analysis focused on phrases used in the
               responses, which were listed and compared for similarity. Phrases that were very similar were
               combined and tallied over several readings of the responses until it was clear that the process
               would yield no further similarities or tallies, a process called constant comparative
               analysis.xiv In this way, the analysis achieved construct validity: the data were directly
               related to the questions at hand, the exhaustive constant comparative process made it highly
               probable that all phrases entered the analysis, and responses to each question were kept
               mutually exclusive of those to the other questions on the survey.xv

          3.   A closed-ended survey was created which, as described above, asked the anonymous
               instructors to evaluate the open-ended comments on a five-point Likert scale of agreement,
               with (1) meaning strongly agree, and (5) meaning strongly disagree. Ten instructors
               completed the online survey, for a response rate of 77%.
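
The following is a minimal sketch, in Python, of the tabulation rules described above; it is not the
code used in this study. The rating vectors are hypothetical, constructed only so that their summary
statistics match three items reported in Tables 1 and 3, and the consensus thresholds are those stated
in the fourth step of the procedure.

from statistics import mean, median, stdev

# Hypothetical second-round data: item text -> Likert ratings from the
# responding instructors (1 = strongly agree ... 5 = strongly disagree).
# The vectors below are illustrative; they were chosen only to reproduce
# the n/mean/median/sd of three items in Tables 1 and 3.
ratings = {
    "Students need to improve in respect to factually supporting "
    "their arguments/opinions": [1, 1, 2, 2, 2, 2, 2, 2, 2, 3],
    "Students use Google (or other search engines) instead of "
    "library resources": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2],
    "Students know how to format citations": [3, 3, 3, 3, 4, 4, 4, 4, 5, 5],
}

def is_consensus(values, agree_cutoff=2.0, sd_cutoff=1.0):
    """Consensus rule from the Procedure section: the mean or the median
    falls at or below the 'agree' point of the scale, and the standard
    deviation is below 1.0 (i.e., raters cluster around agreement)."""
    return ((mean(values) <= agree_cutoff or median(values) <= agree_cutoff)
            and stdev(values) < sd_cutoff)

for item, values in ratings.items():
    verdict = "consensus" if is_consensus(values) else "no consensus"
    print(f"{verdict:12s} n={len(values):2d} mean={mean(values):.2f} "
          f"median={median(values):.2f} sd={stdev(values):.3f}  {item}")

Applied to all second-round items, this rule yields the consensus designations reported in the tables.
The first-round grouping and tallying of phrases (constant comparative analysis) is a human coding task
and is not modeled here.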


Results

The results of the second Delphi round can be found in Tables 1 through 4 at the end of this paper. On

each of these tables, consensus answers are those that had mean responses of 2.0 (1 = strongly agree with

the statement; 2 = agree with the statement) or lower, and standard deviation values of less than 1.0. In

respect to each of the state-mandated student learning outcomes for critical thinking, the results are

summarized in the following boxes:


               Critical thinking outcome 1: Developing well-reasoned arguments

      In total, faculty made eighteen comments. When asked to rate these comments
      according to degree of agreement, there was moderate consensus that students were
      good at understanding the general concepts involved in making correct arguments
      and at applying different types of information to different kinds of applications.
      There was also moderate consensus that students needed to improve in factually
      supporting their arguments or opinions.




         Critical thinking outcome 2: Identifying, analyzing, and evaluating arguments

      Faculty made a total of thirty comments. There was moderate consensus that students
      know how to organize their thoughts in essays and written assignments, that they can
      identify gaps in theoretical frameworks outlined in texts and other readings, challenge
      each other’s opinions, and apply their own research and interests in their writing.
      There was also moderate consensus that students had a difficult time evaluating
      arguments, engaging in self-evaluation of the quality of their work, and clearly
      communicating complex ideas.



       Critical thinking outcome 3: Understanding and using basic research techniques

      Faculty made seventeen statements. There was moderate consensus that students had
      done little of their own research prior to taking a 300-level composition or
      combination composition/oral communication course, and that they had little
      practice using the library’s online databases. There was a high level of consensus
      that there is a great deal of variance from student to student in their
      understanding of basic research techniques and their use of them in 300-level assignments.
      There was also a good deal of consensus that student performance in respect to using
      basic research techniques was dependent on how much time students put into their
      papers and projects in these courses. The highest degree of consensus was that
      students use Google or other search engines instead of library resources.




       Critical thinking outcome 4: Locating, Evaluating, and Synthesizing Information

      Faculty made thirty statements on this student learning outcome. There was a
      moderate level of consensus that students needed to improve in their development of
      acceptable bibliographies, and that evaluating and synthesizing information was
      underdeveloped in students taking 300-level composition and combination
      composition and oral communication courses. There was a high level of consensus
      that students rely too much on Web pages as sources for the papers they write in
      these courses.



Although the sample size for several of the questions was quite small, it was interesting to note that, in

several cases, those responses that had originally received only one mention in the initial open-ended

survey received a high degree of agreement and consensus when all respondents were asked to answer the

second follow-up closed-ended survey. For example, Table 2 indicates that only one person mentioned that
“students are better at picking out the weaknesses in the essays they read than in evaluating their own
essays,” yet the item received a mean of 1.63 and a standard deviation of .518 in the follow-up survey.
Table 3 shows that only one respondent mentioned that “students use Google (or other search engines)
instead of library resources,” and yet this was one of the four highest items of consensus on the table in the
follow-up survey.

Interestingly, when the results were presented to the faculty senate organization responsible for assessment

oversight, this item caused the most discussion; all those present—experienced in one form or another in

teaching critical thinking courses—wholeheartedly agreed with this statement, and most moved to

recommend actions to counter such a trend.



Use of Data for Program Improvement

         A central question for this study is, “To what extent did the assessment method lead to buy-in?”

Although aggregate measures are difficult to develop, use of the Delphi method did accomplish a number

of feats necessary for faculty buy-in, and for subsequent faculty-based discussion and recommendations,

developed to further enhance student learning.

         First, once the Delphi data were aggregated, it became important to summarize them in ways that

faculty and key faculty senate groups could understand. There is no question that if I had presented the data
as laid out in the tables at the end of this paper, faculty would not have understood what the tables had to
say, given that many are in non-quantitative, non-scientific disciplines.

         Second, it became clear that contextualizing the data by focusing on what the Delphi results had to

say about faculty consensus was very helpful. In other words, it proved important to take the four points
listed above, turn them into points of reference (or “talking points”), and then ask faculty members, or
members of the faculty senate responsible for assessment oversight (in this case, for the general education
outcomes relating to critical thinking), to make recommendations geared toward improving student
performance on these points of reference.

         Contextualizing the data via use of the Delphi method helped spark further specification of the

assessment process. Interestingly (and as mentioned above), much of the information contained in the

Delphi study came as no surprise to faculty and those who played a part in overseeing assessment.

However, because the information had been systematically collected and there were clear attempts to

survey faculty about their thoughts on students’ strengths and weaknesses with regard to critical thinking,

the results were better legitimized than if a simple standardized test such as the Collegiate Learning

Assessment (CLA) or MAAP had been used. Because of this heightened degree of legitimacy, the

assessment staff was better able to articulate an already-existing, but rarely followed process at the

university, outlined in Figure 1.

         Although the Delphi information in and of itself would have been insufficient from a positivist,
value-added perspective, faculty and members of the faculty senate appeared more comfortable using several

sources of assessment information. The assessment staff was able to communicate concerns about students’

acquiring critical thinking skills during their time at the university through the use of multiple assessments,

encourage discussion of these concerns at the faculty senate level, focus on recommendations, and then

bring these recommendations to various university organizations for implementation, including first-year

programs, the university libraries, and the university center for learning and teaching.


Figure 1: Triangulating assessment data, moving from discussion to action

[Flow diagram] Four sources of assessment data (critical thinking rubric scores, a faculty survey, a
library survey, and the Delphi survey on critical thinking) feed into a faculty-based assessment
organization, which in turn directs recommendations to the Center for Learning and Teaching, first-year
programs, the University Libraries, and the Undergraduate Curriculum Committee.

Use of the Delphi method therefore had face validity in that it appeared legitimate to ask faculty teaching

300-level critical thinking courses about the quality of student learning using the state-mandated student

learning goals relevant to critical thinking. In addition, faculty concerns that standardized tests such as the
Collegiate Learning Assessment or MAAP produce only vague information about what knowledge students
are not acquiring were somewhat alleviated by the contextualized, specific information provided through the
Delphi study. The method therefore also provided consequential validity in that it lent itself toward

discussion and action by faculty groups responsible for providing oversight for critical thinking and other

general education outcomes.

                                                  Discussion

         Use of the Delphi method has its weaknesses. For example, in this study, the number of

respondents was low, and the response rate for the second closed-ended survey was lower than desired.

Moreover, the Delphi method did not provide baseline or value-added data. For example, it would be

difficult to follow up on this study because those who responded to the original surveys would be difficult

to recruit a second time, if only because teaching assignments change, and thus they may not teach the

same courses the next time such a Delphi study is repeated. It is also difficult to organize a study with

control and experimental groups, although it would be interesting to see if faculty teaching critical thinking

courses say similar things to those teaching upper division courses in programs or majors.

         That said, use of the Delphi method provided contextualized, qualitative information that can be

used to help interpret the results of other assessments (triangulation). For instance, consider a hypothetical
circumstance in which a university has randomly selected an appropriate number of students to take the
MAAP or CLA, and receives scores and sub-scores indicating that students in aggregate score at the 50th
percentile on all measures. The university might be comforted to know that it is not doing terribly, or

might be alarmed that students are not performing better than the national average, but the information

provided by CLA or MAAP can really only present a vague picture of where the university stands in

important areas of emphasis such as critical thinking, analytical thinking, or writing. Use of a qualitative

method such as the Delphi method, when carefully administered, can provide contextualized information

that might be used to interpret standardized test scores or to focus on what faculty can do to further enhance

student performance.

         Most important, use of a qualitative technique such as the Delphi method enables information to

be both triangulated and contextualized so that faculty are enabled to discuss and act upon results. In the

case of this study, faculty focused in the beginning almost solely on the Delphi results; the assessment

office needed to encourage them to consider other, more quantitative information to balance the final

assessment of the quality of student learning of critical thinking objectives. Without doubt, the technique

enabled initial buy-in so that assessment results could be balanced between quantitative and qualitative

results.


                                                 References

Allen, J. & Bresciani, M. J. (2003). Public institutions, public challenges: On the transparency of

         assessment results. Change, January/February, 21-23.


Chioncel, N. E., Van Der Veen, Wildemeersch, D., & Jarvis, P. (2003). The validity and reliability of focus

         groups as a research method in adult education. International Journal of Lifelong Education, 22,

         495-517.


Council for Higher Education Accreditation. (2007). CHEA board issues resolution of support: Reiterates

         Department of Education mandates concerning accreditation and learning outcomes. Retrieved

         June 1, 2007, from, http://www.chea.org/pdf/CHEA_Press_Release_May_2007.pdf


Educational Testing Service. (2006). A culture of evidence: Post-secondary assessment and learning

         outcomes. Princeton, NJ: Author.


Exley, C., Sim, J., Reid, N. G., Jackson, S., & West, N. (1996). Attitudes and beliefs within the Sikh

         community regarding organ donation: A pilot study. Social Science & Medicine, 43, 23-28.


Geertz, C. (1973). Thick description: Toward an interpretive theory of culture. In The interpretation of

         cultures: Selected essays (pp. 3-30). New York: Basic Books.


Hunt, D. P., Haidet, P., Coverdale, J. H., & Richards, B. (2002). The effect of using team learning in an

         evidence-based medicine course for medical students. Teaching & Learning in Medicine, 15, 131-

         139.


Linkon, S. L. (2005). How can assessment work for us? Academe, 91(4), 28.


Lorenzetti, J. P. (2004). Transformative assessment in higher education. Distance Education Report, March

         15, 2004, 3-7.

Meyer, M. K., Conklin, M. T., & Turnage, C. (2002). School foodservice administrators’ perceptions of the

        school nutrition environment in middle grades. Topics in Clinical Nursing, 17, 47-54.


Mundhenk, R. T. (2004). Communities of assessment. Change, November/December, 36-41.


North Central Association of Colleges and Schools. (2003). Assessment of student academic achievement:

        Assessment culture matrix. Retrieved May 19, 2005, from

        www.ncahigherlearningcommission.org/resources/assessment/assessmatrix03.pdf


Robertson, M., Line, M., Jones, S., & Thomas, S. (2000). International students, learning environments and

        perceptions: A case study using the Delphi technique. Higher Education Research &

        Development, 19, 89-102.


Schroeder, C., & Neil, R. M. (1992). Focus groups: A humanistic means of evaluating an HIV/AIDS

        programme based on caring theory. Journal of Clinical Nursing, 1, 265-274.


Shavelson, R. J., & Huang, L. (2003). Responding responsibly to the frenzy to assess learning in higher

        education. Change, January/February, 10-19.


Shively, J. (1992). Cowboys and Indians: Perceptions of Western films among American Indians and

        Anglos. American Sociological Review, 57, 725-734.


United States Department of Education. (2006). A test of leadership: Charting the future of U.S. higher

         education. Retrieved June 1, 2007, from http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf


Waiters, E. D., Treno, A. J., & Grube, J. W. (2002). Alcohol advertising and youth: A focus group analysis

        of what young people find appealing in alcohol advertising. Contemporary Drug Problems, 28,

        695-718.


Winslow, B. W. (2003). Family caregivers’ experiences with community services: A qualitative analysis.

Public Health Nursing, 20, 341-348.

Wong, S. Y., & Wong, T. K. S. (2003). An exploratory study on needs of parents of adults with a severe

        learning disability in a residential setting. Issues in Mental Health Nursing, 24, 795-811.




                                                 Endnotes


                                               Delphi Method Procedure

                             1. Recruit participants: focus on those who have graded similar tasks and
                                are connected with the same general education outcome.

                             2. Design first survey: write open-ended questions about student learning
                                outcomes, asking respondents to write down strengths and weaknesses.

                             3. Content analysis: use the “constant comparative method” (grouping like
                                responses) and tally like responses and themes.

                             4. Design second survey: report the tallied responses (see above), listed
                                from top to bottom in order of frequency, and ask respondents to rate
                                each on a 4- or 5-point Likert scale.

                             5. Analysis: report means/medians and standard deviations. Usually,
                                exclude comments with sd >= 1.0 and those rated below “agree,”
                                according to the specifications of the Likert scale.

                             6. Triangulate: use with other student learning assessment information,
                                including standardized test scores, etc.

                             7. Report/discuss: keep it simple, reporting consensus items, but engage
                                the key audience in discussion, using the Delphi results to contextualize
                                other provided assessment information.




i
     “A test of leadership: Charting the future of U.S. higher education,” U.S. Department of Education,
http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf, downloaded June 1, 2007. Comparability of data, in the
form of standardized tests and the like, has apparently been the greatest issue. That said, associations linked to higher education,
such as the Council for Higher Education Accreditation (CHEA), are adamant that assessment be institution-specific and that
identifying standards applicable to all institutions is inadvisable, given that institutions have different mission and value statements;
see “CHEA board issues resolution of support: Reiterates Department of Education mandates concerning accreditation and learning
outcomes,” http://www.chea.org/pdf/CHEA_Press_Release_May_2007.pdf, downloaded June 1, 2007.
ii
 “Value added assessment: Accountability’s new frontier,” Perspectives, Spring 2006, American
Association of State Colleges and Universities, http://aascu.org/pdf/06_perspectives.pdf, downloaded June
1, 2007.


iii
    “Trade-School Unit of Major Accreditor Faces Loss of Authority, as Talks on Rules Changes Are Set to Resume,” Chronicle of
Higher Education, May 31, 2007, downloaded on May 31, 2007.
iv
    “Assessing student learning and institutional effectiveness,” Middle States Association of Colleges and Schools,
http://www.msche.org/publications/Assessment_Expectations051222081842.pdf, downloaded May 31, 2007.
v
    Suskie, Linda, “What is ‘Good’ Assessment? A New Model for Fulfilling Accreditation Expectations,” notes taken at the
Assessment Institute, IUPUI, October 30, 2006.

vi
      Ibid.
vii
       Ibid.
viii
     Ibid. Also see Sean McKitrick, “The Politics of Assessment: The Successes and Risks in Using the Delphi Method to Assess
Critical Thinking,” Third International Congress of Qualitative Inquiry, University of Illinois, Urbana-Champaign, May 2-5, 2007.
ix
    Hunt, D. P., Haidet, P., Coverdale, J. H., & Richards, B. (2002). The effect of using team learning in an evidence-based medicine
course for medical students. Teaching & Learning in Medicine, 15, 131-139; Robertson, M., Line, M., Jones, S., & Thomas, S. (2000).
International students, learning environments and perceptions: A case study using the Delphi technique. Higher Education Research &
Development, 19, 89-102; Winslow, B. W. (2003). Family caregivers’ experiences with community services: A qualitative analysis.
Public Health Nursing, 20, 341-348.
x
    Leape, L. L., et al., Coronary Angiography: Ratings of Appropriateness and Necessity by a Canadian Panel (2003), RAND
Corporation; Fitch, et al., The RAND/UCLA Appropriateness Method User’s Manual (2001), RAND Corporation.
xi
     Maxim, Bruce R., “Closing the loop: Assessment and accreditation,” http://delivery.acm.org/10.1145/1050000/1040232/p7-
maxim.pdf?key1=1040232&key2=9988070811&coll=GUIDE&dl=GUIDE&CFID=20204477&CFTOKEN=32209894, downloaded
June 1, 2007.
xii
     See Hasson, Keeney, & McKenna
xiii
     Hasson, F., S. Keeney, and H. McKenna, “Research guidelines for the Delphi survey technique,” Journal of Advanced Nursing,
32(4), 2000, 1008-1015; Powell, Catherine, “Early indicators of child abuse and neglect: A multi-professional Delphi study,” Child
Abuse Review, 12, 2003, 25-40; Broomfield, D., and G. M. Humphries, “Using the Delphi technique to identify the cancer education
requirements of general practitioners,” Medical Education, 35, 2001, 928-937; Richardson, J., “Developing and evaluating
complementary therapy services,” Journal of Alternative and Complementary Medicine, 7(3), 2001, 253-260.
xiv
     Hein, S., D. C. Lustig, and A. Uruk, “Consumers’ recommendations to improve satisfaction with rehabilitation services,”
Rehabilitation Counseling Bulletin, 49(1), 2005, 29-39.
xv
     Kondracki, N. L., N. S. Wellman, and D. R. Amundson, “Content analysis: Review of methods and their application in nutrition
education,” Journal of Nutrition Education and Behavior, 34(4), 2002, 224-230.




Table 1: Developing Well-Reasoned Arguments
(1=strongly agree; 2=agree; 3=neither agree nor disagree; 4=disagree; 5=strongly
disagree)


Open-ended responses                                              N      Mean    Median    Standard
                                                                                           Deviation
Students need to improve in respect to factually                  10     1.90     2.00       .568
supporting their arguments/opinions (7 responses)
Students simply report what they learn, rather than               10     2.30     2.00      1.252
using/applying what they learn in their assignments (4
responses)
Students are able to develop well-reasoned arguments (4           10     2.80     3.00       .789
responses)
Students had difficulty developing an argument (2                 10     3.00     3.00       .943
responses)
Students are good at applying general concepts to                  9     2.67     2.00      1.00
specific examples or cases (2 responses)
Most students recognize the need for supporting their             10     2.80     2.50       .919
claims with additional information (2 responses)
Students struggled with taking (the) information and               8     2.38     2.00      1.061
incorporating it into well-written papers
Students had a hard time writing hypotheses and                    6     2.33     2.50      1.211
rationale for the hypotheses
Students showed a weakness in analyzing research                   8     2.75     2.50       .886
Students were well prepared for applying the skills of            10     3.20     3.50      1.135
critical thinking
Students were well prepared for applying the skills of             8     2.88     3.00       .835
questioning
Students were well prepared for applying the skills of             9     2.89     2.00      1.167
challenging the status quo
The majority of students appear to have a general                 10     2.10     2.00       .994
interest in learning
Students are able to identify types of information                 5     2.40     2.00       .548
systems used for different kinds of applications
Students are good at analyzing correctly                           9     2.89     3.00       .782
Students were not able to combine findings from                    9     3.11     3.00       .782
different disciplines
Students who had taken one or more English classes                 5     1.80     2.00       .837
demonstrated a better understanding of how to analyze a
text, versus merely summarizing a text they had read




Table 2: Identifying, Analyzing, and Evaluating Arguments
(1=strongly agree; 2=agree; 3=neither agree nor disagree; 4=disagree; 5=strongly
disagree)

Open-ended Response:                                                   N    Mean    Median    Standard
                                                                                              Deviation
Students are adept at evaluating arguments (2 responses)               9    3.00     3.00       .866
Students are adept at analyzing arguments (2 responses)                10   3.10     3.00       .738
The skills that come with analyzing arguments are just                 10   2.70     2.50      1.059
developing for students in the course(s) I taught
Students’ skills are just developing in respect to evaluating          9    2.56     2.00       .726
arguments
Older students were able to evaluate arguments more easily             9    3.00     4.00      1.732
It took a lot of effort for students to realize that they needed to    9    2.67     3.00      1.225
provide actual substantial critiques
Students were at a rudimentary stage in learning to write              10   2.80     2.50      1.135
Students had a hard time with critical thinking                        10   2.60     2.00       .843
Students could write decent expository papers, but getting them        7    2.14     2.00      1.069
to do further analysis was somewhat akin to pulling teeth
Students figured they would pass without putting much effort           10   2.10     2.00      1.101
into performing further analysis
Students had difficulty analyzing others’ arguments                    8    2.63     2.50       .744
Students showed a real keen sense of craft in their research           8    3.50     4.00       .756
papers, using research to back up their claims
Students took issue with the critics they cited, which showed a        8    3.25     3.00       .886
true understanding of the subjects/issues
Students challenged each other’s opinions                              7    2.29     2.00       .448
Students challenged popular opinion                                    8    2.63     2.50       .744
Students were able to identify gaps in theoretical frameworks          7    2.71     2.00       .951
outlined in the texts and other readings
Students were able to apply their own research and interests in        7    2.71     2.00       .951
their writing
There is not enough emphasis on critical thinking at this              10   2.10     2.00       .994
university
In general, students exhibit an acceptable level of skill in           9    2.78     3.00      1.093
respect to identifying arguments
Students are better at picking out the weaknesses in the essays        8    1.63     2.00       .518
they read than in evaluating their own essays
Students are excellent evaluators of their peers’ work                 7    3.14     3.00       .690
Students do not have an ability to offer critical feedback             7    3.86     4.00       .690
Students are able to understand arguments                              10   2.20     2.00      1.033
Students have trouble getting beyond the most superficial              10   2.90     3.00       .876
points of an argument
Students do not read assigned materials                                10   3.10     4.00      1.197
Students are proficient at pointing out a weak thesis statement        8    2.75     3.00       .707
Students have difficulty creating an argument and developing it        8    2.13     2.00       .991
in essays
Students understand how to organize essays                             8    2.50     2.00       .756
Students have been able to find the weaknesses in their own work       10   2.70     2.50      1.059
When writing about particularly complex ideas, students have           8    2.63     2.00       .916
trouble organizing their paragraphs; they’ll allow a paragraph to
go on for a page or more because they are not sure how to break
it up




Table 3: Understanding and Using Basic Research Techniques
(1=strongly agree; 2=agree; 3=neither agree nor disagree; 4=disagree; 5=strongly
disagree)

Open-Ended Response:                                                     N          Mean          Median    Standard
                                                                                                            Deviation
Students’ basic understanding of basic research techniques varies        10         1.40          1.00      .516
greatly from student to student (2 responses)
Students’ ability to use basic research techniques varies from student   10         1.40          1.00      .516
to student (2 responses)
Students have some difficulty correctly citing sources (2 responses)     10         1.30          1.00      .483
Students do not care much about understanding basic research             10         2.60          2.00      1.075
techniques
Student performance depends on how much time they put into their         9          1.78          2.00      .833
papers and projects
Research sessions with librarians help students in their research        5          2.20          2.00      1.225


Students need a refresher course on using infoLINK                     8           2.00     2.00    .756
Students rarely use database searches                                  9           2.67     2.00    1.118
Students use Google (or other search engines) instead of library       10          1.40     1.00    .516
resources
Students are able to gather research data from a variety of sources    10          2.80     2.50    .919
Students are able to link data with their opinions to form cogent      9           2.78     3.00    .667
arguments
In general, students have done little research                         8           2.38     2.00    .916
Students need to learn more about how to evaluate sources              10          1.60     1.50    .699
Students run into plagiarism problems because they do not understand   10          2.30     2.00    1.252
how to format citations
Students know how to format citations                                  10          3.80     4.00    .789
Students have little practice using the library’s online databases     10          2.10     2.00    .738
Students can track down book reviews, interviews with authors, etc.    7           3.29     4.00    .951




Table 4: Locating, Evaluating, and Synthesizing Information
(1=strongly agree; 2=agree; 3=neither agree nor disagree; 4=disagree; 5=strongly
disagree)

Open-Ended Responses:                                          N      Mean    Median    Standard
                                                                                        Deviation
Students need to improve in respect to synthesizing ideas      10     1.80     2.00       .422
(4 responses)
Students are able to locate information                        10     2.60     2.50       .699
Students are able to evaluate information                      10     2.90     3.00       .738
Students’ struggles with the synthesis of information are       9     3.33     4.00      1.658
limited to their writing
Students would be good at all of these if they devoted         10     3.00     3.00      1.247
sufficient time
Having research sessions with librarians was helpful in         5     2.00     2.00      1.225
respect to students’ being able to locate information for
their research
Students are able to gather research data from a variety of    10     2.80     2.00      1.033
sources
Students are able to link the data they research with their    10     2.80     2.50       .919
opinions to form acceptable arguments
Locating appropriate information was underdeveloped in         10     2.30     2.00      1.059
students
Evaluating information is underdeveloped in students           10     1.80     2.00       .919
Students performed satisfactorily in respect to locating a     10     2.90     3.00       .738
variety of source materials
Students struggle with research-based papers                    9     2.22     2.00       .833
Students feel they need to include everything they find,        9     2.11     2.00      1.167
to the exclusion of their own ideas, to the extent that the
paper becomes a research dump instead of an argument of
their own
Students were able to develop acceptable bibliographies        10     3.60     4.00       .966
Students were able to perform searches using the                8     2.50     2.00      1.069
university’s library catalog
Students were able to evaluate internet sources                 9     3.44     4.00      1.130
Students rely too much on Web pages as sources                 10     1.30     1.00       .483
Students seem to be unaware that a serious research            10     2.30     2.00      1.252
paper cannot be completed using only internet sources
Students are unfamiliar with locating and using journal        10     2.20     2.00      1.135
articles
In general, students are unfamiliar with library resources     10     2.30     2.00      1.059



