WHAT GETS MEASURED GETS DONE: EVALUATING THE ASSESSMENT RESULTS OF ACCREDITED INFORMATION SYSTEMS PROGRAMS
Bruce White, Quinnipiac University, Hamden, Connecticut, USA, bruce.white@quinnipiac.edu
Richard V. McCarthy, Quinnipiac University, Hamden, Connecticut, USA, richard.mccarthy@quinnipiac.edu

ABSTRACT
In 2002, the Computing Accreditation Commission of the Computing Sciences Accrediting Board of ABET began accrediting programs in Information Systems. Currently, ten campuses and eleven programs have received accreditation by ABET (one campus has both a Bachelor of Arts and a Bachelor of Science in Information Systems). In this paper, the authors present an overview of the assessment criteria that are specified in the accreditation process and call attention to the critical role that assessment now plays in information systems programs. Qualitative research was conducted to identify the critical success factors of assessment processes for accredited information systems programs, and the results are presented herein.

Keywords: IS assessment, IS accreditation, IS curriculum

I. INTRODUCTION
Accreditation is an indication of quality and has received increasing emphasis, not only at the university level but also at the individual program level. In 2002, the Computing Accreditation Commission of the Computing Sciences Accrediting Board (CSAB) of ABET (formerly known as the Accreditation Board for Engineering and Technology) began accrediting programs in Information Systems. Currently, ten campuses and eleven programs have received accreditation by ABET (one campus has both a Bachelor of Arts and a Bachelor of Science in Information Systems). In this paper, the authors examine the research question: What are the critical success factors of assessment programs within accredited information systems programs? Detailed interviews were conducted to determine the assessment methods, and the results are presented herein. We present an overview of accreditation and assessment to demonstrate the increased importance it has received and to provide grounding for the importance of this research.

II. ACCREDITATION
Accreditation is a common aspect of higher education within the United States. Almost every university in the United States is accredited by a regional accreditation agency. The regional accreditation institutions are:

• Middle States Association of Colleges and Schools
• New England Association of Schools and Colleges
• North Central Association of Colleges and Schools
• Northwest Commission on Colleges and Universities
• Southern Association of Colleges and Schools
• Western Association of Schools and Colleges

NEASC (New England Association of Schools and Colleges) states: “Accreditation is a status granted to an educational institution or a program that has been found to meet or exceed stated criteria of educational quality. In the United States, accreditation is voluntarily sought by institutions and programs and is conferred by non-governmental bodies.” [NEASC Role and Value of Accreditation, 2005] As accreditation is voluntary, there are institutions that are not accredited. But, as stated by the Michigan Department of Civil Service, “Degrees from these institutions (non-accredited) will not be accepted by the Department of Civil Service as satisfying any educational requirements indicated on job specifications.” [Michigan Hiring, 2005] Accreditation implies a stamp of approval: the accredited institution has undergone a rigorous analysis and review and has met or exceeded the stated criteria.

III. ASSESSMENT
As a requirement of accreditation, the agencies that grant accreditation require on-going assessment. Assessment has been informally described as feedback, but it goes further than just feedback. NEASC (New England Association of Schools and Colleges), in its accreditation standards, reports the following regarding assessment: “The Institution implements a systematic and broad-based approach to the assessment of student learning focused on educational improvement through understanding what and how students are learning through their academic program and, as appropriate, through experiences outside the classroom. This approach is based on a clear statement or statements of what students are expected to gain, achieve, demonstrate, or know by the time they complete their academic program.” [NEASC Accreditation Standards, 2005]

Accreditations are generally done campus-wide, across all programs and academic units. Generally speaking, in the United States, a college or university that is not accredited by one of these agencies would not be able to receive federal funds (or federally underwritten student funds). Many employer-sponsored educational reimbursement policies exclude programs at colleges or universities that are not accredited. Further, an institution that is not accredited is often viewed as an inadequate institution.

The North Central Association’s Higher Learning Commission has a matrix of assessment with three stages:

1) Beginning Implementation of Assessment Programs
2) Making Progress in Implementing Assessment Programs
3) Maturing Stages of Continuous Improvement [NCA Assessment, 2005]

Within these three stages are seven sub-units:

1) Collective / Shared Values (i.e., what is the purpose of the institution)
2) Mission (the mission statements reflect the educational goals)
3) Faculty (faculty have developed measurable goals for the program’s educational goals)
4) Administration / Board (oversees the assessment processes)
5) Students (are knowledgeable about the assessment processes and programs)
6) Resources (the institution has a budget to support on-going assessment activities)
7) Structures (there is an ongoing, regular, annual assessment process) [NCA Assessment, 2005]

One might ask, “Why do assessments?” Some responses include the following. Southern Illinois University at Edwardsville responded by stating: “Two reasons for "doing" assessment come to mind rather quickly. First, assessment is what we faculty members can do in order to demonstrate to ourselves that we actually do what we say we do. It is our source of in-process feedback. As opposed to grades, assessment decomposes the curriculum (or an assignment, class, or course) into component parts and makes those parts visible. Second, assessment satisfies the demands for accountability by external agencies. Physicians, surgeons, lawyers, and nurses all practice their professions daily in front of their peers. They are constantly subject to peer review and feedback. Professors are perhaps the only professionals who habitually isolate themselves from peers behind closed [classroom] doors, there to practice the major activity for which they receive payment. Given the immense costs of higher education, if we the faculty don't use assessment to provide accountability, surely someone else will do it for us.” [Southern Illinois University Assessment, 2005]

AACSB (The Association to Advance Collegiate Schools of Business) answered that same question this way: “Over the past decade, mounting demands on educators for accountability have increased interest in the
assessment of student learning. Institutions at all educational levels now are often required to prove to legislatures that students are indeed learning what educators claim they are teaching. In response, accreditation agencies, including regional assessment organizations (e.g., North Central Association of Colleges and Schools) and many professional accreditation agencies (including AACSB), also are placing a higher priority on assessment. This trend is gaining momentum–demands for assessments are here, and are not expected to abate any time soon.” [AACSB Assessment Overview, 2005]

ABET, the organization that accredits Information Systems programs, has a solid history in the assessment arena. “In 1997, following nearly a decade of development, ABET adopted Engineering Criteria 2000 (EC2000), considered at the time a revolutionary approach to accreditation criteria. The revolution of EC2000 was its focus on what is learned rather than what is taught. At its core was the call for a continuous improvement process informed by the specific mission and goals of individual institutions and programs. Lacking the inflexibility of earlier accreditation criteria, EC2000 meant that ABET could enable program innovation rather than stifling it, as well as encourage new assessment processes and subsequent program improvement.” [ABET History, 2005]

Assessment is one aspect of continuous quality improvement that is found in other quality programs such as ISO 9000, the Malcolm Baldrige award, Total Quality Management, and others. AACSB adds: “Although assessment is becoming a mandate for many universities in general, and schools of business in particular, there are other reasons to embrace it. Just as excellent businesses must carefully measure the quality of their outputs, so should excellent business schools. Our ‘output’ is not teaching; it is, in fact, student learning. Thus, assessment programs need to shift their focus from ‘what we teach’ to ‘what they have learned.’ The goal of assessment is improved student learning, and the data we gather can play a critical role in improving curricular programs.” [AACSB Assessment Overview, 2005]

Palomba and Banta [Palomba and Banta, 1999], in their work Assessment Essentials, stated: “Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.”

IV. ACADEMIC UNIT ACCREDITATION
A second level of accreditation applies to specific academic units. For example, AACSB (Association to Advance Collegiate Schools of Business) accredits business schools; there are 494 programs accredited by AACSB. [AACSB Members, 2005] NCATE (National Council for Accreditation of Teacher Education) accredits teaching programs, with over 500 programs accredited. [NCATE Accredited, 2005] NLNAC (National League for Nursing Accrediting Commission) accredits nursing programs – with over 1,500 programs (including community colleges and hospital-specific programs). [National League for Nursing Accrediting Commission, 2005]

The third level of accreditation is for specific programs. These programs are generally housed in an academic unit, such as a Nursing program within a Health Sciences school or an Electrical Engineering program within an Engineering school.

V. ABET INFORMATION SYSTEMS ACCREDITATION
ABET is an organization that has traditionally accredited engineering programs. ABET originally stood for “Accreditation Board for Engineering and Technology,” although that expansion of the acronym is not currently used. In 1986, the Computing Sciences Accrediting Board (CSAB) began accrediting computer science programs. In 2001, ABET and CSAB merged. As of summer 2004, approximately 250 campuses had their computer science programs accredited. In 2002, ABET accredited Pace University as the first information systems (IS) program to receive such accreditation. [ABET Accredited, 2005] Accreditation of both Computer Science and Information Systems falls under the Computing Accreditation Commission
(CAC). The CAC is aligned with the CSAB, which functions as a member society of ABET. The CSAB is currently made up of 4 representatives from IEEE, 4 representatives from ACM, and 1 representative from AIS. [Computing Sciences Accrediting Board, 2005]

The Association for Information Systems (AIS) stepped into the accreditation of computing programs under ABET / CSAB in 2001. Joe Valacich, the AIS representative to the CSAB Board of Directors at that time, noted in a November 13, 2001 report to the AIS executive committee that “IS Accreditation is moving forward.” [Valacich, 2001] This report, written prior to the first accreditation of IS programs, suggested that there were two options for accreditation – AACSB and ABET. But AACSB only accredits business programs and accounting programs and was not looking at specific accreditation for IS programs. He also indicated that the push for accreditation of IS programs was largely coming from programs outside AACSB business schools – frequently in academic units where both computer science programs and information systems programs resided. Many of these schools already had their computer science programs accredited by ABET, but had no options for accreditation of their information systems programs. [Valacich, 2001] Of the currently accredited programs, only Virginia Commonwealth University has an Information Systems program within a School (or College) of Business that is AACSB accredited.

Since 2002, ten campuses have had their Information Systems programs accredited. These schools include:

• Drexel University, Philadelphia, Pennsylvania
• Illinois State University, Normal, Illinois
• Kennesaw State University, Kennesaw, Georgia
• New Jersey Institute of Technology, Newark, NJ
• Pace University, New York, New York
• Robert Morris University, Moon Township, Pennsylvania
• University of Nebraska-Omaha, Omaha, Nebraska
• University of North Florida, Jacksonville, Florida
• University of South Alabama, Mobile, Alabama
• Virginia Commonwealth University, Richmond, Virginia

The criteria for accrediting Information Systems programs (2005-2006) are composed of eight standards:

I. Objectives and Assessments
II. Students
III. Faculty
IV. Curriculum
V. Technology Infrastructure
VI. Institutional Support and Financial Resources
VII. Program Delivery
VIII. Institutional Facilities

VI. ASSESSMENT CRITERIA
The criteria for Standard I, “Objectives and Assessments,” are given below:

“Intent

The program has documented educational objectives that are consistent with the mission of the institution. The program has in place processes to regularly assess its progress against its objectives and uses the results of the assessments to identify program improvements and to modify the program’s objectives.

Standards

I-1. The program must have documented educational objectives.
I-2. The program’s objectives must include expected outcomes for graduating students.
I-3. Mechanisms must be in place to periodically review the program and the courses.
I-4. The results of the program’s assessment must be used to help identify and implement program improvement.
I-5. The results of the program’s review and the actions must be documented.” [ABET Standards, 2005]

This is the first criterion specified for IS accreditation under the ABET standards. There are five standards, as listed above, relating to objectives and assessment. The last three standards within this first criterion for accreditation become the focus of this study.

Before we investigate the specifics of the ABET accreditation assessment criteria, let’s look at the process of developing an assessment plan. Gloria Rogers [ABET Rogers, 2005], who recently joined the ABET staff as an assessment expert, has the following steps for developing an assessment plan:

1) Performance Objectives Established
2) Learning Outcomes Developed
3) Performance Criteria Defined
4) Curriculum Aligned with Outcomes
5) Assessment Method(s) Chosen
6) Feedback Loop Closed

Rogers’s assessment plan is shown graphically in Figure 1. [Rogers, 2005]

Figure 1. Assessment Planning Flow Chart
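To make these steps concrete, the sketch below shows one way a program might record the alignment of performance objectives, learning outcomes, performance criteria, courses, and assessment methods, and then check which outcomes still lack an assessment method before the feedback loop can be closed. This is a minimal illustrative sketch in Python, not part of Rogers’s or ABET’s materials; the class, field names, and example data are all assumptions.

from dataclasses import dataclass, field

@dataclass
class Outcome:
    """A learning outcome tied to a performance objective (illustrative only)."""
    objective: str                                        # performance objective it supports
    statement: str                                        # what students should be able to do
    criteria: list[str] = field(default_factory=list)     # performance criteria
    courses: list[str] = field(default_factory=list)      # curriculum alignment
    methods: list[str] = field(default_factory=list)      # chosen assessment methods

def unassessed(outcomes):
    """Outcomes that still lack an assessment method, i.e., the loop is not yet closed."""
    return [o.statement for o in outcomes if not o.methods]

# Hypothetical example data, invented for illustration.
plan = [
    Outcome("Prepare graduates for IS practice",
            "Design and query a relational database",
            criteria=["Normalizes to 3NF", "Writes multi-table SQL queries"],
            courses=["IS 310 Database Systems"],
            methods=["Locally developed exam", "Portfolio"]),
    Outcome("Prepare graduates for IS practice",
            "Communicate effectively with project stakeholders",
            criteria=["Delivers a clear project presentation"],
            courses=["IS 480 Capstone Project"]),
]

if __name__ == "__main__":
    print("Outcomes still needing an assessment method:", unassessed(plan))

In practice such a record is usually kept in a curriculum map or spreadsheet; the point here is only the structure of the mapping from objectives through outcomes to assessment methods.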

VII. EXAMINATION OF ABET ASSESSMENT STANDARD
I-3. Mechanisms must be in place to periodically review the program and the courses.

The accreditation guidelines do not state what the mechanisms must be, but that they must exist and that they must be performed periodically. Rogers [Rogers, 2005] suggests the following assessment methods:

Table 1. Assessment Methods

• Archival Records
• Behavioral Observations
• Exit Interviews
• External Examiner
• Focus Groups
• Locally Developed Exams
• Oral Exams
• Performance Appraisal
• Portfolios
• Simulations
• Surveys and Questionnaires
• Standardized Tests

TESTING

These mechanisms tend to fall into both quantitative and qualitative categories. The recently developed standardized IS test from the Center for Computing Education Research (CCER) provides comprehensive quantitative feedback. [Center for Computing Education Research, 2005] The test is currently matched to the IS 2002 Curriculum standard, with over 500 test questions relating to various sub-skills and learning objectives in the model curriculum. The CCER has the following mission statement: “Our mission is to facilitate development of excellence in computing education and training by generating current, accurate, and pertinent assessment examinations and processes to facilitate and recognize continuous improvement in university and industry computing programs.” [Center for Computing Education Research, 2005] The test is in its first year as a formal assessment instrument (2005), but has been in beta release for the past two years.

The CCER test identifies 38 IS skills within 8 content areas that fit within 3 general categories (Information Technology Skills; Organizational and Professional Skills; and Strategic Organizational Systems Development with IS). The 8 content areas are [IS Education, 2005]:

Table 2. CCER Content Areas

• Software Development
• Web Development
• Databases
• Systems Integration
• Individual and Team Interpersonal Skills
• Business Fundamentals
• Organizational Systems Development
• Project Management
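As an illustration of how such quantitative test results might feed program review, the sketch below (not part of the CCER instrument) aggregates hypothetical per-question scores into content-area averages and flags areas that fall below an assumed threshold; the data, threshold, and function names are invented for the example.

from collections import defaultdict

# Hypothetical records: (content_area, points_earned, points_possible).
# Actual CCER reporting formats may differ; this is only an illustration.
results = [
    ("Databases", 8, 10),
    ("Databases", 6, 10),
    ("Web Development", 9, 10),
    ("Project Management", 4, 10),
    ("Project Management", 5, 10),
]

def area_averages(records):
    """Average percentage score per content area."""
    earned, possible = defaultdict(float), defaultdict(float)
    for area, got, total in records:
        earned[area] += got
        possible[area] += total
    return {area: earned[area] / possible[area] for area in possible}

def flag_weak_areas(averages, threshold=0.60):
    """Return content areas whose average falls below the assumed threshold."""
    return sorted(area for area, avg in averages.items() if avg < threshold)

if __name__ == "__main__":
    averages = area_averages(results)
    for area, avg in sorted(averages.items()):
        print(f"{area}: {avg:.0%}")
    print("Areas to review:", flag_weak_areas(averages))

A program could then bring the flagged areas to its assessment committee alongside the qualitative feedback described below.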

Some campuses have opted for locally developed tests. These tests are created by the faculty to test skills and concepts from their courses. Some campuses do such local testing after the sophomore year and again in the senior year to provide assessment feedback for improving the program.

Qualitative data frequently comes from more subjective sources, such as questionnaires and interviews. Such qualitative information might include data such as the following example taken from an alumni survey:

• What is your current position?
• Who is your employer?
• What skills from your IS courses do you use on a daily (or very frequent) basis?
• What other skills from your other courses do you use daily?
• What were the strengths of your IS major? How did it prepare you for your current position and the job market?
• In your opinion, what could / should we do to improve and strengthen the IS program?
• What things do you see in your current position that could and should be incorporated into the IS program?

Qualitative feedback helps an IS program discover that it has weaknesses in particular areas or that its curriculum needs to be updated to reflect changes in technology or standards. In general, the qualitative analysis should focus on the program, not the instructors. The feedback is meant to give direction to the program (and to the teaching of courses in the program). While instructors are the main delivery mechanism of such education, it may be difficult to directly change the individual. Changes to courses and the program can be less threatening and easier to implement. The noted quality management expert W. Edwards Deming put the focus on process changes, not on people. [Deming, 2005] While feedback from quantitative or qualitative data may indicate that a particular professor is not adequately teaching his or her class, the feedback generally points to improving the program directly and the professor indirectly.

I-4. The results of the program’s assessment must be used to help identify and implement program improvement.

Assessment is only as good as the use of the data. Data from either quantitative or qualitative sources that is not used does not improve the program. Frequently, programs perform a “SWOT” analysis (Strengths, Weaknesses, Opportunities, Threats). With both quantitative and qualitative data, the program can assess its strengths and weaknesses. To be accredited by ABET under the IS criteria, the campus must demonstrate that the assessment measurements are gathered and that they are used to “identify and implement program improvement.” Does the data show (for example) a weakness in networking and telecommunications? Is the weakness across the board? Are there explanations for the weaknesses? Are there ways to improve the networking and telecommunications knowledge transfer?

Depending on the specific program (size of faculty and number of students), either the entire IS faculty or a subset of the IS faculty can serve as the “assessment committee.” ABET specifies that the assessment process review, with the actions to be taken, be documented; therefore, minutes of assessment committee meetings must be taken.

I-5. The results of the program’s review and the actions must be documented.

For the ABET accreditation process, the standard specifies that the assessment process be documented. This means that the department’s assessment committee meets to review and discuss the assessment measures and feedback. Minutes of the meetings and documentation of the actions of the assessment committee must be kept. As part of the ABET accreditation visit, team members will review the assessment committee minutes and actions to verify that the program has met this criterion. To be accredited by ABET, campuses must have such assessment criteria and standards in their program.

VIII. RESEARCH METHODOLOGY
Phenomenological research is appropriate when a small subset of the sample population can be interviewed to determine characteristics of the entire population. We utilized this approach to interview the assessment coordinators of the ten accredited information systems programs to determine: What are the critical success factors of assessment programs within accredited information systems programs?

Ten universities were the focus of this study. These represent the universities whose information systems programs were accredited during the 2002-2004 time period and include:

1. Drexel University, Philadelphia, Pennsylvania
2. Illinois State University, Normal, Illinois
3. Kennesaw State University, Kennesaw, Georgia
4. New Jersey Institute of Technology, Newark, NJ
5. Pace University, New York, New York
6. Robert Morris University, Moon Township, Pennsylvania
7. University of Nebraska-Omaha, Omaha, Nebraska
8. University of North Florida, Jacksonville, Florida
9. University of South Alabama, Mobile, Alabama
10. Virginia Commonwealth University, Richmond, Virginia

A standard set of 11 questions was developed to review with the assessment coordinators (in some cases this is one individual, in other cases it is a committee). These questions included:

1) Please describe your assessment program.
2) Do you use some form of test for your assessment program?
3) Could you describe the test? [e.g., is it the CCER IS Assessment test from http://computingeducation.org, or some other test (if other, please specify)]
4) Do you utilize a graduate survey?
5) If so, when do you administer such a survey (i.e., 1 year out, 2 years out)?
6) Could you provide samples of the survey questions? (not the actual feedback)
7) Do you use some form of undergraduate qualitative feedback as part of your assessment process (e.g., a senior exit interview process)?
8) If so, please describe your process.
9) Do you use feedback and assessment of your program from an advisory board?
10) If so, how frequently do you meet? What is the make-up of your board? What input do they provide?
11) Please describe any other assessment issues / ideas.

The questions were emailed to the assessment coordinators before the interviews took place to allow them time to prepare for the interview.

The research questionnaire was followed up either by a telephone interview with the assessment coordinator(s) or by a return e-mail response for each of the ten campuses.

IX. RESULTS
To better illustrate the results, we will divide the findings into specific assessment methods:

X. EXIT INTERVIEWS
Robert Morris University, Moon Township, Pennsylvania (outside of Pittsburgh) has a strong relationship with the Career Development Department on campus. [Kohun, 2005] Many students take internships or cooperative learning experiences as part of their studies, and these are coordinated through the Career Development Department. As an exit interview, both the Information Systems Department and the Career Development Department meet with students who have been on internships or co-op experiences. They ask the students about their preparation for the job market, with questions such as “Did you have the necessary skills to function in the work environment?” and “What do you wish you knew prior to the internship that you know now?” In addition, both the IS department and the Career Development Department get feedback on the work site and on whether to send students there in the future.

XI. PORTFOLIO ANALYSIS
Kennesaw State University, Kennesaw, Georgia conducts an extensive portfolio analysis as an assessment tool. [Murray, 2005] Information Systems majors there are required to complete a senior capstone project, and that project is the focal point of the portfolio analysis. A team of IS faculty from across the subareas of analysis and design, networking, database, and project management meets to critically assess student performance on the project. The project is structured to include aspects of almost all prior IS courses; therefore, the portfolio analysis gives feedback to the department on skill areas and may suggest areas that can be strengthened in the program.

XII. EXIT EXAMINATIONS
Several of the campuses have been using the IS Assessment test developed by the Center for Computing Education Research. The feedback on the test was mixed. Most felt the test offered good coverage of IS topics, but the length of the test (at three hours) was too long to fit within either a class period or a two-hour final examination period. For example, Pace University has used the IS assessment test but indicated they would be moving away from it due to the time factor. [Houle, 2005] In a similar vein, Illinois State University has also used the IS Assessment test, but is in the process of developing its own assessment test that would more specifically map to its courses and also fit within the time frame. Robert Morris University indicated some surprise with the CCER results: their students did worse than expected on systems development concepts (an area the assessment coordinator felt was strong in their program), yet did surprisingly well on networking concepts. [Kohun, 2005]

XIII. ADVISORY BOARDS
Almost all of the currently accredited IS programs have an active advisory board. There seem to be two approaches to advisory boards. The first approach is a review board, where the campus academics present the status of and information about their program to the advisory board; the second approach involves the advisory board as more of a “think tank,” providing ideas, suggestions, and oversight. NJIT exhibits both approaches: its annual advisory board meeting has a period where the campus program provides information and updates to the computing professionals on the advisory board, and then the advisory board “provides the campus with feedback on whether our courses are preparing graduates well for their professional work in IS.” [Scheer, 2005]

XIV. CLASS OUTCOME ASSESSMENT
Virginia Commonwealth indicated, “We take an outcomes-based approach in most of our classes where a student’s success hinges on his/her ability to demonstrate competence. They must build something. We do this in addition to tests, which we find useful but not complete. I like to think of our motto as ‘Our students not only know information systems but they can do information systems.’ This is reflected in our outcomes-based approach in many of our classes.” [Redmond, 2005] The focus is on outcomes; however, they do not have quantitative measurements in place to evaluate their outcomes. They currently utilize graduate surveys and an information systems advisory board as their primary assessment tools. The advisory board meets twice a year, and recent feedback from the board has resulted in curriculum changes focused on project management and teamwork-based projects.

XV. CONCLUSION
There is no one standard method for assessing the learning outcomes of information technology programs, nor is there one standard method required for an information technology program to become accredited by ABET. ABET, through the EC2000 initiative, has encouraged programs to use flexible approaches to assessment and to reaching accreditation. However, each of the programs that has been accredited has multiple assessment mechanisms in place, designed to gather feedback at checkpoints spanning a student’s program of study, at graduation, and during their post-graduate career. Table 3 lists the predominant assessment methods used by the currently accredited campuses:

Table 3. Common Assessment Methods

• Advisory Boards
• Standard Test
• Portfolio Analysis
• Exit Interviews

The assessment methods used fall into qualitative and quantitative areas. General assessment guidelines are in place, but it is up to each program to determine specifically which metrics it will utilize and to put processes in place to ensure these metrics are captured and evaluated. In effect, the accreditation assessment process requires that a program have an academic total quality management program in place. To be accredited demonstrates that your program is committed to ensuring that learning outcomes are met and that you have a disciplined process in place to measure results and make improvements when necessary. Although handled differently, each program has also incorporated an advisory board into the assessment process. The various assessment programs, and the willingness of the participants to share information about them, indicate that each has developed an internal culture that openly seeks to self-assess and continuously improve its program.

XVI. FUTURE WORK
This initial exploratory study will be followed up with a more rigorous look at the cost and value of program assessment. We intend to survey current and future accredited information systems programs. We anticipate that this will be a longitudinal study as it will take time to measure the value of accreditation outside the university community.

XVII. REFERENCES
AACSB Accredited Members, www.aacsb.edu/accreditation/AccreditedMembers.asp, (current Jan. 11, 2005).

AACSB Assessment Overview, www.aacsb.edu/resource_centers/assessment/overviewwhy.asp, (current Jan. 10, 2005).

ABET Accredited Programs in Computing, www.abet.org/accredited_programs/computing/schoolarea.asp, (current Jan. 10, 2005).

ABET Accredited, www.abet.org/accredited_programs.html, (current Jan. 12, 2005).

ABET Criteria, www.abet.org/criteria.html, (current Jan. 11, 2005).

ABET History, http://www.abet.org/history.html, (current May 11, 2005).

ABET Rogers, www.abet.org/press%5Freleases/gloriarogers.html, (current Jan. 11, 2005).

ABET Standards, www.abet.org/images/Criteria/C001%2005-06%20CAC%20Criteria%201129-04.pdf, (current Jan. 10, 2005).

Center for Computing Education Research, http://computingeducation.org, (current May 11, 2005).

Center for Computing Education Research, www.computingeducation.org/index.html, (current Jan. 13, 2005).

Computing Sciences Accrediting Board, http://www.csab.org, (current May 11, 2005).

Deming, http://www.deming.org/theman/teachings02.html, (current May 12, 2005).

Dennis, Terry, Director of School of Information Technology, College of Applied Science and Technology, Illinois State University, Normal, IL (February 9, 2005), Phone Interview.

Houle, Bernice, Assistant Dean, School of Computer Science and Information Systems, Pace University, White Plains, NY (February 1, 2005), Phone Interview.

IS 2002 Model Curriculum, is2002.org, (current Jan. 10, 2005).

IS Education, iseducation.org, (current Jan. 10, 2005).

Kohun, Fred, Associate Dean, School of Communications and Information Systems, Robert Morris University, Moon Township, PA (February 9, 2005), Phone Interview.

Longenecker, Herbert Jr., Professor of Computer and Information Systems, Coordinator of Information Systems, School of Computer and Information Sciences, University of South Alabama, Mobile, AL (February 26, 2005), Electronic Mail.

Michigan Hiring, www.michigan.gov/documents/Non-accreditedSchools_78090_7.pdf, (current Jan. 10, 2005).

Murray, Meg, Assistant Professor of Information Systems, College of Science and Mathematics, Kennesaw State University, Kennesaw, GA (February 3, 2005), Phone Interview.

National League for Nursing Accrediting Commission, www.nlnac.org/home.htm, (current Jan. 11, 2005).

NCA (North Central Association) Assessment, www.ncahigherlearningcommission.org/resources/assessment/AssessMatrix03.pdf, (current Jan. 10, 2005).

NCATE Accredited, www.ncate.org/accred/list-institutions/the_list.htm, (current Jan. 13, 2005).

NEASC Accreditation Standards, www.neasc.org/cihe/revisions/draft_standards_for_accreditation.pdf, (current Jan. 10, 2005).

NEASC Role and Value of Accreditation, www.neasc.org/cihe/ciherole.htm, (current Jan. 12, 2005).

Palomba, C. and T. Banta (1999) Assessment Essentials, San Francisco: Jossey Bass.

Redmond, Rich, Chairman and Associate Professor, Information Systems, School of Business, Virginia Commonwealth University, Richmond, VA (January 3, 2005), Electronic Mail.

Rogers, Gloria, http://www.assessmentplan.org (developed by Gloria Rogers), (current May 11, 2005).

Roggio, Bob, Professor of Computer and Information Sciences, College of Computing, Engineering and Construction, University of North Florida, Jacksonville, FL (January 31, 2005), Electronic Mail.

Sheer, Julian, Associate Professor of Information Systems and Associate Chair of Undergraduate IS Programs, College of Computing Sciences, New Jersey Institute of Technology, Newark, NJ (February 8, 2005), Electronic Mail.

Southern Illinois University Assessment, www.siue.edu/~deder/assess/why.html, (current Jan. 11, 2005).

Valacich, J. (2001) "Accreditation in the Information Systems Academic Discipline," White paper submitted to the AIS Executive Committee, November 13.

Wolcott, Peter, Associate Professor, Information Systems, College of Information Science and Technology, University of Nebraska-Omaha, Omaha, NE (February 3, 2005), Phone Interview.

Authors’ Biographies
Dr. Bruce A. White (Ph.D., University of Nebraska-Lincoln; BS, MS, Winona State University) is a professor of Information Systems Management in the School of Business at Quinnipiac University and Chair of the Information Systems Management programs at Quinnipiac University. He is active as an ABET accreditation visiting team member and with the Center for Computing Education Research, and has chaired the ISECON conference four times. His current research is in IS education, the IS model curriculum, accreditation, and assessment. Prior to coming to Quinnipiac University, he was a professor at Dakota State University.

Richard V. McCarthy (MBA, Western New England College; DBA, Nova Southeastern University) is a professor of Information Systems Management at the School of Business, Quinnipiac University. Prior to this, Dr. McCarthy was an associate professor of Management Information Systems at Central Connecticut State University. He has twenty years of experience within the insurance industry and has held the Chartered Property Casualty Underwriter (CPCU) designation since 1991. He has authored numerous journal articles and contributed to the current edition of the textbook Decision Support Systems and Intelligent Systems. His current research interests include enterprise architecture, information systems strategy, and information technology education issues.
