Kelly M. Gordon
May 16, 2008
A comparison of instruction in chat and
in-person reference interactions
Abstract: Reference interactions provide opportunities to deliver point-of-need
instruction to patrons in the skills needed to be competent and critical users of
information. As chat becomes more widely adopted as a mode of providing reference in
academic libraries, it has been increasingly recognized that chat can be a viable way of
delivering instruction during the reference interaction. However, this author is aware of
no studies that compare how reference providers take advantage of the “teaching
moment” in chat and in-person reference. Using unobtrusive methodology, and the
ACRL Information Literacy Competency Standards and the Approaches to Teaching
Inventory as frameworks, the proposed study will address the following questions: 1.)
Does instruction occur equally frequently in chat and in-person reference modes? 2.)
Are there differences between chat and in-person reference with regard to the delivery of
instruction that supports information literacy standards and “deep learning” behaviors in
patrons?
Background and context
It has long been recognized that reference interactions provide opportunities to
deliver point-of-need instruction to patrons in the skills needed to be competent and
critical users of information. Instruction that occurs during the reference interaction may
be even more pertinent and useful than classroom bibliographic instruction, because 1) it
is delivered at the time the user needs it, and thus can be immediately applied, 2) it is
actively sought by the user, and thus is an example of self-directed rather than passive
learning, and 3) the level and content of instruction can be tailored to the need of the user.
In academic libraries, in-person reference and chat reference can both be
important tools in the teaching of information literacy skills; however, there have been
few direct comparisons of instruction offered in each mode of reference. Fennewald
(2006) found that 72% of chat questions were classified as “reference” (as opposed to
directional, policy, or troubleshooting) questions, compared to only 38% of in-person
questions. This may lead one to predict that chat reference will yield a higher proportion
of questions that provide opportunities for “teachable moments”. Of more interest is
whether there are differences in the ways that librarians engage with these teachable
moments in chat and in-person reference interactions. For instance, Ellis (2004) predicts
that librarians working in chat environments may not provide as much detailed
information about complex information literacy skills such as the evaluation of
information sources, because of the awkwardness of conveying detailed information in
the chat medium. However, it may be that the use of co-browsing technology, for
instance, actually facilitates instruction related to information seeking in electronic environments.
Investigation into the dynamics of instruction delivery in chat and in-person
reference environments may provide valuable insights into how instruction is delivered,
and how both in-person and chat reference services may be improved to more fully take
advantage of the opportunities of point-of-need instruction. Additionally, the research
proposed here may provide insight into ongoing questions about whether chat reference is
as effective as in-person reference for the delivery of point-of-need instruction.
Statement of research problem
The growing popularity of chat reference in recent years has prompted a great
deal of investigation into the effectiveness of this medium as an alternative means of
delivering reference service. The ability to preserve an exact written record of chat
reference interactions without introducing observer biases has facilitated numerous
studies of multiple dimensions of chat reference, including studies of instruction in chat
reference. However, few studies have attempted direct comparisons between chat and in-
person reference, perhaps because of the continuing difficulty of unobtrusively observing
in-person reference interactions.
I propose to conduct a study that directly compares instruction delivered via in-
person and chat reference services in academic libraries. I'm interested in addressing the following questions:
1.) Does instruction occur equally frequently in chat and in-person reference modes?
2.) Are there differences between chat and in-person reference with regards to the
delivery of instruction that supports information literacy standards and “deep learning”
behaviors in patrons?
Instruction in reference interactions
Although numerous studies attempt to evaluate some aspect of instruction in
reference, few compare differences in instruction between chat and in-person reference
services, and few attempt to address the efficacy of reference instruction in teaching
information literacy and critical thinking skills. To this author's knowledge, none
combine these questions by examining whether the delivery mode of reference (i.e.
whether the interaction occurs virtually or in-person) affects the promotion of
information literacy and critical thinking skills during reference interactions. However,
the importance of promoting information literacy standards and critical thinking skills
during reference interactions is widely recognized in the literature. Beck and Turner
(2001) note that, because students tend to be more receptive to learning new skills at the
time that they are needed, reference interactions provide the ideal opportunity to impart
research techniques. Several studies (Beck & Turner, 2001; Elmborg, 2002; McCutcheon
& Lambert, 2001; Woodard & Arp, 2005) offer advice to librarians about how to best
take advantage of the teaching moment during reference interactions. All of these studies
stress the importance of utilizing teaching techniques that encourage active learning,
critical thinking, and problem solving on the part of the student. Beck and Turner (2001)
note that “in the process of conducting reference, we also want to be coaching students in
applying problem-solving methods of library research.” McCutcheon and Lambert
(2001) encourage librarians to promote the goals of information literacy at the reference
desk. Elmborg (2002) observes that librarians need to unlearn the habit of answering
questions, and learn to ask them instead, to foster students' abilities to answer their own questions.
Fennewald (2006) compares the types of questions received by chat and in-person
reference services at Penn State University and finds that 72% of questions received by
the chat service are “Reference” (as opposed to directional, troubleshooting, or other
types) questions, in contrast to 38% of the questions received by the in-person service.
This may imply that a higher percentage of chat reference interactions are likely to
contain opportunities for the delivery of instruction. Johnston (2003) found that 60% of
chat reference interactions occurring via the University of New Brunswick's LIVE virtual
reference service contain some form of instruction, and that general reference and subject
specific questions were both the most frequently received types of questions and the
interactions that most frequently involved instruction.
Graves and Desai (2006) examine the frequency with which various instructional
strategies are employed during chat reference interactions with and without a co-browse
feature and find that resource suggestion was the most common instructional strategy
used in co-browse environments, whereas leading was most typically employed when
relying on chat only. However, the authors speculate that this phenomenon may have had
more to do with librarians' discomfort with co-browse software than with any inherent
property of either reference mode. Ward (2004) used unobtrusive methodology to pose
questions to a chat reference service at the University of Illinois in order to assess the
“completeness” of interactions. He developed four criteria to measure completeness, two
of which (recommendation of a specific database and suggestion of key words and/or
search tips) he classified as “instructional” criteria, and found that 79% of the interactions
included both of these instructional criteria.
Standards-based reference instruction assessment
The ACRL Information Literacy Competency Standards (2008) and the
Eisenberg-Berkowitz Information Problem-Solving model have both formed the basis
for assessments of whether, and how, instruction during the reference interaction
promotes information seeking skills. Cottrell and Eisenberg (2001) assess in-person
reference interactions to determine what phase(s) of the information problem-solving
process is/are represented; they find that the location and access phase is by far the most
commonly addressed, whereas the task definition phase is a distant second. However,
Smyth (2003), in her comparison of the usefulness of three different frameworks for
reference interaction evaluation, indicates that the Eisenberg-Berkowitz model is of
limited use because of the difficulty of determining how the patron is progressing through
the research process. She notes that the use of ACRL standards as a framework provides
interesting insights into which standards are rarely addressed during chat interactions.
Ellis (2004) coded 138 chat transcripts to determine which of the standards are most
frequently taught during virtual reference interactions. She found that the second
standard (dealing with information access) was most commonly taught, appearing in 62%
of the interactions examined, whereas the first standard (dealing with the nature and
extent of the information need) was second, appearing in 22% of the interactions
examined. Ellis also found, surprisingly, that none of the transcripts examined dealt with
the third standard, the evaluation of information sources; she speculates that this may be
due to the fact that instruction surrounding source evaluation is likely to be lengthy and
involved, and perhaps difficult to convey in a chat reference interaction.
Approaches to teaching
Theory on teaching and learning approaches can provide a conceptual framework
upon which to base an assessment of instruction efforts during reference interactions. In
particular, Trigwell and Prosser's (2004) conceptual framework is both applicable and
appealingly simple. They propose a hierarchy of approaches to teaching, developed from
a qualitative study of teachers of first-year science at the college level. Approach A, at
the lowest level of the hierarchy, is a teacher-focused approach with the intention of
imparting a specific body of information. The prior knowledge of students is not
relevant, and it is assumed that the students will learn passively in this approach.
Approach E, at the highest level of the hierarchy, is a student-centered approach. The
intention is to create a conceptual change in students, with students constructing their
own knowledge. Students learn actively and integrate new worldviews with their already
existing knowledge. Approaches B-D represent intermediate stages between these two extremes.
The approaches can be roughly broken down into four different intentions (or
teacher ideas about student outcomes) interacting with three different strategies. Teacher
intentions include information transmission, concept acquisition, conceptual
development, and conceptual change. Strategies include teacher-focused, student/teacher
interaction, and student-focused approaches. Thus, Approach A is represented by the
intersection between the teacher-focused strategy and the intention of information
transmission. Approach E is at the intersection between student focused strategy and
conceptual change. These two extremes define a continuum between an Information
Transmission/Teacher-focused approach and a Conceptual Change/Student-focused
approach. Trigwell and Prosser (2004) emphasize that an individual teacher may use
elements of any of the approaches in different contexts; the approaches are meant to
typify teachers' approaches to particular teaching tasks.
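Purely as an illustration, the sketch below (in Python) lays out the intention-by-strategy grid described above. The placements of Approaches A and E follow Trigwell and Prosser's description; the placements of B through D shown here are my own assumption, included only to make the shape of the hierarchy concrete.

```python
# Illustrative sketch of the intention-by-strategy grid.
# Only Approaches A and E are fixed by the description above; the
# placements of B-D shown here are assumptions used purely for illustration.

APPROACHES = {
    "A": ("teacher-focused", "information transmission"),          # from the text
    "B": ("teacher-focused", "concept acquisition"),                # assumed
    "C": ("student/teacher interaction", "concept acquisition"),    # assumed
    "D": ("student-focused", "conceptual development"),             # assumed
    "E": ("student-focused", "conceptual change"),                  # from the text
}

for label, (strategy, intention) in APPROACHES.items():
    print(f"Approach {label}: {strategy} strategy / {intention} intention")
```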
Multiple studies have identified a relationship between teaching approaches and
learning approaches. The Approaches to Teaching Inventory (ATI) is an index that
quantifies a teacher's position along the continuum described above with regard to a
particular teaching task. A corresponding Approaches to Learning Inventory quantifies a
student's position along a “surface learning” to “deep learning” continuum with regard to a
particular learning task. Trigwell, Prosser and Waterhouse (1999) found that students in
classes with teachers who scored high on the ATI tended to take more of a deep learning
approach. In addition, Gibbs and Coffey (2004) found that teachers displayed increases in their
ATI scores after a year-long training course when compared to their initial ATI scores.
Moreover, students of these teachers displayed increases in their learning inventory
scores over time.
Problem formulation
Through this research I seek to determine whether differences exist in instruction
delivery between chat and in-person reference. My first question is, are there differences
in the frequency of instruction between these two modes of reference? I will address this
question by quantifying instances of instruction in reference interactions. My second
question is, are there differences between the two modes in the degree to which
instruction promotes information literacy skills and “deep learning”? This question has
two parts. The first is an evaluation of the degree to which the ACRL Information
Literacy Competency Standards (2008) are promoted in the two types of reference
interactions. The second involves analyzing teaching approaches demonstrated during
reference interactions, based upon the framework of the Approaches to Teaching
Inventory described above (Trigwell & Prosser, 2004).
The unit of analysis for the first question will be one reference interaction, but in
the second question the unit of analysis shifts to individual teaching behaviors. The
independent variable for all questions is the mode of reference; chat reference
interactions will be compared to in-person reference interactions for the first question,
and chat reference behaviors will be compared to in-person reference behaviors for the
second question. The first question examines the relationship between the mode of
reference and whether or not instruction occurs during a given interaction. The second
question will address whether the mode of reference impacts how frequently teaching
behaviors are exhibited that support each of the five ACRL standards. Additionally, the
second question will address whether the mode of reference impacts the teaching
approaches exhibited by the reference provider.
In order to determine whether instruction has occurred during a given interaction,
it's important to generate a working definition. For the purposes of this study, instruction
includes any communication on the part of the reference provider that appears to be
intended to impart understanding of some aspect of the retrieval, use, or evaluation of
information. To assess whether instruction has occurred that supports one of the five
ACRL standards, a set of example behaviors has been developed to assist with coding
transcripts of the reference interactions. These example behaviors are shown in Table 1.
Table 1: Example instruction behaviors associated with ACRL standards 1-5
ACRL standard | Example behavior
1. …determin(ing) the nature and extent of the information needed. | Working through whether books or journal articles are more appropriate
2. …access(ing) needed information effectively and efficiently. | Using online catalog, database, print reference source
3. …evaluat(ing) information and its sources critically. | Evaluating quality of web pages
4. …us(ing) information effectively to accomplish a specific purpose. | Discussion of research process, or analyzing and synthesizing information
5. …understand(ing) many of the economic, legal, and social issues surrounding the use of information and access(ing) and us(ing) information ethically and legally. | Discussion of plagiarism, explanation of access issues for library resources
Other teaching behaviors that are encountered as the transcripts are coded will be
evaluated to determine whether they support any of the five ACRL standards, and will be
counted as well.
As described in the literature review, the Approaches to Teaching Inventory
(Trigwell & Prosser, 2004) examines teacher approaches to a given instructional task
along two continuums, information transmission – conceptual change and teacher-
oriented – student-oriented. Teaching behaviors that occur during reference interactions
will be evaluated in light of these approaches; therefore, here I operationalize how these
concepts might be manifested as teaching behaviors. An information transmission
behavior is any instructional activity that appears to be intended to transfer simple
information from the reference provider to the patron. A conceptual change behavior is
any instructional activity that appears to be intended to bring about a change in the way
the patron understands a concept or process; i.e. a change in the patron's “worldview”
about information. A teacher-oriented behavior is any behavior in which the teacher
takes the active role, whereas a student-oriented behavior is any behavior in which the
student (in this case, the patron) takes the active role. Example behaviors for each of
these categories are shown in Table 2.
Table 2. Example behaviors associated with Approaches to Teaching categories
Approach to teaching | Example behavior
Information transmission | An explanation of what the online catalog is used for
Conceptual change | Eliciting ideas about the research process
Teacher-oriented | A demonstration of an article database
Student-oriented | Guiding the patron as she searches the catalog
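To make this coding scheme concrete, the sketch below shows how a single flagged teaching behavior might be recorded during coding. The field names are hypothetical and serve only to mirror the categories in Tables 1 and 2; they are not a prescribed instrument.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TeachingBehavior:
    """One flagged teaching behavior within a reference interaction transcript.
    Field names are illustrative only."""
    interaction_id: str        # which of the eighty interactions it came from
    mode: str                  # "chat" or "in-person"
    acrl_standards: List[int]  # ACRL standards (1-5) the behavior supports, if any
    intention: Optional[str]   # "information transmission", "conceptual change", or None
    strategy: Optional[str]    # "teacher-focused", "student-focused", or None

# Example: demonstrating an article database while explaining what it is for.
behavior = TeachingBehavior(
    interaction_id="chat-017",
    mode="chat",
    acrl_standards=[2],
    intention="information transmission",
    strategy="teacher-focused",
)
print(behavior)
```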
The following null hypotheses will be tested by this research:
H0a. No significant difference exists in the frequency of instruction between chat and in-
person reference interactions.
H0b. No significant difference exists in the frequency with which each ACRL information
literacy standard is addressed between chat and in-person reference modes.
H0c. No significant difference exists in the frequency with which each teaching
approach is exhibited between chat and in-person reference modes.
The ten university libraries in the University of California system participate in a
cooperative “Ask a UC Librarian” program that provides chat reference services.
Although anyone can log on and ask a question, the program is primarily intended for use
by UC students, faculty, and staff. The chat service is staffed by librarians from each of
the ten participating schools, and uses QuestionPoint 24/7 software as a platform.
In order to create a directly comparable pool of reference interactions, in-person
reference interactions will be conducted at the reference desks of main and branch
libraries of each of the ten UC schools that participate in the “Ask a UC Librarian” program.
I will conduct all of the reference interactions myself to avoid the difficulties
involved in recruiting and training proxies and ensuring consistent data collection. Since
it would be difficult for me as an individual to appear at a reference desk forty times with
forty different questions over a short time span without compromising my anonymity, I
felt that focusing the study on the entire UC system would help to circumvent this
difficulty. Additionally, since many chat reference services run by individual universities
receive relatively few queries, there's a risk of “swamping” a smaller service if forty
questions are submitted over a relatively brief period of time. Disadvantages of
collecting data from the UC system include the need to find time and money to travel to
each school, and the necessity of enlisting the support of each library in the system before
beginning the research.
The data collected in this study will be transcripts from two sets (in-person or
chat) of reference interactions, between University of California reference providers and
an observer (myself) who will pose as a patron of the library. A list of forty questions
will be created, and each of these questions will be asked both via chat and in person, for
a total sample of eighty reference interactions. The forty in-person questions will be
distributed equally amongst the 10 different UC schools, with four questions randomly
assigned to each school. Hubbertz (2005) notes that most unobtrusive evaluations of
reference services are flawed by the fact that different questions are posed to each
treatment group, so that variations in the questions themselves may
introduce biases in the data. Although different questions will be asked at each of the in-
person reference services at the 10 different UC schools, the overall set of questions
posed to in-person and chat reference librarians will be identical. Since differences
between UC schools are not being examined, this design will not introduce bias into the study.
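As a sketch of this assignment procedure, the following illustrates how the forty questions might be randomly allocated, four per campus, for the in-person interactions. The question and campus identifiers are placeholders, not the actual study materials.

```python
import random

# Placeholder identifiers; the real question list and campus names are assumptions here.
questions = [f"Q{i:02d}" for i in range(1, 41)]  # the forty study questions
campuses = [f"UC-{i}" for i in range(1, 11)]     # the ten participating UC schools

random.seed(2008)          # fixed seed so the assignment can be reproduced
random.shuffle(questions)

# Four questions per campus for the in-person interactions; the same
# forty questions are also posed via chat, unassigned to any campus.
assignment = {campus: questions[i * 4:(i + 1) * 4]
              for i, campus in enumerate(campuses)}

for campus, assigned in assignment.items():
    print(campus, assigned)
```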
In an effort to avoid oversampling of a particular chat reference librarian, chat
reference questions will be spread evenly throughout the open hours of the service. Since
most librarians participate in the service for only 1-2 hours per week, this will hopefully
prevent any one librarian from handling multiple questions. Similarly, in-person questions at each
of the UC schools will either be posed at different subject specialty libraries within the
school, or at different times to ensure that different librarians are staffing the reference
desk. This measure will also, hopefully, help to preserve the anonymity of the observer.
If possible, permission will be obtained to record the in-person interactions. The
recordings of these interactions will be transcribed. Transcripts of the chat interactions
will be saved by the observer. To supplement the transcripts, immediately after the
interactions the observer will make notes about pertinent activities and events that may
have occurred during the interaction but that weren't captured in the recording. For
instance, if the reference provider has the observer move to a terminal and navigate
through an online resource, this will be noted.
To ensure that kinks in the data collection protocol are ironed out before actual data collection begins, the
observer will conduct ten practice reference interactions, five in person and five via chat.
In order to avoid contamination of the system under study, these practice interactions will
be conducted at San Jose State's King Library and via San Jose State's AskNow chat
reference service, which also uses QuestionPoint software.
The questions used in this study will be specifically formulated to contain
teaching opportunities. Ready-reference, directional, and policy questions will be
avoided; instead, the questions will all fall into the “strategy” or “extended” types of
reference questions, as classified by Fennewald (2006). Questions will be devised by
informally surveying librarians for examples of “real-world” reference questions that
they've dealt with that have contained an instructional component. Excessively
complicated, obscure, or specialized questions will be avoided, as these questions may
result in a referral or a request (from chat librarians) that the patron ask the question in
person. Ten back-up questions will be prepared in case a question does result in a referral.
Since some questions will lend themselves to particular instructional tasks,
an attempt will be made to select questions that require the use of a variety of information
resources, so that comparisons can be made on the basis of multiple types of instructional tasks.
Data collection and analysis
Data will be collected during the Fall 2008 semester. Practice interactions will be
conducted in September 2008, and actual interactions will be conducted during October –
November 2008. No interactions will be conducted during Thanksgiving or Fall breaks,
in an attempt to maintain some consistency in the level of busyness experienced by the
reference services when the questions are posed. Both in-person and chat interactions
will be evenly distributed across all open hours for each service, insofar as this is possible.
The observer will record the time and date that each interaction takes place, the
university by which the librarian is employed, and whether the interaction is live or chat.
The observer will also record the length of the interaction and whether it was terminated
due to a referral, technical difficulties, or other reason. Interactions that are prematurely
terminated by technical difficulties will be re-attempted. Interactions that result in a
referral will be discarded, and that question will be removed from both the chat and in-
person dataset and a new question introduced from the back-up pool.
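The basic data recorded for each interaction could be captured in a simple record along the lines of the sketch below; the field names are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionRecord:
    """Basic metadata logged for each of the eighty reference interactions.
    Field names are illustrative, not a prescribed instrument."""
    question_id: str          # which of the forty questions was posed
    mode: str                 # "chat" or "in-person"
    university: str           # UC campus employing the librarian
    date: str                 # e.g. "2008-10-14"
    start_time: str           # e.g. "14:05"
    duration_minutes: float   # length of the interaction
    terminated_early: bool    # ended by referral, technical failure, etc.
    termination_reason: Optional[str] = None  # "referral", "technical", "other"

record = InteractionRecord("Q07", "chat", "UC-3", "2008-10-14", "14:05", 12.5, False)
print(record)
```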
Once these basic data have been recorded for each interaction, the transcripts will
be coded according to the scheme described in the “Problem formulation” section of this
proposal. Coding will be carried out by the observer and two assistants. The assistants
will be familiarized with the ACRL information literacy competency standards and the
Approaches to Teaching Inventory, but will not be made aware of the central purpose of
the study (the comparison between in-person and chat reference interactions), to help
minimize bias in coding. The observer and assistants will code the practice interactions
collected from San Jose State together, in an effort to standardize coding practices and
resolve questions and points of confusion. Then, both the observer and the assistants will
separately code all eighty of the reference interaction transcripts according to the
following protocol, resulting in three separate coding sets. First, each interaction will be
evaluated to determine whether or not instruction occurred. If instruction did occur, one
or more teaching behaviors will be identified within the transcript. These teaching
behaviors will be flagged and evaluated based on the following three questions:
1. Does this behavior support any of the ACRL's information literacy standards?
If so, which one(s)?
2. Does this behavior typify an information transmission or a conceptual change
teaching approach, or neither of the two?
3. Does this behavior typify a teacher-focused or a student-focused teaching
approach, or neither of the two?
The three coding sets will be compared in order to achieve consensus on a) what is coded
as a teaching behavior and b) how that teaching behavior is coded according to the three
questions above. If two or three coders agree on a given code, it will be accepted. If
there is no agreement between code sets for a given code, the coders will discuss the code
until a consensus is reached. If no consensus is reached, that datum will be discarded
from the study.
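The agreement rule described above amounts to a simple two-of-three majority check, sketched below under the assumption that each coder's decision for a given item can be reduced to a single code string.

```python
from collections import Counter
from typing import List, Optional

def resolve_code(codes: List[str]) -> Optional[str]:
    """Apply the agreement rule to three coders' codes for one teaching behavior.

    Returns the code if at least two of the three coders agree; returns None
    when all three disagree, signalling that the coders must discuss the item
    (and discard it if no consensus is reached).
    """
    code, count = Counter(codes).most_common(1)[0]
    return code if count >= 2 else None

# Example: two coders assign ACRL standard 2, one assigns standard 1.
print(resolve_code(["ACRL-2", "ACRL-2", "ACRL-1"]))  # -> "ACRL-2"
print(resolve_code(["ACRL-2", "ACRL-1", "ACRL-3"]))  # -> None (discuss or discard)
```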
Analyses of the data collected will be primarily descriptive, since most of the
variables under examination for this study are nominal. For the first research question,
the number of interactions that contained instruction will be compared between chat and
in-person interactions. A chi-square analysis will be conducted to determine whether or
not differences in frequency of instruction between the two sample groups are significant.
For the research question regarding ACRL standards, two types of comparisons will be
made for each of the five standards. One will compare the percentage of total teaching
behaviors that support a particular standard for chat and in-person reference. The other
analysis will compare the number of teaching behaviors that support a particular
standard, between chat and in-person reference. The research question regarding
teaching approaches will be examined similarly, with each of the four teaching approach
categories being analyzed separately.
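As a sketch of the planned chi-square analysis for the first research question, the following uses entirely hypothetical counts to show the form the test would take; the real contingency table will be filled in from the coded transcripts.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are reference modes, columns are whether
# instruction occurred in the interaction (40 interactions per mode).
#            instruction   no instruction
observed = [[31, 9],    # chat
            [27, 13]]   # in-person

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# H0a would be rejected at alpha = 0.05 only if p < 0.05.
```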
Keeping in mind that any predictions I make are purely speculative and may or
may not be borne out by the data, I expect that the differences in frequency of instruction
between chat and in-person reference will be minimal. There is solid support in the
literature for the notion that chat can be an effective tool for delivering instruction during
reference interactions (Graves & Desai, 2006; Ward, 2004; Woodard & Arp, 2005), and
chat reference has been a mode of reference delivery for enough years now that I believe
most reference providers who practice it are familiar with the notion of providing
instruction via chat. It's possible that staff just coming to the chat environment may not
be comfortable with this technique, but new reference staff offering in-person reference
may not have developed their reference skills enough to be comfortable with offering
instruction either. Another factor that may come into play is that the Ask-a-UC-
Librarian service is less likely to be staffed by paraprofessionals than the in-person reference desks are.
Based upon Ellis' (2004) findings, I expect that ACRL's standard 2, regarding
accessing information resources, will be by far the most frequently supported in both
reference environments. Further, I expect that ACRL's standard 3, regarding the
evaluation of information resources, will be supported more frequently during in-person
reference interactions than chat reference interactions. Ellis found that no instruction
occurred surrounding evaluation of resources in the chat transcripts that she analyzed, but
in-person interactions may lend themselves to the more involved explanations that
accompany discussions of resource evaluation.
I suspect that conceptual change and student-oriented teaching approaches will be
uncommon in both chat and in-person interactions. Though there have been calls in the
literature for teaching approaches that involve asking questions rather than giving
answers (Elmborg, 2002) and allowing patrons to take the lead when navigating online
resources (Woodard & Arp, 2005), based on my limited personal experience this advice
seems seldom heeded in the “real world” of reference. It may be that student-oriented
teaching approaches are more common during in-person reference interactions, because
of the possibility of accompanying students to computer terminals, the stacks, or other
physical resources in the library, and allowing them to try information seeking activities
for themselves. However, the co-browse feature in QuestionPoint allows patrons to
navigate through webpages, so it's possible that some reference providers may ask
patrons to take the reins during a chat reference interaction.
I hope that the results of this research will elucidate some key questions that still
surface in discussions of instruction in chat and in-person reference interactions. First, I
hope that some light will be shed on the continuing debate regarding the adequacy of chat
as a tool for the provision of reference services. Although chat has been widely adopted
in academic libraries as a reference tool, and although many assessments of the use of
chat in providing instruction in reference interactions have been published in the
literature, it is still common to hear reference providers express skepticism about the
usefulness of chat reference. Given that there are so few direct comparisons of chat and
in-person reference in the literature, a study like this could provide considerable insight
into this issue. I hope, as well, that the data collected about ACRL standards and
teaching approaches will provide some food for thought for reference providers as to how
teaching goals that promote information literacy and “deep learning” might be met.
Study limitations and suggestions for further study
Because this study uses unobtrusive methodology, the reference interaction
transcripts that are collected will be of “staged” rather than “real world” interactions.
Although every attempt will be made to create questions that are representative of those
that might typically be asked of the UC system's reference services, the fact that the
interactions are carried out by an observer rather than a student might influence their
content. For instance, the observer may be more likely to engage in behaviors that elicit
instruction from the reference provider, being more aware than the average patron of the
types of instruction that can be given during a reference interaction.
Results from this study will be indicative of the chat and in-person reference
environments in the UC system. The way in which reference is provided by these
services may be influenced by the system's institutional culture; therefore, although I
believe the results will be informative, they may not be applicable to other academic
libraries. The chat reference interactions will take place via QuestionPoint, making it
difficult to generalize results to other chat reference services.
Transcripts of the reference interactions will capture spoken or typed words, but
won't capture activities such as navigating through online resources or going to the stacks
or the reference collection to consult printed material. The data collection protocol
specifies that the observer will take notes of these activities, which will help to fill in the
gaps, but the notes inevitably will not be as detailed and precise a source of data as the
transcripts will be.
The dataset that will be collected for this study has the potential to be a rich
source for qualitative analyses. It would be very interesting to explore the dataset using a
grounded theory approach, to see what sorts of patterns not captured by the analyses
described here might emerge. Since each question will be posed twice, once to a chat and
once to an in-person reference service, these paired transcripts could be examined and
compared, perhaps using a cross-case study method.
Future work could also explore differences in instruction in reference interactions
involving real patrons, perhaps by positioning an observer near the reference desk to take
extensive notes during in-person reference interactions, and then following a similar
protocol to take notes from chat transcripts. Such a study would require obtaining
permission from patrons of both chat and in-person reference services after the reference
interaction occurred, but this might be an opportunity to gather patron data to see, for
instance, if there are differences in instruction delivered to underclassmen,
upperclassmen, graduate students, faculty, members of the public, et cetera.
References
Association of College and Research Libraries. (2008). Information literacy competency
standards for higher education. Retrieved 5/15/2008 from
Beck, S. E., & Turner, N. B. (2001). On the fly BI: Reaching and teaching from the
reference desk. Reference Librarian, 34, 83-96.
Cottrell, J. R., & Eisenberg, M. B. (2001). Applying an information problem-solving
model to academic reference work: Findings and implications. College and Research
Libraries, 62(4), 334-347.
Ellis, L. A. (2004). Approaches to teaching through digital reference. Reference Services
Review, 32(2), 103-119.
Elmborg, J. K. (2002). Teaching at the desk: Toward a reference pedagogy. Portal:
Libraries & the Academy, 2(3), 455.
Fennewald, J. (2006). Same questions, different venue: An analysis of in-person and
online questions. Reference Librarian, 46, 20-35.
Gibbs, G. & Coffey, M. (2004). The impact of training of university teachers on their
teaching skills, their approach to teaching and the approach to learning of their
students. Active Learning in Higher Education, 5(1), 87-100.
Graves, S. J., & Desai, C. M. (2006). Instruction via chat reference: Does co-browse
help? Reference Services Review, 34(3), 340-357.
Hubbertz, A. (2005). The design and interpretation of unobtrusive evaluations. Reference
& User Services Quarterly, 44(4), 327-335.
Johnston, P. E. (2003). Digital reference as an instructional tool. Searcher, 11(3), 31-33.
McCutcheon, C., & Lambert, N. M. (2001). Tales untold: The connection between
instruction and reference services. Research Strategies, 18(3), 203-214.
Smyth, J. (2003). Virtual reference transcript analysis. Searcher, 11(3), 26-30.
Trigwell, K., & Prosser, M. (2004). Development and use of the approaches to teaching
inventory. Educational Psychology Review, 16(4), 409-424.
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between teachers'
approaches to teaching and students' approaches to learning. Higher Education,
Ward, D. (2004). Measuring the completeness of reference transactions in online chats.
Reference & User Services Quarterly, 44(1), 46-56.
Woodard, B. S., & Arp, L. (2005). One-on-one instruction. Reference & User Services
Quarterly, 44(3), 203-209.