Volume 59 Number 2
Early Evaluation of a New Library Chat Service
Middle Tennessee State University
TLA 2009 Conference Program Abstract: Presented as "Is Your IM Service Working? Lessons from the Log Lady." By examining the conversation logs of your IM and chat service, you can determine how effective and well managed your service is. The Walker Library at Middle Tennessee State University looked at response times, types of questions asked, and IM and chat traffic patterns to assess staffing and training needs.
Chat and instant messaging reference services have been offered in libraries since the early 2000s. At Middle Tennessee State University, we have had stops and starts in our virtual reference journey. In 2004, we purchased access to OCLC’s QuestionPoint virtual reference service for chat and email. After a year of low use, technical difficulties, and staffing problems, we discontinued our chat service. Over the next few years we considered other options, but server breaches left our systems staff wary of technologies that might open us up to more attacks.
In Summer 2008, the User Services department submitted a proposal that satisfied all concerns, and in Fall 2008, we began offering chat through a MeeboMe widget on our website. We also created accounts with all of the major IM services, including Google, Yahoo, and AIM. We use Pidgin, a free, open-source program, to aggregate conversations from all of these services in one interface. We decided to staff our chat/IM service (henceforth referred to as "chat") from the reference desk. This meant that a librarian could potentially be faced with a phone call, a chat, and a walk-up question all at once, but it was more desirable than having librarians confined to their offices for hours at a time to cover chat. It also allowed us to offer the chat service for more hours, since the reference desk is staffed for all but two of the library's open hours each day (fig. 1).
Figure 1. Technical details for our chat reference service
New services in libraries do not always deliver a lot of bang for the buck (even when they are “free”), so we decided that we would assess the service in concrete terms to determine if it was being used, when it was being used, and whether we were going about it the right way. We were able to configure Pidgin to save all chat transcripts on our local server, and by analyzing them we were able to determine the following: how many chats we were getting, when they were coming in, how long they were lasting, and how quickly we were answering them. By analyzing the conversations, we could also tell what kinds of questions we were getting and how accurate our answers were.
I reviewed all transcripts from September, October and November 2008, which were our first three full months of service. Total numbers of chats from September 2008-March 2009 were used for some less detailed analysis. Chat transcripts are automatically saved as plain text documents in a special folder within the Pidgin software files (fig. 2).
Figure 2. A sample transcript from the MeeboMe chat box.
For each transcript, I pulled the date, beginning time stamp, librarian response time stamp, ending time stamp, and the patron's opening question. Each of these was copied and pasted into an Excel spreadsheet (fig. 3). From this data, I was able to calculate librarian response time (lapse) and conversation length using Excel formulas. I also created a distinction for "resolved" questions, which were questions for which a librarian responded and the patron responded back. In addition, I labeled conversations according to their primary purpose using the three categories we use for our reference desk statistics: reference, technical, and directional (including questions about services and hours of operation). Other fields held information about holds or transfers and general comments.
Figure 3. The chat analysis spreadsheet.
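Because Pidgin saves each conversation as a plain text file with a timestamp on every line, the same fields could be pulled out by a short script rather than by hand. The sketch below assumes a "(HH:MM:SS) name: message" line format and a hypothetical librarian screen name, "walker_ref"; neither is drawn from our actual configuration, and a real script would need to match the log format Pidgin produces on your system.

```python
import re
from datetime import datetime

# Assumed Pidgin-style transcript line: "(10:02:31) name: message"
LINE = re.compile(r"\((\d{2}:\d{2}:\d{2})\) ([^:]+): (.*)")

def analyze(transcript: str, librarian: str = "walker_ref"):
    """Return (start time, lapse in seconds, length in seconds) for one chat."""
    stamps = []  # (time, speaker) for every message line
    for line in transcript.splitlines():
        m = LINE.match(line)
        if m:
            stamps.append((datetime.strptime(m.group(1), "%H:%M:%S"), m.group(2)))
    if not stamps:
        return None
    start, end = stamps[0][0], stamps[-1][0]
    # The librarian's first message gives the response lapse
    first_reply = next((t for t, who in stamps if who == librarian), None)
    lapse = (first_reply - start).total_seconds() if first_reply else None
    return start.time(), lapse, (end - start).total_seconds()

sample = """(10:02:31) meeboguest123: do you have the WSJ online?
(10:02:49) walker_ref: Yes, let me send you the link.
(10:04:10) meeboguest123: thanks!"""
print(analyze(sample))  # start 10:02:31, 18-second lapse, 99-second conversation
```

From there, writing the extracted fields into a CSV that Excel can open is a one-line addition with the `csv` module.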
Is it being used?
The first and simplest question to answer was how often our new chat/IM service was being utilized by our students. Our service went live in August of 2008, but we didn't advertise it until the fall semester began in early September. This gave us a month to get our feet wet on the practice chats that we sent amongst ourselves and on those rare serendipitous encounters with real users (though it turns out that we weren't very good at responding until we began getting a steady supply of questions). When the semester began, we advertised the new service on our library blog, on the website homepage, and in our library instruction classes. We later added flyers in the library restrooms. Our marketing efforts paid off, and we saw significant usage right away (see fig. 4). The downturns in December, January, and March can be attributed to between-semester breaks and/or chat logging problems.
Figure 4. Number of chats received per month
When are they using it?
We didn't really need a graph to tell us that the chat service was a hit: we could feel it when we were working the reference desk. Most of the chat and IM clients -- including the most popular one, the Meebo chat box -- only allowed one librarian to be logged into the account at a time, so only one person on our typical two-person reference desk team was able to field the questions that popped up. And this duty was on top of helping our walk-up and telephone patrons. Some of the librarians began to feel barraged and pulled in different directions. One colleague mused that there must be a button on the floor in front of the reference desk that activates the chat window whenever anyone walks up. It always seemed that in-person patrons and chat patrons would appear and disappear at the same time. We were clearly starting to feel a staffing crunch, so I looked at incoming chats by hour for September, October, and November of 2008 to see if there was a pattern and to see if it coincided with busy times on the desk (see fig. 5).
Figure 5. Number of chats by hour of day for September, October, and November of 2008.
There was a fairly consistent pattern of busy times and slower times for each month. The busiest time for chats was from 10:00 a.m. to 11:00 a.m., and a second peak came between 1:00 p.m. and 2:00 p.m., which also tends to be a busy time for walk-ups at the reference desk. Another peak time during both October and November was between 6:00 p.m. and 7:00 p.m., when we usually have only one librarian working at the reference desk. This confirmed that what we were feeling was true: we were getting chats at inconvenient times. We began discussing whether we should continue staffing chat from the reference desk or whether we should have librarians take turns monitoring it from their offices. We had tried running our QuestionPoint chat service this way years before (out of our offices) and no one liked it because we felt locked down for hours at a time. Therefore, we dismissed this option and looked for other solutions.
How long are the chats?
When we mark statistics in our reference log for walk-up or phone encounters, we do so subjectively. A particularly long and involved question might merit two or three tick marks (or more, depending on the librarian), while a simple question will only get one mark. However, when we tally our chat encounters, we count each conversation as one, so our total effort is not represented by simple numbers. To get a better picture of how much effort is involved, I looked at chat conversations by their length. The graph below (fig. 6) shows that 60% of our conversations (197) lasted under 5 minutes, though 17% were over 10 minutes in length.
Figure 6. Number of chat transactions by conversation length, represented in minutes.
What is unique about chat conversations, though, is that librarians can put the conversation on hold for minutes at a time to assist walk-up patrons, and chat patrons can put the conversation on hold for several minutes to test out search strategies before coming back with more questions. Therefore, conversation lengths are not a guaranteed way to assess librarian effort. The fact that we can put conversations on hold and that the majority of questions are short makes staffing chat at the reference desk reasonable and doable most of the time.
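The length distribution behind fig. 6 is a simple bucketing exercise. This sketch uses invented durations purely for illustration; the published percentages came from the full spreadsheet of 300-plus transcripts, not from these sample numbers.

```python
from collections import Counter

def bucket(minutes: float) -> str:
    """Assign a conversation length to one of three illustrative buckets."""
    if minutes < 5:
        return "under 5"
    elif minutes < 10:
        return "5-10"
    return "over 10"

# Made-up conversation lengths in minutes, for illustration only
durations = [1.5, 3.0, 4.2, 7.5, 12.0, 2.2, 6.1, 15.3, 0.8, 4.9]
counts = Counter(bucket(d) for d in durations)
share_short = counts["under 5"] / len(durations)
print(counts, f"{share_short:.0%} under 5 minutes")
```

Running the same tally over every transcript's computed length column is all that is needed to reproduce a graph like fig. 6.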
Are we doing a good job?
No analysis of a new service would be complete without assessing its quality. We knew that our chat service was a big hit and we knew that we were very busy with it, but were we doing it well? We didn't have time at first to perform a detailed content analysis of transcripts, but what I was able to do was calculate what I termed "resolution" rates. A resolved question, in my analysis, was one in which a librarian responded to the initial patron question, and then the patron responded back. Because we answer chat questions during reference desk shifts and are sometimes unable to respond immediately, the "resolved" distinction allowed us to determine how long a patron would wait for a response. The assumption is that a patron who does not reply to a librarian's initial response has terminated the conversation without receiving an answer, but that a patron who does reply has received an answer to his or her question. This is by no means a perfect method for determining whether or not a patron has received an answer, for the following reasons: 1) a patron may receive a perfectly useful answer from a librarian but, for any number of reasons, not reply back; this question is not "resolved," though it was answered. 2) A patron may respond with a "thank you" or some similar reply to a librarian's request that he or she hold for an answer. The librarian comes back several minutes later with an answer, but the patron may have already given up and left the conversation. This question would be marked as "resolved" even though the patron never received the answer.
Despite these shortcomings, a breakdown of "resolved" questions by a librarian's initial response time shows exactly what one would expect. The more quickly a librarian responds to a question, the more likely it is to be resolved (see fig. 7).
Figure 7. Resolved and unresolved questions by initial librarian response time.
Ninety percent (90%) of questions that were responded to by a librarian within 20 seconds went on to be resolved, while questions that were left hanging for 3 minutes or more only had a 29% resolution rate. Our overall resolution rate was only 64%, and we made it a goal to reach an 80% resolution rate, which could be achieved with a response time of one minute or less. In order to reach this goal even when we are busy with walk-up patrons, we try to make a quick "please hold" response to chat patrons and check back with them every few minutes to update them on anticipated hold time. Chat patrons on hold will often busy themselves with other tasks while waiting for us to answer their questions.
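The cross-tabulation behind fig. 7 pairs each conversation's response lapse with its resolved flag and computes a rate per lapse bucket. The records below are invented for illustration; the bucket boundaries (20 seconds and 3 minutes) follow the thresholds discussed above.

```python
def resolution_by_lapse(records):
    """Return the resolution rate per response-time bucket.

    records: iterable of (lapse_seconds, resolved) pairs.
    """
    buckets = {"<=20s": [0, 0], "21s-3min": [0, 0], ">=3min": [0, 0]}
    for lapse, resolved in records:
        if lapse <= 20:
            key = "<=20s"
        elif lapse < 180:
            key = "21s-3min"
        else:
            key = ">=3min"
        buckets[key][0] += resolved  # True counts as 1
        buckets[key][1] += 1
    return {k: (hit / n if n else 0.0) for k, (hit, n) in buckets.items()}

# Invented (lapse in seconds, resolved) rows, for illustration only
records = [(10, True), (15, True), (12, True), (18, False),
           (60, True), (90, False), (200, False), (240, True), (300, False)]
print(resolution_by_lapse(records))
```

Run over the real spreadsheet rows, this kind of tally is what surfaced the gap between a 90% resolution rate for 20-second responses and 29% for responses delayed three minutes or more.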
Outcomes and Next Steps
This analysis of chat transcripts for the first full three months of our service confirmed for us that it was being utilized and that we were busier than ever at the reference desk. Probably the most valuable data was the correlation between librarian response times and resolution rates. We do now make a concerted effort to make early contact with chat patrons even if we are not going to be able to fully answer the question right away.
We did decide that it was a problem for only one librarian at a time to be able to monitor chat, so we investigated other options and found a replacement solution. Like Meebo, LibraryH3lp is a web-based chat utility that can be embedded as an anonymous chat box on a website. It can also be configured to work with the Pidgin software for easy monitoring and chat logging. But unlike Meebo, with LibraryH3lp we can have more than one librarian monitoring a single chat box. This has enabled us to have both reference desk librarians on chat duty, so that if one of us is busy helping a walk-up patron, the other can field chat questions. We can also have someone temporarily log on from his or her office to monitor chat during times when the reference desk librarians are both too busy with walk-up patrons. LibraryH3lp was developed by librarians for libraries, so it works better for us on many levels. While Meebo is free to use, there is a small annual fee for using LibraryH3lp, but it has been worth it, especially considering how much our service is used.
Since this analysis, our chat usage has continued to increase. We added a text messaging service, which is available through our AIM account, but the most chats by far still come through our LibraryH3lp chat box on our website. In September 2009, we received 529 chats, IMs, and texts. Though we do still have some difficult moments managing all of our reference services, most of us have become accustomed to juggling chats, phone calls, and walk-up patrons fairly well, and we foresee staffing our chat service from the reference desk indefinitely.
The next step in our chat service assessment is to perform a content analysis of our transcripts to determine what sort of reference training needs we have as a group. With traditional reference, interactions are not recorded, so assessment is difficult, but we have a unique opportunity with our chat transcripts to do some quality assurance. We are not interested in singling out individual librarians for reprimand, but only in finding particular areas of deficiency on which we can offer training to our whole reference department. It is our hope that our chat service will not only increase the quantity of reference interactions (and it has already done that) but also the quality.
The MTSU Library virtual reference webpage is located at http://library.mtsu.edu/help/needhelp.php.
LibraryH3lp (documentation). Retrieved August 30, 2009, from http://libraryh3lp.com.
University of North Carolina Libraries. (2008, April 12). Pidgin Setup for Library IM Services. Retrieved August 30, 2009, from http://www.lib.unc.edu/reference/eref/pidgin/.