Tennessee Libraries 63(3)

Applying Outcomes-Based Assessment to Information Literacy

by 

Rachel Radom
Learning, Research, and Engagement Librarian
University of Tennessee, Knoxville

Lane Wilkinson
Reference & Instruction Librarian
University of Tennessee, Chattanooga

Matt Jabaily
Integrated Library Systems Librarian
University of Memphis


Originally presented at the Tennessee Library Association Annual Conference (Chattanooga, TN) in April 2013.

Conference Abstract:  You work hard on your teaching, but do your efforts make a difference?  Our panel of experts will suggest strategies on how to define desired outcomes, determine whether they were achieved, and identify appropriate online tools to deliver assessments and analyze the data.  Bring your ideas and questions!

Introduction

Our desire to learn about methods that other schools use to assess their information literacy classes and a looming SACS reaffirmation at the University of Memphis inspired this program.  Perveen and I are indebted to Rachel, Lane, and Matt for agreeing to share their considerable expertise with (and insights about) outcomes-based assessment at the class and program level.

Bess Robinson, Head of Research and Instructional Services
Perveen Rustomfram, Government Publications Librarian
University of Memphis

Information Literacy Assessment Tools and General Education Courses at the University of Tennessee Knoxville Libraries

-- Rachel Radom, Learning, Research, & Engagement Librarian, University of Tennessee Knoxville

Librarians at the University of Tennessee (UT) Libraries in Knoxville are developing an information literacy assessment plan for General Education courses.  As part of this plan, which relies heavily on Oakleaf’s 2009 recommended best practices in assessment plan design, librarians are creating a standardized set of library-related student learning outcomes and assessments tailored to core lower-division undergraduate classes that request library instruction, including First-Year Studies (FYS) 101, First-Year Composition (English 101, English 102, and Honors English 118), and Communication Studies 210 courses.  Below are select features of the UT Libraries’ assessment program, with an emphasis on the methods and tools used to collect assessment data.

Many students first receive library instruction in FYS 101 classes.  Because this course does not have a research assignment, the library session is an introduction to the library and its website.  The main learning objectives are to distinguish between effective and ineffective searches, to identify where to begin searching from the library’s homepage, and to know where and how to ask a librarian for help.  Assessing these outcomes at the end of the 50-minute session takes the form of a “clicker” quiz using the TurningPoint student response system software.

In English 101, all students are required to submit an argumentative paper that includes outside sources, and class sections visit the library for instruction in finding and evaluating sources.  An in-class activity introduces a method of evaluating sources and serves as a formative assessment.  Approximately three weeks after the session, summative assessment takes place when students complete a 12-question follow-up survey via SurveyMonkey.  Students are asked whether they used library resources for a paper that semester and whether they evaluated their sources.  Those students who state that they evaluated sources are then asked to explain how they did so.

Public Speaking classes at UT include a library assignment in the customized textbook.  In spring 2012, the assignment included a Blackboard component, the results of which librarians can access for assessment purposes; previously, all work was submitted to course instructors on paper, and librarians had no access to student responses.  Achievement of the session’s student learning outcomes, which relate to broadening and narrowing searches and evaluating sources, is currently being assessed using the Atlas.ti text analysis program to code a sample taken from approximately 800 responses.  Other programs under consideration for related assessments include SoftChalk and Qualtrics.
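
Drawing a manageable sample from a large pool of free-text responses is a recurring step in this kind of assessment work.  The short Python sketch below shows one way to pull a reproducible random sample for coding; the file name, column layout, and sample size of 100 are hypothetical illustrations, not details of the UT Knoxville workflow.

    import csv
    import random

    # Hypothetical export: one student response per row.  The file name and
    # columns are illustrative, not the actual Blackboard export format.
    with open("blackboard_responses.csv", newline="", encoding="utf-8") as f:
        responses = list(csv.DictReader(f))

    random.seed(42)  # fixed seed so the same sample can be re-drawn later
    sample = random.sample(responses, k=100)  # e.g., 100 of ~800 responses

    with open("coding_sample.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=responses[0].keys())
        writer.writeheader()
        writer.writerows(sample)

Fixing the random seed means the sampling step can be audited or repeated later, which matters when more than one librarian shares the coding.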

Student learning outcomes and course assignments drive assessment design, as do other factors, such as the staff available to review responses, but the tools used to deliver assessments are a significant consideration in any assessment plan (Oakleaf, 2009).  The availability of a wide variety of software programs and other assessment tools gives librarians flexibility in designing appropriate assessments, but determining the best tool for the task can be challenging.  In any assessment plan, librarians will spend significant time making that determination, with the aim of improving library instruction and demonstrating its value to the university.

Assessing Library Instruction at the University of Tennessee at Chattanooga’s Lupton Library

-- Lane Wilkinson, Reference & Instruction Librarian, University of Tennessee Chattanooga

When it comes to one-shot classes, the library instruction team at the University of Tennessee at Chattanooga is committed to developing a dynamic and responsive curriculum for first-year Rhetoric and Writing students (ENGL 1010 and 1020). To that end, librarians have embraced instructional assessment as a core component of the curriculum and a vital tool for managing curricular objectives and gauging instructional effectiveness. In the latest iteration of instructional assessment, the team implemented an asynchronous pre- and post-test model, as recommended by Schilling and Applegate (2012). For this type of assessment, the librarians at UTC recommend setting clear learning objectives, considering structural constraints, targeting specific data points, and at all times remaining sensitive to the strengths and weaknesses of student assessment.

Effective pre- and post-testing requires that survey questions be based on clear, measurable learning objectives (Hufford, 2010). However, since care must be taken to avoid library jargon (Lindauer, 1998), the team at UTC held faculty focus groups with ENGL 1010 and 1020 instructors to identify appropriate learning outcomes. These outcomes tracked both the ACRL Information Literacy Competency Standards and CCCC (Conference on College Composition and Communication) best practices.

Structural constraints were also guiding factors. Time was the most significant, given that the standard one-shot library instruction session is only 50 minutes. To work within that limit, assessment was moved outside of the classroom, which introduced issues of faculty buy-in: Composition faculty would be responsible for making sure that their students took both the pre- and post-test. Finally, by its nature, pre- and post-testing is more conducive to quantitative measurement, so qualitative assessment was kept to a minimum.

The pre-test and post-test were designed to capture specific, predetermined data points organized around behavioral, affective, and performance indicators (Schilling and Applegate, 2012). Of 14 pre-test questions, the first four addressed common behaviors, such as whether students had ever checked out a book. The fifth question measured affect by asking students to reflect on several common research practices (e.g., evaluating a website) and rate their level of comfort on a Likert-type scale. Questions 6 through 13 targeted concrete skills, including database navigation and keyword searching. The final question was open-ended, inviting student opinion. The pre-test was designed in SurveyMonkey, and a URL to the test was sent to students at least one week prior to library instruction. The post-test was administered at least two weeks after the library session and measured the same data points, though using different questions.

Several limitations to asynchronous pre- and post-testing were readily apparent at the end of the semester. First, post-testing received far fewer responses (n=136) than pre-testing (n=1014), which led to a large margin of error and some unreliable results. Second, pre- and post-testing is fairly inflexible, insofar as the curriculum cannot be adjusted without discarding all assessment data to that point; as a longitudinal method, pre- and post-testing requires a consistent curriculum over time. However, the pre- and post-testing at UTC was ultimately a success, because it provided actionable data that is currently being used to redesign library instruction and some aspects of composition instruction.
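
The gap between those response counts is easier to appreciate with a quick margin-of-error calculation. The Python sketch below uses the standard formula for a sample proportion at a 95% confidence level with the conservative p = 0.5; it is an illustration, not a calculation from the original UTC analysis.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a sample proportion.

        p = 0.5 is the conservative choice that maximizes the error term.
        """
        return z * math.sqrt(p * (1 - p) / n)

    # Response counts reported for the UTC pre- and post-tests.
    for label, n in [("pre-test", 1014), ("post-test", 136)]:
        print(f"{label} (n={n}): +/- {margin_of_error(n):.1%}")

    # Output:
    # pre-test (n=1014): +/- 3.1%
    # post-test (n=136): +/- 8.4%

An error band nearly three times wider helps explain why some of the post-test results were considered unreliable.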

Outcomes Assessment and SACS Reaffirmation

-- Matt Jabaily, Integrated Library Systems Librarian, The University of Memphis

Universities and colleges in Tennessee and the rest of the Southeast must receive reaffirmation of accreditation from the Southern Association of Colleges and Schools Commission on Colleges (SACS COC) every ten years.  Part of the reaffirmation process is a compliance certification, which begins with the institution performing a self-study to determine whether and how the school meets a variety of core requirements and comprehensive standards.

Several of these requirements and standards directly mention libraries. Core Requirement 2.9 focuses on providing and supporting “student and faculty access…to adequate library collections and services and to other learning/information resources consistent with the degrees offered.” There are also three comprehensive standards.  The first, 3.8.1, deals with ensuring the institution has adequate learning and information resources.  The third, 3.8.3, is aimed at making sure the libraries employ a sufficient number of qualified staff.  The second standard, and the one most relevant to instruction, is 3.8.2: “The institution ensures that users have regular access and timely instruction in the use of the library and other learning/information resources.” (For current and complete SACS accrediting standards, including all standards referenced here, see http://www.sacscoc.org/principles.asp.)

Although the standard explicitly mentions only instruction related to information resources, there are other factors to consider.  For example, SACS expects that distance students have the same resources and opportunities as on-campus students, so there should be an off-campus or online analogue for all on-campus instruction.  There is also an implied expectation of assessment.

SACS reaffirmation is itself an assessment, but assessment is important to SACS reaffirmation in other ways, especially in its focus on “Institutional Effectiveness.”  SACS looks for evidence that “[t]he institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results" (SACS Comprehensive Standard 3.3.1).  Although institutional effectiveness is a Core Requirement in itself, it should be addressed throughout the compliance certification.

A presentation by Dr. Crystal Baird, Vice President with SACS COC, is available on the SACS website and is entitled “Understanding & Responding to the Library-Related Standards” (2011).  On the topic of fulfilling standard 3.8.2, Baird suggests thinking about the following questions:

  • How do students/faculty know [online instruction] is there?
  • How do you know the instruction is effective?
  • Are they achieving the desired learning outcomes?
  • Is the instruction “Regular and Timely”?

If institutions and libraries want to answer these questions, they have to do more than just list the number of classes they teach and how many students attend.  They must have clear learning outcomes and be able to document how they are achieving them.

There are consequences for providing insufficient evidence of compliance on the library-related standards.  In her presentation, Baird lists the percentage of “negative findings” for the library standards: for each standard, about one-fifth of institutions are found to have inadequately demonstrated compliance in their initial report.  Most of these problems are addressed in later stages of reaffirmation, and accreditation actions are rare for library-related standards.  Nonetheless, it’s something of a black eye to be found deficient at any stage of the reaffirmation process.

In order to better understand the SACS reaffirmation process and its relevance to libraries, I am currently conducting research examining examples of SACS compliance audits, focusing specifically on the library-related standards.  I believe this research will be valuable for several reasons.  First, it would be helpful for institutions preparing for reaffirmation to know what others have done.  Second, compliance audits can be a good source of data.  This data can be a valuable supplement to surveys because, unlike surveys, compliance audits are mandatory and represent the official response of the institution rather than a single person’s perspective.

Finally, I am interested to see if the 2004 changes to the reaffirmation process are living up to expectations, specifically with regard to libraries.  Prior to 2004, the reaffirmation guidelines included 480 “must” statements that were prescriptive and specific.  The current guidelines are fewer, more open, and more subjective (Nelson, 2004).  SACS billed the changes as shifting its role from overseer to partner, one that would help institutions improve themselves (Carter, 2006).  When the new rules were released, some were pleased with the added flexibility, but others feared that the new standards were overly vague and would lead to lower standards, especially with regard to libraries.  With my study, I hope to see whether the changes have led to in-depth reflection and self-improvement, as the designers of the new reaffirmation process hoped, or to lowered expectations for libraries, with standards that are impossible to enforce.

I began my study with a sample of 10 institutions, focusing on Level VI institutions, the highest SACS classification, comprising schools that offer at least four doctoral degrees.  I started with compliance audits that are publicly available online and plan to solicit others in the future.  All of the institutions in the sample completed their reaffirmations in 2009 or later, and I began by reviewing the responses for Comprehensive Standard 3.8.2.

The first question one might have when addressing a standard is, “How much should I write?”  SACS does not give guidance about the number of words or pages, and among the documents in my sample there is no consensus.  The mean word count of the 3.8.2 write-ups in my sample was 1,897 words (about five single-spaced pages).  The longest was 5,372 words (more than 13 pages) and the shortest was 694 (less than two pages).  There was also a great deal of variation in the number of supporting documents: the mean was 29.4, with a range from 10 to 80.
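
For readers who want to check the page estimates, the short sketch below converts the reported word counts to pages, assuming roughly 400 words per single-spaced page; the conversion factor is an assumption for illustration, not a SACS figure.

    # Rough page equivalents for the reported 3.8.2 word counts, assuming
    # about 400 words per single-spaced page (an illustrative conversion).
    WORDS_PER_PAGE = 400

    for label, words in [("mean", 1897), ("longest", 5372), ("shortest", 694)]:
        print(f"{label}: {words} words is roughly {words / WORDS_PER_PAGE:.1f} pages")

    # mean: 1897 words is roughly 4.7 pages
    # longest: 5372 words is roughly 13.4 pages
    # shortest: 694 words is roughly 1.7 pages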

Turning to the content of the responses, a few elements were common to several institutions: number of classes taught, number of students in those classes, reference statistics, and mention of subject guides or online tutorials.  Although “information literacy” is not explicitly mentioned in standard 3.8.2, eight of the ten compliance audits made some reference to it.  The responses were not uniform, however, and no common core was discernible.

With regard to assessment, there was consistency only in that all ten of the compliance audits for standard 3.8.2 mentioned “assessment” somewhere in the text.  Seven mentioned administering some kind of quiz or test; some institutions did regular testing across all areas of instruction, but others mentioned testing only for a single class or online tutorial.  Six institutions included some mention of outcomes, though they varied in how formally these outcomes were written and assessed.  Five of the institutions said that they had administered LibQUAL+, and several shared their results.

Although my results are still limited, I can offer some advice to librarians as they think about what they need to do to prepare for SACS reaffirmation.  First, I recommend starting early.  Starting at least two or three years before your reaffirmation date will give you the ability to document a history of continuous improvement.  You should also be keeping records of your assessment activities regardless of where you are in the reaffirmation cycle.  You do not want to have to scramble to document your efforts, and it is good to have baseline data.  Also, think about the big picture.  Although several of these standards mention the library explicitly, they are standards for the institution.  Think about how your instruction and assessment are important to those outside the libraries.  Be active in making sure your administrators are thinking about information literacy and don’t be afraid to remind them that successful libraries are critical to the reaffirmation process.  Finally, I would recommend viewing reaffirmation as an opportunity to formalize the type of self-assessment you should already be doing as part of your library’s efforts to provide the best possible resources and services to your patrons.

References

Baird, C. (2011). Addressing library-related standards in the compliance certification [Presentation slides]. SACSCOC 2013 Orientation Meeting, January 31, 2011. http://www.sacscoc.org/staff/cbaird/Libraries%20presentation.pdf

Carter, D. (2006). Identifying and exploring issues of compliance: An analysis of the external peer review process for institutions seeking reaffirmation of their accreditation during 2005-2006. http://www.sacscoc.org/pdf/COC%20Research%20Project.pdf

Hufford, J. R. (2010). What are they learning? Pre- and post-assessment surveys for LIBR 1100, Introduction to Library Research. College & Research Libraries, 71(2), 139-158.

Lindauer, B. G. (1998). Defining and measuring the library’s impact on campuswide outcomes. College & Research Libraries, 59(6), 546-570.

Nelson, W. (2004). SACS standards 2004: A compliance strategy for academic libraries. The Southeastern Librarian, 52(3), 10-18.

Oakleaf, M. (2009). Writing information literacy assessment plans: A guide to best practice. Communications in Information Literacy, 3(2), 80-89.

Schilling, K., & Applegate, R. (2012). Best methods for evaluating educational impact: A comparison of the efficacy of commonly used measures of library instruction. Journal of the Medical Library Association, 100(4), 258-269.

Creative Commons Attribution-NonCommercial