Tennessee Libraries 

Volume 60 Number 2
 

 2010

 

 Using the Carnegie Classification System to Compare Like Universities: A Tool for Collection Development 

 by

Sanda Orr and Jim Nance

University of Tennessee, Martin

 


TLA 2010 Conference Abstract: Using the Carnegie Classification System, librarians at Paul Meek Library conducted a study to discover which commercial online resources similar universities subscribed to. We reduced the list to the resources subscribed to by at least 40% of those universities and compared Paul Meek Library's holdings against it. This is a good tool for libraries to use in making collection development decisions.


Collection development has always been a task on which librarians spend a great deal of time.  More recently, the focus has shifted somewhat from book collections to electronic database collections.  We invest a large amount of time deciding which electronic databases best serve our users.  Periodical holdings are compared, coverage dates are analyzed, PDF and HTML formatting is scrutinized (is full color important?), and many other nuances are factored into the decision-making process.  In the past, for monographic selections, we had Books for College Libraries.  BCL provided a list of a core group of titles that were probably viable for similar libraries.  BCL was just one tool in the selection process.  It would be wonderful if the library world had a similar tool for the selection of online resources.

One method is to identify schools that are similar to ours, see which databases they subscribe to, and compare our library's holdings to theirs.  A good way to generate a list of comparable schools is the Carnegie Foundation Classification web site (http://classifications.carnegiefoundation.org/).  The classification system has six inclusive categories and one elective category, with multiple variables within each category.  The Foundation's web site provides a detailed breakdown of all of the options available.  In generating the group of comparable schools for this project, the categories used were:

1) Level (4-year or above);
2) Control (public);
3) Undergraduate Instructional Program (Balanced arts & sciences/professions, some graduate coexistence);
4) Enrollment Profile (Very high undergraduate);
5) Undergraduate Profile (Full-time four-year);
6) Size and Setting (Medium four-year, Primarily residential). 

Using the various categories and the variables available is a quick and easy way to generate a customized list of schools that share similar traits.  In a matter of minutes a list can be generated, and the effect of each criterion on the numbers seen, with just a few clicks of the mouse.  Using the criteria listed above, this study started with a pool of 27 state-supported universities that were primarily undergraduate with some graduate programs.  The average enrollment was approximately 7,100, and all of the schools were classified as medium-sized.  Geographic location was not available as an option, so it was not considered.
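If the classification data were exported to a spreadsheet or CSV file, the same filtering could also be scripted.  The minimal Perl sketch below shows the idea; the file name, column names, and field values are hypothetical and would need to be matched to the real data.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV;

    # Hypothetical column names and values; adjust to the actual data file.
    my %wanted = (
        level        => '4-year or above',
        control      => 'Public',
        ugrad_prog   => 'Balanced arts & sciences/professions, some graduate coexistence',
        enroll       => 'Very high undergraduate',
        ugrad_prof   => 'Full-time four-year',
        size_setting => 'Medium four-year, primarily residential',
    );

    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<', 'carnegie_classifications.csv' or die "open: $!";
    $csv->column_names($csv->getline($fh));   # first row is the header
    while (my $row = $csv->getline_hr($fh)) {
        # Keep an institution only if every category matches the target value.
        my $match = 1;
        for my $col (keys %wanted) {
            $match = 0 unless ($row->{$col} // '') eq $wanted{$col};
        }
        print "$row->{institution}\t$row->{enrollment}\n" if $match;
    }
    close $fh;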

Once the list was generated, approximately 1,771 total resources were identified.  The list was compiled by an IT student who typed each resource title into a spreadsheet, with each entry containing a link to that university's database web pages.  A senior computer science major then went to each of the sites and located the web page whose list of electronic resources most closely matched our needs.  Of the 27 schools initially included, we had to drop 5 because access was restricted to their own students and staff.  Once all of the data was retrieved, the student worker developed a number of Perl scripts to strip out unnecessary words.  Once he had a suitable list of titles and schools, he generated a web page for the librarians that compiled all of the data, along with a list of those schools with active links, so a librarian could evaluate and standardize the titles.  In order to standardize the titles on the list, and to reduce the number of entries, each entry was examined to determine the following:
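The student's Perl scripts are not reproduced here; the sketch below is only an illustration of the kind of cleanup they performed, assuming tab-separated input of school, raw title, and URL, with a hypothetical noise-word list.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative noise-word list; the real scripts used their own rules.
    my @noise    = qw(online the a an database databases);
    my $noise_re = join '|', map { quotemeta } @noise;

    # Assumed input: tab-separated lines of school, raw title, URL.
    while (my $line = <>) {
        chomp $line;
        my ($school, $title, $url) = split /\t/, $line;
        next unless defined $title;
        $title =~ s/\b(?:$noise_re)\b//gi;   # strip unnecessary words
        $title =~ s/\s{2,}/ /g;              # collapse runs of spaces
        $title =~ s/^\s+|\s+$//g;            # trim leading/trailing space
        print join("\t", $school, $title, $url // ''), "\n";
    }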

1) Whether or not the resource provided full text (so that non-full-text titles could be eliminated);
2) Whether the resource was free or paid for by subscription (so that free resources could be eliminated);
3) Whether the resource was of local interest only (so that local titles could be eliminated);
4) Whether two or more similar-looking resources might be the same resource with slightly different wording in the titles;
5) Which vendor provided the resource (this was eventually abandoned, since the purpose of the study was to compare resources, not vendors; vendor information was used only to determine whether a resource was paid for by subscription).

It was generally easy to tell whether a resource was full-text, as most of the database pages had an icon or a note such as "full text: yes/no."  If it wasn't clear from the page, we used our best judgment; for instance, if the resource was titled something like "X Abstracts," we assumed it wasn't full-text.  Most sites also had an icon of some sort to indicate whether the resource was free or paid for by subscription.  If there was no indication but the resource was provided by EBSCO, it was assumed to be a paid subscription.
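As a rough illustration of these judgment calls, the Perl sketch below encodes the two heuristics as a subroutine.  The keyword pattern is an illustrative assumption rather than the exact rule we used, and any such automated guess would only supplement the librarian's review of each entry.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Rough first-pass flags only; each entry still gets a human review.
    sub guess_flags {
        my ($title, $vendor) = @_;
        # A title like "X Abstracts" or "X Index" suggests no full text.
        my $full_text = ($title =~ /\b(?:abstracts?|index(?:es)?|bibliograph)/i) ? 0 : 1;
        # A known commercial vendor (e.g. EBSCO) suggests a paid subscription.
        my $paid = (defined $vendor && $vendor =~ /\bEBSCO\b/i) ? 1 : 0;
        return ($full_text, $paid);
    }

    my ($ft, $paid) = guess_flags('Biological Abstracts', 'EBSCO');
    print "full text: $ft, paid: $paid\n";   # prints "full text: 0, paid: 1"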

Some of the resources on the database list pages were of local interest only, such as a city newspaper (other than, say, the New York Times).  There were also many links to local information, such as city maps or city histories, that probably would not be of interest nationally.  Another frequent entry was a link to the university's OPAC or a local library OPAC.  We eliminated these from consideration because they were not paid for by subscription and because they were not useful in determining which resources we might consider acquiring.

The biggest difficulty in this standardization process was that even if 12 libraries subscribed to, for example, Academic OneFile, they might not have used the exact same wording.  We might have seen the following:

Academic OneFile
Academic One File
Academic OneFile Online
Academic OneFile (Online)
Gale Academic OneFile
Gale Academic One File
Gale Academic OneFile Online, etc.

Our student typed each title exactly as he saw it, and it was quite a task to combine the variants.  To standardize them, for the most part we eliminated vendor names from the titles; if the only version seen had the vendor in the title, we generally left it there.  We also eliminated words such as "online," since they were unnecessary.

Another common difference in wording was the use of an ampersand, for example Wildlife and Ecology versus Wildlife & Ecology.  Each pair was examined to be certain the two were the same resource; once this was verified, the "and" was changed to an ampersand, making the entries a little shorter.
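Taken together, the vendor, "online," and ampersand rules amount to a simple title-normalization routine.  The Perl sketch below illustrates the idea; the vendor list and the "One File" merge are drawn from the examples above and are not an exhaustive rule set.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Collapse spelling variants of a database title into one standardized form.
    sub normalize_title {
        my ($title) = @_;
        $title =~ s/\b(?:Gale|EBSCO)\b//gi;    # drop vendor names (illustrative list)
        $title =~ s/\(?\bOnline\b\)?//gi;      # drop "Online" and "(Online)"
        $title =~ s/\bOne File\b/OneFile/gi;   # merge a split product name seen above
        $title =~ s/\band\b/&/gi;              # prefer the ampersand form
        $title =~ s/\s{2,}/ /g;                # collapse runs of spaces
        $title =~ s/^\s+|\s+$//g;              # trim leading/trailing space
        return $title;
    }

    print normalize_title($_), "\n"
        for 'Gale Academic One File Online', 'Academic OneFile (Online)',
            'Wildlife and Ecology';
    # Output: Academic OneFile / Academic OneFile / Wildlife & Ecology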

When we could not determine whether two or more similar-looking resources were indeed the same resource, we simply left them uncombined.

Once the titles were standardized, we removed some of the resources from consideration.  Our criteria for inclusion on the final list were:

  1. Must be at least partially full text (abstracts, indexes and bibliographies were excluded)
  2. Must be a commercial resource paid for by subscription (free resources were excluded)
  3. Must be subscribed to by more than one university 
  4. Must be of national interest, as opposed to a local/regional newspaper or university OPAC

Once the resources that did not conform to these criteria were removed from the list, we decided to look only at the resources subscribed to by at least 40% of the universities, since looking at too many would be unwieldy and unproductive.
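Applying the 40% cutoff is a straightforward counting exercise once the titles are standardized.  The Perl sketch below shows one way it could be done, assuming tab-separated input of school and standardized title, one line per subscription.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Assumed input: tab-separated lines of school and standardized title.
    my (%schools, %holdings);
    while (my $line = <>) {
        chomp $line;
        my ($school, $title) = split /\t/, $line;
        next unless defined $title;
        $schools{$school} = 1;
        $holdings{$title}{$school} = 1;   # count each school once per title
    }

    my $n = keys %schools;                # 22 schools in this study
    for my $title (sort keys %holdings) {
        my $count = keys %{ $holdings{$title} };
        # Keep only titles subscribed to by at least 40% of the schools.
        printf "%-40s %2d of %2d\n", $title, $count, $n if $count >= 0.40 * $n;
    }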

Using the Carnegie Foundation Classification provides a quick, easily customizable list from which to work.  This list can be used to make purchasing decisions and can also serve as justification for funding.  Our Electronic Resources Librarian uses it as part of a larger list of areas and subjects that are underrepresented in our collection, and of databases she would like to explore when money is available.  If libraries similar in size to ours have Project MUSE and we don't, we can use that fact to help make the case for purchasing it.  The list is also useful for seeing what other kinds of databases exist that our library may not have considered, and for identifying products that are similar to ones we already have but might be better.  If most libraries subscribe to Database X while ours uses Database Y, we might take a closer look at both to see whether Database X is superior.  Part of what librarians do in managing book collections is to see what other libraries are purchasing and what they are doing in general.  Librarians can use the same strategy for electronic collections with a quick, easy, and free resource.

NOTE:  The subscription information contained herein was accurate as of July 15, 2009.
 

