OLAC Conference Reports

Newsletter Issue: 
Vol 37 no. 4 December 2017

Conference Reports from the 2017 OLAC Conference, Richmond, VA

Jan Mayo, Column Editor

 

There’s More than One Way to Groom a Cat(alog) – Preconference – Presented by Annie Glerum and Kathryn Lybarger

submitted by Scott Dutkiewicz, Clemson University

This preconference on metadata manipulation tools was a step toward filling the need of many catalogers and metadata wranglers, as evidenced by the attendance, which filled every seat in the room. The presenters, Annie Glerum and Kathryn Lybarger, are eminently qualified to instruct, being, respectively, Head of Complex Cataloging at Florida State University Libraries and Head of Cataloging and Metadata at the University of Kentucky Libraries. The session opened with Lybarger explaining regular expressions and engaging the attendees with Regular Expressions Bingo. There was also hands-on experience with the command line, using it in combination with regular expressions to ferret out sometimes elusive data, such as call numbers that begin with P. Glerum followed Lybarger with instruction on how to navigate XML "trees," using the analogy of trees or genealogical tables to accurately locate nodes and their relations. Unfortunately, time did not permit much expansion on the different technologies (OpenRefine, Excel, MarcEdit) or many real-world examples. Glerum encouraged participants to play with the technologies, which is probably the best way to grasp the possibilities. In light of the keynote address given by Regina Reynolds the next morning, these technologies proved to be essential: Reynolds noted that catalogers must exploit machine intelligence to reduce cataloging piecework. So while this preconference did not meet its full potential due to time constraints, the effort is greatly appreciated, and similar presentations or workshops should be on OLAC's future agenda.
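To give a flavor of the approach (an illustration of mine, not an example from the session), the same call-number hunt can be scripted in a few lines of Python:

import re

# A toy batch of call numbers; the goal is to ferret out the ones
# whose classification begins with P, as in the session's example.
call_numbers = ["PS3545.I345 A6 1990", "QA76.73.P98", "PN1995.9.B3", "HD30.2"]
p_class = [cn for cn in call_numbers if re.match(r"^P", cn)]
print(p_class)  # ['PS3545.I345 A6 1990', 'PN1995.9.B3']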

 

Video, Audio, Digital, and All That Jazz: Bibliographic Transformation in an Era of Too Much “Stuff” -- Keynote Speaker Regina Reynolds, Director of the U.S. ISSN Center and Head of the ISSN Section at the Library of Congress

submitted by Barbara Tysinger, University of North Carolina at Chapel Hill

 

Translating from Voltaire (Le mieux est l'ennemi du bien), who borrowed it from Pescetti (Il meglio è nemico del bene), Reynolds encourages us to consider the implications of the aphorism, "The perfect is the enemy of the good." In this age of data proliferation, both real and spurious, do librarians have the luxury of crafting perfection? What things must be perfect? When is "good" "good enough"? For years we have struggled with the challenge of making "hidden" collections available and accessible when the time and resources for full, "proper" cataloging are limited. But when there is "too much stuff," this task becomes more and more difficult, as well as more imperative. Acknowledging that the best material is of little value when the user cannot find it, how do we quickly identify "the good stuff"? We need to curate and evaluate, rather than insist on cataloging everything as we have in the past.

 

Reynolds points out that there were no cataloging standards or guidelines for the first online resources. Starting from existing practices, catalogers had to think outside the box, relying on common sense and judgment and adjusting as necessary. In essence, they were making it up as they went along.

The emerging world of linked data also demands that we think outside the box, or, outside the record, and explore new ways of making “the good” available. We need to seek new ways to do more, better.

By eliminating duplicate effort, partnering with other expert communities, incorporating machine-assisted "bionic" processing, crowdsourcing metadata, and adapting and using external data sources, we have the opportunity to link previously disparate resources, providing access to more "good stuff" than is possible when restricted to the realm of "the perfect" cataloging record.

To facilitate this cooperation, metadata as we know it is evolving. The newest IFLA Library Reference Model (IFLA LRM), a consolidation of FRBR, FRAD, and FRSAD, was developed to resolve inconsistencies between those three standards. IFLA LRM was specifically designed to support and promote the use of bibliographic data in linked data environments.

Not surprisingly, there are several projects exploring these possibilities, and Reynolds offered examples of a few, including the Linked Jazz Project, which uses Linked Open Data technologies, and the Library of Congress' Labs Project, including "Beyond Words," a crowdsourcing project to identify and index images in LC's World War I-era newspaper collection (http://labs.loc.gov/ and scroll down).

However, any database, or linked data system, is only as good as its individual elements. Bad data does not become good simply because it is linked to other data (good or bad). And some things still must be perfect. With that in mind, Reynolds issued us a new challenge. Determine what must be perfect and what can simply be good. Find the good stuff. And the other good stuff. Link. Repeat.

Reynolds opened her address with a classical aphorism, encouraging us to seek "the good" and to accept that "good" is often "good enough," something which is not always easy. I will close my summary with a quote from pop culture, which perhaps reflects how many of us feel in today's deluge of data:

… times are rough
And I got too much stuff
Can't explain the likes of me.

-- Jimmy Buffett, One Particular Harbor

 

Applying Library of Congress Faceted Vocabularies Workshop -- Presented by Adam Schiff, Principal Cataloger, University of Washington Libraries

submitted by Sandy Roe, Illinois State University

Schiff's presentation, one of the first on this topic, was given to a packed room of engaged attendees. Despite the two-hour time constraint that necessitated a "whirlwind tour," there was time for questions and some debate between audience members and the speaker.

The objective to tease out separate terms for discrete elements is much different from the often compound Library of Congress Subject Headings (LCSH) with which we are more familiar. One example Schiff provided for illustration is the Library of Congress Demographic Group terms "Japanese Americans," "Women," and "Teachers," rather than the LCSH "Japanese American women teachers." If you have time for nothing else, study Schiff's two Background slides (3-4), which describe the problem that faceted vocabularies were developed to resolve, and the new functionality that they will provide for users (slide 169), along with examples of discovery system implementations (slides 170-181).
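To make the contrast concrete (a hypothetical encoding of mine, not taken from the slides), the three demographic terms for a work created by Japanese American women teachers could each be recorded in a separate 386 (Creator/Contributor Characteristics) field:

386 ## $a Japanese Americans $2 lcdgt
386 ## $a Women $2 lcdgt
386 ## $a Teachers $2 lcdgt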

Although a few portions were skipped due to time constraints, Schiff's extensive slides, plus exercises and their answers, cover LCGFT general terms, moving image terms, music terms, and cartographic terms; LCDGT; and the other facets of time period of creation, language, place of creation, country of producing entity, and music medium of performance. He described the available draft manuals and works in progress by various constituencies and where to find each one, including the Draft LCGFT Manual, the Music Library Association's Best Practices for Using LCGFT for Music Resources, and the Library of Congress Demographic Group Terms Manual. SACO proposals for additional needed terms may be made for both LCGFT and LCDGT.

The vocabularies themselves can be found in several places, and the presentation included screenshots and an explanation of each. Library of Congress Genre/Form Terms are available as authority records in OCLC and SkyRiver, through Classification Web and Catalogers Desktop (if you have a subscription), in the Library of Congress Genre/Form Terms PDF files (updated annually), through the LC Linked Data Service, and in Library of Congress Authorities.

The Library of Congress maintains the list of LCDGT's 11 demographic group categories and their codes (age, education level, ethnic/cultural, and so forth). The LCDGT vocabulary itself is available from Classification Web and Catalogers Desktop (if you have a subscription), the LC Linked Data Service, the annually created PDF file, and Netanel Ganin's list. New and changed terms are posted in the monthly lists, to which free subscriptions can be arranged. Full MARC 21 authority records are available to download. Note that LCDGT authority records are not available in OCLC Connexion or SkyRiver.

After covering those basics for each vocabulary, Schiff got down to the business of how to select and assign terms, explaining, for example, the LCGFT rule of three and rule of four, and how to code the terms in MARC bibliographic or authority records as appropriate. He referenced the manuals but also indicated places where instructions are not yet finalized and suggested interim practices. The presentation was also peppered with useful group exercises.

In the final slide, Schiff calls attention to the fact that we "need critical mass of bibliographic metadata that includes these faceted attributes." For a thoughtful look at implementing faceted vocabularies in current cataloging and in our legacy records, so that we can provide users of our data with meaningful displays and the ability to facet results, read the ALCTS CaMMS Subject Analysis Committee's white paper "A Brave New (Faceted) World: Towards Full Implementation of Library of Congress Faceted Vocabularies."

The most current version of Schiff's slides, along with his speaker notes, exercises, and answers, is available from his home page.

 

Foundations in Linked Data and BIBFRAME Workshop -- Presented by Amber Billey, Metadata Librarian, Columbia University

submitted by Jan Mayo, East Carolina University

Billey has been working with linked data in various capacities since receiving her MLIS in 2009. She realizes its concepts can be hard to digest and apply and has developed instructional materials and lectured on the topic, most recently at NASIG 2017 with colleague Robert Rendall, where they gave a more in-depth version of this OLAC presentation. It can be found here.

While professing that it might be difficult to fit that presentation into two hours, she promised to do her best. Her agenda consisted of four main areas: linked data 101, ontology basics, TURTLE tutorial, and RDA elements in BIBFRAME.

The development of linked data was prompted by Tim Berners-Lee, who invented the World Wide Web (WWW) in 1989. Linked data has four basic rules:

1) use URIs (uniform resource identifiers) as names for things,

2) use HTTP URIs so people can find those names,

3) when someone looks up a URI, they should find useful information, and

4) include links to other URIs so people can discover more useful things.

Linked open data is ranked on a five-star scale: one star means it is available on the web; two stars, that it is available as machine-readable structured data; three stars, that it is in a non-proprietary format; four stars, that it meets the first three criteria and uses open standards to identify things; and five stars, that it meets the first four and also links to other people's data to provide context. BIBFRAME currently falls between four and five stars.

An ontology is a formal way of naming and defining elements and their interrelationships. MARC tagging is a partial ontology. Billey does not think use of MARC will stop anytime soon, but libraries are trying to escape the MARC silo so our information will become accessible through the web. RDF (Resource Description Framework) is the data model being used. She outlined some of the differences between MARC and RDF.

BIBFRAME is a linked data vocabulary that is formally encoded for machine processing. It relies heavily on MARC so that libraries will not lose their legacy data. It is meant to accommodate FRBR and RDA, but without being wedded to a specific structure or content standard. Billey then explained the various types of standards: structure (MARC, BIBFRAME), content (RDA, AACR2, DACS), value (LCSH, NAF, LCGFT) and encoding (3 x 5 card, MARC, XML).

Through a series of illustrative slides, Billey discussed the formation of RDF triples used to link data, how they can be used in XML, and how they can be grown into more complex interrelationships. Then she reviewed other terms used in BIBFRAME in more detail, providing numerous links with additional explanation and examples.
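To make the triple model concrete, here is a minimal sketch of mine (not code from the workshop) that builds statements about a hypothetical BIBFRAME Work using the open-source rdflib Python library:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
g = Graph()
work = URIRef("http://example.org/work/1")   # hypothetical URI for the thing described
g.add((work, RDF.type, BF.Work))             # one triple: subject, predicate, object
g.add((work, RDFS.label, Literal("Hamlet"))) # a second triple about the same subject
print(g.serialize(format="turtle"))          # emits the graph in TURTLE syntax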

Next, she gave a tutorial on how to use TURTLE (Terse RDF Triple Language), which incorporates BIBFRAME terms, using several slides of examples. She also listed many other tools with links that could potentially be helpful. Finally, she showed how the RDA elements in BIBFRAME could be used in linked data.
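As a taste of the syntax (again a hypothetical sketch of mine, not Billey's), a TURTLE snippet linking a BIBFRAME Instance to its Work can be parsed back into triples with the same rdflib library:

from rdflib import Graph

turtle_data = """
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/> .

ex:instance1 a bf:Instance ;
    bf:instanceOf ex:work1 .
"""
g = Graph()
g.parse(data=turtle_data, format="turtle")
for subject, predicate, obj in g:   # every parsed statement is a triple
    print(subject, predicate, obj)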

Billey was an entertaining speaker, often amusing her audience while explaining difficult ideas. She took numerous questions as she went, making for a lively workshop.

 

Fun and Games: Cataloging 3D Objects Workshop -- Presented by Scott Dutkiewicz, Cataloger, Clemson University

submitted by Julia Frankosky, Michigan State University

 

It is not uncommon for libraries to collect 3D objects: from toys and games to cookware, sewing machines, and educational aids such as anatomical models and rocks. Like anything in the library, if a patron cannot find them, what is the point of having them? Dutkiewicz's workshop covered how to tackle cataloging 3D objects, which can be a rather intimidating feat, especially for oddly shaped items. The main takeaways I left with were: do not agonize, and do not overthink things, especially with regard to the 33X fields. Use the information that you can glean from the item, add helpful 500 notes to clarify what you are trying to describe, and remember that "a picture is worth a thousand words," so consider adding a photo of the object in the 856 field when the exact words elude you.

Before this session I never really thought about how to distinguish between a game and a toy, but when it comes to assigning these items to the appropriate type of visual material in the 008 field, this distinction matters, as toys and games each have their own codes (t and g, respectively). Dutkiewicz explained that the best way to tell the difference is that games tend to be competitive in nature with a prescribed set of rules, whereas toys are designed for play without explicit rules. When trying to sort this out, keep in mind the item's primary intent and its intended use.

Dutkiewicz's workshop was peppered with images of 3D objects one might catalog, such as taxidermy scenes and sculptures. He also brought in actual items, such as the board game Monopoly and a crocodile hand puppet, and showed sample records for the objects while we talked through them as a group (including discussing areas where the records could be improved). The last portion of the workshop focused on how the cataloger can help patrons find this material by including appropriate MARC relator terms, subject headings, and genre terms from the many available ontologies we have at our disposal (such as LCGFT, LCDGT, AAT, and FAST); if something is just too unique to be appropriately handled by existing ontologies, you can always create a local subject heading. The most important thing to remember is that anything can be cataloged; you just might need to think outside of the box.

 

Inclusivity through Documentation Workshop --Presented by Jessica Schomberg, Media Cataloger/Library Assessment Coordinator, and Jennifer Turner, Instructional Services Librarian, Minnesota State University, Mankato

submitted by Amanda Mack, University of California, Los Angeles

 

Schomberg and Turner conducted a workshop that encouraged attendees to be aware of accessibility issues and the ways in which they can construct documents to make them straightforward and easy to use. For this workshop, the word "documents" encompassed a broad range of materials, from internal procedure documents meant for staff to brochures or webpages targeted at the public.

The presenters introduced principles of instructional design and document design. The purpose is to assess the needs of users by considering such factors as the information to be communicated, characteristics of the audience being targeted, and the best ways to convey the information. Once a document is in use, it should be evaluated to determine what works, as well as what could be improved.

Accessibility concepts were also addressed. Although there are laws in place, such as the Americans with Disabilities Act, accessibility is also a matter of ethics. Attendees were encouraged to think more broadly when it comes to accessibility by considering such conditions as color blindness, dyslexia, or even depression, which can cause a person to have difficulty focusing. To be certain we are providing accessibility, it is important to evaluate our resources for problems. For example, if a web resource does not lend itself well to use with a screen reader, an entire group of people (the visually impaired) is being denied access. It was also suggested that, whenever possible, we plan ahead and anticipate needs rather than waiting for someone to request an accommodation, as this can lower barriers to access.

Much of the presentation focused on three subsets of design concerns and how they can be applied to document creation. The first step was to introduce an illustration of the Gestalt Principle, a theory from psychology that looks at the whole picture, rather than the individual elements. The illustration showed the different ways our brains process information, such as filling in gaps, tying together things in close proximity, and being informed by past experiences.

Next, Turner and Schomberg covered the visual elements, including images, proximity of text to images, and fonts. Images (including tables and diagrams) should be helpful, and can be used to bring clarity to the text. Poor proximity of words in relation to pictures makes the brain work harder to decipher the meaning. Fonts matter as well. If a font is too small, or unfamiliar, it also takes more brain energy to process, which detracts from the content. Clear examples of these can be found in the presentation slides.

The final design concern addressed was plain language. The purpose of plain language is conveying information in language that is easy to understand, so the reader can focus on the content, rather than the language itself. This includes using short sentences in active voice, using headings and white space to make it easier to follow, and avoiding embedded clauses and parenthetical statements that break up an idea. It is important to note that plain language does not necessarily equate to a shorter document.

For a group activity, attendees were asked to select one document out of several provided. We were then supposed to evaluate that document for what it did well in terms of accessibility and plain language, and what could be improved. Attendees were then asked to share their comments with the group. Common complaints across documents were small fonts, unclear language, and confusing forms.

The presenters emphasized that it is important to evaluate your documentation and revise it as needed. It can be extremely helpful to have someone else look at it, because what is clear to us might not be clear to someone else. There are also a variety of accessibility tools to help with this process. For example, Microsoft Office has an accessibility checker that can help identify content that might be difficult for people with disabilities to read. There are also online contrast checkers that help identify issues for people with low vision, and color blindness simulators that can output an image to show what it would look like to people with different types of color blindness.

In the end, if we work to create clear documents, the people using them will make fewer errors, which means fewer questions and less additional work for us.

 

Video Games Cataloging Workshop -- Presented by Rachel Jaffe, Metadata Librarian, University of California, Santa Cruz

submitted by Deborah Ryszka, University of Delaware

 

Jaffe gave a thorough and in-depth presentation on the basics of video games cataloging. Her presentation provided novice and experienced catalogers with a hands-on opportunity to learn about the specific MARC fields needed to create, upgrade, and edit bibliographic records for video games. Jaffe’s two-hour workshop covered the descriptive elements necessary to produce full and complete bibliographic records, as well as the headings needed to provide subject and genre access to these materials.

Jaffe’s learning objectives for the session were: to gain an understanding of the unique characteristics of video games, to identify RDA instructions and MARC fields used in cataloging video games, and to become familiar with OLAC’s Best Practices for Cataloging Video Games and other sources that augment the instructions given in RDA.

The workshop began with a detailed explanation of the special characteristics of video games and a description of the various platforms and formats used in gaming. Video games are published on a variety of platforms—personal computers, gaming consoles, handheld and mobile devices, and online. These games appear in an assortment of formats—cartridges, computer discs, and online. It is important to note and record these distinctions in bibliographic records, and there are specific MARC fields where these features should be mentioned. Information about platform specifics can be recorded in MARC fields 250, 538, and 753. Format characteristics and information are indicated in MARC fixed fields, as well as 3XX and 500 fields.

Jaffe gave attendees guidance on how to transcribe the title proper and other title information for video games. She noted that the source of information for the title proper of video games is the title frames. If title information from that source cannot be viewed, catalogers are instructed to take title information from a label printed on or affixed to the resource or from the container. When one of these options is followed, a note should be made indicating this.

Additionally, Jaffe discussed at length franchise titles and how these should be recorded and coded. Titles of video games that are part of franchises (e.g., Grand theft auto, The legend of Zelda, Need for speed) are particularly troublesome; current cataloging rules have never dealt well with the modern concept of a franchise in video cataloging. For the specifics of recording titles, other title information, and franchise titles, Jaffe suggested consulting the appropriate rules in RDA and the specific guidelines for these areas in the OLAC best practices guide.

In the past, many video game catalogers have routinely recorded the platform of a video game as an edition statement. Many catalogers debate whether the platform name as presented on the resource is a true and valid edition statement. The OLAC best practices document recommends that catalogers transcribe a statement of the platform as an edition statement in the MARC 250 field. If this type of information is lacking and the cataloger deems it useful to the user, the platform designation is to be recorded in brackets in the 250 field. In addition to noting platform information in the MARC 250 field, Jaffe reminded those in attendance that platform names also can be recorded in MARC fields 538 and 753.
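For example, a hypothetical record for a PlayStation 4 game might carry the platform in all three places (my illustration, not one of Jaffe's examples):

250 ## $a PlayStation 4.
538 ## $a System requirements: PlayStation 4 console.
753 ## $a PlayStation 4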

Jaffe thoroughly explained the values that catalogers need to use to code MARC fields 336, 337, and 338. Further discussion focused on MARC fields 344, 346, and 347, and the vocabulary used in those distinct fields. Mention was made of the new controlled vocabulary lists that were recently added to the MARC 21 documentation. These lists should be consulted and noted when recording data in the 3XX fields.

Applying genre headings to video games is problematic because no authorized list of these headings for video games currently exists. A task force is working on creating a list of genre headings with the hopes of having it published by the Library of Congress or OLAC. While waiting for this list, catalogers should follow the guidelines in the OLAC best practices guide for assigning genre terms. Those instructions tell catalogers to use the general genre heading “Video games” for all video games and to assign headings for specific game genres, if applicable.

The session concluded with a lively discussion on how to code for language in bibliographic records for video games. In instances where viewing the game is not possible, catalogers must rely on other sources or parts of the game to gather information about the language of the item. When this situation occurs, Jaffe advised using the label on the game, the container for the game, and/or an accompanying guide or instruction booklet to determine language content. If it would be helpful to the user or to clarify certain aspects of the bibliographic record, Jaffe urged catalogers to make a 546 note to explain the source of language information.

 

Basic Audio Recordings Cataloging Workshop --Presented by Mary Huismann, Music Catalog Librarian, St. Olaf College

submitted by Michelle Hahn, Indiana University

 

Huismann started off with the assumption that members of the audience were already familiar with RDA and had access to the RDA Toolkit. She then began the presentation by showcasing the differences between AACR2r and RDA, as much of the audience is more familiar with the former. Such differences include: a greater concern for data elements; the establishment of data points to describe content, media, and carrier; the permission to transcribe inaccuracies in data, such as misspellings; the prohibition of most abbreviations and the modification of several to "symbol" status; and the allowance of more cataloger's judgment.

She also reiterated that RDA is a content standard, not a prescription for use of the MARC framework, and is organized in such a way that it more closely follows the Functional Requirements for Bibliographic Records (FRBR) structure. That structure organizes entities into three groups: Group 1 covers works, expressions, manifestations, and items (affectionately called WEMI); Group 2 covers persons and corporate bodies; and Group 3 covers concepts, objects, events, and places. FRBR is now being consolidated with the other Functional Requirements (Functional Requirements for Authority Data (FRAD) and Functional Requirements for Subject Authority Data (FRSAD)) into the Library Reference Model (LRM) by the International Federation of Library Associations and Institutions (IFLA).

As Huismann entered into the practical application of RDA for audio recordings cataloging, she made sure we were familiar with the terminology found throughout the guidelines:

"alternative," which provides a substitute action to the previous guideline

"optional," which allows foregoing a guideline

"exception," which supersedes the previous guideline in a particular instance

"either/or," which invokes a specific action based on the item in hand

"agency preparing," which allows for local policy and greater cataloger's judgment

"transcribe," meaning to take the exact wording of a descriptive element, while applying certain guidelines for symbols, punctuation, and capitalization

"record," which allows us to accommodate the spirit of the content even if it is not an exact transcription of the data

She also noted particular abbreviations that are still permitted specifically for music-related content, such as choral voicing (e.g., SATB), op./no. in access points, duration (e.g., min. and sec.), and thematic index numbers.

After the more introductory and clarifying information, Huismann delved into the specific application of the RDA content standard in the MARC 21 framework, as well as decisions in record creation, such as whether to input a new record, how to determine the mode of issuance, and primary and substitute sources of information. By following the familiar order in which many of us catalog, Huismann flowed through RDA guidelines by number and explained how each was applied in the MARC 21 framework specifically for audio recordings. She also included a broad variety of audio formats and types of academic content beyond musical recordings.

Throughout the session, Huismann made sure to mention the various resources which prove to be exceptionally helpful in cataloging audio recordings, including OLAC’s own best practices, the Music Library Association’s (MLA) best practices, and the well-maintained Music Cataloging at Yale website, which music librarians have come to rely on for a multitude of information related to music organization and cataloging.

 

Advanced Audio Recordings Cataloging Workshop -- Presented by Mary Huismann, Music Catalog Librarian, St. Olaf College

submitted by Scott Piepenburg, Valdosta State University

 

This was the second of the sessions given by Huismann on the cataloging of audio recordings; the earlier one being on the basics of audio recordings cataloging. She said that this session would not duplicate that information but start with it and build on it.

The session began with the cataloging of spoken word recordings, a format many catalogers are not often faced with, the vast majority of sound recordings in libraries being of music. Sources of information were discussed, including other manifestations. Of particular note was the use of OBI strips, or spine cards, which are found primarily on Japanese recordings but are also used on some non-Japanese recordings; these spine cards can be a wealth of information. Also emphasized was that edition statements are important for spoken word recordings, to differentiate them from similar titles.

Something I had never encountered before was the "date table of precedence," or the order in which applicable date types should be used. This order is r, s, p, t, q, and it is covered in OCLC's Bibliographic Formats and Standards (BFS). Dates should be recorded in standard 4-digit format in the 2XX field, but the EDTF structure can be used in the 046 tag when recording a date of capture/recording/performance. Other numerical information of significance is the UPC value (which is 12 digits) or the EAN value (which is 13 digits). These are useful because many systems store these values, and they can often be searched simply by scanning them with a barcode scanner.
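Because the digit counts are easy to mix up, the distinction can even be sanity-checked in a few lines of Python (a sketch of mine, not part of the session); a 12-digit UPC is a 13-digit EAN with an implied leading zero, so a single mod-10 check-digit test covers both:

def ean_check_ok(code: str) -> bool:
    # Validate the check digit of a 12-digit UPC or 13-digit EAN.
    if not (code.isdigit() and len(code) in (12, 13)):
        return False
    digits = [int(d) for d in code.zfill(13)]  # pad a UPC out to EAN-13
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(ean_check_ok("036000291452"))  # True for this valid sample UPC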

The 336, 337, and 338 tags were covered in detail, as they often differ for spoken word recordings versus musical recordings; their usefulness to searchers was also discussed. This was expanded to the use of the 34X tags to describe the storage format, including "analog vs. digital" capture and storage formats, allowing for an eye-readable value as opposed to the coded value in the 007. RDA also provides coded values for these areas.

The importance of the 546, which is required, was also stressed, along with capture information in the 518 tag. Relationship designators are covered in RDA 18.5 and Appendix I.

The session moved along to streaming audio. A significant amount of time and energy was expended on what "streaming audio" actually is: if it is something that is "stored" or "burned" and has permanency, then it is not streaming audio. Oftentimes these resources are played or accessed via a computer, thus requiring the use of two 007 tags: one for the musical aspect and one for the computer aspect. Reference was made to the OLAC and MLA best practices guides for the cataloging of these formats. An important distinction was made for provider-neutral sources, meaning those not accessed via a license or proprietary login/authentication. In these circumstances, the access URL for the file may be recorded in the 856 tag; however, a URL that requires authentication because the resource was purchased by a specific site should not be recorded in the 856 tag.

The session moved on to cover "funky formats." Sometimes a cataloger does not know what they have in front of them. Tools such as VSO Inspector, KProbe, DVD Identifier, and VLC media player can often offer guidance in these areas. Some examples of these funky formats are SACD (Super Audio CD), hybrid SACD, Blu-ray audio, enhanced CD, MP3, DualDisc, and USB. Analog formats were also covered, such as vinyl recordings (encompassing 33 1/3, 45, and 78 rpm), reel-to-reel, 8-track, and cassettes, along with some obscure digital formats such as DAT and MiniDisc. Other more obscure formats were covered in less detail.

With its plethora of examples and citations, even advanced catalogers would do well to review the slides from this presentation; they stand very well on their own without the narrative.

 

Cartographic Resources Cataloging: Navigating the Basics Workshop -- Presented by Paige Andrew, Maps Cataloging Librarian, Pennsylvania State University

submitted by Xiping Liu, University of Houston

 

Paige Andrew, widely recognized as an expert in maps cataloging, gave both the basic and advanced workshops on cartographic resources cataloging. This report will focus on the basic map cataloging skills taught in the first session. Andrew started the presentation by introducing some general sources for map cataloging:

Cartographic materials: a manual of interpretation for AACR2, 2002 revision / Anglo-American Cataloging Committee for Cartographic Materials.

Maps and related materials: cataloging, classification and bibliographic control / Paige G. Andrew and Mary L. Larsgaard, eds. New York: Haworth Information Press, 1999.

RDA and cartographic resources / Paige G. Andrew, Susan M. Moore, and Mary Larsgaard. Chicago: ALA Editions, 2015.

 

He noted that even though the first book describes the rules under AACR2, it still provides a lot of illustrations for titles and physical descriptions and remains a valuable source for map cataloging.

Andrew then covered some general aspects of cartographic resources and their description. Maps are primarily graphic in nature: they convert 3-D reality into a 2-D substitute, which is why they must include scale and projection data. Because a map sheet may contain a main map and other ancillary maps, it is important to understand that the bibliographic description is of the "main map(s)" only. Insets/ancillary maps are treated differently, usually in a 500 note. Andrew also explained the difference between a "panel," which is part of the map sheet itself, and a "cover," which is separate from the actual map. The two terms are often used in the source-of-title note.

Before delving into each descriptive area, Andrew briefly talked about his workflow for creating an original record. He recommended spending some time looking over the map first to get an idea of what is covered in it: the title, scale, projection statement, coordinates, date, statement of responsibility, publisher, topic, etc. He also suggested determining how many main maps are to be described before starting the description process. Authority work can be done either before or after the description is completed.

After that, Andrew began going through the main descriptive areas in the bibliographic record. Titles can be tricky because there can be more than one title to choose from. RDA 2.3.2.5 gives the instructions on handling more than one form of title: if the sequence, layout, and typography do not provide the basis for a clear choice, choose the most comprehensive title. In other words, the title selected should always include the geographic area and the subject matter of the cartographic item; if no subject/topic is involved, it must always include the geographic area covered. A source-of-title note is needed if the title does not come from the map itself, e.g., if it comes from the panel, the verso of the item, or the cover, container, envelope, etc.

Andrew then moved on to explain scale types and scale statements. A scale is the ratio representing the relationship between a specified distance on a map and the actual distance on the ground. The scale statement is recorded in 255 $a. A scale statement can appear on a map in three forms:

 

Representative fraction (RF) form

Verbal statement

Bar or graphic scale

 

Andrew then showed a few examples of how to calculate the representative fraction from a verbal statement; some basic conversion rates can come in handy in such calculations. After that, Andrew showed a short video on how to use the natural scale indicator to calculate the scale from a bar or graphic scale, followed by a few hands-on exercises. If the scale is not shown in any of the three forms mentioned above, catalogers can supply "Scale not given" in the 255 field, or "Not drawn to scale" or "Scales differ" if a map is not drawn to scale. RDA uses "Scale approximately 1:XXXX" instead of "Scale [ca. 1:XXXX]," which was the format under AACR2.
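As a worked example of the verbal-statement arithmetic mentioned above (mine, not the workshop's): one mile is 5280 feet x 12 = 63,360 inches, so "one inch to one mile" converts to the representative fraction 1:63,360. In Python:

INCHES_PER_MILE = 5280 * 12  # 63,360

def rf_denominator(map_inches: float, ground_miles: float) -> int:
    # Convert "X inches to Y miles" into the RF denominator.
    return round(ground_miles * INCHES_PER_MILE / map_inches)

print(f"Scale 1:{rf_denominator(1, 1):,}")  # Scale 1:63,360
print(f"Scale 1:{rf_denominator(2, 5):,}")  # Scale 1:158,400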

For projection statements, if there is one on the resource, record it in 255 $b and supply the correct code in the fixed field accordingly.

The next descriptive area is the coordinates; RDA 7.4 covers the geographic coordinates element. Coordinates are expressed as a set of points of longitude and latitude and are recorded in a specific order (left, right, top, bottom) in 255 $c and 034 $d, $e, $f, $g. While coordinates are not a core element, Andrew stressed that it is important to include them in the record because of their increasing importance for user retrieval. Andrew then introduced a very handy tool called "Klokan Bounding Box": search for a place on its map interface, and it will automatically provide the coordinates in MARC format for catalogers to grab.

The last descriptive area covered was the physical description area. RDA 3.4.1.3 instructs catalogers to record the number of units and the type of units for the extent of the manifestation, for example "1 map" or "5 profiles"; the "type of unit" list is mixed in with a long list of exceptions under 3.4.1.3. For dimensions, place the map in "reading position" first and then measure from the neatline, always top to bottom followed by side to side. Measurements are given in centimeters, rounded up to the next full centimeter. If the map is folded into a panel, cover, or envelope, record the measurements of these component parts as well. If there is no neatline to measure from, Andrew said to record the size of the sheet and not to worry about the size of the map. After the general instructions, every participant in the workshop was given a map of downtown Richmond on which to practice measuring dimensions.

Throughout the workshop, Andrew passed around physical maps to show the participants issues he mentioned in the presentation. It was a very informative workshop for catalogers who did not have much experience working with cartographic resources. Many participants stayed to take the advanced cartographic resources cataloging workshop.

 

Advanced Cartographic Resources Cataloging Workshop -- Presented by Paige Andrew, Maps Cataloging Librarian, Pennsylvania State University

submitted by Tachtorn Meier, Yale University

 

The topics covered during the Advanced Cartographic Resources Cataloging workshop were an overview of subject analysis and classification, an analysis of map reproductions (paper and digital), and a comparison of Globes and Atlases with sheet maps.

 

Andrew began the workshop with a brief overview of the basic principles of cartographic materials. One convention he pointed out is that it is common for the authorized access point to be recorded under the corporate body name even when an individual's name also appears on the map and it is understood that this individual created the cartographic work. Andrew commented that this is an acceptable practice so long as an access point has also been created for the individual.

Next, Andrew presented a comparison of the bibliographic descriptions used for atlases, globes, and sheet maps. Atlases are cartographic resources; however, the source of information for describing atlases is the same as for describing monographs (see RDA 2.2.2). As with monographs, the title and statement of responsibility must come from the same source; other physical details are described if present, and only the height is given for dimensions. Fixed fields specifically for atlases are "e" for the type of record and "d" for the specific material designation (007). Other conventions include the G schedule of the Library of Congress Classification (G1000-G3122) for 050/052, the unit type "atlas" in the 300, and coordinates in the 034 and 255.

Globes are three-dimensional representations. Typically there is not as much textual information on globes as on sheet maps, so Andrew suggested paying attention to the physical details of the globe, such as the meridian, cradle, or outer ring. For the bibliographic record, Andrew recommended the following: in the fixed fields, code "e" for the type of record and choose the appropriate code for relief type; use the 007 for globes; use the G schedule of the Library of Congress Classification (G3160-G3171) for 050/052; use the unit type "globe" in the 300; record other physical details, such as "mounted on wood stand" or "plastic meridian ring," in 300 subfield "b"; and record the diameter of the globe in 300 subfield "c".
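Putting those pieces together, a hypothetical 300 field for a globe (my illustration, not one of Andrew's examples) might read:

300 ## $a 1 globe : $b color, plastic, mounted on metal stand ; $c 31 cm in diameter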

Andrew explained that globes have limited sources of information aside from the typical title, legend, etc., and often globes lack a publication date. Andrew asked that anyone interested in learning more about internet resources designed to date globes contact him directly.

Andrew then moved on to discuss coordinates (see RDA 7.4). He confirmed that recording coordinates is a core element for BIBCO libraries. Under AACR2, recording the coordinates is optional; the cataloger should record coordinates if they appear on the source, and if not, the cataloger may choose to supply coordinates or leave them out. He strongly recommended including coordinates for all cartographic resources, especially because the information can be used in the form of a metadata bounding box for GIS applications. Andrew also explained that geographic coordinates refer to the latitude lines, or parallels, which run east to west parallel to the equator, and the longitude lines, or meridians, which run north to south between the North Pole and the South Pole. In the bibliographic record, coordinates are expressed as hemisphere (H), degrees (DDD), minutes (MM), and seconds (SS). Andrew remarked that he is part of a group looking into changing this rule; recording coordinates in decimal degrees is still an option. Record them in the order westernmost longitude (HDDDMMSS), easternmost longitude (HDDDMMSS), northernmost latitude (HDDDMMSS), southernmost latitude (HDDDMMSS), and supply "0" to fill any missing digits.
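To make the HDDDMMSS pattern concrete, here is a small Python sketch of mine (not Andrew's) that converts a decimal-degree value into the padded form:

def marc_coord(dd: float, is_longitude: bool) -> str:
    # Format a decimal-degree value as the MARC-style HDDDMMSS string.
    hemi = ("E" if dd >= 0 else "W") if is_longitude else ("N" if dd >= 0 else "S")
    dd = abs(dd)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = round((dd - degrees) * 3600 - minutes * 60)  # simplified: no carry at 60
    return f"{hemi}{degrees:03d}{minutes:02d}{seconds:02d}"

# Richmond, VA sits at roughly 77.44 degrees west, 37.54 degrees north:
print(marc_coord(-77.44, is_longitude=True))   # W0772624
print(marc_coord(37.54, is_longitude=False))   # N0373224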

Next, Andrew demonstrated how to use the Klokan Technologies Bounding Box tool. Andrew cautioned that this tool draws the bounding box oriented north to south, while many maps are oriented from northeast to southeast.

Andrew shared with attendees that the Library of Congress has created online training for the Library of Congress Subject Headings (LCSH). This announcement appeared in the October 2017 issue of base line, a newsletter of the Map and Geospatial Information Round Table.

With regard to the analysis of cartographic resources, Andrew encouraged catalogers to pay attention to the title, legends, and other textual information they encounter during the cataloging process. The map title alone might not provide the whole picture of the intent of the map, but together with the textual information and legend, it may point directly to the appropriate subject assignment.

There are different levels of geographic name headings, starting with the world, followed by country; place names lower than the country level are arranged in up to two levels of jurisdiction. Similarly, geographic subdivisions are also arranged in no more than two levels of jurisdiction. For example, a city name in the United States is qualified by its state name, and usually the state name is expressed in an abbreviated form, such as Richmond (Va.); some state names, such as Ohio and Hawaii, are not abbreviated when used as a qualifier. There are three geographic free-floating terms: Region, Metropolitan Area, and Suburban Area. They can be used to bring out the specificity of a given geographic area. For example, a map showing Richmond, VA and its surroundings would be assigned "Richmond Metropolitan Area (Va.)" as its subject, because "Metropolitan Area" brings out the "surrounding area" aspect. However, Andrew recommended including both "Richmond Metropolitan Area (Va.)" and "Richmond (Va.)" in the subject area in this situation, because the free-floating terms, if used, are generally not validated or controlled in OCLC.
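In MARC terms, that advice translates into two 651 fields (hypothetical fields of mine, not from the workshop):

651 #0 $a Richmond Metropolitan Area (Va.) $v Maps.
651 #0 $a Richmond (Va.) $v Maps.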

There are many topical headings related to geographic resources. They are recorded in 650 fields and may be subdivided geographically by two, and in some cases three or four, levels of jurisdiction. If needed, use MARC field 662 to record a hierarchical form of a place name. Topical headings related to geographic resources may or may not be allowed to be subdivided geographically; as an example, Andrew described how "Geology" can be subdivided geographically but "Geology, Stratigraphic" cannot. Andrew presented a short list of topics commonly assigned to maps, such as railroads, land use, soils and agriculture, and zoning and real property. He added that there are currently seventy-five specific genre terms for cartographic materials. For the G schedule of the Library of Congress Classification, Andrew suggested that catalogers consult the 4th edition of "Classification: Class G; Geography, Maps, Anthropology, Recreation," published in 1976.

The last topic Andrew covered in this advanced workshop was cartographic reproduction. Andrew discussed the steps required to describe the item in hand (the reproduction) as if it were the original: code "r" in the fixed field (008/006) and provide both the reproduction date and the original date (if available); code "f" (type of reproduction) in the 007; note the reproduction in 300 subfield "b"; add information regarding the original version (if available) in a 534 or 776 field; and add subfield "v" Facsimiles in subject fields. If available, the original date should be included in the 050 as the last element in subfield "a"; finally, record the reproduction date in subfield "b" after the author cutter number. To demonstrate, Andrew used a map that was reproduced from an original blue line print (also called a "reverse blue line print") as an example.

Toward the end of the workshop Andrew shared his experience with providing bibliographic descriptions for map reproductions from the digital resources at his library. He then proceeded to go through the MARC fields related to this project. There are some differences between describing paper reproductions made from digital sources and those made from print sources. Publication dates and places are taken from the digital source. Catalogers should add a 530 to reflect the digital resource, a 533 to reflect the type of reproduction, and an 856 to link to the digital version. Andrew suggested consulting chapter eleven of "Cartographic Materials: A Manual of Interpretation for AACR2, 2002 Revision" (2nd edition, published in 2003) and chapter eight of the "Map Cataloging Manual," published in 1991.

Andrew concluded the workshops by enthusiastically answering all questions and comments. It was clear from the level of attendee participation that his highly engaging presentation had stimulated much thought and reflection on current issues in advanced map cataloging.

 

Basic Video Cataloging Workshop -- Presented by Jay Weitz, Senior Consulting Database Specialist, OCLC

submitted by Jessica Robertson, Central Rappahannock Regional Library

 

Weitz led a presentation covering many aspects of how to interpret information and catalog video recordings. He joked that although this was a basic session, nothing about video cataloging is basic. He then encouraged the audience to join MOUG and OLAC if they deal with music or video cataloging respectively. He also invited everyone to use the best practice guides available from OLAC’s Cataloging Policy Committee (CAPC) and from the Music Library Association (MLA) Cataloging and Metadata Committee. These are both available freely on the web, and the MLA Best Practices are also available through the RDA Toolkit.

 

He shared the history of the DVD and Blu-ray with a special emphasis on the first release dates of these types of video recordings. He reminded the audience that no DVD from the United States can have a publication date earlier than 1997 (for Japanese DVDs, 1996). No Blu-ray disc can have a publication date earlier than 2006. If there are earlier dates on the items than those just listed, they may very well be important dates but they are not publication dates. Determining dates of visual materials is often difficult because the resource itself has multiple dates listed and because there may be various bibliographic “events” that visual materials have such as an original production date, release date, etc. Use your cataloger’s judgment about how much new material qualifies as “substantial” when coding the date fixed fields, and use the 500 fields freely to explain all dates if more information is needed for your users.
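Those floor dates make a handy sanity check; here is a trivial sketch of the rule (mine, not Weitz's):

# Earliest possible commercial publication dates, per Weitz's history:
EARLIEST_YEAR = {"dvd (US)": 1997, "dvd (Japan)": 1996, "blu-ray": 2006}

def could_be_publication_date(fmt: str, year: int) -> bool:
    # A date earlier than the format's debut cannot be a publication date.
    return year >= EARLIEST_YEAR[fmt]

print(could_be_publication_date("dvd (US)", 1995))  # False: 1995 must be some other kind of date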

 

The rest of the presentation moved through most of the MARC record, highlighting guidelines from RDA and the OLAC Best Practices. OLAC Best Practices 2.2.2.3, for instance, states that the preferred source of information is the "label that is permanently printed on or affixed to the manifestation (e.g. a label on the surface of a videodisc). This choice does not include labels found on any accompanying materials or container." When the presentation moved to the 007 fixed field there were a few moans from the audience, which brought laughter from everyone who could relate. Weitz led the audience through each of these positions and noted specifically that there is still no code for the 4¾-inch dimension, so 007/07 (subfield $h) will still be coded as "z" for now.

 

For most of the presentation, AACR2 rules were not discussed, but an exception was made when the RDA Content, Media, and Carrier fields were mentioned. The GMD from AACR2 was one-dimensional but the RDA 3XX fields are three-dimensional. The GMD should not be used in an RDA record, but the 3XX fields should be included in any record now regardless of whether it is cataloged according to RDA. He specifically noted that the OLAC Best Practices and OCLC recommend including both the terms in 3XX subfield $a and the codes in 3XX subfield $b. The Best Practices document further recommends that terms in subfield $a and their vocabularies in $2 be in a separate field from the corresponding codes in subfield $b and their associated RDA code list in subfield $2. Simply stated, this doubles the 3XX fields needed in the records since the $a and $b terms are from different lists and the $2 is not repeatable.

There were many more practical tips shared throughout this presentation and although there was not much time left for questions, Weitz welcomed any follow-up questions through email.

 

Advanced Video Cataloging Workshop -- Presented by Jay Weitz, Senior Consulting Database Specialist, OCLC

submitted by Beth Thompson, University of North Carolina, Wilmington

Weitz gave a lively session on advanced video cataloging. The session, building on his basic video cataloging session, provided current information on RDA instructions and the MARC 21 format on cataloging video recordings. Online media/streaming media was not covered in this session.

He started out by encouraging those at the session who catalog music materials to join the Music OCLC Users Group (MOUG), and all attendees to join the Online Audiovisual Catalogers (OLAC) group, describing the benefits and support they provide. Weitz also shared work OLAC's Cataloging Policy Committee (CAPC) has been doing, including the best practice documents that have been created for catalogers, such as the Best Practices for Cataloging DVD-Video and Blu-ray Discs, Video Language Coding: Best Practices, and Best Practices for Cataloging Video Games Using RDA and MARC 21. Other best practice documents created by the Music Library Association (MLA) include Best Practices for Music Cataloging Using RDA and MARC 21 and Supplements to Best Practices for Music Cataloging Using RDA and MARC 21. Weitz stated that some of these documents have been integrated into the RDA Toolkit and that there is an initiative to compile all the OLAC Best Practices into a single document.

Weitz gave short histories of DVD video and Blu-ray Discs, discussing the years each format was developed and the evolving or competing technologies during its development. He said if we remembered anything from this session, it should be that commercial DVDs were first introduced in March 1997 (late 1996 in Japan) and that Blu-rays became commercially available in June 2006, so no publication date for these formats can be earlier than those dates.

At this point, Weitz covered some basic video recording fields, such as the title and title proper, statement of responsibility, language, identifiers, duration, and award note. He used the OLAC Best Practices for Cataloging DVD-Video and Blu-ray Discs and the RDA Toolkit to help describe the correct ways to catalog these fields and some differences between the AACR2 and RDA rules. Weitz noted that in current feature films and TV series, the credits no longer appear at the start of a film but are delayed until much later, sometimes even until the end; this has added an extra challenge to cataloging this format.

Among the interesting things I learned: Weitz used the OLAC Best Practices for Video Games document to show how to catalog a franchise title (Franchise title: individual title), such as:

 

245 04 The hunger games: Catching fire.

 

Weitz discussed accessibility content and gave a brief history on the differences between subtitles, captions, subtitles for the deaf and hard of hearing (SDH) and audio enhancements. He used a number of great examples to show these differences.

With regard to the 028 field, there is a newly added second indicator value 6 for the distributor number. The 037 field (Source of Acquisition) is no longer used for video recordings; the 028 is used instead.

Each award should be in a separate 586 (Awards Note) field. This is an open-ended field, and the cataloger can compose a description or use the description from the container. The field does not use final punctuation.

Weitz covered a lot of material during this session. It was all too short as Weitz’s presentations are always informative and enjoyable.

 

Linked Data Initiatives Panel

submitted by Julia Hess, Ball State University

 

The panel opened with a presentation by Sarah Hovde, Cataloger at Folger Shakespeare Library. She dedicated her time to discussing the planning and prototyping process for putting together a digital asset platform at her institution. Beginning with a brief background of the Folger Shakespeare Library, she quickly moved onto the problem: the institution’s plan to grow its digital collections without a way to manage them. In 2016, after determining that no existing platform would suit their needs, the library decided to build their own, working with an external team of developers.

Hovde noted that the Folger Shakespeare Library’s collection is particularly suited for linked data because Shakespeare’s work has been adapted so frequently, creating many different types of relationships to manipulate. The library convened a working group to explore this and build a “prototype prototype” to look at using linked open data in the digital asset platform.

The working group faced a number of challenges: development of a data model with the few guidelines they had been given, cleaning of older data, creation of data for new digital assets, transformation of data to JSON, and communication between internal librarians and external developers. Despite all this, the library ended up with a working prototype that will be going live soon. Hovde ended with a number of recommendations for institutions that might be considering a similar project, such as prioritizing version control, thinking about scalability from the beginning, communicating clearly, documenting everything, and giving staff room to explore.

The panel continued with a presentation by Jeremy Myntti, Head of Digital Library Services at the University of Utah. He spoke about the Western Name Authority File project. Its end goal is to create a linked data ready authority file of names that can be used in conjunction with Library of Congress authorities. He opened with a short biography of Charles Savage, using his name as an example of one that is often found in unstandardized forms in records. He noted that this is caused by a combination of a number of factors, including that vendor platforms generally do not have good authority control solutions and most metadata librarians do not participate in NACO.

In 2016, the University of Utah was awarded a grant to explore the implementation of a regional controlled vocabulary for names to be used for digital collections. After investigating a number of data models and tools, they chose EAC-CPF and CollectiveAccess, respectively. They asked partner institutions to submit name authority metadata for aggregation and reconciliation against the Library of Congress authority file. This resulted in a significant amount of cleanup work and a number of names that could either be updated in the national authority file or added to it.

The researchers are currently in the implementation phase of the project, which primarily consists of working with the selected tools to evaluate their efficacy for the intended purpose. As they look forward, they are considering what it will take to move out of the pilot stage, including ensuring that the vocabulary works with multiple systems. The main advice Myntti offered to libraries preparing to move to a linked data environment is to make sure they have clean data, and specifically to be sure to match their authorities against another authority file.

The panel’s final presentation was given by Heather Pretty, Cataloging Librarian at Memorial University of Newfoundland. She began by discussing the Canadian Linked Data Initiative and her institution’s involvement in it, which has ranged from participation in working groups, to contribution of thesis and authority records with added identifiers, to editing of Wikidata entries for Canadian musicians.

Next, Pretty spoke about plans for her year-long sabbatical, titled “Learning Linked Data.” Her primary goals for the year are to develop knowledge, experience, and expertise, and to participate in linked data initiatives. As part of that, she is beginning a project on Newfoundland soldiers who fought in World War I, specifically those who fought at the Battle of Monchy-le-Preux. She plans to use linked data to trace both the soldiers and their regiment forward and backward from the date of the battle, focusing on geographic data and associated dates.

 

The panelists were all informative and insightful. In an environment in which the phrase “linked open data” is frequently tossed around and librarians often hear about the abstract promises of linked data, it was inspiring to hear about tangible projects that are underway now.

 

Lightning Talks

submitted by Thuy-Anh Dang, University of North Carolina at Chapel Hill

 

The Limits of Subject Cataloging Media in an LGBTQ Collection / Jessica L. Colbert

Colbert spoke about an audiovisual cataloging project she undertook as a graduate student at the Gerber/Hart Library and Archives in Chicago. The difficulty in the project lay in the inadequacy of Library of Congress Subject Headings to describe this particular collection, which consists of LGBTQ erotica.

Colbert found that LCSH coverage of LGBTQ topics was inadequate or even misleading (for example, the current practice of applying the form subdivision “Drama,” which is meant for plays). Colbert also struggled to provide access to the genre of the collection, which tends to straddle the line between erotica and pornography.

 

OLAC CAPC Unified Best Practices Task Force / Bruce Evans

Evans, CAPC Chair, announced that OLAC has approved a task force to work on producing a document that unifies all of OLAC’s best practices guidelines. Marcia Barrett is the chair; the task force will work to produce a rough draft prior to the June 2018 rollout of the restructured RDA Toolkit. The task force will also make a recommendation on whether to integrate the Unified Best Practices document into the Toolkit.

An audience member raised the concern that RDA Toolkit integration means that OLAC’s Best Practices guidelines would no longer be freely accessible but instead would require a subscription to the Toolkit.

 

Adventures in 3D Printed Collections: Lessons Learned / Ann Kardos

Kardos, Metadata Librarian at the University of Massachusetts Amherst, described a pilot project in which she worked to organize, store, describe, and assess preservation needs for a small collection of 3D printed objects. The collection was created by a former graduate student and consists of 3D-printed models of biomolecules as well as the associated program files and extensive image files. Kardos spoke of the challenges in providing meaningful and useful metadata to such a collection.

 

RDA, MARC, DH: OMG, SMH / Catherine Oliver

Oliver, Assistant Professor and Metadata & Cataloging Librarian at Northern Michigan University, described the challenges she encountered in using RDA and MARC to catalog Digital Humanities projects for the university’s OPAC. Oliver found that current cataloging conventions and vocabularies tend to leave such projects under-described. Specific issues included inadequate material type, insufficient relationship designators, and uncontrolled fields.

 

Poster Sessions

submitted by Rebecca Belford, Brown University

 

The five posters addressed a variety of topics: circulation, discovery, metadata, preservation, and e-books. One common theme across the presentations—consistent with sessions throughout the meeting—was the importance of harmonized, clean, and full metadata.

 

The “Hill” Is Alive… with the Sound of Nordic Music: Collaboration in the Nordic Solo Song Collection at St. Olaf College / Kristi Bergland, University of Minnesota, and Mary Huismann, St. Olaf College

This project grew out of Bergland’s DMA project to produce an edition of Nordic songs that was not (yet another) edition of works by Grieg. Of approximately 600 songs in the Nordic Solo Song Collection, 264 scores identified as published before 1923 have been digitized. The collection is housed in Elevator (https://elevator.stolaf.edu/), an open-source, cloud-based digital asset management system. Elevator draws metadata directly from MARC records in the catalog, resulting in mixed description levels: rich description for items fully cataloged in physical form and brief records for those not yet cataloged. Future plans include audio files and diction and translation texts. As of the poster session date, the digital collection was not yet live.

 

Reciprocal Impacts: IU’s Media Digitization and Preservation Initiative (MDPI) and Metadata / Michelle Hahn, Indiana University

Indiana University’s media digitization efforts are well known through the Variations and Avalon projects. Hahn’s poster focused on a corresponding large-scale digitization and preservation project, the MDPI. The poster’s title refers to the interacting impacts of metadata and system development. Metadata issues include harmonizing records created over time under multiple rules and frameworks, including multiple MARC flavors with the legacy 262 field. Harmonization challenges in turn affect display in Avalon.

Everything in the MDPI is being digitized, even when digitization runs ahead of metadata work. Currently, barcodes serve as the match point between digitized versions and the metadata for the physical format. Hahn offered lessons learned: manage expectations, do not underestimate space or time needs, beware of relying on student workers unfamiliar with the subject matter, and “scrub your data!” Slides and the poster image are online at http://tinyurl.com/ReciprocalImpactsSlides and http://tinyurl.com/ReciprocalImpactsPoster, respectively.

 

Getting More out of MARC for Music and Movies with Primo: Strategies for Display, Search and Faceting / Kelley McGrath, University of Oregon

The poster described the procedure for providing better access in Primo to selection criteria important for movies and music. Steps included normalization of fields, particularly MARC field 257, country of producing entity, for movies, and configuring the display of field 382, medium of performance. Country of production is offered in a dedicated movie search. The music search offers a set of facets for musical medium, offering multiple, combinable limits. Challenges in facets include collocation of identical information drawn from different sources or presented in different configurations, and the perennial challenge of aggregates. McGrath emphasized that, as catalogers, we put so much effort into descriptions; we should insist that users can get this information back out. McGrath welcomes comments on the interface, which is live at http://tinyurl.com/ydea3veq.
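
As an illustration of the kind of coding involved (an example constructed here, not taken from the poster), a medium of performance field for a violin sonata might read:

382 01 $a violin $n 1 $a piano $n 1 $s 2 $2 lcmpt

Faceting on the $a values lets users combine limits such as violin and piano in the music search.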

 

The Ins and Outs of DVD Inventory / Jeanne Piascick and Joe Bizon, University of Central Florida

At the University of Central Florida Libraries, DVD cases are in open stacks with discs housed in slimline cases behind a service desk; DVDs and corresponding cases have identical barcodes. A DVD inventory project was created to identify any missing discs or cases. Students scanned barcodes on the shelves and on the cases. An Excel spreadsheet was then generated from Aleph. Once deduplicated, single barcodes remaining on the spreadsheet pointed to potentially missing items, which were then manually searched on the shelves. The good news? Very few missing items: out of a collection of approximately 7,000, only 139 DVDs and 54 cases were identified as missing. The collection will be fully restored after Acquisitions orders replacement DVDs for those not found.
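
The deduplication step lends itself to a short script. The following minimal sketch in Python (constructed here, not from the poster) assumes the shelf scans, case scans, and Aleph export have been combined into a CSV file with a barcode column:

import csv
from collections import Counter

# Count how many times each barcode appears in the combined file.
with open("dvd_inventory.csv", newline="") as f:
    barcodes = [row["barcode"] for row in csv.DictReader(f)]
counts = Counter(barcodes)

# A barcode appearing only once (a case without its disc, or a disc
# without its case) flags a potentially missing item to search by hand.
singletons = [b for b, n in counts.items() if n == 1]
print(len(singletons), "potentially missing items")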

 

Evaluating and Loading Ebook Metadata from OCLC WorldShare Collection Manager / Stacie Traill and Kelly Thompson, University of Minnesota

Metadata sources for ebook packages at the University of Minnesota Libraries are selected “by attempting to balance metadata coverage and completeness vs. effort required.” Sources may be the Alma Community Zone, WorldShare Collection Manager, or content vendors. The presenters found that, in general, WorldShare Collection Manager offered the most complete metadata. MARC records are delivered from WorldShare to Alma via FTP, where they can be normalized and customized.

Maintenance (new records, updates, merges, and deletes) is facilitated by scheduled Python scripts. The presenters pointed to an in-depth discussion of the process and the Python scripting in their article in Code4Lib Journal issue 38.
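
As a rough idea of what such a scheduled job might do (a minimal sketch with assumed host, credentials, and paths; the authors’ actual scripts are described in the Code4Lib article), a nightly task could fetch newly delivered record files:

import ftplib
import pathlib

HOST, USER, PASSWORD = "ftp.example.org", "user", "secret"  # hypothetical
local_dir = pathlib.Path("marc_downloads")
local_dir.mkdir(exist_ok=True)

with ftplib.FTP(HOST, USER, PASSWORD) as ftp:
    ftp.cwd("metadata/new")  # hypothetical remote directory
    for name in ftp.nlst():
        if name.endswith(".mrc"):  # binary MARC record files
            with open(local_dir / name, "wb") as out:
                ftp.retrbinary("RETR " + name, out.write)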

 

Round Table Discussions

submitted by Jan Mayo, East Carolina University

 

On the final day of the conference, much of the morning was devoted to round table discussions, which were very productive. There were five facilitated topics and one additional topic discussed and reported on to the entire group:

Advocating for Cataloging: Demonstrating Our Value, facilitated by Bruce Evans, produced three broad themes: 1) catalogers need to emphasize that we are a public service; 2) we need to be plugged into the broader library via open houses, combined positions, etc.; and 3) we need to articulate the value of cataloging and how it fits into the library’s strategic goals.

Sudden Cataloger: What Do Newbies Need to Know? was facilitated by Autumn Faulkner. The topic applies to new catalogers as well as to seasoned ones suddenly confronted with a new type of material. Both need instruction in non-intuitive matters, such as what makes a good OCLC record and how the cataloging tools are applied; expected competencies and benchmarks would also be useful. An attendee’s suggestion that cataloging is an apprenticeship prompted a room-wide discussion.

Cataloging Horrors: How to Cope When Working With Traumatizing Material did not have a scheduled facilitator, but Barbara Tysinger volunteered. Participants felt it was important to be factual when describing such materials and to avoid being judgmental. There was some discussion of the possible need for trigger warnings. If possible, the cataloger should pass the material on to a colleague who is not disturbed by it. Ways to de-stress were also explored.

 

Nuts and Bolts of Systems Migration, facilitated by Scott Dutkiewicz and Scott Piepenburg, suggested keeping a backup, documenting everything, and finding analogous functions from the old system in the new one. One attendee cautioned that salespeople can sometimes overpromise what those who create the product can actually deliver.

Using Wikipedia, IMDb and Other Sites for Cataloging Media, facilitated by Martin Patrick, identified other useful sites, including CTVA, Anime News, and cartoon wiki data. Responses to the group’s report evolved into a lively and wide-ranging discussion of moving from public to academic librarianship and tips on getting that first job.

Management of Vendor Files of MARC Records, facilitated by Ngoc-My Guidarelli, cautioned that the vendors you meet with are not the programmers, so take what they tell you with a grain of salt. It is possible to negotiate a lot of what you want.

Handouts and further information for many of the sessions are currently available online; they will eventually be added to the Conference Archive.

 

Conference Chair Closing Remarks

Kay Johnson

The hotel staff and accommodations have been wonderful. I would like to thank everyone at the Omni, including Michele, Nancy, Tramon, Ian, Zack, Houston, SianSian, Eric, Eddie, and Brittany and many, many others.

The last and current OLAC Boards have been very supportive of the OLAC Conference Committee, especially Stacie; Autumn, who assured me that hosting a conference is doable; Debra; Matt; Annie; Jeremy; and Haley. Debra just started as OLAC Treasurer and has made our lives a lot easier, helping us save money by figuring out transfers between accounts.

Without the program presenters, there would be no conference. A huge thanks goes to Annie, Kathryn, Regina, Jay, Paige, Amber, Jenny, Jessica, Mary, Scott, Rachel, Adam, Jessica, Bruce, Ann, Catherine, Jeanne, Stacie, Kelly, Kelly, Kristi, Michelle, Sarah, Jeremy, Heather, and the round table presenters.

And there would be no program without the OLAC Conference Program Committee, chaired by Stacie Traill, with the following members: Marcia, Thuy-Anh, Autumn, Jeannette, T.J., Kate, Jeremy, and Jessica. Thank you for putting together an excellent program.

A final, and well-deserved, thanks goes to the group that worked so hard to bring this conference together, the OLAC Conference Committee: Autumn Faulkner, Teri Frick, My Guidarelli, Mary Beth Holm, Mei Kiu Lo, Elizabeth McCormick, Stacie Traill, and honorary member Christi Wayne. Mary Beth was unable to come to the conference, and Stacie and Elizabeth had to leave early, but the rest of you, please come up to be acknowledged and have your picture taken. It’s been a pleasure working with all of you and getting to know you.

Finally.

Today is Reformation Sunday. Almost 500 years ago, on October 31, 1517, Martin Luther posted 95 theses that catalyzed a revolution. Luther’s theses were a white paper of a sort about the rules of his day. His revolution was part of the bigger reformation of intellectual and creative enlightenment that we call the Renaissance. By 1517, Gutenberg’s printing press had spread throughout Europe, and printers were eager to print something besides the Bible that would have mass appeal. They enthusiastically printed Luther’s treatises, which Luther didn’t discover until he read one on the street, the concept of copyright being completely nonexistent at the time. Luther’s writings became widespread, and he is sometimes called the first modern media star. Luther had no intent or desire to cause a revolution, but one started around him. There are many parallels between his time period and ours.

The OLAC Reformation began in 1980, when Nancy Olson formed a group to come up with the standards and practices to create MARC catalog records for audiovisual materials. At the time, the world of media existed mostly as films, cassettes, vinyl records, maps, kits, and other analog, print, and 3-D media that, if cataloged at all, were often organized by format (carrier) and accession number with a title or brief description recorded on an index card. The MARC format had been a standard for about ten years, and computerized cataloging was taking off, but still so new that the focus was on new and retrospective cataloging of books. Systems were called Online Public Access Catalogs to stress that this wasn’t your grandmother’s card catalog. Similarly, OLAC was named Online Audiovisual Catalogers to stress that these catalog records weren’t your grandmother’s accession cards. Nancy was part of a reformation that we could consider the Digital Renaissance.

Neither Luther nor Olson came from a prestigious background or had any notoriety before becoming a revolutionary. Luther was a monk, and Olson was a cataloger at a former teachers college then called Mankato State University. Neither was perfect, and both could be intimidating, but both also had a sense of humor, cared for others, and, most importantly, had a drive and a passion for a cause and for sharing that cause with others. The passion for cataloging audiovisual materials continues today in OLAC, as is evident from the presenters and attendees at this conference. We are not perfect, and neither is our media. OLAC is on the front lines, developing and revising standards and practices to improve access to existing media, tackle new media, and adopt technology to improve the efficiency and accuracy of bibliographic description, doing what librarians do best: serving our user communities. Thank you for attending this conference, thank you for supporting OLAC, and thank you for contributing to the continuing audiovisual cataloging reformation.

It’s been a great privilege and honor to serve as your conference chair. I wish all of you safe and uneventful travels back to your homes.