Sunday, July 18, 2010

Duplicate records - a common enemy?

When I was doing the Koha assignment importing MARC records into the system, I noticed something: there is often more than one MARC record for the same book, and the contents of the fields can be very similar. I wonder if this is the same problem we experience in maintaining our collections database, where we have duplicate records of the same artefact. In theory, duplicate records are taboo - they are not supposed to be there. There are rules in place to ensure that the system is not misused and that there is no more than one record for each artefact. However, because of ill-discipline, the failure to mechanize all input processes, or simply the innate human inconsistency in following rules (such as checking for existing records before creating new ones), this is going to be a perennial problem, even after a one-time dedicated effort to clean out all duplicate records.
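The "check for existing records before creating new ones" rule could, in principle, be enforced in software rather than left to discipline. A minimal sketch of such a pre-insert duplicate check is below; the field names and the normalization rules are my own assumptions for illustration, not Koha's actual schema or matching logic.

```python
import re

def normalize(text):
    """Lowercase and strip punctuation/extra whitespace for fuzzy comparison."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def record_key(record):
    """Build a comparison key: prefer ISBN if present, else title + author."""
    isbn = record.get("isbn", "").replace("-", "").strip()
    if isbn:
        return ("isbn", isbn)
    return ("ta", normalize(record.get("title", "")),
                  normalize(record.get("author", "")))

def add_record(catalogue, record):
    """Append the record only if no existing record shares its key.

    Returns True if the record was added, False if it was a duplicate.
    """
    key = record_key(record)
    if any(record_key(r) == key for r in catalogue):
        return False
    catalogue.append(record)
    return True
```

Real systems (Koha included) use far more sophisticated matching, but even a crude key comparison like this catches the exact and near-exact duplicates that slip in when people skip the manual check.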

After conducting the Z39.50 searches for the Koha assignment and observing the same phenomenon, I felt a little consoled that this problem of "duplicate records" does not seem to be unique to us. I wonder, though, whether it causes problems when libraries around the world import different MARC records for the same book during copy cataloguing.

I would also be interested to know whether there is a dedicated team at the Library of Congress or OCLC that maintains the MARC records database, and if so, how big it is.
