
Documentation and Good Management in Digital Libraries

This month is all about self-evaluations for me and my employees. Because of this, I have been thinking about how a manager is supposed to show their work and their worth. The easy answer is to say that if the employees are doing well, then the supervisor is good. But employees can do well despite a bad supervisor, and an employee doing badly is not necessarily a sign of a bad one either. So what tangible thing can I point to that makes me a good or bad supervisor? Throughout the year, I try to focus on the actions I take to make my employees' lives at work better. I try to give them direction and advice, and to make things easier. I also try to champion them. Things do not always work, but I adjust. When I sit down to write my own evaluation, though, I end up writing about documentation. To me, that is a concrete indicator of a good supervisor: they care enough about the work, and about their employees, to write things down and make a record. I want to challenge everyone to write...

The Workload Iceberg for Digital Collections and Initiatives

In the last few weeks, I was asked to write a small paragraph explaining my area to others in the library. I was happy to do this, as many people say they don't know what my people do. It's sometimes hard to explain what we do without going into overly technical topics and terms. If we have done our job right, we're practically invisible, which is the way it should be. Anyway, writing the description made me realize why there is often a mismatch between what we do and what people think we do. I'll let you read the description yourself. I've underlined the important bit. "Digital Resources is primarily an Open Access publisher. We publish both born-digital items (produced by students or faculty) and older items that we scan to publish or republish. We curate digital collections through the whole digital life cycle. Our work is a bit different from other departments because the more work we finish, the more work we create in having to m...

A Thought on "An Emergent Theory of Digital Library Metadata" by Alemu, G. and Stevens, B., 2015. ISBN: 978-0-08-100385-5

I've been reading "An Emergent Theory of Digital Library Metadata: Enrich then filter". I'm about a third of the way through, and so far I am convinced that libraries need to make it easier for patrons to add or suggest changes to metadata. I'm convinced enough that I will add the functionality to my list of future directions for the collections I manage. However, in light of the recent national conversation about fake news, I do question whether communities can effectively police incorrect content. Wikipedia is used as an example of how crowd-generated information can work, but Wikipedia is also often the first result in almost any search, meaning that it not only has a high chance of being seen, but also a high chance of being edited if it's wrong. People online love correcting others. The problem I see with applying that model to library metadata is that there is almost no way for library data to be as popular as Wikipedia, so it will hav...
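To make that idea a little more concrete, here is a minimal Python sketch, under my own assumptions rather than anything taken from the book, of what an "enrich then filter" suggestion queue could look like: patron suggestions are collected freely, but nothing is applied until a staff member reviews it. All field and function names here are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class MetadataSuggestion:
        """A patron-submitted change held for review ("enrich"), applied only if approved ("filter")."""
        item_id: str          # identifier of the digital object (hypothetical scheme)
        field_name: str       # e.g. "dc.subject" or "dc.title"
        proposed_value: str   # the patron's suggested value
        submitted_by: str     # patron identifier, or "anonymous"
        submitted_at: datetime = field(default_factory=datetime.now)
        status: str = "pending"   # pending -> approved / rejected

    def review(suggestion: MetadataSuggestion, approve: bool) -> MetadataSuggestion:
        """Staff decision step: the 'filter' in enrich-then-filter."""
        suggestion.status = "approved" if approve else "rejected"
        return suggestion

    # Example: a patron suggests a subject heading; staff approve it later.
    s = MetadataSuggestion("item-00042", "dc.subject", "Land allotment records", "anonymous")
    review(s, approve=True)
    print(s)

The point of the design is that enrichment stays cheap and open for patrons, while the filter step keeps bad or malicious edits from ever reaching the public record.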

Bureau of Indian Affairs- Digital Collection

The Bureau of Indian Affairs is one of the oldest bureaus in the United States. It was established in 1824 by Secretary of War John C. Calhoun. While the history of the organization has been controversial, its records are open to the public. This collection brings together letters distributed by the Bureau of Indian Affairs from 1832 through 1966. View the rest of the collection: http://bit.ly/2h0hKvW

Digital Content Management Systems- Adoption Rates

As I am looking around for the next digital content management system, I decided to gather some usage statistics about the systems I'm most interested in. These are all just estimates, since I have to rely on how many institutions self-report their usage. The exception is CONTENTdm, for which I went by the number OCLC uses in its marketing materials.

CONTENTdm: used by over 2,000 institutions (number supplied by the OCLC website)
DSpace: used by over 1,500 institutions
Fedora Commons: used by about 300 institutions
Hydra: used by about 50 institutions
Omeka: used by over 287 institutions

In looking at moving to Fedora Commons or Hydra, I was surprised by how few people are using Hydra. I believe it's because it's notoriously hard to set up, which is why IMLS is funding Hydra in a Box, which should be a turnkey solution. I'll be interested to see how the numbers change after that. We are also thinking of trying out Omeka, and it's good to see that Om...

Top 5 Reasons Digitization Projects Fail

1. No one really thinks about why the item is being scanned

When people talk about scanning something or putting it online, their reasons for doing so can be shockingly vague. People will say things like "to make it more available" or "to make it easier to search," but often what they really want is something completely different. I have had a faculty member initiate a scanning project for photographs, only to find out that what he really wanted was to blow up the images so he could see them better. After the first few test items, and his complaints that he couldn't see the images well enough, it became obvious what he wanted. We changed the project to scan the images at a higher resolution to make things a bit easier for him.

2. No one thinks about what, exactly, needs to be captured during the scan

When people first start out, they think of scanning as a pretty simple process: just take a picture of the item and put it online. However, the...

DSpace Repository- Counting Citations to the IR

In the last week, I've been looking at how to make our Institutional Repository better. One of the best ways we can provide value to our users is to help them identify the impact of their work in the IR. Along that line, I wanted to know how we could find and count citations to works in the IR. It would be great if we could say that things published in the IR get cited more. I found a post on Stuart Lewis' blog about displaying citation counts from Scopus in DSpace, and in the next few months we may try to implement this. However, it isn't exactly what I'm looking for: it shows the number of citations based on the DOI. What I want is to find and record, somehow, the cases where someone cites the URI from DSpace. I have not found an automatic way of doing it. So far, what I've managed to do is run Google searches for the first part of our handles and the first part of our IR's URL. So, our IR's URL ...
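For illustration, here is a minimal Python sketch of the counting half of that manual process, assuming you have already gathered plain-text copies of candidate citing documents into a folder. The handle prefix, hostname, and folder name below are hypothetical placeholders, not our real ones.

    import re
    from pathlib import Path
    from collections import Counter

    # Hypothetical values: replace with your repository's real handle prefix,
    # hostname, and the folder of extracted full-text files you want to check.
    HANDLE_PREFIX = r"hdl\.handle\.net/2034"   # assumed handle prefix
    IR_BASE_URL = r"ir\.example\.edu"          # assumed IR hostname
    TEXT_DIR = Path("citing_fulltext")         # plain-text copies of candidate papers

    # Match either the handle URL or the repository URL, capturing the item path.
    citation_pattern = re.compile(
        rf"(?:{HANDLE_PREFIX}|{IR_BASE_URL})/(\S+)", re.IGNORECASE
    )

    counts = Counter()
    for path in TEXT_DIR.glob("*.txt"):
        text = path.read_text(errors="ignore")
        for match in citation_pattern.finditer(text):
            # Record which IR item was cited and in which document.
            item = match.group(1).rstrip(".,;)")
            counts[item] += 1
            print(f"{path.name}: cites {item}")

    print("\nCitation counts per IR item:")
    for item, n in counts.most_common():
        print(f"{item}\t{n}")

Gathering the full text of potential citing papers is still the hard, manual part; this only turns a pile of already-collected documents into a per-item tally.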