It’s time to talk about standards

Look, this is a library project. You knew the s-word was going to come up at some point.

One of LAMP’s most important attributes is that it’s bigger than a single institution. While we want individual universities to be able to upload and interrogate their own data through the platform, we also want to offer them somewhere that they can aggregate with and benchmark against their peers. The tools that we build have to meet the needs of a lot of different people.

We’ve written before about some of the tricky decisions we’re taking about how we standardise and reclassify the data we receive, to make sure it works with LAMP’s systems and can be aggregated across institutions. But a recent conference call with the team managing Wollongong University’s Library Cube service reminded us that there’s another way to approach the problem: looking at how we ask for that information in the first place, and creating clear standards that help institutions collect their data in the way we need.

A bit of background.

The Library Cube is a well-established initiative from Wollongong University Library that collects and analyses data from a number of systems to understand how libraries add value. Wollongong have been working on the service for several years, and its scope is now extending beyond assessing library value to real-time data and service development. We’ve been aware of their work through the links they made with the Huddersfield Library Impact Data Project, and the opportunity came up to share progress on their project and on LAMP.

Now, previous work we’ve done on normalisation has tended to be about how we might aggregate groups that are classified differently in different organisations. Subjects are particularly tricky for this, as every university has its own way of organising courses and departments. These decisions are taken locally, and it’s improbable that a university’s academic departments will be completely reorganised to meet the needs of a project on library analytics (well, we can dream!).

But the conversation with Wollongong highlighted some areas where we might have a bit more control, and could think about asserting standards and/or best practice about how data are collected and supplied. Take, for example, e-resource logins. These datasets are huge, recording every login from every student over the course of a year. To simplify our analysis for the LIDP at Huddersfield and subsequently with LAMP, we looked at how many times a student had logged in during a given hour over the course of a year, for each of the 24 hours in the day. Wollongong did the same, but their time period was ten minutes.
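Purely as an illustration (this isn’t LAMP’s actual pipeline), hourly binning of raw login events might look something like the sketch below; the CSV layout and the column names student_id and timestamp are invented for the example.

```python
# Sketch only: count e-resource logins per student, per hour of the day,
# from a file of raw login events. The CSV layout and column names
# ("student_id", "timestamp" in ISO 8601) are hypothetical, not LAMP's format.
import csv
from collections import defaultdict
from datetime import datetime

def hourly_login_counts(path):
    """Return {(student_id, hour_of_day): number_of_logins}."""
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).hour  # 0-23
            counts[(row["student_id"], hour)] += 1
    return counts
```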

This means that comparing our data isn’t straightforward. There’s no intrinsic reason why we picked an hour and they picked ten minutes; both choices have advantages and disadvantages. The ten-minute data will support a more nuanced analysis, while the hour-by-hour data will be easier to process. Both are valid. But because we made those decisions separately and individually, we didn’t necessarily think about their wider ramifications.
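To make the comparability problem concrete: counts recorded in ten-minute slots can always be rolled up into hourly totals, but hourly counts can’t be split back down into ten-minute slots, which is why agreeing the finer granularity up front keeps both options open. A minimal sketch, again using an invented data structure:

```python
# Sketch only: roll ten-minute login counts up to hourly counts so the two
# datasets can be compared. The keyed structure below is hypothetical:
# {(student_id, hour_of_day, ten_minute_slot): logins}, slot 0-5 within the hour.
from collections import defaultdict

def rollup_to_hours(ten_minute_counts):
    hourly = defaultdict(int)
    for (student_id, hour, _slot), logins in ten_minute_counts.items():
        hourly[(student_id, hour)] += logins
    return hourly
```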

Of course, doing a project such as LAMP will begin to set some informal standards, simply because we’re asking for data in particular formats. But, as our conversation with Wollongong made clear, it’s important that we don’t allow those informal standards to evolve into more widely accepted ones without interrogating and testing them. LAMP isn’t happening in isolation; there’s a wider set of projects, especially in Australia and the US, looking at library analytics and measurement.

Over the next few months, we hope to start talking about the best ways to collect and share data, building on our experiences and those of others, to ensure that LAMP’s collaborative ethos extends to some bigger conversations about library data and capturing library value.

1 Comment

  1. Brian Cox

    We have always held the Huddersfield team in high regard. Making this happen for one library is difficult. Making this happen for several libraries is incredibly complex, and no doubt requires profound patience and tenacity. Well done guys and girls, you are well on the way to achieving a world first and making a serious contribution to benchmarking for academic libraries across the world.
