A Library Analytics and Metrics Service? Moving into the next phase of work

Although we’ve been sharing the work of the LAMP project at the UKSG conference, Jisc Digifest, and the SCONUL conference, we realise it’s been a few months since we’ve posted on our progress and intended next steps.

Our work has attracted a lot of interest over the last six months, with Jisc’s leadership pointing to it as one of our exemplar projects: responding to clear demand, developed in close collaboration with the stakeholder community, and demonstrating our capability to innovate and develop services in strategically vital areas.

The project is now officially entering its second stage, aiming to move this exploratory project forward into a fully fledged service. To make this happen, we’re going to be focusing on several areas of work:

Creating a user interface prototype that is easy and pleasurable to use.  We have already developed what we’ve affectionately called the ‘ugly’ prototype, which has allowed us to play with the data and explore the potential for the tools.

This has thrown up all sorts of questions around what level of functionality a data visualisation of this nature should incorporate, as well as broader questions about data literacy, and about what ‘data analysis’ takes place within the system and what is undertaken by the user herself. After consultation with our Community Advisory and Planning Group, we have developed a set of wireframes that we feel will support users in viewing, analysing, and experimenting with their data in different ways, within a supported environment. We are presently undertaking the technical work to produce v0.1 of LAMP, which will be released in November to the seven partner institutions that have supplied their data: University of Manchester, University of Salford, University of Huddersfield, University of Wolverhampton, University of Exeter, De Montfort University, and Lancaster University.

Testing and evaluating the tools. Once the user interface (UI) is released to the institutions, we will be undertaking extensive evaluation of the tools: assessing the usability of the UI, identifying data issues or opportunities, and working to get a better understanding of how tools such as these might fit within library workflows, including the benefits they may help deliver and their overall value. We are also looking at creating a UI with dummy data so that users outside the seven pilot institutions can access and meaningfully experiment with the tools. Outcomes from this work will feed into future versions of the prototype, as well as the overarching business case for the service. We’ll need to understand the value and impact of the service to ensure its validity and sustainability.
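As a brief aside on the dummy-data idea, one plausible way to produce it (a sketch only; the field names and distributions here are invented for illustration, not LAMP’s actual schema) is to generate synthetic loan records with a realistic cohort structure:

```python
# Hypothetical sketch: generates obviously-fake loan records for a demo UI.
# Field names ("department", "loans", etc.) are invented for illustration.
import random

DEPARTMENTS = ["History", "Physics", "Nursing", "Law"]

def dummy_loans(n: int, seed: int = 42) -> list[dict]:
    """Return n synthetic loan records, seeded for repeatable demos."""
    rng = random.Random(seed)
    return [
        {
            "user": f"demo-user-{rng.randrange(500):03d}",
            "department": rng.choice(DEPARTMENTS),
            "loans": rng.randint(0, 40),  # loans per user over the period
        }
        for _ in range(n)
    ]

print(dummy_loans(3))
```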

Beyond v0.1. Bringing in NSS data, institutional profiling and other functions.
The version released in November won’t include institutional profiling (formerly referred to as benchmarking) features, National Student Survey (NSS) data views, the statistical significance layer, or the ability to look at item-level data around individual or batches of resources. These are all areas identified as priority developments by our Community Advisory and Planning Group and other stakeholders, and we’ll be exploring further how to take them forward over the next few months. Ellen Collins from the Research Information Network (RIN) will be taking the lead, developing specifications where it is feasible to do so. For example, we need to investigate whether NSS data can interoperate with the UCAS data contributed by institutions before we can commit to integrating it into the final service. However, our aim is to integrate into the tools:

  • the ability to know whether data is revealing a statistically significant trend or not, i.e. is the disparity between male and female usage on a particular course significant, or is it merely reflective of the course make-up as a whole? (see the sketch after this list)
  • the ability to view resource usage against NSS data, i.e. enabling users to examine the correlation between departmental/subject area usage of resources and NSS scores.
  • the ability to view item-level data, so that users can view overall usage of items or groups of items, and also dig deeper to see who is using those items (which departments, courses, and so on).
  • the ability to view usage of your institution’s resources compared to others using the system, a.k.a. institutional profiling.
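To make the first three of these concrete, here is a minimal sketch of the kinds of questions the tools aim to answer. The data, column names, and libraries (pandas and SciPy) are our own illustrative choices, not LAMP’s implementation: a chi-square test for whether a gender disparity merely mirrors the course make-up, a simple correlation of subject-level usage against NSS scores, and an item-level drill-down.

```python
# Illustrative only: the data, column names, and libraries (pandas, SciPy)
# are our own choices for this sketch, not part of LAMP itself.
import pandas as pd
from scipy.stats import chi2_contingency, pearsonr

# 1. Is a male/female usage disparity significant, or does it simply mirror
#    the course make-up? A chi-square test of independence: rows are gender,
#    columns are whether a student used the resource.
observed = pd.DataFrame(
    {"used": [120, 340], "not_used": [80, 160]},
    index=["male", "female"],
)
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"gender/usage chi-square p-value: {p_value:.3f}")
# A small p-value suggests the disparity is unlikely to be chance alone.

# 2. Do subject areas with heavier resource usage tend to score better on
#    the NSS? A simple correlation (which, of course, is not causation).
subjects = pd.DataFrame({
    "subject": ["History", "Physics", "Nursing", "Law"],
    "loans_per_student": [14.2, 9.8, 6.1, 18.5],
    "nss_score": [86, 82, 79, 90],
})
r, p = pearsonr(subjects["loans_per_student"], subjects["nss_score"])
print(f"usage vs NSS: r={r:.2f}, p={p:.3f}")

# 3. Item-level drill-down: overall usage of each item, then who uses it.
loans = pd.DataFrame({
    "item_id":    ["b1", "b1", "b1", "b2", "b2"],
    "department": ["History", "History", "Law", "Physics", "Physics"],
})
print(loans.groupby("item_id").size())                  # total loans per item
print(loans.groupby(["item_id", "department"]).size())  # departmental breakdown
```

Institutional profiling, the fourth item, is at heart the same kind of aggregation run across institutions rather than within one.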


Supporting data-driven decision-making — the need for community engagement

We know that our testing of the tools with real users, on top of real data, will reveal how the tools might be useful. But we also know from our engagement with librarians and bodies such as SCONUL and RLUK over the last year that we’re simultaneously opening up a range of broader questions: the role of data and visualisations in supporting library and institutional decision-making, the skill-sets and confidence of librarians in working with data in these new ways, and the need to share stories and best practice with the broader community.

We will be developing case studies as the tools develop, producing guidance materials based on real use cases, and launching these in spring 2015. We recognise there is a need to build a community around Jisc library support and analytics tools, and we are in the early stages of planning a wider event around these issues in April 2015. There we will share the progress of the LAMP work alongside similar initiatives, and promote discussion and exploration of the issues surrounding analytics and data-driven decision-making in libraries today.

Beyond measuring loans and logins. Capturing eResource data trails.

Although we can capture eResource logins from many institutions, and tie them to anonymised identifiers that enable us to view the level of eResource usage of particular cohorts, what we can’t tell is which specific eResources, databases, or articles are being viewed by those cohorts. This is a result of the current approach of the UK Access Management Federation, which is configured to ensure data protection and privacy. However, there are questions over whether it would be feasible to gather and leverage this data in secure ways to support LAMP use cases as well as others, including Learning Analytics.
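To make that distinction concrete, here is a minimal sketch of what such an anonymised login record might look like; the field names and the salted-hash scheme are our assumptions for illustration, not the federation’s actual mechanism:

```python
# Hypothetical sketch: the fields and salting scheme are assumptions,
# not the UK Access Management Federation's actual implementation.
import hashlib

SALT = "institution-secret-salt"  # held by the institution, never shared

def anonymise_login(user_id: str, course: str, gender: str) -> dict:
    """Reduce a raw eResource login to a privacy-preserving usage record."""
    pseudonym = hashlib.sha256((SALT + user_id).encode()).hexdigest()
    return {
        "user": pseudonym,  # stable per user, so cohort-level counts still work
        "course": course,
        "gender": gender,
        # Note what is absent: no resource, database, or article identifier.
        # That absence is exactly the gap described above.
    }

print(anonymise_login("jsmith42", "H101", "female"))
```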

Indeed, how viable is a service like LAMP if it can only meaningfully track activity around physical items? Jisc and other stakeholders have indicated a strong interest in revisiting this territory so we can identify the opportunities and barriers, and Ben Showers and I look forward to taking this forward on behalf of the LAMP team over the next few months.