LAMP to be integrated into Jisc’s Learner and Business Analytics R&D activities

Until now, the LAMP project has run in parallel with related activities such as the joint Jisc/HESA/HESPA Business Intelligence and Effective Learner Analytics initiatives. We are pleased to announce that LAMP will now move forward as an integral part of Jisc's overarching learning analytics R&D efforts. Specifically, the LAMP project objectives will be combined with those of the Learning Analytics challenge, which is developing a basic student attainment and retention dashboard for all universities and colleges, building support around navigating the ethics of using analytics about students, and providing guidance to help people engage with learning analytics. A key component of this dashboard will address the use cases identified by LAMP, particularly the ability to view library resource usage by subject, course, social demographics and level of student attainment.

This integration with wider analytics efforts requires a more robust and scalable approach to technical development, and Jisc has started an EU procurement exercise to identify a number of technology providers who can work with us to develop the dashboard. The procurement will also include effort to identify universities and colleges that are well placed to act as early adopters of the dashboard, and among these will be institutions that have already contributed to the LAMP project.

For further information and updates on LAMP and Jisc analytics R&D activity, please now go to: http://analytics.jiscinvolve.org/wp/

 

A Library Analytics and Metrics Service? Moving into the next phase of work

Although we’ve been sharing the work of the LAMP project at the UKSG conference, Jisc Digifest, and the SCONUL conference over the last few months, we realise it’s been a while since we posted on our progress and intended next steps.

Our work has attracted a lot of interest over the last six months, with the leadership in Jisc pointing to it as one of our exemplar projects: responding to clear demand, developed in close collaboration with the stakeholder community, and demonstrating our capability to innovate and develop services in strategically vital areas.

The project is now officially entering its second stage, aiming to move this exploratory work forward into a fully-fledged service. To make this happen, we’re going to be focusing on several areas of work:

Creating a user interface prototype that is easy and pleasurable to use. We have already developed what we’ve affectionately called the ‘ugly’ prototype, which has allowed us to play with the data and explore the potential of the tools.

This has thrown up all sorts of questions around what level of functionality a data visualisation of this nature should incorporate, as well as broader questions over data literacy, and over what ‘data analysis’ takes place within the system versus what is undertaken by the user herself. After consultation with our Community Advisory and Planning Group, we have developed a set of wireframes that we feel will support users in viewing, experimenting with, and analysing their data in different ways, within a supported environment. We are presently undertaking the technical work to produce v0.1 of LAMP, which will be released in November to the seven partner institutions that have supplied their data: University of Manchester, University of Salford, University of Huddersfield, University of Wolverhampton, University of Exeter, De Montfort University, and Lancaster University.

Testing and evaluating the tools. Once the user interface (UI) is released to the institutions, we will be undertaking extensive evaluation of the tools, assessing the usability of the UI, identifying data issues or opportunities, and working to get a better understanding of how tools such as these might fit within library workflows — the benefits they may help deliver, and their overall value. We are also looking at creating a UI with dummy data so that users outside the seven pilot institutions can access and meaningfully experiment with the tools. Outcomes from this work will feed into future versions of the prototype, as well as the overarching business case for the service. We’ll need to understand the value and impact of the service to ensure its validity and sustainability.

Beyond v0.1. Bringing in NSS data, institutional profiling and other functions.
The version released in November won’t include institutional profiling (formerly referred to as benchmarking) features, National Student Survey (NSS) data views, the statistical significance layer, or the ability to look at item-level data around individual or batches of resources. These are all areas identified as priority developments by our Community Advisory and Planning Group and other stakeholders, and we’ll be exploring further how to take them forward over the next few months. Ellen Collins from the Research Information Network (RIN) is taking the lead, developing specifications where it is feasible to do so. For example, we need to investigate whether NSS data can interoperate with the UCAS data contributed by institutions before we can say we can easily integrate it into the final service. However, our aim is to integrate into the tools:

  • the ability to know whether data is revealing a statistically significant trend or not, i.e. is the disparity between male and female usage on a particular course significant, or is it merely reflective of the course make-up as a whole? (A minimal sketch of this kind of check follows this list.)
  • the ability to view resource usage against NSS data, i.e. enabling users to examine the correlation between departmental/subject area usage of resources and NSS scores.
  • the ability to view item-level data, so that users can view overall usage of items or groups of items, and also dig deeper to see who is using those items (which departments, courses, and so on).
  • the ability to view usage of your institution’s resources compared to others using the system, a.k.a. institutional profiling.
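To make the first of these concrete, here is a minimal sketch in Python of the kind of significance check described above. It is purely illustrative: the figures are invented, it assumes SciPy is available, and it is not LAMP's actual implementation.

    from scipy.stats import chisquare

    # Hypothetical figures for one course (invented for illustration).
    loans_by_group = {"female": 310, "male": 150}    # observed loans over a year
    cohort_by_group = {"female": 120, "male": 80}    # students enrolled on the course

    total_loans = sum(loans_by_group.values())
    total_students = sum(cohort_by_group.values())

    # Expected loans if each group borrowed in proportion to its share of the cohort.
    expected = [total_loans * cohort_by_group[g] / total_students for g in loans_by_group]
    observed = [loans_by_group[g] for g in loans_by_group]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
    # A small p-value (say, below 0.05) suggests the disparity is unlikely to be
    # explained by the course make-up alone; a large one suggests it probably is.

The same pattern generalises to the other views: a correlation coefficient between departmental resource usage and NSS scores, for instance, would sit behind the second bullet.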

 

Supporting data-driven decision-making — the need for community engagement

We know that our testing of the tools with real users on top of real data will reveal how the tools might be useful. But we also know from our engagement with librarians and bodies such as SCONUL and RLUK over the last year that we’re simultaneously opening up a range of broader questions about the role of data and visualisations in supporting library and institutional decision-making, the skill-sets and confidence of librarians in working with data in these new ways, and the need to share stories and best practice with the broader community. We will be developing case studies as the tools develop, producing guidance materials based on real use cases, and launching these in spring 2015. We recognise there is a need to build a community around Jisc library support and analytics tools, and are in the early stages of planning a wider event around these issues in April 2015. Here we will share the progress of the LAMP work along with similar initiatives, and promote discussion and exploration of the issues surrounding analytics and data-driven decision-making in libraries today.

Beyond measuring loans and logins. Capturing eResource data trails.

Although we can capture eResource logins from many institutions, and tie them to anonymised identifiers that enable us to view the level of eResource usage by particular cohorts, what we can’t tell is which specific eResources, databases, or articles are being viewed by those cohorts. This is a result of the current approach of the UK Access Management Federation, which is configured to ensure data protection and privacy. However, there are questions over whether it would be feasible to gather and leverage this data in secure ways to support LAMP use cases as well as others, including Learning Analytics.
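As a rough illustration of what ‘anonymised identifiers’ can look like, here is a minimal sketch in Python of one common approach, a salted (keyed) hash of the student identifier. This is hypothetical and is not a description of the UK Access Management Federation’s or LAMP’s actual mechanism.

    import hashlib
    import hmac

    SECRET_SALT = b"institution-held-secret"  # held by the institution, never shared

    def pseudonymise(student_id: str) -> str:
        # Derive a stable but non-reversible identifier for analytics use.
        return hmac.new(SECRET_SALT, student_id.encode("utf-8"), hashlib.sha256).hexdigest()

    login_event = {
        "user": pseudonymise("u1234567"),   # anonymised identifier
        "course": "BA History",             # cohort attributes supplied alongside
        "timestamp": "2014-10-02T09:15:00",
    }
    # A service holding events like this can report how often a cohort logs in,
    # but it cannot say who the user is, nor which articles or databases they viewed.
    print(login_event["user"][:16], "...")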

Indeed, how viable is a service like LAMP if it can only meaningfully track activity around physical items? Jisc and other stakeholders have indicated a strong interest in revisiting this territory so we can identify the opportunities and barriers, and Ben Showers and I look forward to taking this forward on behalf of the LAMP team over the next few months.


It’s time to talk about standards

Look, this is a library project. You knew the s-word was going to come up at some point.

One of LAMP’s most important attributes is that it’s bigger than a single institution. While we want individual universities to be able to upload and interrogate their own data through the platform, we also want to offer them somewhere that they can aggregate with and benchmark against their peers. The tools that we build have to meet the needs of a lot of different people.

We’ve written before about some of the tricky decisions we’re taking about how we standardise and reclassify the data that we get, in order to make sure that it can work with LAMP’s systems, and can be aggregated across institutions. But a recent conference call with the team who are managing Wollongong University’s Library Cube service reminded us that there’s another way to do this: looking at the way we ask that information to be provided in the first place and creating clear standards which help institutions to collect their data the way that we want them to.

A bit of background.

The Library Cube is a pretty well-established initiative from Wollongong University Library which seeks to collect and analyse data from a number of systems to understand how libraries add value. Wollongong have been working on this service for several years and the scope is now extending beyond assessing library value to thinking about real-time data and service development. We’ve been aware of their work through the links they had made with the Huddersfield Library Impact Data Project and the opportunity came up to share progress on their project and on LAMP.

Now, previous work we’ve done on normalisation has tended to be about how we might aggregate groups that are classified differently in different organisations. Subjects are particularly tricky for this, as every university has its own way of organising courses and departments. These decisions are taken locally, and it’s improbable that a university’s academic departments will be completely reorganised to meet the needs of a project on library analytics (well, we can dream!).

But the conversation with Wollongong highlighted some areas where we might have a bit more control, and could think about asserting standards and/or best practice about how data are collected and supplied. Take, for example, e-resource logins. These datasets are huge, recording every login from every student over the course of a year. To simplify our analysis for the LIDP at Huddersfield and subsequently with LAMP, we looked at how many times a student had logged in during a given hour over the course of a year, for each of the 24 hours in the day. Wollongong did the same, but their time period was ten minutes.

This means that comparing our data isn’t straightforward. There’s no intrinsic reason why we picked an hour and they picked ten minutes; both approaches have advantages and disadvantages. The ten-minute data will give a more nuanced analysis, while the hour-by-hour data will be easier to process. Both choices are valid. But because we made them separately and individually, we didn’t necessarily think about the wider ramifications of our eventual decisions.
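As a rough sketch of why the choice of bin width matters, here is a small Python example (with invented data; this is neither project’s actual pipeline) that counts the same pair of login events under hourly and ten-minute bins.

    from collections import defaultdict
    from datetime import datetime

    def truncate(ts, bin_minutes):
        # Round a timestamp down to the start of its bin (bin_minutes must divide 60).
        return ts.replace(minute=(ts.minute // bin_minutes) * bin_minutes, second=0, microsecond=0)

    def count_active_bins(events, bin_minutes):
        # For each student, count the distinct bins containing at least one login.
        bins = defaultdict(set)
        for student_id, ts in events:
            bins[student_id].add(truncate(ts, bin_minutes))
        return {s: len(b) for s, b in bins.items()}

    # Two logins twenty minutes apart: one active hour, but two ten-minute bins.
    events = [("s001", datetime(2014, 3, 5, 10, 5)), ("s001", datetime(2014, 3, 5, 10, 25))]
    print(count_active_bins(events, bin_minutes=60))   # {'s001': 1}
    print(count_active_bins(events, bin_minutes=10))   # {'s001': 2}

Fine-grained counts can always be rolled up into coarser ones, but hourly counts cannot be split back into ten-minute ones, which is exactly why agreeing the collection standard up front matters.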

Of course, doing a project such as LAMP will begin to set some informal standards, simply because we’re asking for data in particular formats. But, as our conversation with Wollongong made clear, it’s important that we don’t allow those informal standards to evolve into more widely-accepted ones without interrogating and testing them. LAMP isn’t happening in isolation; there’s a wider set of projects, especially in Australia and the US, which are looking at library analytics and measurement.

Over the next few months, we hope to start talking about the best ways to collect and share data, building on our experiences and those of others, to ensure that LAMP’s collaborative ethos extends to some bigger conversations about library data and capturing library value.

Library Analytics – Community Survey Results

The team is currently prepping for our first Community Advisory Board (CAB) meeting for the Jisc LAMP project. There’s a great deal to discuss, not least the use case ideas we have been drafting for feedback. Ben Showers and I met last week to talk about setting the context for the meeting, and we agreed that it would be useful to share more broadly the findings of the survey we ran back in November 2012. With the support and input of RLUK and SCONUL, Mimas worked with Jisc to run a community-wide survey. We wanted to gauge the potential demand for data analytics services that could enhance business intelligence at the institutional level and so support strategic decision-making within libraries and more broadly. Below is a summary of the results, available through Slideshare.

Library Analytics – Community Survey Results (Nov 2012) from joypalmer
We wanted to get a better handle on how important analytics will be to academic libraries now and in the future, and what the demand might be for a service in this area, for example a shared service that centrally ingests and processes raw usage data and returns data visualisations to local institutions (and this, of course, is what LAMP is exploring further in more practical detail). We had responses from 66 UK HE institutions, and asked a good number of questions. For example, we asked whether the following functions might be potentially useful:
  • Automated provision of analytics demonstrating the relationship between student attainment and resource/library usage within institutions
  • Automated provision of analytics demonstrating e-resource and collections (e.g. monographs) usage according to demographics (e.g. discipline, year, age, nationality, grade)
  • Resource recommendation functions for discovery services

Perhaps not surprisingly, the overwhelming response was positive – these tools would be valuable, yes (over 90% ‘yes’ rate each time). But we also asked respondents to indicate which strategic drivers were informing their responses, i.e. supporting research excellence, enhancing the student experience, collection management, creating business efficiencies, demonstrating value for money, and others. What we found (based on our sample) was that the dominant driver was ‘enhancing the student experience,’ closely followed by the ability to demonstrate value for money, and then to support research excellence.

We also asked whether institutions would find the ability to compare and benchmark against other institutions to be of value. Whilst there was general consensus that this would be useful, respondents also indicated a strong preference for any data they share as a benchmark for other institutions to be anonymised and made available by a category such as Jisc Band (91%). This compared to a 47% ‘yes’ rate when asked if they would, in principle, be willing to make this data available where users could see the source institution’s name. So, there appears to be a strong willingness to share business intelligence data with the wider community, so long as this is done in a carefully managed way that does not potentially expose too much about individual institutions. In addition, there was far more hesitation over sharing UCAS and student data than other forms of transactional data (again, not surprising).

Are analytics a current strategic priority for institutions? Only nine respondents said it was a top priority at the present moment, with 39 stating that it was important but not essential. However, when asked whether it would become a strategic priority in the next five years, 40 respondents indicated it would become a ‘top priority.’

However, the question of where the decision-making in this area would reside evoked a wide range of responses, indicating the organisational complexities we’d be dealing with here. Clearly the situation at each institution is complex and highly variable. Overall, Library Directors and IT Directors are seen as the key decision-makers, but respondents also referenced Vice-Chancellors, Registrars, and Deputy Vice-Chancellors. At some institutions the University Planning Office would need to be involved; at others, the Director of Finance.

Other potential barriers include concerns over data privacy and over sharing business intelligence, and our results revealed a mixed picture in terms of concerns over data quality, lack of technical expertise, and the strong competing demands institutions face.

The LAMP project is now working to build on these findings and develop live prototypes to fully test out these use cases, working with data from several volunteer institutions.  Our major challenge will be to ascertain to what extent the data available can help us support these functions, and that’s very much what the next six months is going to be focused on.

 

Lighting-up time

It’s becoming increasingly important for libraries and institutions to capitalise on the data they’re collecting as part of their day-to-day activities. The Jisc Library Analytics and Metrics Project (JiscLAMP) aims to help libraries bring together this data into a prototype shared library analytics service for UK academic libraries.

We want to help libraries bring together disparate data sets in an attractive and informative dashboard, allowing them to make connections, and use these connections and insights to inform strategy and highlight the role the library is playing in the success of the institution. For more details of what we hope the project will achieve, see Ben Showers’s introductory blog post.

LAMP is a partnership project between Jisc, The University of Huddersfield, and Mimas at the University of Manchester. It is funded under the Jisc Digital Infrastructure: information and library infrastructure programme.