Diagnosing uncertainty: A premortem for LAMP

A few weeks ago (okay, it was early July!), LAMP had the second meeting of its community advisory and planning (CAP) group.

The meeting started with an update on the work of the project so far, and sought advice and input from the group on a number of challenges.

A number of these challenges have already been blogged about, including the technical architecture; the use of unique identifiers associated with the data; data normalisation and statistical analysis; and designing the database.

Importantly, the project has also drafted Terms and Conditions for institutions submitting data to the project. This is a small but critical part of enabling the project to get data from institutions.

There is a lot happening with LAMP at the moment (as these posts highlight); but what of the future?

The LAMP Premortem

In the afternoon the group undertook a premortem of the project, facilitated by Andy McGregor (of Jisc, but also a member of the CAP group).

The premortem imagines a future where the project has been a failure, and participants must work backwards to understand what contributed to that failure.

Despite the slightly gloomy-sounding description, the exercise is actually a huge amount of fun, and it generated some really useful insights and ideas for the project team to take away.

What follows is a brief outline of some of the main themes that emerged during the premortem and specific ideas for the project team (and CAP group) to work on.


Technical

It was clear that the technical side of things could present a number of significant risks. The majority of the technical risks actually related to the expectations that libraries, our potential users, may have of the prototype service.

It was therefore clear that the project would need to be careful not to over-sell the service, making it clear that this project is about collaboration and a large amount of learning as we progress (for both the project and the libraries). Some of the possible ways to address these challenges included:

  • Expect some failure in certain areas – a complex project like this means not everything will work as expected;
  • Log and learn as we go, and seek help from institutions and the CAP group;
  • Invite guest blog posts from the community group (maybe around each of the categories identified).


Users

The project will need to expend considerable energy on understanding user requirements and testing the prototypes with different user groups (librarians, registrars etc.).

This also means we need to be able to show users the prototype when it’s still rough and messy, so they have no qualms about providing critical and immediate feedback.

Fortunately we have our Community group to help us test the prototypes and to constantly challenge our assumptions and ideas.

Legal and Ethical

Legal and ethical issues were another significant concern that emerged during the premortem.

Many of the issues revolved around being able to reassure institutional registrars and CIOs about the way the data will be used, and ensure there is no possibility of damage to institutional reputations.

In many ways this is a subtle problem, requiring the project to deal with legal, ethical and reputational issues.

Some possible ways to address these problems included:

  • Use Jisc Legal: discuss potential issues associated with the project and develop some pre-emptive resources and guidance for institutions;
  • Produce a legal ‘toolkit’ for institutions and libraries – this might include advice and guidance as well as best practice.

Finally, there was a suggestion that the project, or rather the prototype service, provide the ability for institutions to ‘opt-out’. This might be an opt-out clause in any agreement that also makes it clear how libraries can disengage from the service and what happens to their data – how it is given back to them.

This is an interesting issue, reminiscent of the ‘right to be forgotten’ debate, and a critical legal and ethical issue for the project to consider.


Commercial Vendors

This particular concern is not about things like competitive advantage: the project is very clear that it is meeting a need that falls outside what commercial vendors are able to meet, and an explicit principle of the project is not to duplicate existing product functionality.

Rather, the project needs to ensure it is aware of vendor developments for reasons of interoperability and the possibility of additional functionality for existing systems.

It will be important that LAMP’s API can feed into commercial vendor products. 
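As a purely hypothetical sketch of that interoperability (the field names and values below are invented for illustration and are not part of any actual LAMP specification), a vendor system consuming such an API might receive and map a small JSON payload like this:

```python
# Purely hypothetical sketch: the field names and values are invented for
# illustration and do not come from any real LAMP API specification.
import json

payload = json.loads("""
{
  "institution": "example-university",
  "metric": "loans_per_fte",
  "period": "2012/13",
  "value": 24.6
}
""")

# A consuming vendor system would map these fields onto its own data model
summary = f"{payload['institution']}: {payload['metric']} = {payload['value']} ({payload['period']})"
print(summary)  # example-university: loans_per_fte = 24.6 (2012/13)
```

The point is less the exact shape of the payload than that a simple, documented format would let existing library systems pull LAMP data in with minimal effort.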

Cost and Complexity

This is a critical issue for institutions: the benefits of the service must outweigh the costs of participation.

Initially, as the prototype is developed, the benefits may be outweighed by the challenges of providing the project with data: the complexities of engaging are largely borne by the institutions.

But this will have to rapidly evolve, so that the service is able to absorb much of this complexity and make institutional engagement simple and worthwhile.

Ways the project can start to address this concern include:

  • Develop some best practice and guidance for participating institutions. Make it clear what they need to do and how (a LAMP manual!);
  • Tools for making the submission of data simple – the service should do the heavy-lifting for institutions;
  • Where possible, link to other institutional systems and data services, or enable these links to be made as easily as possible;
  • Clearly articulate the benefits for the participating institutions – almost a service level agreement (almost!). This might also be done through case-studies with some of the early adopter institutions.
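As an illustration of the ‘heavy-lifting’ idea above, a submission tool might run lightweight checks before data ever reaches the service. This is only a sketch: the expected column names are invented for the example, not drawn from any LAMP data specification.

```python
# Sketch of the kind of lightweight validation a submission tool might run so
# the service, not the institution, does the heavy lifting. The expected
# columns are invented for illustration.
import csv
import io

EXPECTED = {"student_id", "loans", "logins"}

def validate_submission(text: str) -> list:
    """Return a list of problems found in a CSV submission (empty = OK)."""
    problems = []
    reader = csv.DictReader(io.StringIO(text))
    missing = EXPECTED - set(reader.fieldnames or [])
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
        return problems
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if not row["loans"].isdigit():
            problems.append(f"row {i}: loans is not a whole number")
    return problems

good = "student_id,loans,logins\nabc123,14,52\n"
bad = "student_id,loans\nabc123,fourteen\n"
print(validate_submission(good))  # []
print(validate_submission(bad))
```

Returning a plain list of human-readable problems means an institution gets immediate, actionable feedback rather than a silent failure later in the pipeline.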


This was a popular challenge for the project – unsurprisingly!

However, in a clever and possibly illegal move, we simply parked it with the following rationale:

Such a risk/challenge is almost always inherited by a project; it’s not simply going to go away. We can park this issue for now, and focus on those risks that are likely to blind-side us.

Of course, that’s not to say it’s not a critical issue that needs addressing. But we can keep in mind that this phase of the project is about demonstrating the feasibility of the prototype. Indeed, this feasibility phase may not succeed – which will require us to think carefully about how the project might be wrapped up or changed.


This is just a very brief overview of the issues and risks that surfaced during the premortem. The exercise was incredibly useful in providing the project with both the key challenges it needs to address and an opportunity to crowd-source some of the potential solutions and actions to address those issues.

What, at first glance, appears to be a slightly pessimistic and gloomy activity turned out to be a vibrant session with some useful concrete outcomes.

Having said that, there were one or two ‘doomsday’ scenarios described, including:

  • The Internet ‘goes down’ and there’s no way to get access to the service.

Fingers crossed this won’t happen – but it makes it clear we should double-check on our disaster planning protocols.


Two of the CAP group members also blogged about the meeting and the premortem exercise:

Paul Stainthorp (Lincoln): LAMP project: A lets pretend post-it note post-mortem

Richard Nurse (OU): The Pre-mortem


Community Advisory and Planning Group – Meeting Notes

We recently had the first LAMP community advisory and planning (CAP) group meeting.

The meeting was roughly divided into two parts: the first covered many of the agenda items you’d expect to see at a project group meeting, such as discussion and agreement of the group’s terms of reference, review of work packages and so on.

The second part was focussed on presenting the group with the initial use-cases and design sketches that the project team had developed. The idea was to present this initial work as a way to stimulate ideas and new use-cases as well as sense-check the focus of the project so far.


The discussions were also framed by a few assumptions: that the designs are simply sketches, not wireframes; that the types of data implied by the use-cases would be available and usable; and that all feedback was welcome.

The discussions were rich and varied, and provided plenty for the project to take-away and think about. Listed below are some of the discussion themes and issues raised:

Flexible Configuration

There was a significant amount of discussion around the need for flexibility when it comes to the data and the ‘views’ on that data.

Data might be fed in from various and disparate sources, including UCAS, NSS, KIS and even employment data once students have left the university. There was a sense that LAMP could serve a number of interesting new use-cases.

In addition to this variety of external sources, there was also a feeling that the service should enable local configuration. This would allow local, closed data sets to be fed into the institutional view.

This ability to tailor the data is also reflected in the need for tailored views on that data. The audiences and use-cases for the data are multiple, so it makes sense to provide flexible views and outputs. These may range from reports for directors to make business cases and strategic decisions, to daily service reports or occasional flags (like a fuel gauge on a car dashboard).
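To make the ‘fuel gauge’ idea concrete, here is a minimal sketch: the metric, figures and threshold are all invented, but the shape of the logic (compare a current value against a typical one and raise a flag) is the point.

```python
# Illustrative sketch only: a 'fuel gauge' style flag for a library metric.
# The metric, figures and threshold are invented for this example.

def gauge_flag(current: float, typical: float, warn_ratio: float = 0.75) -> str:
    """Return a simple status flag comparing a current value to a typical one."""
    if typical == 0:
        return "unknown"  # no baseline to compare against
    if current / typical < warn_ratio:
        return "low"      # worth a closer look
    return "ok"

# e.g. weekly e-resource downloads vs. the term-time average
print(gauge_flag(current=320, typical=500))  # low
print(gauge_flag(current=480, typical=500))  # ok
```

A dashboard built on flags like this lets a busy director see at a glance which services need attention, without wading through the underlying data.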


Audiences

Connected to the flexibility of the system configuration is the question of who will be the primary audience(s) for the service.

It quickly became clear that an analytics service like LAMP would potentially have multiple audiences, from librarians to library directors to external users. The LAMP data may well be surfaced in existing systems across the campus, with entirely new users interacting with its data.

Ultimately, the project needs to understand who this decision-making tool is for. This audience may expand and morph as the project develops, but the project needs to ensure it doesn’t fail its primary audience(s) by trying to serve the needs of everyone.

Intuitive layers

A few times the issue of intuitive interfaces and visualisations bubbled-up during various conversations. There seemed to be two particular issues that emerged:

  1. An intuitive view over the data – so you may not be interacting directly with the data itself but with visualisations of that data. This does, however, raise interesting questions about the relationship between the data and the UI.

  2. The possibility that the visualisations and the tools used to interact with the data should already be doing some of the interpretive work (or at least displaying it in such a way as to make analysis and interpretation easier).

This is potentially a very rich area, and one where a clear understanding of what users might want to do will be critical.

Preserving context

An interesting point was raised about the importance of ensuring context is captured and preserved within the service. While this is a relatively simple piece of functionality (you can imagine a notes field or similar), the implications are interesting.

Capturing the context for certain data points would ensure that any future analysis is able to take into account extraordinary events or circumstances affecting the interpretation of the data. Such events might be a fire or building closure, which in the short term would be accounted for in any analysis, but over the long term might be forgotten.


Benchmarking

In discussing the use-cases, the potential for the service to support benchmarking was something that interested the group – in particular, benchmarking that extends beyond the boundaries of the UK to international institutions.

Increasingly, UK academic institutions are judging their performance in an international or global context, not just a national one.

There was also an interesting discussion about the possibility of ‘internal benchmarking’: comparing the performance of departments and subjects within the local institution.
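As a rough illustration of internal benchmarking (all department names and figures are invented), a usage metric could be normalised per student so that differently sized departments can be compared fairly:

```python
# Sketch: 'internal benchmarking' by normalising a usage metric per student,
# so departments of different sizes can be compared. All figures are invented.

departments = {
    "History":     {"loans": 18000, "students": 600},
    "Engineering": {"loans": 9000,  "students": 900},
    "Law":         {"loans": 16000, "students": 500},
}

per_student = {
    name: round(d["loans"] / d["students"], 1)
    for name, d in departments.items()
}

# Rank departments by loans per student, highest first
for name, rate in sorted(per_student.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate} loans per student")
```

Normalising first matters: on raw loan counts History looks the heaviest user, but per student Law comes out ahead.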

What’s next…

This first meeting of the community group was very rich, and resulted in a lot of ideas and potential ways forward. So, given the limits of resource and time, here are a few of the next steps the project will take:

  • Continue to develop the prototype as a way to get solid feedback on the potential use-cases and functionality. The community meeting made it clear that it would be useful to have something people can actually interact with, to test our assumptions and refine the kinds of functionality users require. Data is the key component in this next step – both the existing data sets we might be able to use and the institutional data that will help drive some of the impact-type use-cases.

  • Sketch out a development roadmap. This seems to be a way to both manage expectations (i.e., we’re not going to be able to deliver everything by December), and a way for us to prioritise the design as we progress.

  • User testing – make sure we are able to call upon small samples of potential users to test and refine the prototypes in between the CAP group meetings. These will likely be small, guerrilla in nature, and aimed at ensuring a very iterative approach to the development.

Our next meeting is planned for July, so we’ll have plenty to do between now and then!

The full minutes from the meeting can be found here: LAMP CAP meeting minutes 16 April 2013.