Screen flows, wireframes, prototypes and guts.

Following on from my last post, where we looked at gathering user research and requirements, here’s an update on the recent team workshop where we focussed on the structure and skeleton of the LAMP.

1. The 20-second “gut” test

We kicked off the session with a 20-second “gut” test, a technique used to clarify preferences and better understand the team’s views on the aesthetics of visual design. The test showed screen captures of 20 different analytics dashboards, user interfaces and visual elements, each for 20 seconds only. Each participant scored their gut reaction to each slide from 1 to 5 (5 being the highest) and made notes.
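
If you want to tally the scores afterwards without reaching for a spreadsheet, a few lines of Python will do it. This is only an illustrative sketch – the slide names and numbers below are made up, not our actual workshop data:

    from statistics import mean

    # Gut-test scores: slide name -> one 1-5 score per participant.
    # Hypothetical data for illustration only.
    scores = {
        "clean-modular-dashboard": [5, 4, 5, 4, 5],
        "dense-3d-pie-charts":     [1, 2, 1, 1, 2],
        "muted-palette-charts":    [4, 4, 5, 3, 4],
    }

    # Rank slides by average gut reaction, highest first.
    ranked = sorted(scores, key=lambda slide: mean(scores[slide]), reverse=True)

    print("Top of the pile:   ", ranked[:5])
    print("Bottom of the pile:", ranked[-5:])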

This was a really enlightening exercise, which helped the team to articulate what they did and didn’t like. Looking at our top and bottom 5, there were very apparent themes running through them:

  • Clean and simple style, with space to let the content breathe
  • Informative charts with the right balance of detail
  • Modular blocks of content held within frames
  • Restricted colour palette
  • Honest, unadorned visualisations of the data

2. Developing the screen flows and navigation model

The navigation model is the big picture, or the “bird’s-eye view”, of the system. It considers where users start, how they get from here to there, and what all of the major elements will be. This can then be summarised as a flow diagram that models the user journeys.

Storyboarding the user experience

  • We started off as a group using our understanding of the tool to build a prototype screen flow.
  • We then validated this against the real life use cases / job stories.
  • We then tried to break the system we’d created. What were the extreme limitations of the system, and had these been taken into account?

[Image: LAMP screen flow diagram]

This followed an iterative design process: Sketch > Prototype > Present > Validate > Repeat, until we had exhausted our time. Collectively we had formulated a solid idea that has been validated against our user research. This can now be taken away and explored in more detail by the UX team, modelled and then presented to the CAB for feedback.

3. Getting into the details

Low fidelity prototyping

Based on the screen flows and navigation model we had created, it was clear that there were two key areas of action to focus on: the chart creation screen and the dashboard area where charts are stored. We wanted to start understanding these in more detail and begin to wireframe the user journey, interactions and functions of these screens. We had generated loads of ideas for these screens and we needed to capture them. We weren’t intending to hammer down every detail, but rather to create a consensus that could be refined outside of the workshop.

The eight guiding principles of prototyping

  1. Understand your audience and intent
  2. Plan a little – prototype the rest
  3. Set expectations
  4. You can sketch
  5. It’s a prototype – not the Mona Lisa!
  6. If you can’t make it, fake it
  7. Prototype only what you need
  8. Reduce risk – prototype early and often

6-8-5 Design studio.

The first iteration of wireframing follows the 6-8-5 rule: do 6-8 sketches, on an 8-up grid, in 5 minutes. The sketches can be different versions of a particular aspect you’re working on, or a storyboard workflow (before, during and after login), or mix and match! Keep it high level, and get just enough detail down to convey your concept. When the 5 minutes are up, each person presents their ideas and the group critiques them.

Quantity trumps quality at first.

The idea here is to get a large quantity of ideas rather than to chase quality. Here’s a short example to illustrate what we mean by this.

“A ceramics teacher announced he was dividing his class into two groups. All those on the left side of the studio would be graded solely on the quantity of work they produced, all those on the right graded solely on its quality.

His procedure was simple: on the final day of class he would weigh the work of the “quantity” group: 50 pounds of pots rated an A, 40 pounds a B, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an A.

Well, come grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity!

It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.”

http://www.jeffgothelf.com/blog/quantity-trumps-quality/

This story perfectly articulates one of the fundamental Lean UX principles: prioritize making over analysis. Instead of sitting around, debating ad nauseam which direction to go in, what features make sense, which colors perfectly reflect your brand values or which words will get your customers to convert, just make something. It won’t be perfect. It won’t work as well as you’d hoped at first, but it will teach you something. You’ll get some feedback, some insight into how building your product can be better, and you’ll do a better job the second time around.

A lot of the methods and ideas we’ve used in this workshop have been taken from ‘Prototyping: A Practitioner’s Guide’ by Todd Zaki Warfel (http://rosenfeldmedia.com/books/prototyping/). In his book he talks about the value of prototyping – the value of show, tell and experience.

Prototyping reduces misinterpretation

Take a 60-page requirements document. Bring 15 people into a room. Hand it out. Let them read it all. Now ask them what you’re building. You’re going to get 15 different answers. Prototypes are a more concrete and tactile representation of the system you’re building. They provide tangible experiences.

He then goes on to say that moving from a requirements-dependent process to a prototype-dependent process has increased consensus on interpretation from 60-80% to over 90%. It also requires far less effort and time for everyone involved. Taking this user-centred design approach is essential for LAMP, as the system is still being explored, designed and interpreted. Manifesting the development work in a physical form helps to generate hundreds of ideas; some will be great, some not so great. But even the not-so-great ideas can be the catalyst for great solutions.

The ideas that were generated will be explored in more detail by the UX team, modelled, and validated. We are then meeting again for another wireframing session with the LAMP team to work through more of these details, ready to be presented to the CAB for feedback.

Sketch > Prototype > Present > Validate > Repeat

 

So what do we mean when we say ‘Analytics’?

This is a guest post by David Kay of Sero Consulting, who describes some of the project’s work to develop user stories and enable a better understanding of the kinds of functionality any shared analytics service would need to have.

Analytics has become quite a buzzword over the past couple of years. Awareness has been promoted by general talk of ‘big data’ as well as by increasing emphasis in the sector on the student experience and success factors (linked to ‘Learning Analytics’) and on resource optimisation driven by economic constraints.

Furthermore, EDUCAUSE and the Gates Foundation in the US and Jisc in the UK have enabled notable exploratory work.

And now every new-generation library systems offering needs the ‘Analytics’ badge.

But what does analytics mean to library professionals in UK Higher Education? Is analytics ‘all things to all men’ or simply ‘the emperor’s new clothes’ (formerly known as management reporting or the director’s dashboard)?

So in Autumn 2013 the LAMP project set out to discover what library teams really have on their minds. Whilst LAMP is specifically focussed on the opportunities for shared services around library analytics, we stepped back to the underlying question of ‘What do libraries want to achieve with analytics?’, regardless of whether a shared service can help (our second-order question as a project being to identify the cases where LAMP might help).

A total of eleven libraries working with the LAMP project agreed to develop a set of User Stories describing what they would like to achieve with analytics. We agreed to a two-step process whereby seven libraries were interviewed to source a set of stories and then the wider group (the original seven and four more) voted on the relevance of the stories (around 90) from their local perspective.

Thanks go to the library teams at the universities of Birmingham, De Montfort, Exeter, Huddersfield, Hull, Manchester, Warwick, Wolverhampton, York, the London Business School and the Open University.

About User Stories

User Stories are recognised as a valuable tool for capturing clearly focused requirements to inform software development. For the purpose of this investigation, a user story was taken to be a statement in the form of:

As a (role),
I want (a thing)
in order to (achieve an outcome)

For example

As a (late riser),
I want (to get my breakfast quickly)
in order (to catch the train)

We’d consider that to be an ‘epic’ story, from which a number of more detailed stories about real requirements might be teased out; for example

As a (late riser),
I want (a four-slice toaster)
in order (to make breakfast quicker)

and
I want (a folding bike)
in order (to get to the station quicker)

The stories we collected from library teams fell into both of these categories – epic stories that described the mission to which analytics might contribute, and lower-level descriptions of how particular analytic activities might deliver or contribute to key outcomes. For example, the mission might be

As a (library manager)
I want (more comprehensive activity data)
in order (to improve student satisfaction)

That mission might be unpacked into

I want (front desk enquiry analysis)
in order (to improve first level resolution)

and

I want (short loan turn away data)
in order (to expand the collection to meet demand)
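
For the technically minded, stories in this shape map neatly onto a simple data structure, which makes them easy to sort, filter and vote on later. Here’s a minimal Python sketch – the field names and the example are ours for illustration, not part of the project’s tooling:

    from dataclasses import dataclass

    @dataclass
    class UserStory:
        role: str           # As a (role)
        want: str           # I want (a thing)
        outcome: str        # in order to (achieve an outcome)
        epic: bool = False  # True for mission-level 'epic' stories

        def __str__(self):
            return (f"As a ({self.role}), I want ({self.want}) "
                    f"in order to ({self.outcome})")

    story = UserStory("library manager",
                      "more comprehensive activity data",
                      "improve student satisfaction",
                      epic=True)
    print(story)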

What’s analytics about? Our Library Stories

So what did our 11 libraries consider the most important contributions to be made by analytics?

As described above, we collected around 90 stories and then put them to the vote! Our voting system allowed a library to allocate 2 points for any story they regarded as ‘important’ and 1 point for a ‘useful’ story. Therefore a story regarded as ‘important’ by everyone could gain 22 points (11 libraries x 2 points). The 49 stories that gained over one third of the maximum points (i.e. at least 8 of the 22) are listed here.
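
To make the arithmetic concrete, here’s a small Python sketch of the scoring – the story names and votes are invented for illustration, but the 2-point/1-point scheme, the 22-point maximum and the 8-point cut-off are as described above:

    # Invented votes, one per library: 2 = 'important', 1 = 'useful', 0 = neither.
    votes = {
        "link e-resource activity to users": [2, 2, 1, 2, 1, 2, 2, 1, 2, 2, 1],
        "front desk enquiry analysis":       [1, 0, 1, 2, 1, 0, 1, 1, 0, 1, 1],
        "rarely wanted vanity metric":       [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0],
    }

    MAX_POINTS = 11 * 2   # all 11 libraries rate the story 'important'
    THRESHOLD = 8         # just over one third of the maximum

    for story, points in votes.items():
        total = sum(points)
        verdict = "kept" if total >= THRESHOLD else "dropped"
        print(f"{story}: {total}/{MAX_POINTS} points ({verdict})")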

We classified 19 of those 49 stories as ‘epic’ or ‘mission’ stories – very interesting because they indicate the management and strategic purposes that library analytics need to serve. They are as follows:

We classified 30 of the 49 as ‘activity’ stories – more detailed things that librarians want to do with analytics. They are as follows:

Some reflections

You’ll see from the listings above that we categorised each statement in terms of its broad intent:

  • Mission – High level ‘mission’ statements that are ‘epic’ user stories
  • Data – Stories about the range of data available for analysis
  • Collection – Use of analytics for collection management
  • Service – Use of analytics for service improvement, including enquiries
  • Teaching & Learning – Use of analytics to enhance the learning experience and student success
  • Recommendation – Use of analytics to provide recommender services

It is important to observe that the principal focus of the ‘mission’ stories is collection management (AN) and its contribution to each of value (M), satisfaction (D) and impact (C). There is also strong recognition of analytics as a tool in:

  • Supporting dialogue with faculty (K)
  • Evidencing and positioning library business cases (A, F)
  • Proactively enabling support activity such as skills development to be better designed and targeted (V, AB, AS, AD)

Whilst the ‘activity’ stories mainly speak for themselves, the challenge for libraries and for systems providers is to identify what data is required to support these requirements and how it might feasibly be collected within and across systems.

  • The focus on e-resources emphasises this challenge as represented in two of the top three activity stories (38, 4, also 19) – especially linking e-resource activity to users just as we are accustomed to doing with print.
  • There is a persistent recognition that insightful analytics need to combine data from more than just a single vendor system (2, 29, 32, 1).
  • More firmly within grasp is the use of analytics to respond more effectively to differentiations in terms of faculty (14, 9) and user demographics (33).
  • Analytics relating to enquiry management and related service improvement is an important dimension (29, 48, 54).
  • Whilst clearly recognised as an opportunity (61, 62, 34), there is less emphasis on using analytics for recommendation – surfacing reading options for users, as popularised by the likes of Amazon.
  • Last but not least, we shouldn’t underestimate that presentation is a critical part of the challenge (8, 9).

There is much food for thought here, hopefully informing how services might be developed to exploit the data pool in which ‘no system is an island’!

Whilst JUSP and LAMP are partnering with UK academic libraries to develop responses in specific areas, it is clear from our User Stories that library analytics will demand considerable thought and may reveal even greater potential as our understanding and practices mature.