Since relaunching in 2016, the amount of content on the Wellcome website had ballooned. The navigation and information architecture hadn’t kept up. Content was hard to find, and users were confused. Nobody was happy.
On top of that, the organisation was trying to become more active in shaping government policy. The website had to work for a new audience, who had a very different mental model of what Wellcome did.
I led the research on a project to make content easier to find for all users – old and new.
The Problem: A site that lacked structure
After the Gates Foundation, the Wellcome Trust is the second biggest foundation in the world. It funds international biomedical research, including Ebola vaccines and work on the coronavirus. The website is the key source of information for funding seekers.
The Wellcome website had been relaunched in 2016 and had grown substantially since then. The amount of content was huge, but the navigation and IA had been designed for a much smaller site. The site was bulging at the seams, and teams were dealing with this in different ways in different sections. The result was misleading labels, confusing content groupings and inconsistent navigation patterns.
Users were often unable to find key information. Teams frequently responded by creating more content, which only made things worse. A wholesale restructure was needed.
Wellcome’s organisational strategy had also changed. They wanted to influence more government policy, which meant a new audience for the website. This included policy teams, journalists, and civil servants. The site needed to work for them as well as Wellcome’s traditional academic audience.
Makeup of the team
The team consisted of a Product Manager (who left four months in), a Delivery Manager, one Front End Developer and one Back End Developer. There was also a UI Designer and an IA/UX Consultant, who was replaced midway through the project by a full-time UX Designer.
I was the lead user researcher on the project, handling recruitment and research strategy, conducting the research, and creating findings documents and presenting them back to the team.
Understanding the user
We had two key user groups:
1. The researcher audience
Academics focused on health. They are either looking for research funding from Wellcome or currently receive it. They see everything through the lens of funding.
2. The influence audience
This consists of people involved in influencing public policy around health. It includes journalists, people working in health communications and PR (private and public sector), as well as policy consultants and people working within policy teams.
They are looking to understand Wellcome’s position on key policy issues, such as science funding post-Brexit.
These two groups are very different, each with its own mental models, ways of working and technical language, which made finding a structure that worked for everybody a challenge. Both groups are also incredibly time-poor, which made finding time to do research with them tricky.
High Level Timeline
This project took place over six months and involved multiple rounds of research, including:
- A ‘Top Tasks’ survey and usability testing
- A tree test (Treejack) to evaluate the new navigation
- Three rounds of usability testing of key landing pages
- An open card sort
- A closed card sort to understand potential new top-level labels
- A hybrid card sort to evaluate top-level navigation labels
- Comprehension testing of navigation labels
Breaking down the process
The work to improve the navigation of the site grew out of findings from earlier research. I had conducted a top tasks survey of the researcher audience, followed by remote unmoderated usability testing, which showed that people were getting lost on the site.
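Analysing a top tasks survey mostly comes down to tallying votes and ranking tasks by their share of demand. A minimal sketch of that tallying step, using entirely hypothetical task names and responses (the real survey data is not reproduced here):

```python
from collections import Counter

# Hypothetical responses: each participant picked their top five tasks
# from a longer list (the classic 'top tasks' survey format).
responses = [
    ["Apply for funding", "Check eligibility", "Find deadlines",
     "Read funded research", "Contact Wellcome"],
    ["Apply for funding", "Find deadlines", "Manage my grant",
     "Check eligibility", "Read news"],
    ["Manage my grant", "Apply for funding", "Read funded research",
     "Find deadlines", "Read news"],
]

# Count every vote across all participants.
votes = Counter(task for picks in responses for task in picks)
total = sum(votes.values())

# Rank tasks by share of all votes.
for task, count in votes.most_common():
    print(f"{task}: {count} votes ({count / total:.0%})")
```

In practice a small head of tasks tends to account for most of the votes, which is what makes the method useful for prioritising navigation.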
We had also been asked to redesign the What We Do and About Us pages. Initial usability testing to evaluate what was wrong with the pages also resulted in further evidence that finding things was a major issue on the site.
As a result of my research, the work was prioritised on the product roadmap, and an external consultant was brought in to help redesign the information architecture (IA) of the site.
Creating an Initial IA
The initial IA was created by the consultant. There was a lot of internal opinion and pressure on what it should look like, with multiple rounds of iteration.
The major change was adding a ‘Key Issues’ section to the top-level navigation. This attempted to solve the problem of users not understanding what Wellcome was prioritising, caused by confusion between the content on the ‘What We Do’ and ‘About Us’ pages. The lower levels of the navigation were also restructured.
I was then asked to validate the new design. I convinced the PM that we should also test the current IA, to see if the new one was an improvement.
We agreed the key tasks as a team, and created separate treejacks for each user group.
Unfortunately, that didn’t give us a clear direction: it revealed that some elements of the current IA worked, and some elements of the new IA didn’t.
I did some further usability testing with the potential new navigation to help understand the issues we had seen in the treejack.
We found that ‘Key Issues’ was confusing researchers, who assumed it reflected the work Wellcome was most likely to fund. This was not the case, and the misunderstanding was risky: it could skew the types of funding applications received, and discourage people whose work was valid from applying at all.
Despite this, the decision was made to go live with the new IA. We also added a footer with a more expanded menu that made a big difference to findability.
Continuing to iterate
As the new UX Designer came on board, we continued to work on the IA and on the problems with the ‘Key Issues’ label.
I did some research using a method I devised: first showing users the top-level navigation labels on a Trello board, then showing each top-level label alongside the labels in its first level of navigation. Users talked through what they understood by each label, and whether the groupings made sense.
They were also allowed to rename titles and move items around to create their own navigation groups. I could then swap the title cards and gather different opinions on how each change affected users’ perception of the group.
Doing this allowed us to rapidly gather insight into which labels were clear to users, and what each label suggested about Wellcome as a whole, giving us clear signals about what worked.
I did some usability testing on the homepage using the new top level labels, and asked users to tell us what they thought each label was about. This helped give more weight to our findings and reassure the team of the direction.
Finally, I did tree testing to confirm that the new IA performed better. We saw a significant increase in findability as a result of the new IA.
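A “significant” increase in findability can be checked with a simple two-proportion z-test on task success counts from the two tree tests. A sketch with made-up numbers (Treejack reports per-task success; the counts below are purely illustrative, not the project’s actual data):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: task successes out of attempts,
# old IA vs new IA.
z, p = two_proportion_z(success_a=22, n_a=50, success_b=38, n_b=50)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a difference of that size (44% vs 76% success), the test comfortably clears the conventional p < 0.05 threshold, which is the kind of evidence needed to call an improvement significant rather than noise.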
The existing top navigation labels gave no hint of what users might find within them
The existing labels were ‘Funding’, ‘Our Work’, ‘About Us’ and ‘News’. ‘Our Work’ was particularly problematic, because it contained around 60% of the content on the site.
The lack of depth in the navigation was a huge barrier to users finding things
There was a large amount of content that users simply didn’t know existed, because it wasn’t signposted clearly: there was no visible site structure, and links were used inconsistently. Once we added a visible structure, users were much better able to find things on the site.
There were two very distinct mental models in terms of how things should be organised
Users organised things either by audience, or by putting everything in similar structure to the existing IA, with an ‘Our Work’ section.
Organising by audience was the most effective way of structuring the content
Researchers saw everything as related to research funding. Because there was a perceived lack of openness about how funding decisions were made, people looked for any kind of clue that might give them an edge, so the navigation and its ordering had a huge impact on what was seen as important to Wellcome. Both user groups always read labels through the lens of their own work and aims. We therefore found the most effective way to organise the content was to create a section clearly not aimed at researchers, called ‘Policy and Advocacy’. Researchers didn’t need to understand what was in the section, just that it wasn’t for them.
Impact of the research
One of the major impacts of the research was that we had an IA everybody was happy with, because it worked for users.
At the start of the process there had been a lot of opinion-driven decision making, which was tough, created a lot of personal investment in particular solutions being right, and led to circular discussions.
As the research progressed, this style of decision making started to shift. We received more input from users, and moved towards testing multiple variants rather than deciding on the perfect answer upfront. It helped slowly shift the organisation towards a more user-centred way of working.
We delivered a new information architecture, a new navigation, and a new footer. We saw consistent improvements in usability tests and a reduction in complaints to the team about not being able to find things.
I would have liked to revisit the top tasks survey to benchmark the improvements, but I left before the survey was due to be rerun.
What I learnt
Be mindful of who has to own and defend the changes
We effectively took twice as long as we needed with this research because we changed Product Manager and Designer partway through. Even though we knew both changes were coming, we underestimated the impact they would have, and how critical those roles were in justifying the changes.
Research can’t always be used to solve internal conflict
I tried to use research to resolve internal conflict in the team.
Having two freelancers at the start meant there was a drive to move quickly and show results, but also a lot of tension about who was responsible for what. I tried to solve this by running workshops to find agreement so I could move forward with the research. We would reach initial agreement, but closer to kick-off things would change and there would be a last-minute panic to alter the research.
These last-minute changes reduced the impact I had hoped the research would have. In retrospect, I should have taken a stronger stance on getting the team to agree before the research began: the churn created a perception that the research wasn’t effective, when in fact the outstanding questions weren’t ones research could answer.
When there is a lot of disagreement and things are changing, write things down
One of the problems was that lots of people had different opinions on what success looked like, and the relative importance of various problems. This meant that a lot of discussions were driven by opinion rather than an agreed measure of success.
One thing that helped shift perspectives and move things forward was running a workshop where I asked everybody to agree on what success looked like, and writing the result down. We were then able to refer back to this document after each round of research and move forward in a much calmer, more purposeful way.