A full contextual inquiry into academic research unearthed an opportunity
to design collaborative annotation software for literature review.
Six-person team
Our client, a search support and analytics company, wanted to explore new features that could be of use to academic researchers. We were asked to keep the theme of textual analysis in mind as we conducted user research, but our brief was otherwise very open. Over seven weeks, our team conducted a full contextual inquiry, including stakeholder interviews, data consolidation, ideation, and a final design proposal for the client.
Stakeholder interviews and interpretation
In pairs, we conducted in-depth interviews with academic researchers (one professor and two PhD students) for an hour each to gain insight into their research process and understand the pain points they routinely face. Our interview plan covered all aspects of the research process, including literature search, data work (collection, cleaning, and analysis) and the tools used for team communication and collaboration.
Following the interviews, the team consolidated our notes into discrete, concise statements that we could use for affinity diagramming.
Our team next consolidated our interpretation session notes with a second team who had also interviewed a set of academic researchers. We mixed the individual insights into a pile and put them up one by one onto a whiteboard. As notes were put up, we iteratively arranged and rearranged them into thematically similar clusters to form broader categories from the bottom up.
During the affinity diagramming process, several key insights emerged. We noticed the value of previous experience in conducting background research, the uncertainty involved in assessing the quality and relevance of papers, the difficulty of finding relevant papers in other fields, the need for collaboration and collaborative tools, and the strategies (and workarounds) used to organize literature review papers. Common sentiments included frustration with finding quality, relevant papers using current literature search algorithms, as well as difficulty annotating and organizing relevant papers.
Sequence flow models
In the same time period as our affinity diagramming sessions, we built sequence flow models depicting key roles and workflows throughout the academic research process. Our first version of these diagrams helped to uncover core breakdowns our interviewees faced.
In keeping with our client's theme of textual analysis, we homed in on the literature review phase that occurs in the initial stages of a research project. This decision helped us to scope our design space by omitting tasks occurring later in the cycle, including data analysis and usability testing.
Consolidated flow model
We consolidated the literature-review-specific sequence flow models from each interpretation session into one cohesive picture. We iterated through the sequence flow design a few times until settling on a final consolidated model representative of all six interviewees to whose interpretation notes we had access.
Cultural model
Using information from the affinity diagram and individual sequence flow diagrams, we created a cultural model of key players that interact with and influence each other during the creation of an academic research project. The model represents the environment and relationships between stakeholders at Carnegie Mellon, as well as what different parties in this domain expect from one another in the research setting. Common themes involved prestige, collaboration, and competition.
Walking the wall
Armed with the consolidated sequence flow diagram, cultural model, affinity diagram, a mass of post-it notes, and some sharpies, the team carefully reviewed everything we had come to understand to prepare for visioning. We spent a few hours covering our diagrams and models with key issues, design ideas, and questions pertaining to the data. After coming back together, we drafted a list of everything our team wrote down, condensing similar ideas in the process. This step gave us the opportunity to immerse ourselves in all parts of the data and begin directly considering viable design solutions.
Visioning
Having walked the wall, we were ready to start brainstorming potential solutions from our design ideas. We started with visioning: within reason, no ideas were off limits. Some methods we used included brainstorming, bodystorming, and sketching. Especially with bodystorming, our team was able to build the simplest of paper prototypes on the fly and explore their interactions through short acting sessions.
Our most promising design ideas involved fostering communication at conferences, collaborative annotation software, and mixed reality applications for academic paper annotation. A major takeaway from this process was that the suspension of judgment allowed team members to both physically act freely and suggest crazy ideas that eventually turned into our final design proposal.
The storyboarding process involved condensing our unadulterated ideas into more technologically feasible concepts we could further discuss with PhD researchers. For example, our “virtual reality annotation world” was distilled to an augmented reality tabletop program.
We next conducted speed dating sessions with graduate students at Carnegie Mellon, walking them through our storyboards and noting their feedback. After each session, we iterated on our storyboards as necessary before talking to the next researcher. This rapid iteration with users helped to smooth out areas of our storyboards that were either technologically unrealistic or inappropriate for formal contexts. For example, phone usage at a conference might be seen as disrespectful, so conference nametags, rather than phones, emerged as a functional compromise.
Ultimately, collaborative annotation software was the unanimous favorite amongst PhD researchers for both its utility and feasibility, and we proceeded with that idea for our final design proposal.
Final design proposal
Our contextual inquiry guided us to address the need for research teams to have better tools for collaboration. Our proposed product, LiteraShare, starts by allowing researchers to annotate directly on papers. From there, teams can sync their copies of papers with each other to quickly share insights and answer questions. If annotations become distracting, or a professor wants to focus on a specific student, a layer system allows users to quickly toggle teammate insights on and off.
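To make the layer concept concrete, here is a minimal sketch of how per-teammate annotation layers could be modeled. All names here (`Annotation`, `PaperAnnotations`, `toggle_layer`) are illustrative assumptions for this write-up, not the actual LiteraShare design.

```python
# Hypothetical model of LiteraShare's layer system: annotations are grouped
# by author, and each author's "layer" can be hidden or shown independently.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str   # teammate who made the note
    page: int     # page of the paper the note is attached to
    text: str     # the annotation itself

@dataclass
class PaperAnnotations:
    annotations: list = field(default_factory=list)
    hidden_layers: set = field(default_factory=set)

    def add(self, author: str, page: int, text: str) -> None:
        self.annotations.append(Annotation(author, page, text))

    def toggle_layer(self, author: str) -> None:
        # Hide a teammate's layer if visible; show it again if hidden.
        if author in self.hidden_layers:
            self.hidden_layers.discard(author)
        else:
            self.hidden_layers.add(author)

    def visible(self) -> list:
        # Only annotations from currently visible layers are rendered.
        return [a for a in self.annotations if a.author not in self.hidden_layers]
```

For example, a professor reviewing one student's notes could toggle off every other teammate's layer, leaving only that student's annotations visible on the shared copy.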
Although we had to limit the scope of our project to the context of our client, our team was fascinated by future extensions of our product. We envision a "StackOverflow for academia", where researchers can ask and answer literature questions from their peers around the world. The implementation would be more complex than the research team microsystem, but we are very curious to explore this space.