The following post is derived from a precis I wrote in Foundations of HCI over a year ago.
Dearman and Truong claim that Yahoo! Answers (YA) top and regular contributors don’t respond to questions for the same basic reasons:
- The nature or content of the question
- The responder’s belief that their answer will get lost in the “crowd” because the question already has many answers
- The responder’s perception that the asker might react to or interpret the answer negatively
- The question requires more time, effort, or expertise than the responder is willing to give
The authors support these arguments with data collected from YA top and regular contributors. They found that the number one reason questions go unanswered, and the most common reason responders skip over questions, is the nature or content of the question: if a question is insincere and the asker seems to be stirring things up, it will often be ignored. Responders also skip questions that already have many answers, especially when one of those answers sufficiently addresses the question. The authors note that responders want to be recognized; they do not want to be lost in a sea of answers.
Responders will not answer a question if they sense that the answer might be misinterpreted or provoke negative reactions from the asker and/or the community. Responders do not want to be reported or flagged simply for holding a different opinion; sometimes not answering is the safer approach. Finally, questions go unanswered when they require more time, effort, or expertise than the responder is willing to give at the moment.
Responders also skip questions when they do not know the answer, or when the question is so trivial that the asker could easily have answered it him- or herself.
Dearman and Truong offer research and design suggestions for addressing the problems outlined above. To encourage responders to answer despite their belief that their answer will get lost in the sea of previous answers, the authors propose visualizing the accumulated responses. For example, top contributors’ responses could be elevated, distinguishing them from regular responders’ answers. Responses could also be grouped so that they appear fewer in number and so that the grouping reveals a pattern of answers for that specific question.
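The paper does not specify a grouping mechanism, but as an illustrative sketch, one simple way to surface a “pattern of answers” is to collapse near-identical responses and order the groups by frequency (grouping by normalized text is my own assumption here):

```python
# Illustrative sketch only: grouping accumulated answers so a pattern is
# visible at a glance, rather than a long undifferentiated list.
from collections import Counter

def answer_pattern(answers: list[str]) -> list[tuple[str, int]]:
    """Collapse repeated answers (case- and whitespace-insensitive)
    and order the groups by frequency, most common first."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common()

print(answer_pattern(["Yes", "yes", "No", "Yes ", "maybe"]))
# → [('yes', 3), ('no', 1), ('maybe', 1)]
```

A new responder seeing three grouped “yes” answers knows the consensus at a glance, and a dissenting answer would stand out rather than disappear into the crowd.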
The authors suggest giving responders knowledge of the asker’s behavior in YA to reduce responders’ fear of being reported for answering certain questions from a different perspective (with a different opinion). Creating this transparency would let members judge how an asker would likely react. The authors caution, “It is important to ensure that the crying ‘wolf’ does not impact the community’s efficacy or ostracize specific members” (p. 3).
The authors suggest incorporating into the user interface a mechanism that shows users whether the question they are about to ask has already been asked and answered. This would reduce duplicate questions and save responders time, because askers could get immediate answers by referring to similar questions.
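The paper does not describe how such a mechanism would work; a minimal sketch, assuming a naive word-overlap (Jaccard) similarity and a threshold I chose for demonstration, might look like this:

```python
# Illustrative sketch (not from the paper): flagging likely duplicate
# questions with Jaccard similarity over word sets. Real systems would
# use stemming, stopword removal, or semantic matching instead.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two questions, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def find_similar(new_question: str, answered: list[str],
                 threshold: float = 0.5) -> list[str]:
    """Return previously answered questions similar enough to show
    the asker before they post a duplicate."""
    return [q for q in answered if jaccard(new_question, q) >= threshold]

answered = [
    "How do I reset my email password?",
    "What is the capital of Australia?",
]
print(find_similar("how do i reset my password", answered))
# → ['How do I reset my email password?']
```

Surfacing these matches at posting time gives the asker an immediate answer and spares responders a duplicate question.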
Last but not least, the authors suggest guiding askers on question complexity and length, so that questions are framed in a way prospective responders can answer with ease. The authors found that responders avoid questions that are too long and complex; providing feedback about the question before it is posted would address this. For example, “the interface could disclose how many questions of a similar length or complexity are answered, and how many responses they receive” (p. 4). The interface could also suggest breaking the question apart, or do so for the asker.
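As a hypothetical sketch of this pre-post feedback (the word-count threshold and message wording are my own assumptions, not from the paper):

```python
# Hypothetical sketch of pre-post feedback on question length.
# The 50-word threshold is an assumption for illustration; a real system
# would derive it from how answer rates vary with question length.

def question_feedback(question: str, max_words: int = 50) -> str:
    """Warn the asker before posting if the question is likely too
    long and complex for responders to engage with."""
    n = len(question.split())
    if n <= max_words:
        return "Looks good: questions of this length tend to get answered."
    parts = -(-n // max_words)  # ceiling division
    return (f"Your question is {n} words long; long, complex questions "
            f"are often skipped. Consider splitting it into about "
            f"{parts} shorter questions.")

print(question_feedback("Why is the sky blue?"))
```

The same hook could also report, as the quoted suggestion describes, how many questions of similar length were answered.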
The evidence for the authors’ claims comes from a 15-week qualitative study of regular and top YA contributors. Through this study, the authors gathered data on why YA contributors don’t answer questions. In addition, they drew on prior research and their own expertise to propose solutions to the problems the YA data revealed.
The authors conducted an online survey of 135 top and regular YA contributors. The 15-week study consisted of tracking YA’s weekly top and regular contributors in the United States and Canada, then contacting 731 of these users; 135 responded and completed the online questionnaire. The survey consisted of eight short closed-ended questions and six open-ended questions. A statistical analysis of the data led the authors to conclude that contributors in Canada and the U.S., and regular and top contributors alike, showed similar patterns in the survey responses.
Reflecting on Dearman and Truong’s findings, this paper points to the Online Communities and Social Computing sub-field of HCI, and more specifically shines a spotlight on trust in online communities. Communities such as Yahoo! Answers are built primarily to establish and maintain a sense of trust between users. As with other online communities, trust is one of the factors in user engagement, and as designers and researchers we have to be intentional about understanding and creating designs imbued with values such as trust in order to maximize user engagement and experience.
To engage your audience, users should be able to trust the system and the experience it provides at all levels--the interface and beyond.
Source: Why Users of Yahoo! Answers Do Not Answer Questions by David Dearman and Khai N. Truong