Thoughts on Assigned Reading: Chapter 7, "Usability Assessment Methods Beyond Testing"

There are a few things regarding the chapter’s coverage of questionnaires that I find curious.
• The chapter mentions that questionnaires are usually administered by mail and goes on to suggest that the response rate can be improved by including a prepaid, preaddressed reply envelope. Is this a result of the text having been written a while ago? Wouldn't email be the preferred distribution method today? Wouldn't the date of an electronic questionnaire be easier and more accurate to capture? Or is the response rate for an emailed questionnaire even worse than that of a "snail-mailed" one, so that postal mail is still the preferred method?
• In addition, can the response rate really be as high as 90%? I know that at Stevenson we are having a terrible time getting students to fill out faculty evaluations ever since we moved them to off-time/online status. When they were administered during class time, students were a captive audience and tended to "take their time" and write more free-response answers, since doing so cut into lecture time. Now there is no "time" economic incentive. They must use their free time, and as a result we are finding that response rates are plummeting to the point where even economic incentives do little to improve them. I cannot see how an unincentivized and unmotivated population could reach a 90% response rate when they perceive no "why" value in participating.
• I love the comment, “Essentially, a questionnaire is a user interface…”. Brilliant insight!

Also, in regard to the comment that both questionnaires and interviews are not always "trustworthy" (this actually applies to all the methods discussed in the chapter other than "logging"): true enough, but related to that, is there a bias in both that comes from the fact that we only see the opinions of those who chose to fill out the questionnaire or participate in the interview? I have always wondered how poll results would be affected if we could know the opinions of those who chose not to participate. (This is similar to the bias discussed in the "Focus Group" section when referring to the use of "discussion groups" and the users who frequent them.) The sketch below illustrates the idea.
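To make that non-response worry concrete, here is a minimal, purely hypothetical sketch (not from the chapter) that simulates a poll in which dissatisfied users are less likely to respond. The population size, the 60% satisfaction level, and the response-propensity numbers are all invented for illustration only.

import random

random.seed(1)

# Hypothetical population: True = satisfied with the product, False = not.
# Assume 60% of the full population is actually satisfied.
population = [random.random() < 0.60 for _ in range(10_000)]

# Invented assumption: satisfied users answer the questionnaire 50% of the
# time, while dissatisfied users answer only 20% of the time.
def responds(satisfied):
    return random.random() < (0.50 if satisfied else 0.20)

respondents = [p for p in population if responds(p)]

true_rate = sum(population) / len(population)
observed_rate = sum(respondents) / len(respondents)

print(f"True satisfaction in the whole population: {true_rate:.1%}")
print(f"Satisfaction among those who responded:    {observed_rate:.1%}")
print(f"Response rate:                             {len(respondents)/len(population):.1%}")

With these made-up numbers, the respondents look roughly 79% satisfied even though the full population is only about 60% satisfied; the silent non-participants hide exactly the kind of skew I am wondering about.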

User feedback seems to exist as a yin/yang "pair of opposites" issue. If the feedback is anonymous, users can "speak their mind" without fear of reprisal, whereas a lack of anonymity can lead to less-than-ideal results from self-filtering. But that same freedom can produce what is sometimes seen in public blogs or the comment sections of web articles: anonymity can lead to "flaming," which likewise yields less-than-desired results because there is not enough self-filtering.
