There is a concept in statistics, especially useful in social science when dealing with populations, called “selection bias”: the way you choose your sample can affect the results you get, and may mean that what you find is not, in fact, representative of the population as a whole.
For instance, if you include the question “Are you the sort of person who responds to questionnaires?” in a questionnaire, you can expect close to 100% of respondents to answer “yes”. It would be a mistake to conclude from this that people in general like filling in questionnaires; it would make more sense to look at what proportion of those who received the questionnaire bothered to respond at all.
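The effect is easy to see in a toy simulation. All the numbers below are made up purely for illustration (a population where 20% genuinely like questionnaires, with assumed response rates of 90% for them and 5% for everyone else):

```python
import random

random.seed(0)

# Hypothetical population of 10,000 people; True means "likes questionnaires".
# The 20% figure is an assumption for the sketch, not real data.
population = [random.random() < 0.20 for _ in range(10_000)]

# Assumed response rates: people who like questionnaires respond 90% of
# the time; everyone else responds only 5% of the time.
responses = [likes for likes in population
             if random.random() < (0.90 if likes else 0.05)]

# The true population rate is ~20%...
share_in_population = sum(population) / len(population)

# ...but among those who actually responded, it looks far higher,
# because the sampling method selected for exactly that trait.
share_among_respondents = sum(responses) / len(responses)

print(f"true rate: {share_in_population:.0%}, "
      f"among respondents: {share_among_respondents:.0%}")
```

With these invented numbers, roughly four out of five respondents “like questionnaires”, even though only one in five people in the population does: the sample tells you about who responds, not about the population.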
ITV.com reports that The Samaritans have claimed (the claims apparently come from the Updates page) that their Samaritans Radar app, the subject of much criticism on twitter and in the blogosphere (including my article at the weekend):
was tested by “young people with mental health problems, Samaritans’ volunteers, social media platforms and other organisations”
Now, the more astute reader may have leapt ahead here and already drawn the connection between answering questionnaires, and testing the Samaritans’ new app.
That connection is, of course, that there may just possibly be a selection bias in the testing of a new app designed to spy on people. The unanswered questions are how the Samaritans recruited their “young people with mental health problems” for the testing of the app; how feedback was gathered from them; and how the test itself was conducted (who the users were, assuming the “young people with mental health problems” were the subjects).
One would assume that it would be considered unethical to test the app without informing the test subjects of the nature of the app being tested on them (subjects here as distinct from users: a person who signs up for the app is a “user”; a person the user spies on is a “subject”). If a person agrees to be part of such a test, and is aware that the test is for an app that will spy on their twitter account in the way that Samaritans Radar does, then it seems natural to assume that this is a person who has fewer concerns about how that spying affects them than someone who, on hearing what it’s about, declined to participate in the test.
Alternatively, if test subjects are not told what is being tested, then feedback is going to miss entirely the sorts of concerns that have been raised in terms of the “chilling effect” that many twitter users have described as a consequence of knowing that the Samaritans Radar app exists.
Similarly, if, when they say it was tested by “young people with mental health problems” and “Samaritans’ volunteers,” they mean the test users were all Samaritans volunteers, then it follows that the test users were all well trained in how to handle situations where a mental health crisis comes to their attention, and how to help a person in need. None of their test users were well-meaning but unhelpful people, and none of them were likely to be exploitative or abusive people stalking the test subjects (except inasmuch as the app itself is a form of stalking).
There’s nowhere in that testing process for the problems to have been identified. It sounds as though the Samaritans only wanted to know whether the app would do what they wanted it to; they didn’t set up their test to surface problems or unintended consequences. Only now, after its launch, have these issues been brought to their attention, but because they’ve “tested” the app, they ignore them.