
Opinion: The Samaritans Radar debacle proved Twitter needs to better understand its users



How Samaritans Radar went wrong

It took only ten days for the Samaritans Radar Twitter app to fall from grace. What was initially touted as a revolutionary and positive idea, something that would save lives, became a creepy, privacy-invasive tool for stalkers and trolls, potentially in breach of data protection law, and something that distressed and alarmed exactly those people whom it purported to help.

Why and how this fall happened has distinct and potentially important implications for the way that we look at Twitter. Some people will be looking on aghast – why should such an innovative idea be destroyed by a bunch of out-of-touch privacy-obsessed keyboard warriors? Others will be thinking that for once things have gone the right way – people have been able to fight and win against something they saw as deeply intrusive. Both sides have their points.

What the Samaritans Radar app was intended to do was relatively simple. If you authorised it, it would scan the tweets of all the people that you followed and perform what appears to have been a fairly crude form of sentiment analysis.

When it found a tweet that appeared to indicate that the tweeter might be having mental health difficulties or feeling suicidal, it would send you an email alerting you to that fact.
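To illustrate why critics described the analysis as "crude", here is a minimal sketch of what simple keyword-based flagging looks like. The phrase list and function names below are illustrative assumptions, not the app's actual code; the point is that matching of this kind has no grasp of context, quotation, or irony, which is one reason false alerts worried users.

```python
# Hypothetical sketch of crude keyword-based flagging, of the general
# kind the app was reported to use. Phrases and names are assumptions
# for illustration, not the Samaritans Radar implementation.

# Phrases that would trigger an alert (illustrative only).
FLAGGED_PHRASES = [
    "hate myself",
    "can't go on",
    "want to die",
    "no one cares",
]

def should_alert(tweet_text: str) -> bool:
    """Return True if the tweet contains any flagged phrase."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def scan_timeline(tweets: list[str]) -> list[str]:
    """Return the subset of tweets that would trigger an email alert."""
    return [t for t in tweets if should_alert(t)]
```

Note that a sentence like "My friend said she can't go on holiday this year" would also be flagged: pure substring matching cannot distinguish distress from everyday speech.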

The intent was positive: if your friends are feeling vulnerable, you would like to make sure you don't miss one of their tweets, and hence not realise that they might need help. The problems, however, were manifold – starting with the assumption that the people you follow (and hence whose tweets are scanned) are your friends, and conversely that the people who follow you do so with your best interests at heart.

There seemed to be confusion in the app-backers' minds between the Facebook idea of 'friends' and the Twitter concept of following – as well as a general sense that it's 'OK' to analyse and process everyone's tweets because tweets are 'public'. That, in some ways, is the crux of the matter – and why the fall of the Samaritans Radar app has huge potential ramifications.

The Samaritans Radar app was launched with some fanfare on 29th October 2014 with generally positive press coverage, but the reaction on Twitter itself was very different. The overwhelming majority of tweets on the #SamaritansRadar hashtag were very negative.

Many people didn't like the idea of their tweets being scanned. They were aware that they weren't being asked for their consent to have their tweets scanned. Others recognised immediately how the app could be used by stalkers and trolls to find out when their potential victims were vulnerable and to immediately target them with more abuse.

Vulnerable people felt more vulnerable – but the community reaction was remarkable both in its swiftness and its power.

The first reaction came from what might loosely be called the 'mental health community' on Twitter – people who have experience of mental health issues, people who work in mental health, people with friends or relatives in those positions. But it soon spread as those people brought in experts in many different aspects of the story.

The campaign grew, including a vast number of blog posts, interviews and articles in the mainstream and online media, an online petition and more. There were legal objections – the app seemed certain to process sensitive personal data without any consent, but even the Samaritans weren't clear who the data controller would be – as well as practical and ethical ones.

The campaign was strong and sustained – and on the 7th November, just ten days after the launch, the Samaritans suspended the app, pending further review. It is hard to imagine, given the weight and breadth of the objections, that it will reappear in a similar form.

The lessons to learn from this app failure

Among the many lessons to learn from this, two stand out. Firstly, that online communities are stronger and better able to organise and resist interference than developers of apps like this might imagine. Initially those resisting were portrayed as irrelevant, almost as Luddites, complaining from some esoteric idea of privacy that has no place in the modern, online world – but they proved themselves to be something far more than that. The resistance came from the very people that the app was targeted at, and they were far more able, wilful and direct than the app-backers had anticipated. Indeed, they seemed to understand the implications of the app far more than the creators. That in itself should give app developers pause for thought – communities on Twitter can be tough and resistant, and have very strong views about how they like to use Twitter.

That brings out the second point – though the app developers had a simple view of how privacy worked on Twitter, the users had a more complex and nuanced view. To the developers, it was simple: tweets are 'public', so they're fair game for every kind of analysis, and people both couldn't and wouldn't complain if this kind of analysis took place. You always have the option of locking your account, to make it private, they told people, as though this wouldn't have other, negative consequences.

To the mental health community on Twitter, their tweets were in some senses 'theirs', and interference with and analysis of them was not always OK. They looked at how they actually used Twitter – yes, for public pronouncements, but also for conversations, casual chat, and discussions about very personal issues, such as their own mental states. Though those conversations took place in the technically 'public' domain of Twitter, users considered them personal enough that the knowledge they were being scanned and analysed was unwelcome. It felt creepy – and, potentially, it would stop them saying so much. Indeed, a number of people decided not to use Twitter as a result.

So which is right? In one way – according to Twitter's terms and conditions, and indeed the law – tweets are clearly public. But the way that many people use Twitter means their tweets feel private – like an intimate conversation with a friend at the pub. Private conversations in public places.

Whichever side of the argument wins, there are implications. If we can be assumed to have no privacy at all – if all our tweets are entirely fair game, and anyone can do whatever they want with them without any kind of consent – then the effect could well be chilling, making people less willing to use Twitter, or likely to use it less and in less interesting ways, making Twitter itself less attractive.

But if tweets aren't fair game, and people do have at least some sort of privacy – the kind, for example, that would mean apps need to get permission from the tweeter to analyse their tweets, then that places significant limits on the kinds of apps that can be developed.

More importantly, perhaps, it could cut off a potentially lucrative stream of income for Twitter. The Samaritans Radar app was developed for a charity, and on the surface seems not to be about money – but it was developed by a company from the world of marketing. If the app had succeeded, similar apps would have followed – but this time apps that make money from tweet analysis.

As it has not, it could be much harder for other such apps to succeed. With Twitter under pressure to find revenue sources – Standard & Poor's has just given Twitter a junk credit rating – this could be bad news. Twitter has to find new ways to monetise its major asset – our tweets – and the failure of the Samaritans Radar app could leave it scratching its head as to how to make this happen.

There is no easy way out of this – but one thing that the Samaritans Radar saga seems to have made clear is that developers, and indeed Twitter itself, need to become better at listening to and understanding not just the technicalities of the environment, nor even the laws that might apply, but the ways in which people use these systems. People matter – and in more ways than might immediately appear.
