Samaritans Radar – the app that cares

The Samaritans have designed a new app that scans your friends’ Twitter feeds and lets you know when one of them might be vulnerable, so you can call them and maybe prevent them from committing suicide.

It has caused a lot of discussion, and publicly at least, the feedback to the organisation is not massively positive. I have problems with it on several fronts.

From the description, it sounds like it is basically doing some sort of sentiment analysis. Before we ever get into the details of privacy, consent, and all that, I would say “stop right there”.

Sentiment analysis is highly popular at the moment; my Twitter feed is littered with promoted tweets on text mining. It is also fair to say that its accuracy is not guaranteed. Before I’d even look at this application, I would want to know on what basis it is assessing tweets as evidence of problems. We’ve seen some rather superficial sentiment analysis done in high-profile (and consequently controversial) studies in the past, including that study by Facebook, for example. Accuracy in something like this is massively important and, unfortunately, I have absolutely no faith that we can rely on it to work.

According to the Samaritans:

Our App searches for specific words and phrases that may indicate someone is struggling to cope and alerts you via email if you are following that person on Twitter.

The Samaritans include a list of phrases which may cause concern, and, on their own, yes, they are the type of phrases you would expect to cause concern. But it’s not clear how much more granular the underlying text analysis is, or on what basis their algorithm works. This is something which Jam, the digital agency responsible for this product, really, really should be far more open about.

In principle, here is how the application works: a person signs up, their incoming feed is constantly scanned, and when a tweet from one of their contacts trips the algorithm, the app generates an email to the person who signed up saying that one of their contacts may be vulnerable.
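That flow can be sketched in a few lines of Python. To be clear, everything here – the names, the email wording, the shape of the classifier – is my own guess at the mechanics described above, not Jam’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

def scan_feed(subscriber_email, feed, classifier, send_email):
    """Run every incoming tweet past the classifier; email the
    subscriber (not the tweet's author) on each hit."""
    alerts = 0
    for tweet in feed:
        if classifier(tweet.text):
            send_email(
                to=subscriber_email,
                subject=f"Radar alert: @{tweet.author} may be vulnerable",
            )
            alerts += 1
    return alerts
```

Note what this makes obvious: the person being emailed is the subscriber, and the person being analysed never appears anywhere in the sign-up step.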

This may come across as fluffy and nice and helpful, and it is, if you avoid thinking about several unpleasant factors.

  1. Just because a person signs up to Radar does not mean their friends signed up to have their tweets processed and acted upon in this way.
  2. Textual analysis is not always correct and there is a risk of false positives and false negatives.
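Point 2 is easy to demonstrate. With a naive substring matcher of the kind the “specific words and phrases” description suggests (the phrase list below is illustrative, not the Samaritans’ actual list), everyday hyperbole trips the filter while indirect distress sails past it:

```python
# Illustrative phrase list -- NOT the app's real one.
PHRASES = ["kill myself", "can't go on"]

def flagged(tweet):
    # Naive case-insensitive substring match.
    return any(p in tweet.lower() for p in PHRASES)

# False positive: everyday exaggeration.
print(flagged("Forgot my keys again, I could kill myself"))         # True
# False negative: genuine distress, phrased indirectly.
print(flagged("Everything would be easier if I just wasn't here"))  # False
```

Any system built on matching like this will generate both kinds of error, and with mental health the cost of each error lands on a real person.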

Ultimately, my Twitter account is public and always has been. You will find stuff about photographs, machine learning, data analysis, the tech industry, the weather, knitting, lace making, general chat with friends. I’m aware people may scan it for marketing reasons. I’m less enthusiastic about the idea of people a) scanning it to check my mental health and b) enabling a decision, taken without any consideration of whether I agree, that causes a friend or acquaintance to act on it.

It also assumes that everyone who follows me on Twitter is a close friend. This is an incredibly naive assumption given the nature of Twitter. 1,200 people follow me from the various worlds my life touches, including data analysis and machine learning. Many of them are people I have never, ever met.

One of the comments on the Samaritans’ site about this is telling:

Unfortunately, we can’t remove individuals as it’s important that Radar is able to identify their Tweets if they need support.

Actually, this is no longer true, because a lot of people on Twitter made it clear they weren’t happy about having their tweets processed in this way.

Effectively, someone thought it was a good idea to opt a lot of people into a warning system without their consent. I can’t understand how anyone could miss the point so completely.

Anyway, now there is a whitelist you can use to opt out. Here’s how that works.

Radar has a whitelist of Twitter handles for those who would like to opt out of alerts being sent to their Twitter followers. To add yourself to the Samaritans Radar whitelist, you can send a direct message on Twitter to @samaritans. We have enabled the function that allows anyone to direct message us on Twitter, however, if you’re experiencing problems, please email:

So, I’ve never downloaded Radar, I want nothing to do with it, but to ensure that I have nothing to do with it, I have to get my Twitter ID put on a list.

In technical terms, this is a beyond STUPID way of doing things. There’s a reason people do not like automatic opt-in to marketing mail, and that’s with companies they’ve dealt with. I have no reason to deal with the Samaritans, but now I’m expected to tell them they must not check my tweets for suicidal content, otherwise they’ll do it if just one of my friends signs up to Radar? And how does the app actually work: does it check the text or the user ID first? If the app resides on a phone, does it have to call home to the Samaritans every single time to check an updated list? What impact will that have on data usage?
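The ordering question matters more than it might seem. A sketch, under my own assumption that the whitelist is simply a set of Twitter handles (all names here are hypothetical): checking the whitelist before running any text analysis means opted-out users’ tweets are never analysed at all, whereas checking it afterwards means they are analysed and merely not reported – a real privacy difference.

```python
def should_alert(author, text, whitelist, classifier):
    """Whitelist checked FIRST: opted-out users' tweets
    are never passed to the classifier at all."""
    if author in whitelist:
        return False
    return classifier(text)
```

If the check instead ran after classification, an opted-out person’s tweets would still be scored for suicidality; they just wouldn’t be told about it.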

Ultimately, the first problem I have with this is that I’m dubious about relying on text analysis for anything at all, never mind mental health matters. The second is that the Samaritans don’t appear to understand that just because my tweets are public does not mean I want an email sent to one of my friends suggesting they need to take action regarding my state of mental well-being.

The Samaritans have received a lot of negative feedback on Twitter about it. Various other blogs have pointed out that the Samaritans really should have asked people’s permission before signing them up to an early warning system they might not even know exists, and that the flagging of tweets generates data about users which they never gave permission to be generated.

So they issued an updated piece of text trying to do what I call the “there there” act on people who are unhappy about this. It does nothing to calm the waters.

We want to reassure Twitter users that Samaritans does not receive alerts about people’s Tweets. The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already.

Sorry, not good enough. I don’t want alerts generated off the back of my tweets. Don’t do it. It’s bold. Also, don’t ask me to stop it happening, because I never asked for it to happen in the first place. It’s a bit Big Brother Is Watching You. It’s why, at some point, people will get very antsy about big data.

Having heard people’s feedback since launch, we would like to make clear that the app has a whitelist function. This can be used by organisations and we are now extending this to individuals who would not like their Tweets to appear in Samaritans Radar alerts.

Allowing individuals to opt out of this invasive drivel was not there by default (in fact they made it clear they didn’t want it), and now, to get out of it, they expect Twitter users to opt out. I have to make the effort to extract myself from the spider’s web of stupidity. The existence of a whitelist is not a solution to this problem. People should not have to opt out of something they never opted into in the first place. Defaulting the entirety of Twitter into this was a crazy design decision. I’m stunned that Twitter didn’t pull them up on it.

It’s important to clarify that Samaritans Radar has been in development for well over a year and has been tested with several different user groups who have contributed to its creation, as have academic experts through their research. In developing the app we have rigorously checked the functionality and approach taken and believe that this app does not breach data protection legislation.

  • I want to see the test plans and reports. It sounds to me like the testing never included checking whether people wanted this in the first place.
  • Name the academics.
  • They cannot credibly claim to have checked the functionality and approach when almost the first change they’ve had to make is to broaden access to the whitelist.
  • Presumably the app is only available in the UK, but does it check whether the contacts are in the UK?

Those who sign up to the app don’t necessarily need to act on any of the alerts they receive, in the same way that people may not respond to a comment made in the physical world. However, we strongly believe people who have signed up to Samaritans Radar do truly want to be able to help their friends who may be struggling to cope.

Yes, but the point is that the app may not be fully accurate – I would love to know how they tested its accuracy rates, to be frank – and additionally, the people whose permission matters are not the people who sign up to Radar, but the people whose tweets get acted on. Suggesting “people may not do anything” is logically a stupid justification: the app is predicated on the idea that they will.

So here are two questions:

Do I want my friends getting email alerts in case I’m unlucky enough to post something which trips a text analysis tool which may or may not be accurate? The answer to that question is no.

Do I want to give my name to the Samaritans to go on a list of people who are dumb enough not to want their friends to check up on them in case things are down? The answer to that question is no.

I’m deeply disappointed in the Samaritans about this. For all their wailing that they talked to this expert and that expert, it’s abundantly clear that they did not plan for any negative fallout. They claim to be listening, and yet there’s very limited evidence of that.

You could argue that there needs to be serious research into how accurate the tool is at identifying people who need help; there also needs to be an understanding that even if, to the letter of the law in the UK, it doesn’t break data protection, there are serious ethical concerns here. I’d be stunned if any mental health professional thought that textual analysis of 140-character messages was a viable way of classifying a person as being in need of help or not, even if you could rely on textual analysis in general. This application, after all, is credited to a digital agency, not a set of health professionals.

If I were someone senior in the Samaritans, I’d pull the app immediately. It is brand-damaging – and that may ultimately have fundraising implications as well. I would also talk to someone seriously to understand how such a public relations mess could have been created. And I would ask for serious, serious research on the use of textual analysis for identifying mental health states, and without it, I would not have released this.

It is one of the most stupid campaigns I have seen in a long time. It is creepy and invasive, and it depends on a technology which is not without its inadequacies.

Someone should have called a halt before it ever reached the public.
