Samaritans Radar, again

The furore refuses to die down and, to be honest, I do not think the Samaritans are helping their own case here. This matters hugely, not just in the context of the Samaritans’ application, but for data analysis in the health sector in general. In my view, the Samaritans have got this terribly wrong.

If you’re not familiar with Samaritans Radar, here is how it works.

  • You may be on Twitter, and your account may have any number of followers.
  • Any one of those followers may decide that they like the idea of getting a warning in case any of the people THEY follow are suicidal.
  • Without obtaining permission from the people they follow, they sign up for Samaritans Radar, which reads the tweets those people post, runs a machine learning algorithm against them, and flags any tweet the algorithm considers a potential cause for concern about a possible suicide attempt.
  • The app then generates an email to the person who installed it (a rough sketch of this flow follows below).
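
To make that flow concrete, here is a minimal sketch of how such a pipeline hangs together. To be clear, this is my reconstruction from the Samaritans’ own description, not their code: every name in it (radar_pass, looks_concerning and so on) is a placeholder, and it does not touch any real Twitter API.

    from dataclasses import dataclass
    from typing import Iterable, List

    # Hypothetical reconstruction of the flow described above. None of these
    # names come from the actual app or from any real Twitter API; the point
    # is the shape of the pipeline and, in particular, who gets asked.

    @dataclass
    class Tweet:
        author: str   # the followee who wrote it; never asked about any of this
        text: str

    def looks_concerning(text: str) -> bool:
        """Stand-in for the app's classifier, whatever it actually is."""
        return "can't go on" in text.lower()   # crude keyword match, illustrative only

    def radar_pass(subscriber_email: str, followee_tweets: Iterable[Tweet]) -> List[str]:
        """One pass over the public tweets of everyone the subscriber follows."""
        alerts = []
        for tweet in followee_tweets:
            if looks_concerning(tweet.text):
                # In the real app this triggers an email to the subscriber.
                alerts.append(f"Alert for {subscriber_email}: @{tweet.author} tweeted {tweet.text!r}")
        return alerts

Note what is missing from that loop: nothing in it ever contacts the followee, asks them anything or tells them anything. Only the subscriber opted in, which is the design decision most of the objections below come back to.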

In their blurb, the Samaritans make it clear that at no point will the person whose tweets are being processed be asked, or necessarily even know, that this is happening. As an added bonus, at the outset their FAQ made it clear that they had no intention of letting people out of having their tweets processed in this way without their consent or even knowledge. There was a whitelist for the occasional organisation whose language might trip the filter, but beyond that, if a friend or contact of yours installed the application, you had no way out.

That last part didn’t last for long. They now accept requests to put your Twitter ID on what they call a whitelist but which is effectively an opt-out list. And their performance target for getting you opted out is 72 hours. So you can be opted in instantly without your permission, but it may take three days to complete your request to opt out, and you get entered on a list into the bargain, despite not wanting anything to do with this.

There is a lot of emotive nonsense swirling around this application, including the utterly depressing emotional-blackmail line of “If it saves even one life, it’ll be worth it”. I’m not sure how you prove it saves even one life, and against that, given the criticism of it, you’d have to wonder what happens if it costs even one life. And that is the flipside of the coin: as implemented, it could.

When I used to design software, I did so on the premise that software design should also guard against things going wrong. There are a number of serious issues with the current implementation of Samaritans Radar, and a lot that is unclear about what exactly they are doing.

  • As implemented, it seems to assume that the only people who will be affected by this are their target audience of 18-35 year olds. This is naive.
  • As implemented, it seems to assume that there is an actual friendship connection between followers and followees. Anyone who uses Twitter for any reason at all knows that this is wrong as well.
  • As implemented, it defaults all followees into being monitored while simultaneously guaranteeing data protection rights not to them but to their followers.
  • As implemented, it is absolutely unclear whether there are any geographical limitations on the reach of this mess. This matters because of the different data protection regulations in different markets. And that’s before you get to some of the criticisms regarding whether the app is compliant with UK data protection regulations.

So, first up: what is the difference between what this app is doing and, for example, market research analysis run against Twitter feeds?

This app creates data about a user and it uses that data to decide whether to send a message to a third party or not.

Twitter is open – surely if you tweet in public, you imagine someone is going to read it, right? This is true, up to a point. But there’s a difference between someone actively reading your Twitter feed and someone being sent emails based on keyword analysis of it. In my view, if the Samaritans want to go classifying Twitter users as either possibly at risk of suicide or not, they need to ask those Twitter users first. They haven’t done that.

The major issue I have with this is that I am dubious about sentiment analysis anyway, particularly for short texts, which tweets are.
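
To illustrate why (and this is a deliberately crude, made-up example, not whatever model the Samaritans are actually using), consider what a naive keyword-based approach does with everyday, figurative English:

    # Deliberately naive keyword matcher, to show how little context a short
    # tweet gives any classifier to work with. Purely illustrative; this is
    # not the Samaritans' algorithm.

    CONCERN_PHRASES = ["want to die", "kill me", "end it all", "can't take it"]

    def naive_flag(tweet: str) -> bool:
        lowered = tweet.lower()
        return any(phrase in lowered for phrase in CONCERN_PHRASES)

    examples = [
        "This traffic is going to kill me, I swear",        # hyperbole
        "Monday mornings make me want to die",              # figure of speech
        "I can't take it with this referee any more",       # football
        "Going quiet for a while. Tired of everything.",    # possibly the real thing
    ]

    for tweet in examples:
        print(naive_flag(tweet), "-", tweet)

The first three trip the filter; the fourth, arguably the one a friend should actually notice, sails straight through. A hundred and forty characters rarely carry enough context to tell the difference, and a more sophisticated model only shifts the problem rather than removing it.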

Arguably, this is acting almost as a mental-health diagnostic tool. If we were looking to implement an automated diagnostic tool of any description in healthcare, it’s pretty certain that we would want it tested for very high accuracy. Put simply, when you’re talking about health issues, you really cannot afford to make too many mistakes. Bearing in mind that, for example, failure rates of around 1% in contraception make for lots of unplanned babies, a 20% misclassification rate in flagging people as possibly suicidal could be seriously problematic. A large number of false positives means a lot of incorrect warnings.
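
To put rough numbers on that: the figures below are mine, picked purely for illustration (the Samaritans have not published accuracy figures as far as I can see), but they show how even an apparently decent classifier behaves when genuine cases are rare and the volume is large.

    # Back-of-envelope illustration of the base rate problem. Every figure
    # here is an assumption chosen for the example, not a published statistic.

    tweets_scanned_per_day = 1_000_000   # order of magnitude only; they claim ~1M feeds monitored
    genuinely_at_risk_rate = 0.001       # assume 0.1% of scanned tweets genuinely warrant concern
    sensitivity = 0.80                   # assume the classifier catches 80% of genuine cases
    false_positive_rate = 0.05           # assume it wrongly flags 5% of the benign ones

    genuine = tweets_scanned_per_day * genuinely_at_risk_rate
    benign = tweets_scanned_per_day - genuine

    true_alerts = genuine * sensitivity
    false_alerts = benign * false_positive_rate

    precision = true_alerts / (true_alerts + false_alerts)

    print(f"True alerts per day:  {true_alerts:,.0f}")          # 800
    print(f"False alerts per day: {false_alerts:,.0f}")         # 49,950
    print(f"Chance a given alert is genuine: {precision:.1%}")  # about 1.6%

On those assumptions, roughly one alert in sixty is real.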

Some people might argue that a lot of incorrect warnings is a small price to pay if even one life is saved. In the real world, however, what happens is that a lot of incorrect warnings breed complacency. And false negatives, the classifications where a genuine issue is missed, may result in harm or death.

Statistics talks about Type 1 and Type 2 errors: a Type 1 error is a false positive, flagging something that isn’t really there, and a Type 2 error is a false negative, missing something that is. The rates of those errors matter a lot in health diagnosis. In my view, they should matter here, and if the Samaritans have done serious testing in this area, they should release the test results, suitably anonymised. If they have not, then the application was nowhere near adequately tested. Being honest, I’m really not sure how they might effectively test for false negatives while respecting informed consent.
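
For what it’s worth, measuring those rates is the easy part; the hard part is building an honestly labelled test set with the consent of the people whose tweets are in it. A sketch of the sort of evaluation whose results I would want to see published (hypothetical code, obviously, not theirs):

    # Sketch of a standard evaluation: run the classifier over a labelled
    # test set and report both error rates. The classifier and the data
    # would be the real things; this scaffolding is just illustrative.

    def evaluate(classifier, labelled_tweets):
        """labelled_tweets: iterable of (text, truly_concerning) pairs."""
        false_pos = false_neg = true_pos = true_neg = 0
        for text, truly_concerning in labelled_tweets:
            flagged = classifier(text)
            if flagged and not truly_concerning:
                false_pos += 1    # Type 1 error: a needless alarm sent to a follower
            elif truly_concerning and not flagged:
                false_neg += 1    # Type 2 error: someone at risk, missed
            elif flagged:
                true_pos += 1
            else:
                true_neg += 1
        return {
            "type_1_rate": false_pos / max(false_pos + true_neg, 1),
            "type_2_rate": false_neg / max(false_neg + true_pos, 1),
        }

Those two numbers, measured on a realistic test set, are what I would want to see before anyone’s followers start receiving emails.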

Ultimately, one point I would make is that sometimes the world is not straightforward, and some things just aren’t binary; some things exist on a continuum. This app, in my view, could move along that continuum from a bad thing towards a good thing if the issues with it were dealt with. At the absolute best, you could argue that the application is a good thing done badly, spectacularly badly in my view, since it may allow people who are not out for your good to monitor you and identify good times to harass you. The Samaritans’ response to that was to suggest making a complaint to Twitter if you get harassed. A better response would be to recognise this risk and design against enabling such harassment in the first place.

Unfortunately, as things stand, if you want to prevent that happening, you have to ask the Samaritans to put you on a list. The app, as designed, defaults towards allowing the risk and assumes that people won’t do bad things. This may not be a good idea in the grand scheme of things. It would be better to design the app to prevent people from doing bad things.

The thing is, in the grand scheme of things, this matters a lot, not just because of this one app, but because it calls into question a lot of things around data mining and data analysis in healthcare, whether physical or mental.

If you wanted, you could rewrite this app such that, for example, every time you posted a tweet about having fast food from any particular fast food company, concerned friends sent you an email warning you about your cholesterol levels. Every time you decided to go climbing, concerned friends could send you emails warning you how dangerous climbing is, and what might happen if you fell. Every time you went on a date, someone could send you a warning about the risk that your new date could be an axe-murderer. You’d have to ask whether the people who are signing up to this, and merrily auto-tweeting about turning their social net into a safety net, would love it if their friends were getting warnings about the possibility that they might get raped, have heart attacks, get drunk, fall off their bikes, or get cancer if they light up a cigarette.

I personally would find that intrusive. And I really don’t know that Twitter should default towards generating those warnings rather than defaulting towards asking me whether I want to be nannied by my friends in this way. I’d rather not be, actually. I quite like climbing.

The biggest issue I have with this, though, is that it is generating a monumentally negative discussion around machine learning and data analysis in the healthcare sector, muddying the water for other work in this area. People like binary situations; they like black and white, where either everything is right or everything is wrong. If I were working in the health data sector, looking into automated classification of any sort of input for diagnosis support, for example, I’d be looking at this mess in horror.

Already, a lot of the voices against this application – which is horrifically badly designed and implemented – are also voicing general negativity about data analysis and data mining. And yet data mining has, absolutely, saved lives in the past. What John Snow did to identify the cause of the 1854 Broad Street cholera outbreak was pure data mining and analysis. Like any tool, data analysis and mining can be used for good and for bad. I spent a good bit of time last year looking at data relating to fatal traffic accidents in the UK, and from that concluded that a big issue with respect to collisions was junctions with no or unmarked priorities.

So the issue is not just that this app causes problems by analysing the mindset of various unsuspecting Twitter users and telling their friends on them, but that it could have a detrimental impact on the use of data analysis as a beneficial tool elsewhere in healthcare.

So what now? I don’t know any more. I used to have a lot of faith in the Samaritans as a charity, particularly given their reputation for integrity and confidentiality. Given some of their responses to the dispute around this application, I really don’t know if I trust them at the moment, because they seem unwilling to understand what the problems with the application are. Yes, they are collecting data; yes, they are creating data based on that data; and yes, they are responsible for it. And no, they don’t understand that they are creating data, and no, they don’t understand that they are responsible for it. If they did, they wouldn’t have written this (update 4th November):

We condemn any behaviour which would constitute bullying or harassment of anyone using social media. If people experience this kind of behaviour as a result of Radar or their support for the App, we would encourage them to report this immediately to Twitter, who take this issue very seriously.

In other words: we designed this app which might enable people to bully you, and if they do, we suggest you annoy Twitter about it and not us.

It’s depressing.

The other issue is that the Samaritans appear to be lawyering up, talking about how it is legal and not against the law. This misses a serious point, something which is often forgotten in the tech industry (i.e., do stuff first and ask forgiveness later), namely: just because you can do something doesn’t mean you should do it.

Right now, I think the underlying idea of this application is a good one, but it is so badly implemented that it sits safely in the zone of a bad idea. Again, if I were the Samaritans, once the first lot of concerns started being voiced, I would have pulled the application and looked at the problems around consent to being analysed and to having data generated and forwarded to followers. It’s obvious, though, that up front they thought it was a good idea to do this without consent, and you’d have to wonder why. In general terms, if you look at my Twitter feed, it’s highly unlikely (unless their algorithm is truly awful altogether) that anything I post would trip their algorithm. I’m not coming at this from the point of view of feeling victimised as someone at risk of getting flagged.

My issues, quite simply, are these:

  • It opts Twitter users in by default, without even informing them that they have been opted in. The Samaritans have claimed that over a million Twitter feeds are being monitored thanks to 3,000 sign-ups. You’d have to wonder how many of those million Twitter accounts are aware that they might cause an email to be sent to a follower suggesting they might be suicidal.
  • The opt-out process is onerous and, based on the 72-hour delay they require, probably manual. Plus, initially they weren’t even going to allow people to opt out.
  • It depends on sentiment analysis, the quality of which is currently unknown.
  • The hysteria around it will probably have a detrimental effect on consent for other healthcare-related data projects in the future.

The fact that you can ask the Samaritans to put you on a blocklist isn’t really good enough. I don’t want my name on any list with the Samaritans either way.

 

EDIT: I fixed a typo around the Type 1 and Type 2 errors. Mea culpa for that.