Github alerts on Inbox

A day or so ago, the product team at Google Inbox made some updates to how the application handles email coming from Github. I think they made similar changes for Trello too, but I haven’t been using Trello much (tbh I had forgotten about it, and it looks like I set up my account four years ago), so this probably applies if you have teams using Trello and, as a result, receive lots of emails from it. I don’t.

I am watching a dozen or so Github open source projects. None of them is huge, but a couple are relatively active and generate email on a regular basis.

One of the reasons I liked Inbox was that it effectively sorted my email into stuff that was worth annoying me about and stuff that wasn’t. This meant that for all those Facebook, Twitter and other automated mass mailings, my phone didn’t bother me, and I could review them at my leisure, like while waiting for stuff to cook or whatever. Github was sorted into the Forums bundle, and this suited me, because anyone who needs to check who has updated a Github repo on their phone while they are out is not really the sort of person I tend to consort with.

As of yesterday, though, this has stopped. Inbox now informs me every single time I get an email from Github. The sad part about that – for me, probably – is that yesterday there were a good deal more Github notifications for one project than a) usual and b) I get from human beings on a day-to-day basis. As a result, Inbox has been annoying me with Github alerts, alerts which I can only get rid of by unwatching the projects on Github. Among the things I cannot tell Inbox to do at the moment is not to send lock screen/audible alerts to the phone for Github-originating email.

The way they bundled Github in the Inbox itself is nice. But I cannot understand why it occurred to no one in Google land that enforcing an audible/buzz alert on phones, without a way to switch it off, was a stupid, stupid idea which had the potential to wreck Inbox’s utility for some users. As for anyone who’s subscribed to a lot of Github projects, their phones must be going crazy. Mine was annoying because a buzz alert no longer meant that I’d gotten actual email from a human, for the most part, only that someone somewhere had updated a Git repo. Essentially, my phone started crying wolf over the email it was receiving. It used to alert me to personal/potentially important email. Now it alerts me to definitely-not-urgent email, email I want to receive but do not want a lock screen alert for.

I sometimes think that people working in the tech sector work inside a bubble and do not have access to a diverse enough pool of users for testing purposes. The first thing I would have said if I’d been testing this is: “You have to give users the option to switch off audible and lock screen alerts for these things. For many people, they represent non-essential, non-urgent email, and you’re stripping away the useful meaning of those alerts.”

Up to yesterday, I knew that if my phone buzzed an email alert, it was probably something I needed to look at now. Now if it buzzes, it’s probably a Github alert. This does not improve my life.

Google’s mic drop

I like to think that somewhere in Mountain View, a kindly manager of product managers is holding a meeting with the Gmail product team and shaking her head in disappointment over what I would personally consider to be a serious fiasco on 1 April.

Mic Drop was bad on so many levels that it is hard to decide where to start with the wrongness. The other issue is that it was so obviously wrong, it is hard to understand why no one involved in letting it out into the wild realised it was a mess.

Typically, one of the core things which any company should be doing is protecting the integrity of its product. Do they have a product which has built up a lot of trust over the years? Are various other parts of their business dependent on that product? The answer to both questions was yes.

In a lot of respects, I suspect Google is heavily dependent on people’s continued willingness to actually sign into Google accounts to maximise its advertising revenue. Gmail might be “free” at the point of use, but it is not really free at all, because the average Google user, by signing into their Gmail – and hence Google – account, is paying for it in cold hard data about their habits and interests. It is unlikely that the Mic Drop stunt will stop massive numbers of people using Gmail… but they may trust it a little less. Google’s interests are served by people continuing to use Gmail. Someone, somewhere in GoogleLand should be saying “Do Not Mess With The Product For A Joke” over and over again.

It is not a case of people not having a sense of humour. It is a case of people expecting their tools to be reliable, not trying to kill them. Sure, the Mic Drop button was orange, but it shouldn’t have been there in the first place. It was located right next to the Send button, in a location where a lot of users have a Send and Archive button. To say that it was put in the most stupid place possible is fair. It was guaranteed to cause problems. It got pulled quite quickly, which suggests to me that in Google, at least one grown-up works.

But possibly only one.

Google’s initial message announcing the pullage was insulting – “Oh, it looks like we pranked ourselves” – with an implication that if one or two bugs hadn’t existed, it would have been fine. It was never fine. The subsequent follow-up did not consider the fact that they should never have tried to implement it in the first place. For this reason, even though Google has probably done some internal investigation and talking about this, they probably have not quite worked out that they should never have tried to implement it at all.

People in Gmail land need to recognise that one of the cornerstones of their company’s wider business interests is trust in the Gmail product, and that when they mess with the integrity of that product, it can cost money.


Hysteria of Hype

Somewhere around the web, there’s a hype cycle diagram which generally pins down where any given technology sits in terms of hype. I have not the time to go looking for it now but, put simply, it has a bunch of stages. I have decided it is too complicated for the tech sector.

Basically, the point at which you start seeing comments that X is the next big thing is the point at which something else is the next big thing. Sounds contradictory? Well, yeah, it is.

Most people talking about the next big thing being X tend not to know a whole lot about X. Their primary objective is to make money off X. They do not really care what X achieves, so long as it makes them money.

From five years ago up to, oh I don’t know, the middle of 2014 or early 2015 sometime, Big Data Was The Next Big Thing. Being blunt about it, there has been very little obvious Life Changing going on courtesy of Big Data, and that is because by the time people started screaming about big data in the media and talking about how it was the future, it had ceased to be the future in the grand scheme of things. Artificial intelligence and machine learning, now they are the next big thing.

I have to declare an interest in machine learning and artificial intelligence – I wrote my masters dissertation on the subject of unsupervised machine learning and deep learning. However, I am still going to say that machine learning and artificial intelligence a) are a long way short of what we need them to be to be the next big thing and b) were the next big thing at the time everyone was saying that big data was the next big thing.

It is particularly galling because of AlphaGo and the hysteria that engendered. Grown men talking about how this was the Next Big Thing.

Right now, artificial intelligence is still highly task-limited. Sure, it is fantastic that a machine can beat a human being at Go. In another respect, it isn’t even remotely special. AlphaGo was designed to do one thing, and it was fed with data to do one thing. Winning at Go, and chess to some extent, is closer in spirit to brute forcing a password than to general intelligence. Meanwhile, the systems designed to win games of Go and chess are not generally also able to learn to be fantastic bridge players, for example. Every single bit of progress has to be eked out, at high cost. Take machine translation: sure, Google Translate is there, and maybe it opens a few doors, but it is still worse than a human translator. Take computer vision: it takes massive deep learning networks to even approximate human performance at identifying cats.

I’m not writing this to trash machine learning, artificial intelligence and the technologies underpinning both. I’m saying that when we have a discussion around AI and ML being the next big thing, or Big Data being the next thing, we are having the equivalent of looking at a 5 year old playing Twinkle Twinkle Little Star and declaring he or she will be the next Yehudi Menuhin. It doesn’t work like that.

Hype is dangerous in the tech sector. It overpromises and then screams blue murder when delivery does not happen. Artificial intelligence does not need this. It’s been there before, with the AI winter and the serious cuts in research. Artificial intelligence doesn’t need to be picked on by the vultures looking for the next big thing, because those vultures aren’t interested in artificial intelligence. They are only interested in the rentability of it. They will move on when artificial intelligence fails to deliver. They will find something else to hype out of all proportion. And in the meantime, things which need time to make progress – and artificial intelligence has made massive jumps in the last 5 or 6 years – will be hammered down for a while.

For the tl;dr version, once you start talking about something being the next big thing, it no longer is.

The invisible conduit of interpreting

Jonathan Downie made an interesting comment on his twitter this morning.

Interpreting will never be respected as a profession while its practitioners cling to the idea that they are invisible conduits.

Several things occurred to me about this and, in no particular order, I’m going to dump them out here (and then write in a little more detail about how I feel about respect and interpreting).

  1. Some time ago I read a piece on the language industry and how much money it generated. The more I read it, the more I realised that there was little to no money in providing language skills; the money concentrated itself in brokering those skills – in agencies which buy and sell services, rather than the people who actually carry out the tasks. This is not unusual. Ask the average pop musician how much money they make out of their activities and then check with their record company.
  2. As particular activities become more heavily populated with women, the salary potential for those activities drops.
  3. Computers and technology.

Even if you dealt with 1 and 2 – and I am not sure how you would – one of the biggest problems that people providing language services now have is the existence of free online translation services. For interpreters, coupled with the ongoing confusion between translation and interpreting, the existence of Google Translate and MS’s Skype Translate will continue to undermine the profession.

However, the problem is much wider than that. There are elements of the technology sector who want lots of money for technology, but want the content that makes that technology salable for free. Wikipedia is generated by volunteers. Facebook runs automated translation and requests correction from users. Duolingo’s content is generated by volunteers and their product is not language learning, it is their language learning platform. In return, they expect translation to be carried out.

All of this devalues the human element in providing language skills. The technology sector is expecting it for free, and it is getting it for free, probably from people who should not be doing it either. This has an interesting impact on the ability of professionals to charge for work. This is not a new story. Automated mass production processes did it to the craft sector too. What generally happens is we reach a zone where “good enough” is a moveable feast, and it generally moves downwards. This is a cultural feature of the technology sector:

The technology sector has a concept called “minimum viable product”. This should tell you all you need to know about what the technology sector considers as success.

But – and there is always a but – the problem is not what machine translation can achieve, but what people think it achieves. I have school teacher friends who are worn out from telling their students that running their essays through Google Translate is not going to produce a viable essay. Why pay for humans to do work which costs a lot of money when we can get it a) for free or b) for a lot less via machine translation?

This is the atmosphere in which interpreters, and translators, and foreign language teachers, are trying to ply their profession. It is undervalued because a lower-quality product which supplies “enough” for most people is freely and easily available. And most people are not qualified to assess quality in terms of content, so they assess on price. At this point, I want to mention Dunning-Kruger, because it affects a lot of things. When MH370 went missing, people who work in aviation comms technology tried in vain to explain that just because you have GPS on your phone doesn’t mean that MH370 should be locatable in a place which didn’t have any cell towers. Call it a little knowledge being a dangerous thing.

Most people are not aware of how limited their knowledge is. This is nothing new. English as She is Spoke is a classic example dating from the 19th century.

I know well who I have to make.

My general experience, however, is that people monumentally overestimate their foreign language skills, and you don’t have to be trying to flog an English language phrasebook in Portugal in the late 19th century to find them…

All that aside, though, interpreting services, and those of most professions, have a serious, serious image problem. They are an innate upfront cost. Somewhere on the web, there is advice for people in the technology sector which points out, absolutely correctly, that information technology is generally seen as a cost, and that if you are working in an area perceived to be a cost to the business, your career prospects are less obvious than those who work in an area perceived to be a revenue generating section of the business. This might explain why marketing is paid more than support, for example.

Interpreting and translation are generally perceived as a cost. It’s hard to respect people whose services you resent paying for, and this probably explains, for example, the grief with court interpreting services in the UK, and why teachers’ and health sector salaries are being stamped on while MPs are getting attractive salary improvements. I could go on, but those are useful public examples.

For years, interpreting has leaned on an image of discretion, a silent service which is most successful when it is invisible. I suspect that for years, that worked because of the nature of the people who typically used interpreting services. The world changes, however. I am not sure what the answer is, although as an industry, interpreting needs to focus on the value it adds and on why the upfront cost of interpreting is less than the overall cost of pretending the service is not necessary.

Ten Years’ Time Never Arrives

Alex Ross recently wrote a piece for the Wall Street Journal basically suggesting that in 10 years’ time, we will have something like a Babelfish.

I did not agree much with the piece for various reasons, but I have been busy for the last few weeks and, frankly, this discussion (podcast based) by three interpreters demonstrates the possible gaps in understanding of what is required here, and of where our technology actually is in reality, better than I have time to draft at the moment.

Way back in the history of computer science, computer programmers who were unfamiliar with the world outside the English language created an encoding system that couldn’t handle accented characters. One of the points that struck me about this is that a significant amount of the progress in NLP has been on recognising English… but we need significantly more work on a lot more languages.
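That encoding system was 7-bit ASCII, and a quick sketch in Python shows exactly how the limitation bites:

```python
# 7-bit ASCII simply cannot represent accented characters, while a
# modern encoding like UTF-8 handles them without fuss.
word = "café"

try:
    word.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    # 'é' sits outside ASCII's 128-character range, so this branch runs.
    ascii_ok = False

utf8_bytes = word.encode("utf-8")  # 'é' becomes the two bytes 0xC3 0xA9
```

English text sails through both encodings identically, which is part of why the gap went unnoticed for so long by English-speaking programmers.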

Many things have been forecast for some time down the road. Those things rarely arrive, or if they do, they arrive in an entirely different shape to what we predicted.

I will say that I don’t agree that interpreters sell themselves as being dictionaries on legs – I’ve always seen interpreters as a bridge.


Some of my best friends are interpreters…

Most mornings, I read Ryan Heath’s newsletter from Politico. I like it for a lot of reasons – there’s a very strong scattering of pan European news links for example, and the news is insular at a European level rather than at a local city level. These are good things as they broaden my horizons. He’s generally a good curator of stuff that interests me as well – at this point I would say that if you have any interest in Europe and European affairs, it is worth subscribing.

So what has this to do with interpreters and the sky over them? Well this morning, he pointed at this story in the EU Observer. It quotes Klaus Welle as follows:

“Speak slowly, speak in your mother tongue. Those are the main elements which lead to a deterioration in quality,” he noted. “It drives them crazy.”

I do, as it happens, spend quite a bit of my time, listening to EU parliamentary committee meetings and parliamentary plenary sessions. I also know how to interpret. I just don’t do it for a living right now. I know what Klaus Welle is trying to achieve. I think it’s laudable, and I think it is necessary. I just wish he had…tried to do it differently.

Interpreting is a mentally very demanding activity. We have a limited understanding of how people even do it, but they do, and in so doing they facilitate communication between many people who might not otherwise be able to communicate. In many respects, simultaneous interpreters are basically babelfish.

Their job is to facilitate communication and when you have a room of up to 600 people who speak, between them, up to 24 languages, that’s massively impressive. And for the most part, they are so good, we take it for granted.

European Parliament procedures bring some interesting challenges, however, and a key one is that when you have limited speaking time – which is guaranteed to be the case in a plenary session – people rather predictably try to maximise it and carefully write speeches to pack as much as possible into their one or two allotted minutes. As a result, what people are confronted with is not so much a minute of spoken language as a minute of densely packed written language, read out.

Now, it could be argued that the average MEP has the right to do this. When they do, however, they make it harder for their audience, be that those who share their language or those who need to find a way of rendering it into another language. Maybe they are not representing the interests of their constituents if they are not taking into account the best way of communicating those interests. Maybe it is just completely counterproductive.

In other words, if you are speaking in a chamber where what you are saying needs to be interpreted into 23 other languages, the likelihood is that you’ll be more successful if you take that fact into consideration, not just for the interpreters, but for your colleagues. Other MEPs. And especially, for the people you represent. This is not a question of preventing interpreters from going crazy. It is a question of helping them to help you get your message across. That is, after all, what they are there for.

As I mentioned, I listen to the European Parliament feeds when I have time. I have heard people switch language mid contribution. I have seen plenary contributions delivered at such high speed that they were very nearly incomprehensible. And I have heard chairs pleading with contributors to slow down and give the interpreting service a chance.

Against that, I have heard some clear, concise speakers who were a pleasure to listen to.

Railing against interpreters going crazy just because they are pleading for people to speak in a manner that makes it possible for them to be interpreted is missing the point. The speakers have a part to play in facilitating communication too. Otherwise, why speak?



Glitches in the matrix and Viber

I ran into an interesting problem with the messaging service Viber yesterday. I had a brand new computer which developed a fault rather quickly and which, prior to my returning it to the vendor, could not be made to operate in such a way that I could get at the data I had stored on it.

In theory, there was not much data: I had not finished installing software on the machine, nor had I started uploading backups from the previous machine. However, I had installed Viber and I had opened one conversation with a friend online. Six hours after I bought the machine, the operating system would not load, so it went back to the vendor.

I cannot fault the vendor in this case, between their phone customer support and the behaviour of the staff at the branch where I bought the machine. I was concerned, though, that the fault might be fixed at some point in their workshop, and that Viber might attempt to sync subsequent conversations, as I had been unable to uninstall it before I handed the machine back. What was on the machine itself was low risk. I just wanted to ensure that Viber would not be able to sync any subsequent conversations.

When you go looking for information on this front, the general assumption is that people have lost access to Viber on their phones, not on their desktops.

In theory, the two obvious solutions for the operating system loading error would have been a replacement hard drive or reinstalling the operating system. I obviously could not do the first, and the latter had been kiboshed by the fact that I had not even got as far as making a recovery disk. When I looked into it in detail, there is a theoretical fix involving rebuilding the BCD (the Windows Boot Configuration Data). I found a single document which detailed the process, but it did not, for example, outline what happened to any other data on the machine. As such, it did not leave me much peace of mind.

Viber is a handy messaging platform which you can use from both phone and desktop. Its user documentation is of mixed quality and, again, assumes that the reason you might want to nuke your Viber service is that you no longer have access to your phone. If you’re looking to deal with a desktop which has your account on it, it is actually possible.

When you set up Viber on your phone, you’re effectively setting up an instance of an account, and any desktop installations of Viber are tied to that. You can deactivate a particular desktop installation from that installation itself, via its settings. If you do not have access to the desktop installation, you MUST deactivate Viber on your phone, and this will deactivate all Viber installations linked with that instance of the account, i.e., Viber on your phone plus Viber on any desktops or tablets linked to your phone number.

You cannot pick them off remotely and individually. It’s all or nothing.

What happens then is that if someone tries to access Viber on any of the desktop installations linked with the account, a dialogue box opens over whatever was open in Viber at the time of the previous sync, telling them that the account is no longer active. They will have to respond to that dialogue box and, in my experience, that kills the Viber data behind it. There is a window of risk that someone might see something in your messaging software that you would not want them to, but at least there is an option for destroying the connection between your phone number and that desktop instance remotely, even if it’s the equivalent of a nuclear option.

The downside is that you lose all your messaging data unless you back the messages up beforehand, which you must do on your phone.

In short, assuming you’ve lost a non-phone device with Viber data on it, here’s how you kill things:

  1. Back up your Viber messages on your phone if you want to keep them. If you don’t, you don’t have to do this.
  2. Go to the privacy settings and select Deactivate. You’ll probably have to scroll to the bottom to find it. This will kill your Viber service on your phone and any associated non-phone installations (desktops for the most part).
  3. Set up Viber on your phone again. I did not have to de-install and reinstall Viber on the phone to do this – it sent me a new 6-digit code and I was up and running.
  4. Set up Viber on your desktop again by obtaining a new code. I did not have to reinstall Viber to do this.

I had to deactivate the machine remotely for some subscription software – MS Office and Adobe Creative Suite – and I could do this. It would be useful if, somehow, you could review how many machines were receiving push notifications from Viber, so that you could deactivate them at will rather than having to nuke everything and start from scratch.

Undergraduate languages in the United Kingdom

I write, from time to time, on language-related matters, and one of the items on my list of backburner projects was to have a look at undergraduate language options in the United Kingdom. I had a look at Ireland as well, but since we have seven universities, there isn’t very much of interest to consider when it comes to language provision in Ireland. UCC is about your best option there. I’ll post the graph of that later.

The United Kingdom is interesting for a couple of reasons: firstly, tuition provision in languages has been falling off a cliff there, and language departments have been closing hand over fist. One of my recollections of language tuition provision in the university sector is that there was a great breadth of languages offered when I was looking for somewhere to study back in 1990, and given changes to language-related matters in the UK in the interim, I was interested to see how things looked now. Data, however, is not that easily come by, and in the end I wound up collecting it manually.

One of the things I wanted to do was see what the obvious clusters were, and it occurred to me that using languages and higher education organisations as nodes would allow a network chart to be built. I did a proof of concept of that with the Irish provision, purely because there were neither too many languages nor too many universities (seven of the latter and not far off seven of the former). The network visualisation software I used was Gephi.
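As a sketch of how that sort of proof of concept can be assembled (the universities and languages below are illustrative placeholders, not my actual dataset), the bipartite edge list can be built in plain Python and written out as a CSV that Gephi imports directly:

```python
import csv
from collections import Counter

# Illustrative (university, language) pairs -- one edge per degree offering.
offerings = [
    ("UCC", "French"), ("UCC", "German"), ("UCC", "Italian"),
    ("TCD", "French"), ("TCD", "Russian"),
    ("UCD", "French"), ("UCD", "Spanish"),
]

# Count how many institutions offer each language -- the raw material
# for spotting clusters on the eventual network chart.
language_counts = Counter(language for _, language in offerings)

# Gephi reads a plain Source,Target edge list straight from a CSV file.
with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])
    writer.writerows(offerings)
```

Because universities only ever connect to languages (and vice versa), the graph is naturally bipartite, and a layout algorithm in Gephi will pull the widely taught languages into the centre and push the rare ones out to the institutions that teach them.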

According to the basic research I did, 78 higher education organisations offer primary degrees of which a language is a major component. I suspect that if I were to look more closely and root out things like “International Business With A Language” type degrees, the number of pure language courses would be significantly lower. I have not decided how best to sort the data to get that information, and I may not do it just yet.

Eventually, when I plotted things, there was an interesting imbalance on the graph. I noted this on the graph itself, which you can find here, but it is obvious enough below too.

What this tells you is that if you want to learn anything other than, effectively, French, Spanish, English, Italian, German, Russian or Chinese, most of your options are limited to two universities in London or one in Edinburgh. The overwhelming majority of universities which offer any language study at all draw primarily from the seven listed above. There are a few stragglers around, but that’s more or less the way things are.

One of the things I would consider doing with this data at some stage is comparing language provision in the United Kingdom with language provision in the university sector in a bunch of other European countries, and also, looking at comparing provision of official European languages within the university sector across Europe. I really have no idea how I could quickly get this data together – I do not know if it’s even available anywhere. But it would be interesting to see where the holes exist in terms of provision of tuition at university level of official European languages.

Code Reviews

This piece on code reviews landed in my email via an O’Reilly newsletter this morning.

I’ve posted a brief response to it, but I wanted to discuss it a little further here. One of the core issues with some code reviews is that they focus on optics rather than depth: how does this code look?

There are some valid reasons for having cosmetic requirements in place. Variable names should be meaningful, but in this day and age, that doesn’t mean they also have to be limited to an arbitrary number of characters. If someone wants to be a twerp about it, they will find a way of being a twerp about it no matter what rules you put in place.

However, the core purpose of code reviews should be understanding what a particular bit of code is doing and whether it does it in the safest way possible. If you’re hung up on the number of tab spaces, then perhaps you’re going to miss aspects of this. If you wind up with code that looks wonderful on the outside but is a 20-carat mess on the inside, well… your code review isn’t uncovering what the code is doing, and it isn’t identifying whether it is safe.
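To make the distinction concrete, here is a hypothetical Python example: it would sail through a purely cosmetic review – tidy formatting, meaningful names – while hiding a classic substantive bug that a deeper review should catch.

```python
# Looks clean, but the mutable default argument is created once and
# shared across calls, so state leaks from one call to the next.
def append_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = append_tag("python")
second = append_tag("review")  # second is ["python", "review"], not ["review"]

# The substantive fix: create a fresh list on each call.
def append_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

A reviewer counting tab spaces has nothing to object to in the first version; only a reviewer asking “what does this code actually do across calls?” will spot the shared list.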

So what I would tend to recommend, where bureaucratically possible, is that before any code reviewing is done, coding standards are reviewed in terms of whether they are fit for purpose. Often, they are not.

It won’t matter how you review code if the framework for catching issues just isn’t there.


Adblocking

One of those simmering arguments in the background has been blowing up spectacularly lately. The advertising industry, and to a lesser extent the media industry, is up in arms about ad-blocking software. They do not like it, and to some extent you can understand this. It does not, exactly, support their industry.

There are two approaches which I think need to be considered. The advertising industry and the media industry, instead of bleating about how stuff has to be paid for, need to consider how they have contributed to this mess. On mobile in particular, advertising is utterly destroying the user experience. When I can’t access content that I want to read because there is a roll-over ad blocking it, for which I cannot find a close button, the net impact is not that I feel a warm fuzzy feeling about the advertiser and the media site in question. The net impact is that I spend less and less time on the media site in question.

So, instead of screaming about how stuff has to be paid for with advertising, maybe the media companies need to recognise how advertising is wrecking their user experience and how, ultimately, that is going to cut their user numbers. The fewer eyes they have, the less their advertising is going to be worth. I have sympathy for their need to pay their bills, but at some point they need some nuance in understanding how the product they are using to pay their bills now will likely leave them unable to pay their bills at some point in the future.

As for the advertising industry, I have less sympathy. They appear to think they have a god-given right to serve me content which I never asked for, don’t really want, and which might cost me money to receive, particularly on mobile data. Often, the ads don’t load properly and block the background media page from loading. They have made their product so completely awful as a user experience that people are working harder than ever before to avoid it. Instead of screaming about how adblockers are killing their business, it would be more in their line to recognise that they have killed their business themselves, by making it a user experience so awful that their audience makes every effort to avoid it.

The ability to advertise is a privilege, not a right. It would help if advertisers worked towards maximising user engagement on a voluntary basis, because by forcing content on users in the way which is increasingly the norm – full-screen blocking ads – they are damaging the brands and the underlying media channels. Maybe advertisers don’t care. Maybe they assume that even if every newspaper in the world closes down, they will still find some sort of channel to push ads on.

Adblocking software should be reminding them that actually, they probably won’t.
