Hysteria of Hype

Somewhere around the web, there’s a diagram of the hype cycle which generally pins down where any given technology sits in terms of hype. I don’t have the time to go looking for it now but, put simply, it has a bunch of stages. I have decided it is too complicated for the tech sector.

Basically, the point at which you start seeing comments that X is the next big thing is the point at which something else is the next big thing. Sounds contradictory? Well yeah, it is.

Most people talking about the next big thing being X tend not to know a whole lot about X. Their primary objective is to make money off X. They do not really care what X achieves, so long as it makes them money.

From five years ago up to, oh I don’t know, the middle of 2014 or early 2015 sometime, Big Data Was The Next Big Thing. Being blunt about it, there has been very little obvious Life Changing going on courtesy of Big Data, and that is because by the time people started screaming about big data in the media and talking about how it was the future, it had ceased to be the future in the grand scheme of things. Artificial intelligence and machine learning, now they are the next big thing.

I have to declare an interest in machine learning and artificial intelligence – I wrote my masters dissertation on the subject of unsupervised machine learning and deep learning. However, I am still going to say that machine learning and artificial intelligence a) are a long way short of what we need them to be to be the next big thing and b) were the next big thing back when everyone was saying that big data was the next big thing.

It is particularly galling because of AlphaGo and the hysteria it engendered. Grown men talking about how this was the Next Big Thing.

Right now, artificial intelligence is still highly task-limited. Sure, it is fantastic that a machine can beat a human being at Go. In another respect, it isn’t even remotely special. AlphaGo was designed to do one thing, and it was fed data to do that one thing. Go, and chess to some extent, are the same kind of problem as brute-forcing a password. Meanwhile, the processes designed to win games of Go and chess are not generally also able to learn to be fantastic bridge players, for example. Every single bit of progress has to be eked out, at high cost. Take machine translation. Sure, Google Translate is there, and maybe it opens a few doors, but it is still worse than a human translator. Take computer vision. It takes massive deep learning networks to even approximate human performance at identifying cats.

I’m not writing this to trash machine learning, artificial intelligence and the technologies underpinning both. I’m saying that when we have a discussion about AI and ML being the next big thing, or Big Data being the next big thing, it is the equivalent of looking at a 5 year old playing Twinkle Twinkle Little Star and declaring he or she will be the next Yehudi Menuhin. It doesn’t work like that.

Hype is dangerous in the tech sector. It overpromises and then screams blue murder when delivery does not happen. Artificial intelligence does not need this. It has been there before, with the AI winter and the serious cuts in research funding. Artificial intelligence doesn’t need to be picked on by the vultures looking for the next big thing, because those vultures aren’t interested in artificial intelligence. They are only interested in the money to be made from it. They will move on when artificial intelligence fails to deliver. They will find something else to hype out of all proportion. And in the meantime, things which need time to make progress – and artificial intelligence has made massive jumps in the last 5 or 6 years – will be hammered down for a while.

For the tl;dr version, once you start talking about something being the next big thing, it no longer is.

AI – Pause for thought

In the past week or two, views attributed to both Stephen Hawking and Elon Musk have been published, both broadly questioning the value of major advances in artificial intelligence. Reading accounts of both, it seems to me that the issue is not so much artificial intelligence as artificial sentience.

You could not read this site without using artificial intelligence. It underpins pretty much any decision-making software, and for this page to get to your computer, quite a lot of decisions get made by network-related software.

Decision making software decides what books Amazon recommends to you, what search results Google returns to you when you search for something like, for example, artificial intelligence. Ultimately the issue lies in our perception of artificial intelligence; it has moved on a long way from the ideas prevalent when Alan Turing wrote his seminal paper in the 1950s. At that point, I think it is fair to say, we had a vision of artificial humans rather than task driven machines. What we have been able to achieve in the last 60 years is huge in some respects, and relatively limited in many other respects. Ultimately, our successes have been because we have, over time, changed our understanding of what constitutes artificial intelligence.

There are a lot of intelligent systems floating around the world. Google’s driverless cars are amongst them, but even if you skip them, there are a bunch of driverless trams around the world (Lyon and CDG Airport in Paris as two examples). These systems make responsive decisions and not pro-active decisions. You may get up in the morning and decide to get a train to Cork; a train will never make that decision because a human makes the decision about where the train is going. It doesn’t, unless it manages to collude with any automated signalling system somewhere around Portarlington, decide to go to Galway instead.

I think it is fair to say that both Elon Musk and Stephen Hawking are far, far brighter than me, so normally I would hesitate to suggest that they are perhaps worried about something which is less than likely to happen. Eric Schmidt has also spoken on this; he has pointed out that we are very far short of systems that match human intelligence, and he raised the one example that I tend to raise, although usually when I am talking about something else. Google have a rather infamous piece of research which I search for under the heading of the Google cat paper. Google suggests this to me when I start typing it, which suggests I am not the only person to tag it that way. Anyway, from the Wired interview with Eric Schmidt:

All that said, Schmidt also confessed that these machines are a lot more primitive than people would like to imagine. For evidence of that fact, he explained an experiment Google conducted a few years back, in which the company’s scientists developed a neural network and fed it 11,000 hours of YouTube videos to see what it could learn, without any training. “It discovered the concept of ‘cat,’” Schmidt said, assuming the tone of a disappointed dad. “I’m not quite sure what to say about that, except that that’s where we are.”

For me, the issue isn’t so much that the system limited itself to cat. If you know anything at all about YouTube, it’s almost a safe bet that after “human” the next thing any image recognition system is likely to pick up is “cat” purely by force of frequency. What matters is the sheer amount of work taken to get as far as learning “cat”. Schmidt does not mention this.

What the experiment actually did was take 10 million still images from YouTube videos and run them through a nine-layer neural network on a cluster of 1,000 machines (so 16,000 cores) for three days. According to their abstract, this is what they achieved:

Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 22,000 object categories
from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.

Now, ultimately, the core pieces of information here are that it a) was a huge effort distributed across 1,000 machines, b) took a long time and c) resulted in 15.8% accuracy at recognising 22,000 object categories from a big database.
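To make concrete what "learning features from unlabelled data" means, here is a minimal sketch. This is emphatically not the network from the Google paper (that was a nine-layer architecture with local receptive fields and pooling, trained across 16,000 cores); it is a toy single-layer autoencoder on random stand-in "image patches", with the layer sizes, learning rate and data all invented for illustration. The only point it makes is that the training signal is reconstruction error, with no labels anywhere.

```python
# Toy sketch of unsupervised feature learning: an autoencoder trained
# purely on reconstruction error, with no labels. All sizes and data
# here are made up for illustration; this is not the Google cat network.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabelled image patches: 1,000 patches of 8x8 pixels.
X = rng.random((1000, 64))

n_hidden = 32                               # number of features to learn
W1 = rng.normal(0, 0.1, (64, n_hidden))     # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))     # decoder weights
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1)                     # hidden features
    X_hat = H @ W2                          # reconstruction of the input
    err = X_hat - X
    # Backpropagate the squared reconstruction error.
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final reconstruction error:", float(np.mean(err ** 2)))
# The columns of W1 are the learned features. In the real experiment,
# after three days of training at scale, one such feature responded to cats.
```

Even this trivial version shows why the result is so expensive: nothing tells the network what to look for, so useful features have to emerge from the statistics of the data alone.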

When Eric Schmidt suggests this is relatively primitive, he is right. Now compare it to how fast you can generalise the concept of a cat, or, for example, a new device (I like to use drones here). It is worth noting that Google specifically used unlabelled data here, which is akin to a child wandering around a new world with no one telling him what anything is. Learning by experience, if you like. Facebook, however, is trying to do the same kind of thing with labelled data, and they have a significantly greater success rate.

However, it again took a large-scale effort:

Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities.
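For contrast with the unsupervised sketch above, here is what the labelled case looks like in miniature. This is not DeepFace or anything resembling Facebook's system; it uses synthetic made-up "identity" clusters and an off-the-shelf scikit-learn classifier purely to show that labels give the learner an explicit target, which is a large part of why the success rate is so much higher.

```python
# Toy supervised counterpart: with labels, the training signal is explicit.
# Synthetic stand-in data and a plain logistic regression classifier;
# nothing here resembles Facebook's actual face recognition pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_identities = 5
samples_per_identity = 200

# Each "identity" is a cluster of points in a 16-dimensional feature space.
centres = rng.normal(0, 5, (n_identities, 16))
X = np.vstack([c + rng.normal(0, 1, (samples_per_identity, 16)) for c in centres])
y = np.repeat(np.arange(n_identities), samples_per_identity)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The structural difference is the whole point: the unsupervised learner has to discover "cat" by itself, while the supervised learner is told the right answer for every training example.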

Microsoft, incidentally, have some more research here. I haven’t read it in detail (yet), so I am not going to comment on it. But they have made a lot of promises prior to publication, particularly with respect to accuracy in classifying animals, for example.

On the wider question of whether we should be worrying in much the same way as Elon Musk and Stephen Hawking are worrying, I am not sure the answer is yes. Not at this point anyway.

One of my interests is in the area of natural language processing, and specifically language acquisition. We are miles short of generally usable systems here. As far as consumer systems are concerned, if you try to get Google to search for something which features a non-English name (e.g. “Phone, please play Alexandre Tharaud for me”), you can forget about it. It does not work. My phone plays music by English rock bands but not French concert pianists if I try to use voice operation.

Yes, I can get my phone to list my appointments for the next few days simply by talking to it, but if you have spent any time using Google Now at all, you will have run into some of its core issues: 1) your voice instructions are not processed locally and 2) your voice instructions are not guaranteed to be processed correctly either. Put simply, interaction is still limited. I still type search queries because it is faster.

Anyway, I wanted to pick up something Elon Musk said.

Musk is one of the high-profile investors, alongside Facebook’s Mark Zuckerberg and the actor Ashton Kutcher, in Vicarious, a company aiming to build a computer that can think like a person, with a neural network capable of replicating the part of the brain that controls vision, body movement and language.

It is worth noting, by the way, that this is Vicarious. In certain respects, it is worth a look to see where they are at present.

Put simply, there are a bunch of problems there and none of them is anywhere close to being sorted. Arguably, we are reasonably close to building workable systems for handling motion-related tasks – otherwise we would not be in the zone of having driverless cars. However, there is still a massive amount of work to be done in the area of vision and, above all else, language; and then a core issue will be getting systems handling these tasks to communicate effectively. We are a long way from fully sentient artificial intelligences, for all the people who are working on them. Where we have had artificial intelligence successes, they have generally been task-specific, and those tasks have tended to be limited in some respect. We are a long way from building a truly sentient humanoid robot. When we build a self-driving car, that is a monumental achievement in artificial intelligence. Said self-driving car will never learn to come in and make tea. Artificial intelligence as it exists at present is pretty much scope-limited.

However, there is a much wider debate to be had in terms of identifying what is practical and possible within a reasonable time frame, and what we want to achieve with artificial intelligence. This debate is not facilitated by Elon Musk suggesting that it is our greatest existential threat. It is entirely possible that in practical terms, it isn’t. Climate change is an issue. Resource misuse is another major problem. Inequality. Historically, every single major civilisation which has preceded ours has come one hell of a cropper at some point and left interesting-looking landmarks behind for us to ponder, and in no case was artificial intelligence the problem. It is entirely possible that before we ever get sentient robots we will run into other problems. Artificial intelligence research itself has, in the past, come a cropper in terms of obtaining funding.

But this shouldn’t preclude us considering what we want to enable and what we want to prevent. And in practical terms, how we ensure that bad things are never developed given that we have so far failed to prevent bad things being developed from useful technology. These are not, however, artificial intelligence questions. They are human questions.