learning: encrypting text

A while back, one of my friends on twitter introduced me to NaNoGenMo and, late at night, I started thinking about what could be done with such a project. Since they cited 50,000 meows as an example of a successful entry, I decided that a novel consisting of 50,000 words making absolutely no sense whatsoever was a possibility: find a base text of 1,000 words, build an encryption algorithm that would generate 50 different encrypted versions of that text, and that would be it. And I would write the encrypted text generator in Python.

The primary reason I do stuff like this is to learn, and very often you wind up learning more about how you look at a problem than about whatever programming language you use for the project. More often than not, you'll find a little bit of functionality that you didn't know existed. And if you are really lucky, a project like this opens doors to looking at things in more detail.

I am not an encryption specialist (yet), so effectively I wanted to find some way of turning cleartext into something obviously encrypted, but without it being too easy to decode immediately. The angle of attack I specifically wanted to block was frequency analysis. (It's been at least 10 years since I looked at encryption techniques, but I remember some of it.)

So I looked at building an algorithm which amounted to reducing the number of output letters, but in a random manner. Each run of the encryption algorithm generated a random number which set the total number of letters available for encrypting the cleartext, and then, for each cleartext letter, generated a random number indicating which of those letters it would map to. I also generated a key recording the mapping from original letter to encrypted letter, performed some minimal obfuscation, and ran the algorithm against some text. It worked beautifully and, what's more, it was not at all obvious from the output what had been done to the text.
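
The actual project code is linked at the end of this section; as a rough sketch of the idea, assuming a lowercase-only alphabet (the function names, the lower bound on the alphabet size and the other details here are illustrative, not necessarily what the project does), it looks something like this:

```python
import random
import string

def make_key(alphabet=string.ascii_lowercase):
    """Build a (possibly many-to-one) letter mapping over a reduced output alphabet."""
    # Pick how many distinct output letters this run is allowed to use.
    size = random.randint(5, len(alphabet))
    output_letters = random.sample(alphabet, size)
    # Map every cleartext letter to a randomly chosen output letter.
    # Because size can be smaller than 26, several letters may share an output.
    return {letter: random.choice(output_letters) for letter in alphabet}

def encrypt(text, key):
    """Apply the mapping; anything not in the key (spaces, punctuation) passes through."""
    return "".join(key.get(ch, ch) for ch in text.lower())

key = make_key()
print(encrypt("the quick brown fox jumps over the lazy dog", key))
```

Because several cleartext letters can land on the same output letter, the letter frequencies of the ciphertext no longer line up neatly with those of the cleartext.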

Where I ran into a problem was in decrypting each piece of ciphertext. When you reduce the number of available letters, regenerating the mapping from that smaller group of letters becomes a difficult problem to solve, particularly if you only have one output of the encryption algorithm. I could not actually produce a piece of code that immediately decrypted any individual piece of ciphertext: the key generated by the encryption algorithm was effectively a one-way key. I have been pondering this problem in the meantime, and I've concluded that the algorithm may yet be breakable if you have several encrypted versions of the same text, plus the matching keys, and some willingness to work out the different encryption alphabet sizes. In short, while it's relatively straightforward to encrypt the text, you need many examples of the algorithm generating different keys, plus the associated encrypted texts, to break back in. I have not yet implemented this, but I will look at it as a later problem.
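
To make the one-way nature concrete, here is a small illustrative example (the toy key below is made up, not output from the project): inverting a many-to-one mapping gives you candidate sets rather than answers.

```python
from collections import defaultdict

# A toy key in the style described above: three cleartext letters all map to 'q',
# so the mapping cannot be inverted uniquely.
key = {"a": "q", "e": "q", "k": "q", "t": "m", "h": "z"}

def invert(key):
    """Turn a cleartext->cipher mapping into cipher->candidate-cleartext sets."""
    candidates = defaultdict(set)
    for clear, cipher in key.items():
        candidates[cipher].add(clear)
    return dict(candidates)

# 'q' maps back to the set {'a', 'e', 'k'}; a single ciphertext on its own
# gives you no way to decide which of the three was meant.
print(invert(key))
```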

In the meantime, key learning outcomes from this exercise:

  • encryption algorithms are easier to design than encryption with matching decryption
  • Python has a useful string translate function which I was happy to find (there's a short sketch of it after this list). I have used something similar in assembler programming for changing encoding between different character sets.
  • after all this, when I started reading up on cipher algorithms again, I discovered that there is a pycipher library which implements a bunch of standard cipher algorithms. Even so, its existence does not mean I won't, at some stage, have a look at implementing an Enigma or, possibly, a Lorenz cipher just for the hell of it.
  • I want to read up on cryptography again. It’s been too long.
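
For reference, the translate function mentioned above works roughly like this; the substitution alphabet here is just an arbitrary example, not the one the project uses:

```python
# str.maketrans builds a translation table; str.translate applies it in one pass.
# Handy for simple letter-substitution ciphers.
table = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                      "qwertyuiopasdfghjklzxcvbnm")
print("hello world".translate(table))   # itssg vgksr
```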

The github page for the project is here.

Recommendations at Etsy

Robert Hall, one of the data engineers at Etsy, the large online craft marketplace, has written a comprehensive overview of how they manage recommendations. It's a very interesting piece in that it's quite open, particularly about something which could be considered commercially sensitive, and it covers both the mathematics and the infrastructure side of things.

Read it here. The Etsy Code as Craft blog is well worth a read on an ongoing basis.

Everyone should learn to code

This, from the Wall Street Journal.

It annoyed me, not because I disagree with the idea of people learning to code – I don't – but because, as a piece supporting that idea, it has some glaring errors in it and doesn't really make the case. Personally, I think a lot of tech people should learn to communicate more effectively, but many of them appear to think they don't have to, so let's explain why this piece is a problem.

The most important technological skill for all employees is being able to code. If, as Marc Andreessen once noted, “Software is eating the world,” knowing how to code will help you eat rather than be eaten. Understanding how to design, write and maintain a computer program is important even if you never plan to write one in business. If you don’t know anything about coding, you won’t be able to function effectively in the world today.

So, two major assertions here: the most important technological skill for all employees is being able to code and “if you don’t know anything about coding, you won’t be able to function effectively in the world today”.

These assertions are patently not true. To be frank, the most important technological skill for an employee, in my opinion, is the ability to describe what's gone wrong on the screen in front of them. That's also a communications issue, but it does enable technology experts to help them. As for "if you don't know anything about coding, you won't be able to function effectively", I strongly disagree, and would suggest that ultimately the problems lie with interface design, which employees are, for the most part, not responsible for.

You will inevitably work with people who program for a living, and you need to be able to communicate effectively with them. You will work with computers as a part of your job, and you need to understand how they think and operate. You will buy software at home and work, and you need to know why it works well or doesn’t. You will procure much of your information from the Internet, and you need to know what went wrong when you get “404 not found” or a “500 internal server error” messages.

Not one thing in this paragraph requires coding skills. It requires programmers to learn to communicate effectively, and given that many of them already struggle with the basic need to document what they are doing, that's a steep learning curve. With respect to software, again, how well it works depends on how well it is documented and designed. You do not need to be able to program to understand a 404 Not Found or a 500 Internal Server Error.

Of course, being able to code is also extremely helpful in getting and keeping a job. “Software developers” is one of the job categories expected to grow the most over the next decade.

But not every employee is a software developer and nor should they be.

But in addition to many thousands of software professionals, we need far more software amateurs. McKinsey & Co. argued a few years ago that we need more than 1.5 million “data-savvy managers” in the U.S. alone if we’re going to succeed with big data, and it’s hard to be data-savvy without understanding how software works.

Data and programming are not the same thing. Where data is concerned, we urgently need people who understand statistics, not just programming. In my experience, most programmers don't understand statistics at all. Teaching people to code will not fix this; code is a tool that supports another knowledge base.

Even if you’ve left school, it’s not too late. There are many resources available to help you learn how to code at a basic level. The language doesn’t matter.

Learn to code, and learn to live in the 21st century.

I’m absolutely in favour of people learning to think programmatically, and logically. But I don’t think it’s a requirement for learning to live in the 21st century. The world would be better served if we put more effort into learning to cook for ourselves.

I hate puff pieces like this. Ultimately, I mistrust pieces that suggest everyone should be able to code, particularly at a time when coding salaries are low even as we are told there's a frantic shortage. I've seen the same happen with linguistic skills. There are a lot of good reasons to learn to code – but, like a lot of things, people need to set priorities in what they want to do and what they want to learn. Learning to write computer code is not especially difficult; learning to apply it to solving problems, on the other hand, takes a particular way of looking at the world.

I’d prefer it if we looked at teaching people problem solving skills. These are not machine dependent and they are sadly lacking. In the meantime, people who have never opened a text editor understand that 404 Not found does not mean they could fix their problems by writing a program.

Learning programming before going to university

Ryan Walmsley has a piece suggesting you shouldn’t learn programming before going to university. It’s worth a read.

Personally, I am not against people learning to code before they get to university. I am, however, not in favour of people who have no coding skills arriving at university and starting with Scratch. Scratch is a superb tool for teaching kids how to program, and a bit about how computers work. It is not, in my view, a suitable tool for adults on a specialist computing course. While I am not the biggest fan of Java (disclaimer: I have yet to review lambdas in Java 8, and this may make some of my frustration go away), and I recognise that some people have issues with the lack of static typing in Python, ultimately, once you get as far as university, you should at least start with tools you have a fighting chance of using in the income-earning world. There are a lot of them. Scratch is not in the top ten.

Like a lot of things, tools need to be used appropriately, and Scratch is an absolute winner in the sector it was designed for. But I have a book on my desk here that teaches kids how to program in Python, and if kids can do that, I see no reason why we need kids' languages like Scratch at university level.

Matlab/Google…

I really have a lot of things to catch up on, but a couple of weeks ago a piece on the Business Insider site caught my eye. It suggested that if you wanted to work for Google, you needed to know Matlab. They attributed the comment to a guy called Jonathan Rosenberg.

This caused some discussion on twitter in the days afterwards. Mostly, people found it difficult to believe, particularly since Google uses a bunch of other tools, including my personal choice for a lot of data analysis, R.

I am not sure that Matlab is a mandatory requirement to work at Google; it doesn't necessarily turn up in any of their job ads that might interest me, but in some respects I can understand why a company might do something like this. It's a little sorting mechanism. The point I found most interesting about the piece above was less that Google were looking for Matlab than that the writers of the piece had never heard of Matlab.

I was once interviewed about modern web technology and how it might benefit the company concerned, way back in the early days of the web becoming a consumer sales channel. My reading of the discussion, ultimately, wasn't that they wanted me to work on their web interfaces (not at that stage anyway), but that they wanted to see how well I could learn about new things. It may well be that if you go to work for Google in some sort of research job, you'll use Matlab. Or, more probably, you'll learn a bunch of other things in the area in which you are working.

Either way, comments like Rosenberg's may or may not be official hiring policy, but it's often worth considering that they are asking a broader question: less "Can you use Matlab?" and more "Can you prove to us that you can develop in whatever direction we throw you?".

And if you haven't heard of Matlab, the chances are you may not be able to.

The activity of interpreting

One of the things a lot of people don't actually know about me is that I trained as an interpreter in my twenties. I have a diploma from the University of Westminster, which, at the time, was the leading interpreting school in the United Kingdom. While I don't interpret any more, I'm still interested in it on a tangential basis, and that's why I found this article from Mosaic very interesting yesterday. I've always wondered how simultaneous interpreting can be possible at all, even as I was doing it. A lot of it is practice, and technique and strategy building. In certain respects, I found it a lot like playing music: it's a skill you learn by doing, not so much by understanding how it works inside your mind. And yet:

The caudate isn’t a specialist language area; neuroscientists know it for its role in processes like decision making and trust. It’s like an orchestral conductor, coordinating activity across many brain regions to produce stunningly complex behaviours.

I strongly recommend reading the piece – even aside from the whole question of interpreting, it brings up some interesting information from the neurosciences. I wasn't familiar with the site before now, but it has an interesting collection of science writing from a number of different fields in the life sciences, so, the interpreting piece aside, I find it (so far) a valuable resource.

One of the aspects of programming life that most software developers will talk about, in terms of getting anything done, is flow: when you're in a zone where everything is just working together nicely, the problem solving is happening, and it's you and the code and the phone isn't ringing. There's a space I used to get into in interpreting – I miss it a lot – which is broadly similar; I called it the zone. I imagine other people approach it differently because, like most such effects, it can be quite personal. I actually did an interpreting test for the first time in more than ten years last year and, while it didn't go perfectly for me, I did, in the course of practice, hit that zone a couple of times. I'd love to see what my brain activity looks like when I hit it; it's a place where nothing feels like a mental fight.

There are a couple of different paths into a career as a conference interpreter. The University of Westminster cancelled the course I did a number of years ago and appears to have replaced it with an MA in Translating & Interpreting; in Ireland, there appears to be a course at the National University in Galway, and in the UK there are joint translation/interpreting courses at the University of Bath, the University of Leeds, London Metropolitan University, the University of Manchester, the University of Salford and Heriot-Watt University in Edinburgh. Outside the English-speaking colleges, there are options in France and Belgium at ESIT and ISTI, and in Germany at Hamburg and Heidelberg (at least). These courses are postgraduate courses, so fees are very obviously going to be a factor to consider.

Ultimately, the two big employers of interpreters in the world are the United Nations and the European Union institutions.

From the point of view of what you need to go down the road of interpreting, the obvious requirements are a) a very strong command of your mother tongue and b) a comprehensive understanding of two other languages.

You also need the ability to research and get up to speed with various different fields of expertise. The one which used to make my blood run cold during my training was any discussion of European fisheries policy, as fish species in English were an ongoing hassle, never mind fish species in French and German.

In many respects, it's a career which gives you access to learning about a lot of other areas; I'd be happy to go back. But I'd also like to look at breaking down the challenges in automating it, and that's a really hard problem to solve, not least because we haven't solved machine translation very effectively either, although a lot of work is happening in the area. It's not that I would like to see a bunch of interpreters lose their jobs – they shouldn't, because even if we get the actual words automatically translated, we miss a lot of the nuances and cultural markers that come not directly from the words themselves, but from how they are used and marked with non-verbal cues. Computers don't get irony or sarcasm.

One of the reasons I really like the Mosaic piece is that it provides some useful further references for carrying out your own research. With respect to science writing online, this is really helpful. I have to say kudos to them.

Professor David Spiegelhalter at the RIA

Friday 7 November saw Professor David Spiegelhalter talking about risk at the Royal Irish Academy. If you're not familiar with him, his site is here, and he occasionally pops up on BBC Radio 4's More or Less and in other interesting places.

Risk is an interesting thing because humans are appallingly bad at assessing it. Ultimately, the core of Professor Spiegelhalter's talk focused on calculating risk (yes, there is a unit of measurement, the micromort: a one-in-a-million chance of death) and, more specifically, on communicating it in human-friendly terms. This is not to suggest statisticians are not human; only that they have a language (we have a language) that isn't always at one with general understanding.

This isn’t the only problem either – humans appear to be very good at not worrying about non-immediate risks as well. So this presents a number of challenges in terms of decision making behaviour on the part of people.

Talks like this can be massively entertaining if done well; less so if done badly. One of the striking things about the evening was the contrast between Professor Spiegelhalter's talk and Patrick Honohan's response, which focused on the difficulties of risk assessment in the financial sector. I took a slightly dim view of the response, on the basis that every single banking ad made it clear that the value of your home (or assets) could go down as well as up, and did so for most of the 2000s in this country; so it isn't so much that we didn't understand the risk – many people just did not want to accept it. In certain respects, it has a lot in common with people finding it hard to live healthily now for benefits sixty years down the line. If I had to choose who got their message across more effectively, by some distance it was Professor Spiegelhalter.

Talks of this nature interest me, particularly as they relate to numbers and numeracy, and in this case to risk. People are never particularly good with probability and chance, despite all that Monopoly board training each Christmas. Ultimately, the impression I got from the talk is that the debate has moved on somewhat from "what is the risk of [X bad or good thing] happening" to "how do we effectively communicate this risk". It's interesting, in a tangential way, that we are swimming in methods of communicating things these days, between online streaming, social media feeds and many online publishing platforms, and still, with science and numbers, we only find the right narrative for engagement in a hit-or-miss manner. Professor Spiegelhalter delivers his talk excellently. It is a pity that more people will not get to hear it.

On a related note, if you're interested in talks of a science and mathsy flavour, the RIA and the Meteorological Society are prone to organising such things on the odd occasion. Check their websites for further information.