Hello

One of the joys of being back at university is the unexpected bits of inspiration that pop up. Today was one of those days when…well…

This is Nao.

Nao came in to visit today, with one of the PhD students who is doing some research on robot-human interaction. I’ve never seen anything quite like him/her (decision to be made really).

I mean, how can you not love something like this:

Nao can dance, can walk, can talk and can interact with you. He/she plays this sports game where he/she mimes the sport and you guess.

Nao gets to know you. “Look at my eyes until they turn green”. And they do.

It is fair to say that every single student who met Nao was utterly entranced by him. I would love a Nao of my very own. Nao has five thousand brothers and sisters dotted around the world. Surely there could be one for me?

Here is Nao dancing:

And Gangnam Style, thanks to the University of Canterbury:

This is the promo video from Nao’s parents, Aldebaran Robotics.


Here’s what I would do if I wanted to get more people into information technology, computer science and related cutting-edge technology. I would acquire a couple of these robots, and I would hand them over to school outreach programs. And I would send them into primary schools and junior cycle secondary schools and I would say “Look at what you can do if you study maths and related subjects.”

This is the stuff of dreams and inspiration. We’re behind the game, I think, if we’re putting iPads into schools. If we put Nao into schools, we are putting the future into schools.

Very few schools have the funds for a robot like this. It is something that needs to be done at a national level, or possibly by the universities.

Eben Upton at UCD

Eben Upton came to UCD to have a chat about the Raspberry Pi today. Actually, he was accompanied by Alan Lund from RS – who, I should mention, spoke very eloquently about the challenges involved in the Raspberry Pi and why it was such a groundbreaker for them.

I love the Raspberry Pi. I bought mine last November as a birthday present and one of the key attractions for me at the time was the arrival of Mathematica and Wolfram Alpha. I have a lot of time for Stephen Wolfram. But one of the key reasons that I love the Raspberry Pi is that I’m a child of the home computing era. I have been that trooper typing in the code from Atari XL Magazine to try and guide that frog across the road. I have a great respect for anyone else I ever meet who’s had a go at it. Bloody typos.

So I was never going to miss Eben’s talk today.

Eben’s point of view is fairly straightforward, but it’s illustrative of other aspects of society: we tend not to notice problems coming down the line, not en masse anyway (cf. property and stock bubbles the world over). Eben caught a decline in the numbers of students applying for computer science at Cambridge, and a corresponding decline in their experience. His hypothesis is – and I think it’s a reasonable one – that children of a certain era basically had locked-down computers, rather than the liberty of shoving a tape in the cassette deck and hoping that the thing would boot for a change so that we could attempt to play Flight Simulator again.

Children – to a great extent – had handheld consoles and PlayStations, and the PC in the corner was probably Mum and Dad’s. So the landscape changed and became a little less free.

We’re screaming now about the lack of qualified technical people. Eben caught this vibe in 2006 and started looking at causes for it. That takes vision.

So, today, he spoke at UCD courtesy of the Mature Students Society and the School of Library and Information Science, and he had a lot of interesting things to say.

He went into the history of the idea behind the Raspberry Pi in some detail, in an utterly engaging manner, and talked about the difference between their original expectations around it – maybe build 1000 units and ship them out to schools and hope they fell into the right hands – and the reality, which is that well over two and a half million of them have been sold. Because rather than just being computers for kids, they have appealed to a far broader range of people. This was entirely unexpected.

I’m a bridge hopper on the geek front. I started programming when I was 12 or 13. I thought it was fantastic what you could do with computers; maybe I wasn’t in the brilliant 1% and sank rather than swam, although I typed some nice graphic thingies into the Atari and regularly beat my brother’s high score in Jet Boot Jack and Flight Apocolyse. And I liked maths a lot.

However, for various reasons, I wound up studying modern languages at university. I probably could have done computer science at the time, but I didn’t, at the age of 17, operate in that zone. So I speak fluent French and German, and a smattering of Spanish. I’ve a degree in translation and a diploma in interpreting. And when I was 27, I got hired as a programmer.

Most of my working life, I have worked with IBM assembler. I have worked on Big Iron. I really want to say this because I sometimes find the technological world a bit divisive between us and non-us. I’m not a classical geek but I have done a lot of bare metal programming.

(So I told Eben that we had to get rid of this geek/non-geek division.)

Anyway, my experience with the Raspberry Pi is this. I bought one. I went into Maplins the morning of my most recent birthday and bought one – instant gratification – and then prepared to tell people. Interestingly, my mother’s response was highly positive. She’s not a technical person (although she will have a Raspberry Pi when I eventually sort out her entertainment centre, sometime after I get through the May exams) but she understood completely what Eben was trying to do. She had done it herself 30 years earlier when she went to my cousin and asked his advice about getting a computer for her two youngest children. Her only proviso is that when I make her entertainment centre work, it must be simple to operate.

I fully get that.

One of my friends who has typically fallen squarely into the Users category when it comes to computers is fascinated and wants, again, to look into the idea of an entertainment centre. This time though, she wants me to write the instructions and let her do it herself. She doesn’t at this point want to write code and isn’t really sure if there’s anything else she’d want to do.

I get that too. But more than that, I get the curiosity.

Curiosity matters a whole pile in this game, and one of the factors most discussed today was the question of computers in education. The UK has just implemented a massive change to its computer science curriculum at EBacc level, which is Junior Cert level. It has gone from being a user-centric process to a developer-centric process. There are lots of doubts in terms of how it will be implemented and, while this formed no part of Eben’s talk, I am aware that there are serious concerns about the structure put in place to support it. My main concern is that it is over-ambitious and misdirected. I got computers because they were a game, an exploration. When they become a duty, there is a very real risk that people lose a certain amount of interest. I’ve seen this over the years with mathematics: while it is important that people are mathematically literate, the simple truth is that mostly, they are not.

Eben gets this, and the Raspberry Pi Foundation gets this, so a lot of effort is going into professional development to support teachers, in recognition that there is a communications ask here.

The question and answer session afterwards was interesting; one of the key comments made related specifically to the failure of some people to bridge the divide in passing on programming skills. I think this is very important, and I also think that the idea of one true way needs to go. While maths skills are important, programming is very much a creative skill (and this is why I don’t particularly enjoy programming in Java – a lot of the elements of creativity are taken out of it for me), and creativity is not a skill limited to people who self-identify as geeks.

If you get a chance to hear Eben speak, grab it. He is utterly engaging, he believes absolutely in what the Raspberry Pi Foundation are doing, and he recognises the random steps that have changed things here and there for him – in particular relating to getting the Raspberry Pi manufactured in the UK.

He also mentioned one story which I thought was fantastic: it related to the person who came up with the design for one of the Lego-based cases for the Raspberry Pi. She was 11 years old and she negotiated her royalty payment in Lego.

I think that is absolutely fantastic and if that’s what it takes to get more kids looking at this, fantastic.

(The other story which I loved involved sending a teddy bear up to the edge of space. I would like to do the same with a Barbie doll – I feel it would be symbolic on a lot of levels, plus an interesting technical challenge.)

All in all, a fantastic couple of hours.


Coding comments again

I saw something mythical yesterday; something I hadn’t actually seen before. I saw self-documenting code. This is unique in my experience.

I have seen, on many occasions, code described as self-documenting which was anything but. I suspect a big contribution to the self-documenting nature of this code is that it mapped a relatively simple process. The logic was straightforward. The objects were straightforward and non-complex. The code ran to 300 lines, which is not a lot for a full application, and the application was culturally common. It was written in Python. It was a thing of absolute beauty.

I’m a fan – in general – of providing code comments, particularly in the zone of Why rather than How. My experience is that the why tends to get forgotten, that current knowledge gets taken for granted, and that if the code base is still in use in 10 years’ time, you probably can’t rely on current knowledge.
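To make the Why/How distinction concrete, here is a tiny sketch in Python – the discount rule and the numbers are entirely invented for illustration:

    quantity = 120    # hypothetical order size
    total = 500.0     # hypothetical order value

    # A How comment restates the code and rusts quickly:
    # multiply the total by 0.9.

    # A Why comment records reasoning the code cannot express:
    # orders over 100 units get a 10% bulk discount - agreed with
    # Sales in 2013. If the policy changes, this is the place to change it.
    BULK_DISCOUNT = 0.9
    if quantity > 100:
        total = total * BULK_DISCOUNT

    print(total)  # 450.0

Ten years from now, the second comment is the one that will tell the next programmer why 0.9 is there at all.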

In particular, I am a fan of making your code as easy to read for the next person as possible. Ultimately, you shouldn’t assume – as a lot of programmers seem to do – that because you approach a problem in a specific way, everyone will, and that everyone will immediately understand your approach.

And especially, if you’re not writing a clone of a well-loved arcade game, it’s probably a good idea not to assume that your code will self-document. What’s rare is wonderful, and seriously, I have never – before yesterday – seen a piece of code longer than about five lines that could justifiably be called self-documenting.

What’s rare is wonderful.

Documentation quick tip

Update Word – if you use it – with a couple of extra styles:

  • a style for code in some sort of monospace font (I use Courier New and I colour it red)
  • a style for code commentary in some other font and colour (I use Century and I colour it blue).

When you are creating the style in Word 2013, you can tell the software to use it in all new documents as well, making it part of the Word Normal template, or the default. This is useful if you don’t want to build a separate template.
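If you would rather script the styles than click through dialogs, here is a minimal sketch using the python-docx library (assuming you have it installed; the style names and the colours are just the choices I described above, not anything Word mandates):

    from docx import Document
    from docx.enum.style import WD_STYLE_TYPE
    from docx.shared import RGBColor

    doc = Document()

    # A paragraph style for code: monospace, coloured red.
    code = doc.styles.add_style('Code Sample', WD_STYLE_TYPE.PARAGRAPH)
    code.font.name = 'Courier New'
    code.font.color.rgb = RGBColor(0xC0, 0x00, 0x00)

    # A paragraph style for commentary: a different font, coloured blue.
    note = doc.styles.add_style('Code Commentary', WD_STYLE_TYPE.PARAGRAPH)
    note.font.name = 'Century'
    note.font.color.rgb = RGBColor(0x00, 0x00, 0xC0)

    doc.add_paragraph('x = x + 1', style='Code Sample')
    doc.add_paragraph('Increment the counter.', style='Code Commentary')
    doc.save('styled.docx')

Note that this writes the styles into a single document rather than into the Normal template, so it suits the one-off case; for the everywhere case, the dialog route above is the simpler one.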

The other thing which may be useful is a style for highlighting action points and completed action points. I tend to use bold and, again, different colours, and for completed action points, strikethrough.

People handle work and coding differently. I tend to like to have a commentary file for each project – what I am doing, what I am trying to do, where I am stuck, how I’ve resolved problems – and these styles mean I don’t have to build a brand new document with new styles every time. Useful information on customising Word is here – I don’t recommend doing everything he suggests, but there are ways of making Word more helpful for you. If you’re not familiar with styles, they are worth getting to know.

What do you love about programming?

This came via a tweet from, I think, Kathy Sierra, in which she said it was the one interview question she had never been asked.

I started programming, a bit, when I was 13 and did it on and off until I was about 16. And then I stopped for 10 years. In 1999 I interviewed with a major Irish company which was looking for IT staff who did not, for various reasons, have to have a degree in computer science. I got through that process and, despite expecting to be put working on web technologies, I was sent for assembler training and then spent the next chunk of my life as an assembler programmer. Since then I have programmed a bit in Java, some in VB, some in R and now, occasionally, in Python and again in Java.

Programming is an interesting activity. I love starting off with a problem to solve, and I love thinking about how I might solve the problem given the available tools. When you’re learning a language, this leads to various interesting algorithms as you code around a lack of knowledge. Sometimes it leads to massively inelegant solutions; other times it leads to things of pure beauty. I love programming purely for the problem resolution aspect of it, the fact that I can sit down with nothing but a piece of paper and a task to accomplish. For me, programming is more about working out how to accomplish something than purely executing it in code. There are, if you like, many ways to do that – the hard bit is the working out, not necessarily the coding.
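As a throwaway illustration of the “many ways” point, with a toy problem (summing the even numbers below 20) rather than anything from real life:

    # Way one: the explicit loop.
    total = 0
    for n in range(20):
        if n % 2 == 0:
            total += n

    # Way two: a generator expression.
    total = sum(n for n in range(20) if n % 2 == 0)

    # Way three: filter with a predicate.
    total = sum(filter(lambda n: n % 2 == 0, range(20)))

    print(total)  # 90, whichever way you walked there

All three are correct; deciding which shape suits the problem in front of you is the working out, and that is the fun bit.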

I don’t, in general, mind debugging my own code mainly because I generally understand what it is I was trying to accomplish. You learn a lot from the way you look at problems when you’re trying to identify where you went wrong in trying to solve them. In this respect, programming is always a learning process.

What I love about coding is that, typically, it opens up the possible. What can we achieve tomorrow that we could not do today?

Why do you develop…

Some time ago, I had a conversation with a developer on the subject of rectifying a recurring issue. There was a straightforward fix the developer could apply to each occurrence of the issue, but the developer – who had also explained to one or two of the several users, several times, how to avoid it – wanted to punish the users by no longer fixing the problem for them, to compel them to make efforts to avoid it by following procedure. This might work if you’ve one or two users, but with more than that, I think it’s unrealistic. Much better to have the software protect against the error, particularly if it’s a known and recurring issue.

I’ve often replayed that conversation in my mind and realised that I really don’t like it as an idea. While no part of the world is perfect, and there are often underlying considerations, rather than telling users how to avoid problems procedurally, we should enable them not to cause the problem in the first place, by either a) preventing it from happening at a coding level or b) automatically fixing it in some way. Failing that, we should provide them with a tool to fix the issue themselves.
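To sketch what I mean by a) and b) – the recurring issue here is invented, a date typed in whatever format the user felt like using – the code absorbs the variation instead of bouncing it back:

    from datetime import datetime

    # Formats users have actually been seen to type (hypothetical list;
    # extend it as new variants turn up in the wild).
    KNOWN_FORMATS = ["%d/%m/%Y", "%d-%m-%Y", "%Y-%m-%d", "%d %b %Y"]

    def parse_user_date(text):
        """Quietly accept the date formats users really use (option b)
        rather than rejecting all but one blessed format."""
        for fmt in KNOWN_FORMATS:
            try:
                return datetime.strptime(text.strip(), fmt).date()
            except ValueError:
                continue
        # Option a, as a fallback: refuse clearly and helpfully at entry
        # time, so the bad value never gets into the rest of the system.
        raise ValueError(
            "Could not read the date %r; try e.g. 31/01/2014" % text)

    print(parse_user_date("31/01/2014"))   # 2014-01-31
    print(parse_user_date("2014-01-31"))   # 2014-01-31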

I don’t think we should ever be in a zone where it’s considered acceptable to punish users via the software we’ve designed for them. We should be in a zone where we develop to protect them against themselves, to some extent. Ultimately, a developer’s role is to help a user accomplish some task. That includes making it easy for them to accomplish that task, while making it hard for them to break it. Punishing them because your software design fails on the second part of that role is perhaps a little unfair.


Bug blaming

Yesterday, I came across an interesting post on Programmers Stack Exchange, regarding additional fields in bug-tracking software, which caught my attention for one reason or another.

In one of the latest “WTF” moves, my boss decided that adding a “Person To Blame” field to our bug tracking template will increase accountability

It was not, it must be said, universally welcomed by the PSE community. Allegedly, this post is from the boss in question and it didn’t do a whole lot to win him any favours.

Blame is a dangerous word. It is not the sort of word that aids in root cause analysis; it is the kind of word that causes drawbridges to be pulled back up, the kind of word that causes staff to avoid taking responsibility for anything because it means they will get the blame for anything that can stick. It makes it difficult to get teams to work together, and it damages collaboration. Why? Because people are looking for blame and fault where they could be looking for cause and learning.

According to the second post above:

We anticipated the increase in production bugs when we moved away from having a dedicated QA team.

I’m utterly stunned by this. I don’t usually ask questions about these things in interviews, but it’s almost inconceivable to me that any place which wants to release good-quality software doesn’t have a dedicated testing team. I mean, it’s good that they recognised they’d wind up with more production bugs, but now they want to blame individuals for those bugs when they result from a half-assed management decision to get rid of dedicated QA? Why would anyone want to work there?

The thing is, root cause analysis is important. Very important, and often ignored over time. For an effective root cause analysis, you need to drop the idea of fault and blame and bring in the concept of up-front honesty. I’m a fan of taking responsibility for my mistakes. This is the only way I can learn from them and, more to the point, it’s the only way that other people can learn from them. “I goofed up” is a better starting point than “You goofed up”. Accusations and blame result in lousy team atmospheres and less willingness for people to work together; they cause isolation.

If the guy above was serious about getting people to take more pride in their work, he’d not be looking for a “person to blame”. That’s a bullying, hectoring field title and it negatively impacts morale. It’s not the sort of thing that makes people want to take responsibility for their faults and it doesn’t support the desire to be better. It more supports the desire not to be caught.


Coding without comments

Via Robert the Grey and Jesse Liberty, I have been thinking about code comments and how necessary they are, and I have come to the conclusion that some people have just never really written assembler.

Jesse’s argument for his project is as follows:

  • Comments rust faster than code, even when you’re careful
  • Well written code can be read, and comments are annoying footnotes
  • Comments make for lazy coding

Comments rust faster than code not because you’re not careful, but because you’re lazy. And if you are lazy about commenting, how do I know you’re not lazy about coding?
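For what it’s worth, a rusted comment tends to look something like this invented Python fragment – the code moved on and the note above it did not:

    # Retry three times before giving up on the connection.
    MAX_RETRIES = 5   # the constant was changed; the comment was not

    def fetch_with_retries(fetch):
        """Call fetch() until it succeeds or MAX_RETRIES is exhausted."""
        for attempt in range(MAX_RETRIES):
            try:
                return fetch()
            except IOError:
                if attempt == MAX_RETRIES - 1:
                    raise

The comment is now actively misleading, which is Jesse’s point; mine is that the fix is reviewing comments with the same rigour as code, not abandoning them.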

I realise that sometimes the logic behind a piece of code can be less than clear; no matter how well you construct your variable and procedure names, that problem does not go away.

According to Robert:

  • Stating the bleedin’ obvious (I’m looking at you Method Arguments)
  • Put there because you’re too lazy to refactor the code as demonstrated in Jesse’s article and the comments
  • Enforced by stupid corporate coding standard mandates that are still stuck in the 90s
  • Stale (and sometimes actively harmful) by the next check-in or 6 months later

I think that this is a lazy approach. It says “I cannot be bothered to document things properly so it’s someone else’s fault if they do not/cannot read my wonderfully elegant code that is self-evident because I have designed it to be so”.

It’s also my experience that well-written code tends to be accompanied by well-written comments, and that poorly commented code is rarely wonderful or easy to extract meaning from.

If either Jesse or Robert wants to carry out thought experiments like this, it’s entirely up to them. But in the real world, code standards should include checks for adequate documentation, so that other people have a fighting chance of dealing with someone else’s elegant code. Just because you think you’ve written good code, and that it is self-evident what it is doing, does not mean you have actually succeeded in doing so.

I’ve no objection to the new generation of languages. Anything that makes programming more accessible to other people is a good thing. Anything, however, that lets you drop some discipline is not a good thing.

If you’re disciplined enough to write good code, I honestly don’t see an argument for not being disciplined enough to write good documentation for said code. If you can’t, then in my view it is lazy.