Tuesday, 8 February 2011

Hmmm – what happened to my brain?

I’ve been reading the excellent “Joel on Software” and “More Joel on Software” books recently (by Joel Spolsky, obviously), and I’ve got to say I wish more people had his insight into what motivates and drives a good developer to work well. I’ve lost count of the number of clients I’ve turned up at to find browbeaten developers sat next to the telesales department, staring at postage-stamp screens and wondering why their productivity is so low…

Anyway, I’ve got a few things to say about that myself, but I’ll save it for another day. Today I’m thinking about some of the stuff Joel talks about with regard to grokking pointers, and how programming is getting easier – or should I say dumbed down – because universities now teach higher-level languages such as Java rather than C, C++ or assembly. It made me look back to where I was in the early ’90s, and I realised that I seem to have forgotten a lot since then!

Back then, I remember writing games in my spare time in C/C++ and assembler. I remember knowing about memory segments and offsets, using registers to set source and target addresses and moving bytes around; I “got” pointers, I knew about memory allocation and doubly linked lists, and I could write assembly language until it came out of my ears. I even remember using DEBUG on the command line to inspect what a game was doing during the license key check at start-up, then changing the opcode from a JE (jump if equal) to a JNE (jump if not equal) so that if you put the wrong key in, it let you play the game (sorry about that, but we’d, erm, lost the key – that’s it, lost it).
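
To illustrate what that patch actually amounted to: in x86, JE and JNE are short conditional jumps that differ by a single opcode byte (0x74 versus 0x75), which is why flipping one byte in DEBUG was enough to invert the whole check. The sketch below is just an illustration in C, not the game’s real code, and the displacement byte is made up.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical two-byte instruction as it might sit in the executable:
           0x74 = JE short, 0x75 = JNE short; the 0x10 displacement is made up. */
        unsigned char key_check[] = { 0x74, 0x10 };

        printf("before: %02X %02X  (JE  - jump into the game only if the key matched)\n",
               key_check[0], key_check[1]);

        key_check[0] = 0x75;  /* the one-byte patch typed into DEBUG */

        printf("after:  %02X %02X  (JNE - jump into the game when the key did NOT match)\n",
               key_check[0], key_check[1]);
        return 0;
    }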

I even remember myself and a few colleagues (John Steele and David Mort, just in case you’re out there!) having competitions writing double-buffered screen-swapping routines that would push only the delta between two buffers into VGA video memory (segment 0xA000) – because writing to video memory was so unbearably slow compared to RAM. We would actually sit there poring over the assembly listings, counting the iterations and how many clock cycles it would take to refresh a partial or full screen, and therefore whose routine was better. Geek fest, and you know what, I loved it, and I feel sorry for any developers out there who haven’t had a similar experience.
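
For anyone who never had the pleasure, here’s roughly the shape of those routines. This is a minimal sketch in C, assuming mode 13h (320x200, one byte per pixel); it glosses over the real-mode far pointer you’d need to reach segment 0xA000, and the buffer and function names are mine rather than anything from our original listings.

    #include <stddef.h>

    #define SCREEN_BYTES (320 * 200)   /* mode 13h: one byte per pixel */

    static unsigned char back_buffer[SCREEN_BYTES];  /* the frame we've just drawn     */
    static unsigned char shadow[SCREEN_BYTES];       /* our copy of what the card has  */

    /* Under real-mode DOS, vga would be a far pointer to 0xA000:0000
       (e.g. MK_FP(0xA000, 0) with a Borland-era compiler). */
    void flip_delta(unsigned char *vga)
    {
        size_t i;
        for (i = 0; i < SCREEN_BYTES; i++) {
            if (back_buffer[i] != shadow[i]) {   /* pixel changed since the last flip? */
                vga[i] = back_buffer[i];         /* one slow VGA write, only where needed */
                shadow[i] = back_buffer[i];      /* keep the shadow copy in sync */
            }
        }
    }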

Anyway – what is my point? Well, I don’t think I have one, other than that this was a nice trip down memory lane, and I think I might struggle to go back and do all that again now. I could do it with my eyes closed back in ’92, and I had to, usually for performance reasons, but because I’ve not used those skills in a long time they’ve gotten old and rusty, like a neglected banger of a car left in a leaky garage. These days I work at a higher level of abstraction – I build software in C# rather than down in the bowels with assembly – and whilst there are numerous principles and concepts learned back then that are still useful, I’ve not needed to drop into assembly for any reason (building LOB apps) in the last, I’m guessing, ten years.

Is that a good thing? Part of me is happy that building software is easier, but the other part of me feels that those kinds of skills provide an essential grounding for really understanding what is going on under the hood, even if you do spend your days in the .NET C# clouds.
