Decaffeinated Gcat

*stands up* Hello, I’m Gcat, and I haven’t had a drink in nearly two weeks.

Oh, did I say “drink”? I meant “drink of coffee”, of course. And believe me, this is much more of a challenge than merely giving up alcohol for a few weeks.

Those of you who know me or who’ve gleaned enough from my previous blog entries will know that I’m prone to stress and anxiety, insomnia and sudden bouts of exhaustion (among other things). When you read about those sorts of things, giving up caffeine is quite often a prominent piece of advice. But I’d never really tried it before, apart from a sort of half-hearted and not very successful attempt to cut down from 4 or 5 coffees per day to 2 or 3. I sort of doubted that it would make much difference, plus there was the small matter of how impossible it seemed. Me, get up in the morning without the aid of coffee? I’m a computer programmer ferchrissake. There’ll be multiple homicides before lunchtime if I don’t get my caffeine fix. The advice might as well have said “all you have to do is sprout wings and learn to fly” (something which, whatever the Red Bull marketing department would have you believe, is difficult for most people).

A couple of weeks ago something happened. Namely, I got hit by a horrible stomach bug one night, and by the morning I definitely wasn’t feeling able to stomach a cup of coffee, so I went without. I wasn’t doing anything much except lie in bed and groan anyway so it wasn’t like I needed the energy boost. But then something strange happened. Through the haze of illness that was still clouding my mind, I felt different. Like my mind had slowed down a bit and was stopping to enjoy the view (even though said view currently consisted of my darkened bedroom ceiling) instead of just racing on to the next goal. It’s hard to describe… it felt very weird, in that I wasn’t accustomed to feeling like that anymore, but also comfortingly familiar, like it was taking me back to how I used to feel a long time ago. It was hard to tell how much of it was due to the lack of caffeine and how much because of the semi-delirium from the bug… but it felt nice enough to make me want to try having a break from coffee. Anyway, I reasoned, now was as good a chance as I was ever going to get to try it… I’d already survived the first day without it so that was probably the worst bit over.

(I didn’t quit caffeine altogether. I just switched to tea, which I intend using as a sort of reverse-gateway drug if there’s such a thing).

That was nearly two weeks ago now… and since then I have noticed a change. The first day back at work started off, as I’d expected, pretty miserably without the coffee-kick to get me going. But the tiredness went away more quickly than I’d expected. Then in the middle of the evening I suddenly realised that (a) I hadn’t come home from work feeling completely wired and like I needed a beer to calm myself down, and (b) I hadn’t had a horrendous crash in my energy levels late in the day and was actually still feeling quite awake by about 8pm. Both of which are pretty unusual occurrences. After a few days of this I also noticed I was sleeping better and coping better even if I didn’t quite get my eight hours.

So yeah… I guess those suggestions about cutting down on caffeine ain’t just there to bump up the word count after all. I’d recommend at least giving it a try if you suffer from any similar problems to mine.

(I was hoping to write more about the projects I mentioned a few posts ago before now, but due to (1) the aforementioned stomach bug, (2) work going absolutely crazy in the last couple of weeks, and (3) having an unexpected new toy to play with, I haven’t really got far with any of them yet. I expect I will at some point soon 🙂 ).

Second Helping of Pi

Unsurprisingly, I found a few spare hours this weekend to work more on the Raspberry Pi. (Though I was very restrained and didn’t work on it non-stop… did still go dancing one night and out for a walk to take some nice photos yesterday afternoon. I know what it does to my mood if I spend a whole weekend cooped up coding, even if I am tempted to at the time).

First I finished up the Master System emulator. I added in a border to stop the graphics going off the edge of the screen, then turned my attention to the more challenging requirements: keyboard input, sound, and timing.

Getting input from the keyboard isn’t usually a particularly challenging thing to do… not for most programs, anyway. But for console emulators it’s a bit more involved, for two reasons:

  • we want to be able to detect when a key is released, as well as when it’s pressed
  • we want to be able to detect multiple keys being pressed at once (for example, the right arrow and the jump key)

I tried various ways of doing this – firstly, the way I’d used in emulators written for Windows previously: the SDL library (this library can do lots of handy things and keyboard input is only one of them). But although the library was installed on the Raspberry Pi and I was able to link to it, I couldn’t detect any keyboard activity with it. Eventually I found out you can perform some arcane Linux system calls to switch the keyboard into a different mode where it will give you exactly that information. This only works from the real command line, not from X Windows, but it was better than nothing. (You also have to be very careful to switch the keyboard back to its normal mode when your program exits, otherwise the computer will be stuck in a strange state where none of the keys do what they’re supposed to, with probably no way out other than turning it off and on again!). I still want to find a way to make it work in X Windows, but that’s a project for another day.
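For anyone curious what those system calls look like: the trick is to put the console keyboard into “medium raw” mode, where each byte you read carries a keycode plus a pressed/released flag. This is only a sketch of the idea, not the emulator’s actual code – the helper names are made up, and a real program should also install signal handlers so the restore happens even after a crash or Ctrl+C:

```c
/* Sketch of Linux console "medium raw" keyboard input.
   Helper names are made up for illustration. */
#include <linux/kd.h>
#include <sys/ioctl.h>
#include <termios.h>

static struct termios orig_termios;
static long orig_kbmode;

/* In K_MEDIUMRAW mode, each byte read from the console holds a keycode
   in its low 7 bits; bit 7 is set when that key was released. */
static int keycode_of(unsigned char b) { return b & 0x7f; }
static int is_release(unsigned char b) { return (b & 0x80) != 0; }

static int kbd_raw_on(int fd)
{
    struct termios t;
    if (ioctl(fd, KDGKBMODE, &orig_kbmode) < 0)
        return -1;                 /* not a real console (e.g. under X) */
    tcgetattr(fd, &orig_termios);
    t = orig_termios;
    cfmakeraw(&t);                 /* no echo, no line buffering */
    tcsetattr(fd, TCSAFLUSH, &t);
    return ioctl(fd, KDSKBMODE, K_MEDIUMRAW);
}

/* Vital: restore the old mode before exiting, otherwise the console is
   left in a state where no key does what it's supposed to. */
static void kbd_raw_off(int fd)
{
    ioctl(fd, KDSKBMODE, orig_kbmode);
    tcsetattr(fd, TCSAFLUSH, &orig_termios);
}
```

Reading input then becomes a matter of `read()`ing bytes from the console and keeping a pressed/released array indexed by keycode – which covers both of the requirements in the list above (release events, and multiple keys held at once).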

(I wrote a more technical blog post here about the keyboard code in case anyone wants to use it).

While reading the keyboard turned out to be a bit harder than I’d hoped, this was more than made up for by how easy it was to get the sound working. In fact I found I was able to re-use most of the code from the audio playing example program that came with the Pi. The only slight strangeness was that it seems to only support 16 or 32 bits per sample rather than the more standard 8 or 16, but it’s easy enough to convert the 8 bit samples generated by my Master System sound code to 16 bit. I didn’t know whether the Pi was expecting signed or unsigned samples, but the din of horribly distorted noise that greeted me the first time I tested the emulator with sound confirmed that it was the opposite of whatever I was giving it. That was easy enough to fix too.
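The two conversions involved (8-bit to 16-bit, unsigned to signed) boil down to a couple of lines each. A sketch, with a made-up function name:

```c
/* Convert unsigned 8-bit samples (what my Master System sound code
   produces) into the signed 16-bit samples the Pi's audio path wants.
   The function name is made up for illustration. */
#include <stddef.h>
#include <stdint.h>

static void u8_to_s16(const uint8_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Re-centre around zero (unsigned -> signed), then scale the
           8-bit range up to 16 bits. */
        out[i] = (int16_t)((in[i] - 128) * 256);
    }
}
```

Getting the signedness wrong shifts every sample by half the full range – silence becomes a maximum-amplitude offset – which is exactly the sort of harsh distortion described above.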

As for the timing, it turned out to be a non-issue – the sound playing code will block until it’s ready for the next batch of sound data anyway, so this will keep the emulation running at the correct speed. (Actually it’s a non-issue for another reason as well, but I’ll get to that later).

(It’s amazing how enormous the pixels look now. I’m sure they never did when I was playing with a real Master System on a telly almost as big back in the 90s. I suspect it was just the general blurriness of TVs back then that masked how low resolution the graphics really are).

Since my first Raspberry Pi emulator had been easier than expected, I decided to port another one – my Android Gameboy emulator should be do-able by welding the head of the Raspberry Pi-specific code I’d just written for the Master System onto the body of the behind-the-scenes code from the Android version of the Gameboy, with a few important tweaks to make the two halves match up. So that was what I did.

“This’ll be a breeze”, I smugly thought. “I’ll be done in a few minutes!”. But it wasn’t quite that easy…

I was mostly done in a few minutes (well, maybe half an hour) – graphics were working and I could play Tetris or Mario. But the sound was horrible. Really horrible. Not just normal-Gameboy-music level of horrible… something was clearly very wrong with it. I checked and double checked the code over and over but still couldn’t see the bug. I hadn’t changed the sound output code very much from the Master System, apart from changing the sample rate slightly and switching from mono to stereo. I switched back to mono again. No change. I tried a more standard sample rate (22050Hz instead of 18480Hz). Nope, now it’s horrible and completely the wrong pitch.

I puzzled over this one for a long time. I tried various other things I could think of, rewriting the code in different and increasingly unlikely ways, but nothing seemed to make a difference. The only thing I established was that the sound buffer was either not being completely filled or was underflowing – when I tried filling it with a constant value instead of the Gameboy’s sound output, I still got the horrible noise (a constant value should give silence). But why??

Eventually I cracked it, and learnt something in the process. I noticed that Mario seemed to be running a little bit slower than it should, and I wondered if the emulator was not actually running fast enough to process a frame before the sound buffer ran out. That would certainly explain the sound problem… but didn’t seem like it should be happening. The same emulator code had no trouble reaching full speed on my slower HTC Wildfire, so it should be no problem for the Pi to manage it as well. On a hunch, I tried reducing the sound sample rate quite a lot. Finally a change! Sure, the game was running slower and the music was now sounding like a tape recorder with a dying battery… but for the first time the horrible noise was gone! Then I had a thought: what if the graphics code is locking to the frame rate of the TV? The Gameboy screen updates at 60Hz, but UK TVs only update at 50Hz. Trying to display 60 frames in a second when each frame has to wait one-fiftieth of a second for the display is not likely to work very well. Sure enough, only outputting every second frame (so running at 30 frames per second instead of 60) cured the problem completely. It had never occurred to me that this could happen… I was so used to programming PCs, where the monitors have all run at 60Hz or more for decades, that I forgot the little Pi connected to my TV would be different.
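The fix itself is tiny. Here’s a sketch of the frame-skipping logic (function names made up – the real code lives in the emulator’s assembly):

```c
/* Sketch of the frame-skipping fix: the emulated console produces 60
   frames per second, but a PAL TV only refreshes 50 times a second, so
   presenting every frame stalls the emulation (and starves the sound
   buffer). Presenting every second frame (30fps) keeps things on time. */
#include <stdbool.h>

static bool should_present(unsigned emulated_frame)
{
    return (emulated_frame & 1) == 0;  /* even frames only: 60 -> 30 */
}

/* A fractional accumulator generalises this to any display rate: present
   a frame whenever it crosses a display-refresh boundary, so 60 emulated
   frames map onto (at most) 50 refreshes per second. */
static bool should_present_rate(unsigned frame,
                                unsigned display_hz, unsigned console_hz)
{
    return (frame * display_hz) / console_hz
         != ((frame + 1) * display_hz) / console_hz;
}
```

The second function would present 50 of every 60 frames on a 50Hz display rather than dropping all the way to 30, at the cost of slightly uneven frame pacing; halving was the simpler option.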

Anyway… I decided to tidy up the code and release it in case it’s of interest to anyone. So if you head on over to my emulators page, you can now download the source code of both emulators for the Raspberry Pi along with detailed instructions for using them. Enjoy 🙂

(A word of warning… I wouldn’t say they were examples of good programming practice. The CPU cores and graphics and sound rendering code are written in ARM assembly language, which I only did because I wanted to learn it at the time – C would be a better idea if you wanted to write an emulator that’s easy to maintain and extend, and probably would be fast enough to emulate the old 8-bit systems).

(Another word of warning… I have some more things in the pipeline that might be more interesting than these two 😀 ).

It’s Pi-day :D

GCat’s adventures with a credit card-sized computer.

It’s months now since I blogged about the Raspberry Pi. At the time I said I was getting really excited about it. Well, my excitement did start to wane a bit after getting up at 5.45am on the release day (February 29th) only to find the level of interest had practically melted the servers of both supplying companies and there was very little chance of getting hold of one any time soon. I was still intending to buy one when some of the mayhem had died down, but I hadn’t given it so much thought lately. Then suddenly yesterday one of my colleagues walked into my office without any warning and handed one to me!

I couldn’t wait to give it a try. Unfortunately I didn’t have a screen in the office that it could hook up to immediately (it needs HDMI or composite; VGA or DVI monitor plugs are no use) so all I could do was download the software ready to try it out (it needs a custom version of Linux on an SD card) while casting occasional excited glances at the box. But luckily there’s a nice HDMI TV in my living room…

My first reaction was: wow, this thing really is tiny! I mean, I knew it was credit card-sized and all, but even so, it’s still hard to believe just how small it is until you see one in the flesh, so to speak. I was even more amazed by the size of the main processor (the black square chip just by my fingernail in the photo – it’s about the same size as the nail itself!).

Hooking everything up to it reminded me of connecting up one of our old computers and brought back happy memories of geekily spent Christmases and so on. In the picture, the power is coming from my HTC phone charger and going into the micro USB connector on the lower left corner. The SD card with the Linux OS is the blue thing protruding out from underneath the board just by the power connector. The grey plug going into the near side is the HDMI cable to my television. The green cable coiling round the whole thing is ethernet to connect it to the internet (it doesn’t have built in wifi so it needs either a cable connection or an external USB wifi dongle). Finally, the two black plugs next to the ethernet are my ordinary USB keyboard and mouse.

With trepidation, I double checked all the connections and then turned the power on. Would it work? I’d seen reports that certain SD cards wouldn’t work properly so I knew there was a chance I’d got a bad one or that I’d messed up the OS install.

Success! I could see the raspberry logo on the screen and the Linux boot messages scrolling past (looking very tiny in full 1080p resolution). Soon I had the desktop environment running and was verifying that it was indeed capable of viewing pointless web pages.

It was pretty easy to get up and running by following the quick-start instructions on the Raspberry Pi website. It was a little bit sluggish for browsing the net, but that’s to be expected with such a low-powered machine with a chip designed for mobile phones but running a full desktop system. Apparently this will get better once X Windows (the software that provides the graphical user interface on Linux) is using the Raspberry Pi’s rather capable GPU to do most of the drawing instead of doing everything on the main processor as it is at present.

But nice though it was to see my blog on the big screen courtesy of the Pi, I was more interested in getting some of my own code up and running on it. After a quick break to redo the partitioning on the SD card (so that I could use the full 16GB of space rather than the default of less than 2GB) and install my favourite geeky text editor, it was time to delve into the code examples.

As the Raspberry Pi is intended for teaching programming, it comes with some nice example programs showing how to make it do various things (play video, play sound, display 3D graphics, etc.). I’d decided for my first project I was going to try and get one of my emulators up and running on it; the architecture is actually very similar to my phone’s so even though the emulators contain quite a lot of assembly language code that would have no chance of working on a normal PC, they should work on the Pi without too much trouble. I decided to start with the Master System one as it’s a bit simpler than the others.

After an hour or two of hacking, I had something working.

As expected I didn’t need to change very much in the code. I just replaced the “top layer” that previously communicated with the Android operating system with a new bit of code to send the graphics to the Raspberry Pi GPU via OpenGL ES. (Although that’s mainly for 3D graphics, you can do some nice 2D graphics on it too if you more or less just ignore the third dimension).
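The “2D on a 3D API” trick is mostly just keeping the third coordinate at zero: the emulator’s output becomes a texture on one screen-filling quad. A sketch of the vertex data involved (the layout here is hypothetical – the real code may differ):

```c
/* Sketch of "2D graphics on OpenGL ES": draw one screen-sized quad,
   textured with the emulator's frame buffer, and simply leave the third
   dimension at zero. Hypothetical layout: x, y, z, u, v per vertex. */

/* Two triangles covering the whole screen in normalised device
   coordinates (-1..1); z is 0 throughout and depth testing is off. */
static const float screen_quad[6][5] = {
    /*   x      y     z    u    v  */
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f },
    {  1.0f, -1.0f, 0.0f, 1.0f, 1.0f },
    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f },
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f },
    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f },
    { -1.0f,  1.0f, 0.0f, 0.0f, 0.0f },
};
```

Each emulated frame then does something along the lines of re-uploading the frame buffer into the texture with `glTexSubImage2D` and redrawing the quad with `glDrawArrays(GL_TRIANGLES, 0, 6)`; with depth testing disabled, the z coordinate never matters.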

The emulator isn’t fully working yet… there’s no sound (I need to look at the sound example that came with the Pi but it shouldn’t be too hard), no way to actually control it (that screenshot is just showing the demo running on its own – I need to figure out how to get key presses in the right form), and there are a few other glitches (the graphics seem to extend slightly off the edges of the screen and the timing is a bit off). But overall I’m reasonably pleased with my first few hours with a Pi 🙂

Update: the Master System emulator is now closer to being finished and you can download it from here.

Projects, projects, projects…

This is heavily inspired by (read “ripped off from” 😉 ) a post on my brother’s blog.

I also have a bunch of creative projects on the go. Well actually, a lot of them are not quite so on-the-go as I would like, in fact some seem to be terminally stuck not going anywhere. Maybe talking about them a bit more publicly will inspire me to get them going again.

I’ve always been like this, I think. Ever since I was quite small I would come home from school and spend most of my free time writing stories, messing around making things on the computer, drawing maps of places I found interesting, or learning new music on the piano. I never saw the appeal of spending hours in front of the TV (I still don’t), and although I did play a lot of computer games, I must have spent at least as much time designing and writing my own as I did playing other people’s.

Now that I’ve got a full time job it’s a bit harder to find the time to do all that kind of stuff. But because it’s important to me, I still try. I’ve already blogged from time to time about my Android app making, my band, our film group, (on my other blog) one of my home-made computer games, and piano playing. To try and organise things a bit better and prioritise the stuff that’s really important to me, I decided to make a list and give them all codenames like Alex did in his blog. Here is the list, along with a little symbol of some kind for each one. Some of these overlap with Alex’s ones because they’re group projects of some kind – they have the same names that he gave them. Some of them are slightly ill-defined and are really catch-alls for a whole possible area of creativity that I might be interested in experimenting with later on. Some are much more specific. OK, on with the list!

Project Bubble – this is the codename for our next Sonic Triangle EP, which has been in production for quite a while now. Alex already wrote a whole post about it so I won’t say much here.

Project Hohoho – the Beyond Studios Advent Calendar! Alex and I have both already written whole posts about this so again I won’t say much here.

Project Everything – this is really Alex’s project and I don’t know if he wants to reveal what it is yet, so I won’t.

Project Chippy – Alex’s web series!

Project Noah – this is actually a work (as in paid work) project. I need to find out whether I’m allowed to blog about it or not. I probably will be able to, and I hope I am, because I think it’s really interesting.

Project Bits – this one’s computer related and probably way over-ambitious, but at least I’ve been managing to make some progress on it lately.

Project Buster – one of the sort of vague, catch-all ones.

Project IOM – this one has been coming along quite nicely, before I even decided to make it a Project with a defined end goal. It’s nice because unlike most of the others it involves leaving the house quite a lot.

Project X-ray – another of the vague, catch-all ones… including ideas that are probably also way over-ambitious, but might be fun to play around with anyway.

Project Megadroid – if you’ve paid attention to my previous blog posts you can probably work out exactly what this one is just from its symbol and name. But anyway… it’s one of the few that’s (a) got a well defined goal, and (b) probably isn’t too far from reaching it… yay! It’s been on a bit of a hiatus recently but thinking about it is starting to tickle my interest again, so maybe I’ll finally get it finished (and release it on here).

Project History – this one is journaling-related. It probably deserves its own post at some point.

Project Classical – another one that’s probably quite obvious from the name and pic.

Project New Leaf – a nice, hopefully quick and simple but very rewarding little Project that will help with some of the others once it’s done. I won’t say more than that because I’m saving it for its own blog post.

Project Tridextrous – ambitious, probably slightly insane, may never happen.

Project Fantasy World – very broad, catch-all project… no definite plans in it yet but an area I’m still interested in returning to.

Project Bonkers – … um, yeah.

So that’s them. Some of them will hopefully get their own posts soon and hopefully having a place to write about progress will inspire me to actually make some progress to write about.

On Internet Censorship

(This post is a bit different from my normal ones. Instead of talking about something from my own life I’m going to have a rant about something that annoyed me from the news. It’s pretty common for things in the news to annoy me, but what’s slightly more unusual is I actually feel somewhat qualified to rant about why I think it’s stupid this time).

So. The Pirate Bay is now blocked by the biggest ISPs in the UK. If, like me, you’re on Virgin Media, BT or one of the other big ones, that link won’t work… it will take you instead to a page explaining why that site’s blocked. (For those of you that didn’t know, The Pirate Bay is a site where you can search for torrent downloads of music, movies, TV shows, operating system ISO images and virtually anything else that can be represented as a chunk of bits).

The block itself won’t have much of an effect; there are a million and one ways round it, from using proxies in other countries to using Tor to using this address helpfully provided by the Pirate Party UK. But I still think the fact that they’ve done this brings up a number of interesting (and concerning to an internet user) issues.

First of all, the block is implemented using the BT CleanFeed system, which was first created only for blocking child porn sites. Most people didn’t have a problem with this originally, although some may have been concerned that it wouldn’t do much good or that there would be bad side effects. (My own personal view, shared by many computer professionals it would seem, is that the filter is so easy to get around as to be completely pointless, and that any small good effects that may come from it would be far outweighed by the problems – sites being wrongly blocked, for example). But at the time, some people voiced worries that this was just the first step and that the system would eventually be used to block other sites that those in power don’t like as well. No way, said the officials in charge. This is only for child porn, which is so abhorrent we have to make a special exception for it. They lied. Now that very same infrastructure is being used in an attempt to prevent copyright infringement as well. What might it be used for next? Blocking “hate speech” (which might sound appealing at first, but could have pretty far-reaching results when you consider how broad and subjective it can be)? Blocking political sites that the government of the day doesn’t approve of (again a matter of opinion)? Of course they’ll say that could never happen, and I hope they’re right. But then they’ve already lied about what it would be used for and gone far beyond their original mandate, so in my opinion they’ve demonstrated that they can’t be trusted with it. There is already a pretty questionable plan afoot to use it to block all “adult” material and make people opt in if they want it unblocked again.

“But”, you might say. “Copyright infringement is still illegal. OK, it’s not as serious as child porn, but it’s still wrong and still against the law, so what’s wrong with using the filter to block sites that allow it?”.

Lots of things, in my opinion. Firstly, The Pirate Bay (and torrent sites in general) do have legal, non-copyright-infringing uses. For example, the last time I downloaded a torrent, it was a new version of Xubuntu for my netbook, which is freely distributable. Torrents are generally faster for downloading these large files than using the normal web. I’ll freely admit that this probably only accounts for a small proportion of The Pirate Bay’s traffic and will be dwarfed by illegal downloading. But there are less clear cut examples. The filter is a blunt instrument – it blocks access to the entire Pirate Bay, not just the copyright-infringing portions. Any site that allows its users to upload their own content (YouTube, Facebook, Flickr, and countless others) is bound to have plenty of illegal stuff (copyrighted and worse) on there at any given time because people can upload it much faster than the sites can check it. Does that mean they should all be blocked as well, just in case?

Also I think the claim that all copyright infringement is wrong or harmful is highly questionable. If I download a torrent of an ancient TV show I used to like that isn’t available anywhere else, who’s been harmed? No-one lost out… I didn’t download it instead of buying the DVD because there IS no DVD to buy, even if I wanted one. This is still illegal but it doesn’t seem wrong to me. Even when it comes to music or films I could have bought instead, it’s still a grey area because there’s no guarantee I actually would have bought them, and therefore no guarantee that anyone’s lost money due to me downloading them. This argument has already been done to death all over the internet, but the copyright system as it stands was never really designed for the times we live in. It worked well back in the days when only large companies had the means to copy things, so it was only affecting large commercial competitors, and not really impinging on the rights of individuals, who mostly couldn’t copy books or records even if they wanted to. But these days everyone and their cat has PCs and smart phones that can effortlessly copy music and movies at the touch of a button, and we’re starting to see that copyright can’t really be enforced in this world without resorting to some quite oppressive measures. A lot of people are questioning whether copyright in its current form is actually worth all the trouble anymore.

You will have got the impression by now that I don’t think these filters are going to work. I don’t, and what’s more, I believe anyone who does expect them to be effective fundamentally misunderstands how the internet works (or else they themselves understand it perfectly well but they’re trying to exploit people who don’t into buying some useless snake-oil filtering product they’re selling). The filter, you see, is basically a blacklist. That means it has a list of “forbidden” sites and everything else is, by default, allowed. But given the vastness of the internet and the speed at which sites appear and disappear, any blacklist is doomed to be forever out of date. For example, there is nothing to stop me putting a copy of The Pirate Bay site up on a different domain right now, and it would be accessible to everyone in the UK. The filter wouldn’t be able to prevent that because it has no idea of the existence of the copy of The Pirate Bay until someone updates its blacklist. By the time they get around to that, a hundred other mirror copies of the site might have sprung up in other places, still unfiltered until they also get found and added to the blacklist. Whatever authority is maintaining the list is reduced to playing a very large game of whack-a-mole that they can’t possibly win.

And that’s before even considering proxies (which allow you to access a blocked site by bouncing your connection off an intermediate server somewhere else, completely bypassing the filter), or more advanced solutions like Tor (which encrypts and anonymises all your communication, making it almost impossible for anyone watching your internet traffic to even tell what you’re accessing). There is pretty much no way around this problem unless you go down the road of having a “whitelist” instead, where people are only allowed to access a restricted list of sites that are known to be “safe”, and where all their communications are closely monitored to make sure they aren’t using any clever encryption or proxying to hide what they’re doing. If that were the case, it would completely change the internet as we know it. I wouldn’t have been able to set up this site the way I did, just buying a domain name, pointing it at my server and starting to write articles. I would have had to get government permission to do it, would have had to get the site vetted to make sure it’s not breaking any rules, possibly have to prove that I own the copyright of every last little thing I post (or even link to)… and presumably would have to be re-checked every time I added new content to make sure I was still worthy of the whitelist.

Even then, even with all the stupendous amount of effort it would take to police such a system, I actually doubt that it would remain water-tight for very long. Given a combination of clever tricks like steganography (hiding nefarious data within harmless-looking data), and the security holes that plague virtually all new software, it would still be possible for enterprising people to share whatever they wanted. Or they could just ditch the “official” internet and set up a new one using cheap wireless comms technology, which is everywhere now.

Essentially, what I’m trying to say is that stopping the “free” internet now would be like shutting the stable door after the horse has not only bolted, but flipped the security guard the V sign, mooned the CCTV camera, set free all of the other horses and had a wild party with them, culminating in burning the entire stable block to the ground.

If it’s that futile, then why do politicians keep trying? Well, it’s mostly just posturing to try and look good, in my opinion. None of them is going to stand up in front of parents whose votes they want and say “sorry, but we can’t completely rid the internet of child porn”, or in front of corporations whose donations they want and say “sorry, we can’t completely stop people downloading your movies/software for free”. They’re going to keep up the tough talk about tackling the challenges of the internet, even if “tackling” them in this way makes about as much sense to anyone computer literate as investing in perpetual motion machines to solve the energy problem.

Summary: you won’t stop people using the internet to do Bad Things with a little bit of tweaking around the edges, only by tearing down the whole system and starting again (even then you still probably won’t succeed). And I don’t want you to do that. Please don’t.