Making the Online Botanic Gardens Station Model (Part 2: The Viewer)

Last time, I talked about how the 3D model itself was made. In this post, I’ll discuss how I embedded it into a web page so it can be explored in a web browser.

Not so long ago, it was difficult or impossible to produce real time 3D graphics in a web browser, at least it was if you wanted your page to work in a variety of browsers and not require any special plug-ins. That’s all changed with the advent of WebGL, which allows the powerful OpenGL graphics library to be accessed from JavaScript running in the browser. WebGL is what’s used to render the Botanic Gardens Station model.

The finished WebGL viewer

There are already a number of frameworks built on top of WebGL that make it easier to use, but I decided I was going to build on WebGL directly – I would learn more that way, as well as having as much control as possible over how the viewer looked and worked. But before I could get onto displaying any graphics, I needed to somehow get my model out of Blender and into the web environment.

I did this by exporting the model to Wavefront OBJ format (a very standard 3D format that’s easy to work with), then writing a Python script to convert the important bits of this to JSON format. Initially I had the entire model in a single JSON file, but it started to get pretty big, so I had the converter split it over several files. The viewer loads the central model file when it starts up, then starts loading the others in the background while the user is free to explore the central part. This (along with a few other tricks like reducing the number of digits of precision in the file, and omitting the vertex normals from the file and having the viewer calculate them instead) reduces the initial page load time and makes it less likely that people will give up waiting and close the tab before the model even appears.
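The converter script isn’t reproduced here, but its core job can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual script – the `obj_to_json` function, the JSON field names and the rounding precision are all invented for the example:

```python
import json

def obj_to_json(obj_text, precision=3):
    """Pull the vertex and face data out of a Wavefront OBJ string and
    return a compact dict ready for json.dump(). Vertex normals are
    deliberately dropped (the viewer recomputes them), and coordinates
    are rounded to cut down on digits."""
    vertices, faces = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # vertex position: "v x y z"
            vertices.append([round(float(c), precision) for c in parts[1:4]])
        elif parts[0] == "f":      # face: entries like "3//1" or "3/2/1"
            # OBJ indices are 1-based; keep only the vertex index
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])
    return {"vertices": vertices, "faces": faces}

sample_obj = """v 0.123456 0.0 1.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1//1 2//1 3//1
"""
model = obj_to_json(sample_obj)
as_json = json.dumps(model)
```

Dropping the normals and rounding each coordinate saves only a few bytes per vertex, but it adds up to a useful reduction once there are tens of thousands of them.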

How not to convert quads to triangles

Once the model is loaded and processed, it can be displayed. One feature of WebGL is that (in common with the OpenGL ES API used on mobile devices) it doesn’t have any built in support for lighting and shading – all of that has to be coded manually, in shader programs that are compiled onto the graphics card at start up. While this does increase the learning curve significantly, it also allows for a lot of control over exactly how the lighting looks. This was useful for the Botanics model – after visiting the station in real life, one of my friends observed that photographing it is tricky due to the high contrast between the daylight pouring in through the roof vents and the dark corners that are in the shade. It turns out that getting the lighting for the model to look realistic is tricky for similar reasons.

The final model uses four distinct shader programs:

  1. A “full brightness” shader that doesn’t actually do any lighting calculations and just displays everything exactly as it is in the texture images. This is only used for the “heads up display” overlay (consisting of the map, the information text, the loading screen, etc.). I tried using it for the outdoor parts of the model as well but it looked rubbish.
  2. A simple directional light shader. This is what I eventually settled on for the outdoor parts of the model. It still doesn’t look great, but it’s a lot better than the full brightness one.
  3. A spotlight shader. This is used in the tunnels and also in some parts of the station itself. The single spotlight is used to simulate a torch beam coming from just below the camera and pointing forwards. There’s also a bit of ambient light so that the area outwith the torch beam isn’t completely black.
  4. A more complex shader that supports the torch beam as above, but also three other “spotlights” in fixed positions to represent the light pouring in through the roof vents. This is only used for elements of the model that are directly under the vents.
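The shaders themselves are written in GLSL, but the torch-beam test at the heart of the spotlight shaders is simple enough to sketch in plain Python. This is a rough reconstruction of the idea, not the actual shader code – the cutoff angle and ambient level are made-up values:

```python
import math

def spot_intensity(frag_pos, light_pos, spot_dir, cutoff_deg=20.0,
                   ambient=0.15):
    """Per-fragment torch beam: full brightness inside the cone,
    ambient only outside it. spot_dir must be a unit vector."""
    to_frag = [f - l for f, l in zip(frag_pos, light_pos)]
    length = math.sqrt(sum(c * c for c in to_frag))
    to_frag = [c / length for c in to_frag]
    # Angle between the beam axis and this fragment, via the dot product
    cos_angle = sum(a * b for a, b in zip(to_frag, spot_dir))
    if cos_angle > math.cos(math.radians(cutoff_deg)):
        return min(1.0, ambient + cos_angle)   # caught in the beam
    return ambient                             # the dark corners
```

Doing this test per-pixel rather than per-vertex is what keeps the edge of the beam circular instead of a polygonal mess.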
The full brightness shader in all its horrible glory

Although there’s no specular reflection in any of the shaders (I suspect it wouldn’t make a huge difference, as there aren’t many shiny surfaces in the station), the two with the spotlights are still quite heavyweight – for the torch beam to appear properly circular, almost everything has to be done per-pixel in the fragment shader. I’m not a shader expert, so there’s probably scope for making them more efficient, but for now they seem to run acceptably fast on the systems I’ve tested them on.

Can’t see the wood or the trees

In Part 1, I mentioned that the trees weren’t modelled in Blender like the rest of the model was. I considered doing this, but realised it would make the already quite large model files unacceptably huge. (Models of organic things such as plants, animals and humans tend to require far more vertices and polygons to look any good than models of architecture do). Instead I chose to implement a “tree generator” in JavaScript – so instead of having to save all of the bulky geometry for the trees to the model file, I could save a compact set of basic parameters, and the geometry itself would be generated in the browser and never have to be sent over the internet.
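The real generator implements a full published algorithm, but the principle – a compact parameter set expanding deterministically into bulky geometry in the browser – can be shown with a vastly simplified Python toy. Everything here (the parameter names, the recursive branching scheme) is invented for illustration:

```python
import math, random

def grow(params, seed=1):
    """Expand a tiny parameter dict into 2D line segments for a
    stylised tree, so only the dict needs transmitting."""
    rng = random.Random(seed)          # seeded: same params -> same tree
    segments = []

    def branch(x, y, angle, length, depth):
        if depth == 0 or length < 0.05:
            return
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        # Each branch sprouts two children, splayed left and right
        for sign in (-1, 1):
            spread = params["spread"] + rng.uniform(-0.1, 0.1)
            branch(x2, y2, angle + sign * spread,
                   length * params["shrink"], depth - 1)

    branch(0.0, 0.0, math.pi / 2, params["trunk"], params["levels"])
    return segments

tree = grow({"trunk": 2.0, "spread": 0.5, "shrink": 0.7, "levels": 4})
```

Because the random generator is seeded, the same handful of parameters always grows the same tree, so the geometry itself never has to travel over the network.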

A Black Tupelo with no leaves

The generator is based on the well-known tree modelling algorithm described by Weber and Penn in their paper “Creation and Rendering of Realistic Trees”. It took me weeks to get it working right, and by the end I never wanted to see another rotation matrix again as long as I lived. I wouldn’t be surprised if it fails for some obscure cases, but it now works for the example trees in the paper, and produces trees for the Botanics model that are probably better looking than anything I could model by hand. I didn’t mean to spend so much time on it, but hopefully I’ll be able to use it again for future projects, so it won’t have been wasted time.

A Black Tupelo with leaves

(Blender also has its own tree generator based on the same algorithm, called Sapling. I didn’t use it as it would have caused the same file size problem as modelling the trees manually in Blender would).

Spurred on by my success at generating the trees programmatically (eventually!), I decided to apply a similar concept to generating entire regions of woodland for the cutting at the Kirklee end of the tunnel. Given a base geometry to sprout from and some parameters to control the density, the types of trees to include, etc., the woodland generator pseudo-randomly places trees and plants into the 3D world, again only requiring a compact set of parameters to be present in the model file.
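The same idea in miniature: given a base region and a few numbers, a seeded generator can scatter a whole wood client-side. Again, this is an invented sketch rather than the real woodland generator, which sprouts from arbitrary 3D base geometry:

```python
import random

def scatter_wood(region, density, kinds, seed=42):
    """Pseudo-randomly place trees inside a rectangular base region.
    region = (x0, y0, x1, y1); density = trees per square unit.
    A fixed seed means every visitor sees the same woodland, even
    though only these few parameters travel over the network."""
    x0, y0, x1, y1 = region
    rng = random.Random(seed)
    count = int((x1 - x0) * (y1 - y0) * density)
    return [(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.choice(kinds))
            for _ in range(count)]

wood = scatter_wood((0, 0, 40, 10), 0.2, ["black_tupelo", "sapling"])
```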

The viewer also contains a texture overlay system, which is capable of adding graffiti, dirt, mineral deposits or whatever to a texture after it’s been downloaded. This is achieved by having a second, hidden HTML5 canvas on the page, on which the textures are composited before being sent to the GPU. (The same hidden canvas is also used for rendering text before it’s overlaid onto the main 3D view canvas, since the 2D text drawing functions can’t be used directly on a 3D canvas).

Why not just have pre-overlaid versions of the textures and download them along with the other textures? That would work, but would increase the size of the data needing to be downloaded: if you transferred both graffiti’d and non-graffiti’d versions of a brick wall texture (for example), you’d be transferring all of the detail of the bricks themselves twice. Whereas if you create the graffiti’d version in the browser, you can get away with transferring the brick texture once, along with a mostly transparent (and therefore much more compressible) file containing the graffiti image. You also gain flexibility as you can move the overlays around much more easily.
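The compositing itself is just standard source-over alpha blending, which the hidden canvas does for you when you draw one image over another. In Python terms, per pixel it amounts to something like this (a toy model with four-pixel “textures”):

```python
def composite(base, overlay):
    """Source-over alpha blending of an RGBA overlay onto an RGBA base.
    Pixels are (r, g, b, a) tuples with channels in 0..255."""
    out = []
    for (br, bg, bb, ba), (r, g, b, a) in zip(base, overlay):
        t = a / 255.0                  # overlay opacity for this pixel
        out.append((round(r * t + br * (1 - t)),
                    round(g * t + bg * (1 - t)),
                    round(b * t + bb * (1 - t)),
                    max(ba, a)))
    return out

brick    = [(180, 90, 60, 255)] * 4                   # opaque base texture
graffiti = [(0, 0, 0, 0)] * 3 + [(20, 200, 40, 255)]  # mostly transparent
```

A mostly transparent overlay compresses down to almost nothing, so the graffiti effectively travels for free.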

A selection of the station model's many items of graffiti

The rest of the code is reasonably straightforward. Input is captured using standard HTML event handlers, and the viewpoint moves through the model along the same curve used to apply the curve modifier in Blender. Other data in addition to the model geometry (for example the information text, the parameters and positions for the trees, etc.) is incorporated into the first JSON model file by the converter script so that it can be modified without changing the viewer code.

So that’s the viewer. Having never used WebGL and never coded anything of this level of complexity in JavaScript before, I’m impressed at how well it actually works. I certainly learned a lot in the process of making it, and I’m hoping to re-use as much of the code as possible for some future projects.


Making the Online Botanic Gardens Station Model (Part 1: The Model)

One of my “fun projects” this year has been to make an interactive model of the abandoned Botanic Gardens Station in Glasgow. Although I’ve dabbled in 3D modelling before, including making a documentary video about Scotland Street Tunnel last year, the Botanics project turned out to be by far the most complicated 3D thing I’ve made, as well as by far the most complicated bit of web coding to make a viewer for it. It’s been a lot of fun as well as a hell of a learning experience, so I thought I’d write it up here in case anyone is interested.

The finished model, viewed in Chrome for Linux

In Part 1, I’ll talk about making the actual 3D model. Part 2 will cover the viewer code that actually makes it possible to explore the model from the comfort of your web browser.

I made the station model using Blender, a very capable free, open source 3D package. While various software and hardware now exists that allows you to generate a 3D model automatically from photographs or video, I didn’t have access to or knowledge of it, and I’m not sure how well it would work in a confined and oddly shaped space like the Botanic Gardens Station anyway. So I did it the old fashioned way instead, using the photos I took when I explored the station as a reference and crafting the 3D model to match using Blender’s extensive modelling tools.

The whole model in Blender

I tried to keep the dimensions as close to reality as I could, using one grid square in Blender per metre, referring to the published sizes of the station and tunnels where possible, and estimating the scale of everything else as best I could.

It was actually surprisingly easy and quick to throw together a rough model of the station itself – most of the elements (the platforms, stairs, walls, roof, etc.) are made up of fairly simple geometric shapes and I had the basic structure there within a couple of hours. But as with a lot of these things, the devil is in the details and I spent countless more hours refining it and adding the trickier bits.

The beginnings of the station model

Because there’s quite a lot of repetition and symmetry in the station design, I was able to make use of some of Blender’s modifiers to massively simplify the task. The mirror modifier can be used for items that are symmetrical, allowing you to model only one side of something and have the mirror image of it magically appear for the other side. (In fact, apart from the roof the station is almost completely symmetrical, which saved me a lot of modelling time and effort). The array modifier is even more powerful: it can replicate a single model any number of times in any direction, which allowed me to model a single short section of roof or tunnel or wall and then have it stretch away into the distance with just a few clicks.

Tunnel, modelled with array modifier

Finally, the curve modifier was very valuable. The entire station (and much of the surrounding tunnel) is built on a slight curve, which would be a nightmare to model directly. But thanks to the curve modifier, I was able to model the station and tunnels as if they were completely straight, and then add the curve as a final step, which was much easier. (I still don’t find the curve modifier very intuitive; it took quite a lot of playing around and reading tutorials online to get the effect I wanted, and even now I don’t fully understand how I did it. But the important thing is, it works!).

Tunnel + curve modifier = curving tunnel

Texturing the model (that is, applying the images that are “pasted onto” the 3D surfaces to add details and make them look more realistic) turned out to be at least as tricky as getting the actual geometry right. The textures had been a major weak point of my Scotland Street model and I wanted much better ones for the Botanics. Eventually I discovered a great online texture resource, which had high quality images for almost everything I needed, under a license that allowed me to do what I wanted with them – this is where most of the textures for the model came from. The remainder are either hand drawn (the graffiti), extracted from my photos (the tunnel portal exteriors and the calcite), or generated by a program I wrote a while ago when I was experimenting with Perlin noise (some of the rusted metal).

The fiddly part was assigning texture co-ordinates to all the vertices in the model. I quickly discovered that it would have been much easier to do this as I went along, rather than completing all the geometry first and then going back to add textures later on (especially where I’d “applied” array modifiers, meaning that I now had to assign texture co-ordinates individually for each copy of the geometry instead of just doing it once). Lesson learned for next time. At first I found this stage of the process really difficult, but by the time I’d textured most of the model I was getting a much better feel for how it should be done.

The model in Blender, with textures applied

(The trees and bushes weren’t in fact modelled using Blender… more about them next time!).


A very geeky web project

Update: the Glasgow version of the map is now live!

My interest in railways started off about 3 years ago, as simply a desire to squeeze into disused and supposedly-sealed-up tunnels and take photos of them. Normal enough, you might think. But since then it’s grown into a more general interest. I’ve collected a lot of books on railways, especially the ones around Edinburgh and Glasgow (in fact, so many that I’m starting to fear for the structural integrity of my bookshelf). I haven’t yet graduated to skulking on station platforms in all weathers wearing a cagoule and meticulously writing down the numbers of all the passing trains, but it may just be a matter of time now.

Maybe I inherited it from my mother. She writes a whole blog about trains and railways, here.

My rapidly growing collection of railway books (minus a few that are scattered around the house, wherever I last read them)

One thing I found while researching the history of the rail network was that I always wanted more maps to help me visualise what was going on. There were a few good ones in the books, but I often found myself struggling to imagine how things were actually laid out in the past, and how the old lines fitted in with the present day railways. I wished there was some sort of interactive map out there that would let you change the date and watch how the railway network changed over time, but I couldn’t find anything like that (the closest thing I found was a “Railway Atlas” book that has a map of the present day network in each area with a map from 1922 on the opposite page). So I decided to make one.

(Actually, I decided to make two: one for Edinburgh and one for Glasgow. The Glasgow one is taking a bit longer due to the more complex network on that side of the country, but I’m hoping to release it soon).

The project fitted in well with some other things I’d been wanting to do as well. I’ve always had an interest in maps and have been collecting the Ordnance Survey 1:50000 series (among others) for most of my life now, so when I discovered that Ordnance Survey now release a lot of their data for free, I was excited at the possibilities. I knew that the OS OpenData would make a good basis for my railway maps. I’d also been wanting to experiment with some of the newer web technologies for a while, and coding the viewer for the maps seemed like a good opportunity to do that.

My (mostly) Ordnance Survey map collection. I don’t have a problem. Honest, I don’t. I can stop any time I want to.

As with a lot of projects, it seemed simple at first but once I actually started work on it, I quickly realised it was going to take longer than I thought. There were two main elements to it:

  1. The data sets. To be able to draw the map, I would need detailed data on all of the railway lines and stations in the Edinburgh and Glasgow areas, past and present, including their names, opening and closing dates, which companies built them, and so on. As far as I knew, this information didn’t even exist in any one single source, and if it did it was sure to be under copyright so I wouldn’t be able to just take it and use it. I was going to have to create the data sets pretty much from scratch.
  2. The viewer. Once I had the data, I needed to make a web page that could display it in the form I wanted. I already had quite a clear idea in my head of what this would look like: it would show the map (of course), which could be scrolled and zoomed just like Google or Bing Maps, and there would also be a slider for changing the date. The lines on the map would be colour coded to show which company they were owned by, or their current status, and special lines like tunnels and freight routes would also be shown differently.

It turned out I also needed to build a third major element as well: an editor for creating the data sets. Previously when I’d drawn maps, I’d either used the Google map maker (which has copyright problems if you want to actually use your creations for anything), or drawn them using Inkscape (which, great though it is, isn’t really designed for making maps in). I didn’t think either of those was going to cut it for this project… I needed something better, something that had all the features I needed, but was free from copyright issues. So I decided to make a map editor first.

Step 1: The Editor

At this point, anyone who’s a software engineer and has had it drummed into them “Don’t re-invent the wheel!” is probably shaking their head in exasperation. “You built your own map editor? Why would you do that? Surely there must be one out there already that you could have used!”. To be honest, I’m sure there was, but I don’t regret my decision to make my own. I had three good reasons for doing it that way:

  1. I would learn a lot more.
  2. I could make an editor that was very well suited to the maps I wanted to make. It would have all the features I needed, but wouldn’t be cluttered with extra ones I didn’t need. And I would know exactly how to use it, and would be able to change it if anything started to annoy me.
  3. It would be fun!

I’d had my eye on the Qt GUI toolkit for a while, wanting to give it a try and see if it was better than the others I’d used in the past. So I downloaded Qt Creator and got building.

Of course, I needed some map data first, so I downloaded one of the Ordnance Survey OpenData products: “OS OpenMap Local”, for grid squares NS and NT. (Ordnance Survey products don’t use the latitude and longitude co-ordinates familiar to users of Google Maps or OpenStreetMap; they have their own “National Grid” system that divides the UK into hundred kilometre squares, and uses numerical co-ordinates within those squares). These came in the form of two enormous (nearly a gigabyte altogether) GML files.
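Turning those two-letter squares into numeric co-ordinates is a nice little exercise in modular arithmetic. This Python sketch follows the standard published scheme (the letter I is skipped, and the grid’s false origin sits in square SV) – it isn’t code from the editor itself:

```python
def letters_to_origin(sq):
    """Easting/northing (in metres) of the south-west corner of a
    two-letter 100 km National Grid square such as 'NT'."""
    def index(c):
        i = ord(c) - ord("A")
        return i - 1 if i > 7 else i   # the letter I is not used
    l1, l2 = index(sq[0]), index(sq[1])
    easting = ((l1 - 2) % 5) * 500000 + (l2 % 5) * 100000
    northing = (19 - (l1 // 5) * 5 - l2 // 5) * 100000
    return easting, northing

def grid_ref_to_coords(ref):
    """Expand a reference like 'NT2573' to full numeric co-ordinates."""
    square, digits = ref[:2].upper(), ref[2:]
    e0, n0 = letters_to_origin(square)
    half = len(digits) // 2
    unit = 10 ** (5 - half)            # 4 digits -> 1 km precision
    return e0 + int(digits[:half]) * unit, n0 + int(digits[half:]) * unit
```

Squares NT and NS come out with origins (300000, 600000) and (200000, 600000) – the Edinburgh and Glasgow areas respectively.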

GML stands for “Geography Markup Language”, and is a standard XML grammar used for expressing geographical information. The contents of the OpenMap Local files are actually pretty simple conceptually; there’s just a hell of a lot of them! They mostly consist of great long lists of map elements (which can be areas such as forests or lakes or buildings, linear items like roads or railways, or point locations like railway stations) with names, national grid references, and any other relevant information. I wanted to use this information to display a background map in my map editor, on top of which I could draw out the railway routes for my interactive map.

I knew that parsing several hundred megabytes of XML data was likely to be pretty slow, and I didn’t really want the editor to have to do this every time I started it up, so I wrote a Python script that would trawl through the GML files and extract just the bits I was interested in, saving them in a much more compact file format for the actual editor to read.
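The script itself isn’t shown here, but the streaming approach is the important part: Python’s `xml.etree.ElementTree.iterparse` lets you walk a gigabyte of GML without ever holding it all in memory. A minimal sketch, using a made-up toy schema in place of the real OpenMap Local one:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def extract_names(gml_bytes, wanted_tag):
    """Stream through an XML file, collecting an attribute from just
    the elements we care about and discarding each one afterwards,
    so memory use stays flat regardless of file size."""
    found = []
    for _event, elem in ET.iterparse(BytesIO(gml_bytes), events=("end",)):
        if elem.tag == wanted_tag:
            found.append(elem.get("name"))
            elem.clear()               # free the parsed subtree
    return found

sample = b'<map><road name="A8"/><rail name="E&amp;G"/><road name="M8"/></map>'
roads = extract_names(sample, "road")
```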

Now I was onto the fun part: actually displaying the map data on the screen. Thankfully, Qt’s excellent graphics functionality was a great help here. After writing a quick function to translate OS national grid references to screen co-ordinates, and using it to project the map data onto the screen, I was looking at a crude map of Edinburgh. I then spent a while tweaking the details to get it to look the way I wanted: changing the colours of each type of element, changing the line widths for different types of road, and hiding the more minor details when the view was zoomed out. (OpenMap Local is very detailed and includes the outline of every single building, so trying to display all of that when you’re zoomed out far enough to see an entire city results in a very cluttered map, not to mention one that draws very slowly!)

Edinburgh, courtesy of Ordnance Survey's OpenData, and my map editor.

Once I had the background map displaying to my satisfaction, I turned my attention to the actual editing functions and finding a suitable way to store the data for the railway map…

Step 2: The Data

The data model for the interactive map is pretty simple. The three main concepts are: segments (simple sections of track without any junctions), stations (pretty self explanatory I hope) and events. An event is a change in one of the segments’ or stations’ properties at a certain date. For example, the segment that represents Scotland Street Tunnel has an event in 1847 when it came into use (a “change of status” event), another in 1862 when it was taken over by the North British Railway company (a “change of company” event), and another in 1868 when it was abandoned (another “change of status”). When the events are complete and accurate, this gives the viewer all the information it needs to work out how the map should look at any particular date. For a file format, I decided on JSON – it was straightforward, easy to access from both Qt and JavaScript, and easy to inspect and edit by hand for debugging.
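To make the events concrete, here is how the Scotland Street Tunnel example might look as data, together with the kind of lookup the viewer has to perform. The key names are illustrative – the real JSON files may well use different ones:

```python
def state_at(events, date):
    """Replay a segment's date-sorted event list and return its
    properties as they stood at the given date."""
    state = {"status": "not_built", "company": None}
    for event in sorted(events, key=lambda e: e["date"]):
        if event["date"] > date:
            break
        state[event["change"]] = event["value"]
    return state

scotland_street = [
    {"date": 1847, "change": "status",  "value": "open"},
    {"date": 1862, "change": "company", "value": "North British Railway"},
    {"date": 1868, "change": "status",  "value": "abandoned"},
]
```

The viewer only needs to replay each segment’s events up to the slider’s date to know how that segment should be drawn.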

Editing the data for Scotland Street Tunnel

I considered storing the data in a database rather than a file and having the viewer page query it in the background to retrieve whatever data it needed. But for this particular application, the data is relatively small (about 150KB for the Edinburgh map), and the viewer needs almost all of it pretty much straight away, so throwing a database into the mix would just have added complexity for no good reason.

Creating the data set was by far the most time-consuming part of the whole process. Every railway line and station, past and present, had to be painstakingly added to the map, and then all of the event dates had to be input. I collated the information from many different sources: present-day railway lines are part of the Ordnance Survey OpenData that I was using for the background map, so it was easy enough to trace over those. However, disused lines are not included, so I had to refer to old maps to see their routes and then draw them onto my map as best I could. For the dates, I referred to several books and websites – “An Illustrated History of Edinburgh’s Railways”, and the corresponding volume for Glasgow, were particularly valuable. Where possible, the event dates are accurate to the nearest day, although the current viewer only cares about the year.

The whole data set for Edinburgh, loaded into the editor

I think I made the right choice in creating my own map editor – if I’d used existing software, it’s doubtful that I would have got the maps done any more quickly. There would have been a learning curve, of course, and even once I’d got past that, I doubt I would have been as productive in a general-purpose map editor as I was in my specialised one.

Step 3: The Viewer

The viewer was the final piece of the jigsaw, and although I’d given it some thought, I didn’t properly start work on it until the Edinburgh map data was nearly completed. Unlike for the editor, there was only one real choice of technology for the viewer – if I wanted it to run on a web page and work across virtually all modern devices, it was going to have to be HTML5.

HTML5 extends previous versions of HTML with new elements like the canvas tag, which allows graphics to be rendered in real-time from JavaScript – in days gone by, this required a plug-in such as Flash or Java, but now it can be done in a vanilla browser without anything added. I hadn’t used the canvas before, but a bit of quick experimentation confirmed that it was more than capable of doing everything I needed for my interactive map. I also made use of the JQuery library to simplify operations such as fetching the map data from the web server in the background.

First, I wrote a small library of line drawing routines for all the different sorts of railways: dashed lines for tunnels, crossed lines for freight, and dashed-line-within-solid-line for single track railways (as used on some OS maps). These aren’t supported directly by the canvas, but it only took just over a hundred lines of JavaScript code to add them. Then I was ready to build a map renderer on top.
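As a flavour of what those routines involve: a dashed tunnel line boils down to chopping each run of track into on/off intervals before stroking the “on” ones. A simplified Python version for a straight run (the real code is JavaScript and works along arbitrary polylines):

```python
def dash_segments(length, dash=8.0, gap=4.0):
    """Split a straight run of the given length into the on/off pattern
    used for dashed (tunnel) lines: returns the (start, end) distances
    of the visible dashes, clipped to the end of the run."""
    segments, pos = [], 0.0
    while pos < length:
        end = min(pos + dash, length)  # don't overshoot the endpoint
        segments.append((pos, end))
        pos = end + gap
    return segments
```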

Different line styles and their uses

I had a basic version up and running pretty quickly, but it took a lot longer to implement all the features I wanted: background images, scrolling and zooming, the slider for changing the date, clicking on items for more information. Getting the background images lined up perfectly with the lines and stations turned out to be the trickiest part, though it really shouldn’t have been hard. It took me an embarrassingly long time of debugging before I realised I was truncating a scaling factor to two decimal places in one place but not in another, and once that was fixed everything looked fine.
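The bug is easy to reproduce: truncate a scale factor in one code path but not the other, and the two layers drift apart by an amount that grows with distance from the origin. The numbers below are made up for illustration:

```python
def to_screen(world_x, scale, truncate=False):
    """Map a world coordinate to pixels. Truncating the scale factor
    to two decimal places, as the bug did in one code path, shifts
    everything by an error that grows with the coordinate."""
    if truncate:
        scale = int(scale * 100) / 100
    return world_x * scale

scale = 0.1575                      # an illustrative zoom level
x = 20000                           # metres from the map origin
drift = to_screen(x, scale) - to_screen(x, scale, truncate=True)
```

At city scale that drift is a sizeable fraction of the screen, which is why the background images looked so badly misaligned until the truncation was fixed.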

It lives! The finished product

There are still a few things that annoy me about the end product (the mobile browser and touch screen support, especially, could be better), but overall I’m pretty happy with it. It was fun to create and I learned a lot in the process… about the history of the local railways of course; about how geographical data is stored and processed; about programming GUIs with Qt; and about creating interactive graphics using HTML5.


Android Emulators Update

I just made a minor update to my Android emulators for 8-bit machines (the Raspberry Pi versions have not been changed). Since I updated my HTC One X to Android 4.1.1, the sound in all three of the emulators had been really horrible and distorted (yes, even more so than usual 😉 ). So it seemed a good time to update them to use 16-bit sound output, which seems to be better supported in Android. It turns out that 8-bit samples, which I was using before, aren’t actually guaranteed to work at all on every device, so this change would have been worth making even without the sudden appearance of the distortion.
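For anyone curious, the conversion from the old 8-bit samples to 16-bit ones is trivial – recentre around zero, then scale up. A quick Python illustration of the mapping (the emulators themselves do this natively, of course):

```python
def to_16bit(samples_8bit):
    """Convert unsigned 8-bit PCM samples (0..255, centred on 128) to
    the signed 16-bit range (-32768..32767), which Android guarantees
    devices will support."""
    return [(s - 128) << 8 for s in samples_8bit]
```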

Nothing else has changed except that they’re now being built with a newer version of the Android SDK; however, they should still work on all devices back to Android 2.1, and indeed they do still work on my old Wildfire. Please let me know if you encounter any problems.

Much as I like Android and Google and HTC in some ways, they do seem to like changing things that worked perfectly well already, and not always for the better. Almost every system update for my phone seems to turn into a fresh game of hunt-the-process-that’s-draining-the-whole-battery-and-guess-how-to-make-it-stop… including the ones that claim to improve battery life. And the latest update not only broke 8-bit sound, the phone also refuses point blank to talk to my desktop PC anymore, either as a USB disk drive or for app debugging purposes – both worked fine before. Ah well… got to keep the users and developers on their toes, I guess.


Let me be the first (err, second actually) to say I’ll miss netbooks

I was interested to see this article in the Register. The majority of the comment online about the death of netbooks seems to be along the lines of “Tablets are so much cooler and slicker, netbooks were clunky and annoying to use and who really needs a full PC these days anyway, especially when they’re travelling? Hardly anyone, that’s who. Good riddance netbooks”. But I for one am disappointed that they’ve stopped making them… I can’t see that anything else is going to meet my needs quite so well when I’m travelling… and finally someone agrees with me!

I took my HP netbook running Xubuntu away with me several times last year. I always found it useful, but on the three trips where I combined work with pleasure, it was indispensable. It was light enough to carry around in my backpack without taking up half my cabin baggage allowance or knackering my shoulders. It was cheap enough that if it did get damaged or stolen it wouldn’t be the end of the world (yes, I do have insurance, but you never know when they’re going to worm their way out of actually paying up). Its battery lasts a genuine six hours on a single charge, even when I’m actually doing work on it. It has a proper (if fairly small) keyboard so typing emails or documents on it doesn’t make me lose the will to live. It has enough storage space to keep most of my important files locally in case I can’t get online.


Most of all, it actually runs a proper full operating system! This isn’t something I’m just arbitrarily demanding because I’m a technology snob. I really do need it and do make use of it. At my technical meeting in Madrid in September, I was running a Tomcat web server, a MySQL database server, a RabbitMQ server running on top of an Erlang interpreter, and a full Java development environment. Try doing that on an iPad or an Android tablet! You might think all of that would be pretty painful on a single core Atom with 2GB of memory, but it actually ran surprisingly well. I wouldn’t want to work like that all the time but for a three day meeting it was perfectly adequate and usable. The full OS also means I can encrypt the whole disk which gives me a lot of peace of mind that my files are secure even if the thing does get stolen.

But now I’m starting to get worried about what I’m going to replace it with when the netbook finally departs for the great electronics recycling centre in the sky. Despite the market being flooded with all sorts of portable computing devices, I can’t see any that are going to do what I want quite so well as the netbook did.

Get a tablet? Urgh, no thanks… I’m sure they have their place, but even if I added a proper keyboard there is no way I’d get all that development software to run on Android or iOS. OK, I wouldn’t be surprised if there is some way to hack some of it into working on Android, but Android is hardly a standard or well supported environment for it. It’s not going to Just Work the way it does on a standard PC running Windows or Ubuntu.

Get a Microsoft Surface Pro? This tablet actually does run a full version of Windows 8 (or will when it comes out), but at $900 it costs nearly three times as much as my netbook did. I couldn’t justify spending that on something I’m going to throw into my backpack and take all over the place with me. I’d be constantly worrying it was going to get broken or stolen.

Get an “ultrabook”? Again, it would do the things I need, but it would cost WAY more than the netbook, would almost certainly weigh a lot more, and I’d be very surprised if it had comparable battery life either (at least not without spending even more money on SSDs, spare batteries, etc.). For the “pleasure” part of my Madrid trip I was staying in a hostel room with seven other people. There was ONE power socket between the eight of us. When travelling, battery life really does matter.

Get a Chromebook and install a full Linux distribution on it? This is actually the option I’d lean towards at present. Chromebooks have price, portability and battery life on their side and apparently are easy to install Linux on. The downsides would be the ARM processor (which could limit software compatibility as well as making even the lowly Atom look blazingly fast in comparison), and the lack of local storage (Chromebooks generally seem to have a few gigabytes of storage; my netbook has a few hundred!). So, still not an ideal option, but unless some enterprising company resurrects the netbook concept, it could be the best of a bad lot :(.

(I freely admit I’m in a small minority here… not many people need to run multiple servers on their computer while travelling, and not many of those that do tend to extend their business trips with nights in hostels. But that doesn’t stop it being annoying that something that met my needs perfectly is no longer being made 😉 ).

Fossil SCM

In the course of trying to organise my life enough that I don’t feel I’m being crushed under the weight of a disorganised mass of files, papers and projects, I discovered a very neat little software tool called Fossil. (This probably isn’t of interest unless you develop your own software or at least do some sort of creative projects using a computer, so feel free to skip it if you don’t. The rest of you, read on!).

For a while I’d half-heartedly wondered about setting up some kind of source code management system and maybe also a bug tracking system for my own projects. It’s not so critical to have all this stuff on a single person project as it is on a bigger collaborative work, but at times things can still get disorganised enough to be a pain, especially if you have a bit of a break and then try to come back to it later. Where did I put the most up-to-date copy of that emulator’s code again? Is it on my desktop machine with all the other projects? Or did I make some updates on my old laptop? Maybe the master copy is on an external drive somewhere, or in that folder called “new”? Oh, now I remember, last time I worked on it was on the Raspberry Pi, but the changes I made there probably won’t work on any other machine. Come to think of it, what do I want to do on it next anyway? I know there was a list of bugs and missing features… it was in my green notebook I think… or maybe it’s in a text file in my Dropbox folder… or did I have a fit of organisation one day and add them all to TaskCoach as individual tasks? Aaargh!

I liked the idea of having a single repository of the code in one place so I always know where it is, and being able to get earlier versions back when I break it would be great as well. A ticket system for the bugs and missing features would also be useful. I considered installing a Trac server somewhere – I’d used it on work projects and liked the way it gave you source code management, a wiki and bug tracking/ticketing system all in one place – but I quickly gave up on that when I saw the length and complexity of the installation instructions. Using Trac for projects like mine would be like hiring a combine harvester, learning how to drive it and then using it to give your front lawn a quick trim.

Enter Fossil! I can’t remember where I heard about it, I think it was a forum post somewhere, but I’m very glad I did. It does all of those things perfectly adequately for small projects (probably even for medium sized multi-person projects) and is much, much more lightweight and easy to install than the likes of Trac or GForge. In fact it doesn’t even need to be installed at all – it comes as a single executable (available for Windows, Mac, Linux and others) with no dependencies on other software at all. You just download it, and away you go. Each project repository you create with it is also stored in a single file so it’s easy to keep track of it, move it around, back it up, etc.

You can access the source control functionality from the command line (commands like “fossil add”, “fossil commit”, clearly reminiscent of well-established SCMs like CVS and SVN), or you can fire up the web interface in your browser by typing “fossil ui”, and that gives you access to the other functions like the wiki and the ticketing system. I’ve been using it for a while now on one of my projects and definitely intend to use it for the others as well once I start working on them more seriously. I must say, so far I’m quite blown away by how capable it is; shoehorning a fully working source code manager, bug tracker and wiki into such a tiny, fast and easy to use package is an amazing achievement. I’m always apprehensive about trying out new development tools. It’s pretty common to be left high and dry with an incomprehensible error message because your version of some library you’ve never even heard of is version 1.2.29 instead of 1.2.28, but Fossil has none of those issues. It Just Works.
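A typical first session is only a handful of commands. This is just a sketch (the file and directory names are hypothetical, not from my actual projects):

```shell
# Create a new repository - a single file you can back up or move around.
fossil init ~/repos/emulator.fossil

# Open a working checkout of it in a source directory.
mkdir -p ~/src/emulator && cd ~/src/emulator
fossil open ~/repos/emulator.fossil

# Day-to-day source control, much like CVS or SVN.
fossil add main.c cpu.c
fossil commit -m "Initial import of the emulator core"

# Launch the built-in web interface (timeline, wiki, tickets) in a browser.
fossil ui
```

The `fossil ui` command starts a tiny local web server and opens your browser at it, which is where the wiki and ticketing live.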

(I haven’t tried to use some of the more advanced functionality such as the distributed SCM, so I can’t really comment on that. But for small projects using a local repository, it seems ideal).

Projects Update

(This post is mainly an attempt to give myself a gentle kick up the bum towards doing something about all this stuff).

So… it’s nearly 3 months since I posted about my personal projects, so it must be time for an update. Generally I haven’t got as much done on them as I’d hoped; travelling the world and playing with geek-toys has taken up a lot of my time over the past few weeks. But looking down my list and thinking about what I’ve achieved, I can see that it hasn’t been quite as bleak as I feared. And now I have Luna and a whole month (well, nearly) of not travelling anywhere at my disposal, I should be able to make some more progress.

Projects Bubble, Everything and Chippy are not really my responsibility to keep on track. There was a tentative plan to do something on Chippy back in June, but it was scuppered by a very full schedule and a hair dye disaster. It would be nice if more was happening on them (especially Bubble), but I’m not going to beat myself up over the fact that it hasn’t yet.

Project Hohoho: the funding campaign is now over and we raised a respectable amount :). First actual filming commences soon, though I probably shouldn’t say any more about it just now as the plans are still being kept under a certain sandwich-like food item (watch the pitch video!).

Project Noah is one of my major paid work projects. It’s coming along very nicely (apart from a slight setback involving a crucial building being full of asbestos and possibly having to be evacuated for an extended period while they get rid of it). I have an idea for a blog entry I want to post about this as I do think it’s really interesting stuff… it will take a bit of preparation though.

Project Bits: This is maybe the one I feel is most important but it seems slow to get started. I did a bit of writing and a bit of general planning work and research. It’s become more and more ambitious in my mind, which is probably a good thing in that it might help to differentiate it from anything similar that’s out there, but a bad thing in terms of making it less likely to actually get finished. I definitely need to organise it and work out what exactly I want to do.

Project Buster: not much progress. I downloaded a whole load of stuff for it onto my new computer but haven’t had time to do much with it yet. In my head it’s starting to become a bit more concrete, and form tentative links with Projects IOM and Fantasy World.

Project IOM: I was sort of hoping for some nice summer evenings as they would have given me a chance to do more of this. So far I’ve been disappointed :(. Let’s hope August and September are nicer.

Project X-Ray: haven’t done much, but it’s sort of linking up in my head with Project Fantasy World, which is going a bit better… and I have a more definite (but probably impossible) idea for it.

Project Megadroid: this one actually is going OK, after a quiet spell. Getting the new phone has helped it along rather a lot. So has something else that I may blog about separately.

Project History: making a lot of progress on this lately, again after a quiet spell. The first thing that needs to be done on it is quite a laborious task but the end is now in sight!

Projects Classical, New Leaf and Tridextrous haven’t got far. New Leaf really shouldn’t be hard to get finished but other things keep distracting me.

Project Fantasy World: this was possibly the vaguest idea of them all, but it’s taken shape in my head and started to connect with X-Ray, Buster and Bits. I’ve been playing with some software that could help with it and getting further than I expected to.

Project Bonkers: … um, yeah.

I do feel a bit more inspired now :). Hopefully next time I post about one of these it won’t be in quite such vague and meaningless terms!

Second Helping of Pi

Unsurprisingly, I found a few spare hours this weekend to work more on the Raspberry Pi. (Though I was very restrained and didn’t work on it non-stop… did still go dancing one night and out for a walk to take some nice photos yesterday afternoon. I know what it does to my mood if I spend a whole weekend cooped up coding, even if I am tempted to at the time).

First I finished up the Master System emulator. I added in a border to stop the graphics going off the edge of the screen, then turned my attention to the more challenging requirements: keyboard input, sound, and timing.

Getting input from the keyboard isn’t usually a particularly challenging thing to do… not for most programs, anyway. But for console emulators it’s a bit more involved, for two reasons:

  • we want to be able to detect when a key is released, as well as when it’s pressed
  • we want to be able to detect multiple keys being pressed at once (for example, the right arrow and the jump key)

I tried various ways of doing this – firstly, the way I used in emulators I’d written for Windows previously: the SDL library (this library can do lots of handy things and keyboard input is only one of them). But although the library was installed on the Raspberry Pi and I was able to link to it, I couldn’t detect any keyboard activity with it. Eventually I found out you can perform some arcane Linux system calls to switch the keyboard into a different mode that provides exactly the information I needed. This only works from the real command line, not from X Windows, but it was better than nothing. (You also have to be very careful to switch the keyboard back to its normal mode when your program exits, otherwise the computer will be stuck in a strange state where none of the keys do what they’re supposed to do, with probably no way out other than turning it off and on again!). I still want to find a way to make it work in X Windows, but that’s a project for another day.
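Once the keyboard is in raw mode and delivering separate press and release events, the bookkeeping itself is simple. Here is a minimal sketch of the idea in Python (hypothetical key names, not my actual C code, which reads raw scancodes from the console):

```python
# Track which keys are currently held down, given a stream of
# (keycode, pressed) events. Because every release is reported
# explicitly, simultaneous keys (e.g. right + jump) just work.
class KeyState:
    def __init__(self):
        self.down = set()  # all keys currently held

    def handle(self, keycode, pressed):
        if pressed:
            self.down.add(keycode)
        else:
            self.down.discard(keycode)

    def is_down(self, keycode):
        return keycode in self.down

keys = KeyState()
keys.handle("RIGHT", True)   # player holds right...
keys.handle("JUMP", True)    # ...and presses jump at the same time
keys.handle("RIGHT", False)  # right arrow released, jump still held
```

The emulator then just polls `is_down()` for each console button every frame.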

(I wrote a more technical blog post here about the keyboard code in case anyone wants to use it).

While reading the keyboard turned out to be a bit harder than I’d hoped, this was more than made up for by how easy it was to get the sound working. In fact I found I was able to re-use most of the code from the audio playing example program that came with the Pi. The only slight strangeness was that it seems to only support 16 or 32 bits per sample rather than the more standard 8 or 16, but it’s easy enough to convert the 8 bit samples generated by my Master System sound code to 16 bit. I didn’t know whether the Pi was expecting signed or unsigned samples, but the din of horribly distorted noise that greeted me the first time I tested the emulator with sound confirmed that it was the opposite of whatever I was giving it. That was easy enough to fix too.
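Both fixes are one-liners. The 8-to-16-bit conversion just recentres and scales each sample, and it also shows why the signedness mismatch sounded so awful: unsigned silence sits at 128, which a signed output interprets as a loud DC offset. A sketch (not the actual emulator code, which does this in assembly):

```python
def u8_to_s16(samples):
    """Convert unsigned 8-bit PCM (0..255, silence at 128) to
    signed 16-bit PCM (-32768..32512, silence at 0)."""
    return [(s - 128) << 8 for s in samples]

# Silence stays silent, and the extremes map to the 16-bit extremes:
quiet = u8_to_s16([128])      # -> [0]
loud = u8_to_s16([0, 255])    # -> [-32768, 32512]
```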

As for the timing, it turned out to be a non-issue – the sound playing code will block until it’s ready for the next batch of sound data anyway, so this will keep the emulation running at the correct speed. (Actually it’s a non-issue for another reason as well, but I’ll get to that later).
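The blocking write effectively turns the audio hardware into a metronome: each frame the emulator produces one frame's worth of samples, and the write won't return until the device has room for them. A toy simulation of the idea (the sleep stands in for a real blocking audio device; the rates are just illustrative):

```python
import time

SAMPLE_RATE = 22050
SAMPLES_PER_FRAME = SAMPLE_RATE // 60  # one video frame's worth of audio

def play_blocking(samples):
    # A real audio device blocks until the hardware drains the buffer;
    # simulate that by sleeping for the buffer's playback duration.
    time.sleep(len(samples) / SAMPLE_RATE)

start = time.monotonic()
for frame in range(6):
    # ...emulate one frame of CPU/graphics here...
    play_blocking([0] * SAMPLES_PER_FRAME)  # paces the loop to ~60fps
elapsed = time.monotonic() - start  # roughly 6/60 of a second
```

So as long as the emulator can generate frames faster than real time, the audio output alone holds it at the correct speed.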

(It’s amazing how enormous the pixels look now. I’m sure they never did when I was playing with a real Master System on a telly almost as big back in the 90s. I suspect it was just the general blurriness of TVs back then that masked how low resolution the graphics really are).

Since my first Raspberry Pi emulator had been easier than expected, I decided to port another one – my Android Gameboy emulator should be do-able by welding the head of Raspberry Pi-specific code I’d just written for the Master System one onto the body of behind-the-scenes code from the original Android version of the Gameboy and making a few important tweaks to make them look as if they match up. So that was what I did.

“This’ll be a breeze”, I smugly thought. “I’ll be done in a few minutes!”. But it wasn’t quite that easy…

I was mostly done in a few minutes (well, maybe half an hour) – graphics were working and I could play Tetris or Mario. But the sound was horrible. Really horrible. Not just normal-Gameboy-music level of horrible… something was clearly very wrong with it. I checked and double checked the code over and over but still couldn’t see the bug. I hadn’t changed the sound output code very much from the Master System, apart from changing the sample rate slightly and switching from mono to stereo. I switched back to mono again. No change. I tried a more standard sample rate (22050Hz instead of 18480Hz). Nope, now it’s horrible and completely the wrong pitch.

I puzzled over this one for a long time. I tried various other things I could think of, rewriting the code in different and increasingly unlikely ways, but nothing seemed to make a difference. The only thing I established was that the sound buffer was either not being completely filled or was underflowing – when I tried filling it with a constant value instead of the Gameboy’s sound output, I still got the horrible noise (a constant value should give silence). But why??

Eventually I cracked it, and learnt something in the process. I noticed that Mario seemed to be running a little bit slower than it should, and I wondered if the emulator was not actually running fast enough to process a frame before the sound buffer ran out. That would certainly explain the sound problem… but didn’t seem like it should be happening. The same emulator code had no trouble reaching full speed on my slower HTC Wildfire, it should be no problem for the Pi to manage it as well. On a hunch, I tried reducing the sound sample rate quite a lot. Finally a change! Sure, the game was running slower and the music was now sounding like a tape recorder with a dying battery… but for the first time the horrible noise was gone! Then I had a thought: what if the graphics code is locking to the frame rate of the TV? The Gameboy screen updates at 60Hz, but UK TVs only update at 50Hz. Trying to display 60 frames in a second when each frame is waiting one-fiftieth of a second is not likely to work very well. Sure enough, only outputting every second frame (so running at 30 frames per second instead of 60) cured the problem completely. It had never occurred to me that this could happen… I was so used to programming PCs, where the monitors have all run at 60Hz or more for decades, that I forgot the little Pi connected to my TV would be different.
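The fix itself is just frame skipping: emulate every frame so the game logic runs at full speed, but only push every second frame to the display, so a 60Hz game never has to wait for more 50Hz vsyncs than a second can hold. A sketch of the selection logic (in Python rather than the emulator's ARM assembly):

```python
def rendered_frames(emulated_frames, skip=2):
    """Return which emulated frame numbers actually get displayed
    when we only render one frame in every `skip`."""
    return [f for f in range(emulated_frames) if f % skip == 0]

# 60 emulated frames per second -> 30 displayed, comfortably
# under the 50 vsyncs per second a UK TV provides.
shown = rendered_frames(60)
```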

Anyway… I decided to tidy up the code and release it in case it’s of interest to anyone. So if you head on over to my emulators page, you can now download the source code of both emulators for the Raspberry Pi along with detailed instructions for using them. Enjoy 🙂

(A word of warning… I wouldn’t say they were examples of good programming practice. The CPU cores and graphics and sound rendering code are written in ARM assembly language, which I only did because I wanted to learn it at the time – C would be a better idea if you wanted to write an emulator that’s easy to maintain and extend, and probably would be fast enough to emulate the old 8-bit systems).

(Another word of warning… I have some more things in the pipeline that might be more interesting than these two 😀 ).

It’s Pi-day :D

GCat’s adventures with a credit card-sized computer.

It’s months now since I blogged about the Raspberry Pi. At the time I said I was getting really excited about it. Well, my excitement did start to wane a bit after getting up at 5.45am on the release day (February 29th) only to find the level of interest had practically melted the servers of both supplying companies and there was very little chance of getting hold of one any time soon. I was still intending to buy one when some of the mayhem had died down, but I hadn’t given it so much thought lately. Then suddenly yesterday one of my colleagues walked into my office without any warning and handed one to me!

I couldn’t wait to give it a try. Unfortunately I didn’t have a screen in the office that it could hook up to immediately (it needs HDMI or composite; VGA or DVI monitor plugs are no use) so all I could do was download the software ready to try it out (it needs a custom version of Linux on an SD card) while casting occasional excited glances at the box. But luckily there’s a nice HDMI TV in my living room…

My first reaction was: wow, this thing really is tiny! I mean, I knew it was credit card-sized and all, but even so, it’s still hard to believe just how small it is until you see one in the flesh, so to speak. I was even more amazed by the size of the main processor (the black square chip just by my fingernail in the photo, and about the same size as the fingernail!).

Hooking everything up to it reminded me of connecting up one of our old computers and brought back happy memories of geekily spent Christmases and so on. In the picture, the power is coming from my HTC phone charger and going into the micro USB connector on the lower left corner. The SD card with the Linux OS is the blue thing protruding out from underneath the board just by the power connector. The grey plug going into the near side is the HDMI cable to my television. The green cable coiling round the whole thing is ethernet to connect it to the internet (it doesn’t have built in wifi so it needs either a cable connection or an external USB wifi dongle). Finally, the two black plugs next to the ethernet are my ordinary USB keyboard and mouse.

With trepidation, I double checked all the connections and then turned the power on. Would it work? I’d seen reports that certain SD cards wouldn’t work properly so I knew there was a chance I’d got a bad one or that I’d messed up the OS install.

Success! I could see the raspberry logo on the screen and the Linux boot messages scrolling past (looking very tiny in full 1080p resolution). Soon I had the desktop environment running and was verifying that it was indeed capable of viewing pointless web pages.

It was pretty easy to get up and running by following the quick-start instructions on the Raspberry Pi website. It was a little bit sluggish for browsing the net, but that’s to be expected with such a low-powered machine with a chip designed for mobile phones but running a full desktop system. Apparently this will get better once X Windows (the software that provides the graphical user interface on Linux) is using the Raspberry Pi’s rather capable GPU to do most of the drawing instead of doing everything on the main processor as it is at present.

But nice though it was to see my blog on the big screen courtesy of the Pi, I was more interested in getting some of my own code up and running on it. After a quick break to redo the partitioning on the SD card (so that I could use the full 16GB of space rather than the default of less than 2GB) and install my favourite geeky text editor, it was time to delve into the code examples.

As the Raspberry Pi is intended for teaching programming, it comes with some nice example programs showing how to make it do various things (play video, play sound, display 3D graphics, etc.). I’d decided for my first project I was going to try and get one of my emulators up and running on it; the architecture is actually very similar to my phone’s so even though the emulators contain quite a lot of assembly language code that would have no chance of working on a normal PC, they should work on the Pi without too much trouble. I decided to start with the Master System one as it’s a bit simpler than the others.

After an hour or two of hacking, I had something working.

As expected I didn’t need to change very much in the code. I just replaced the “top layer” that previously communicated with the Android operating system with a new bit of code to send the graphics to the Raspberry Pi GPU via OpenGL ES. (Although that’s mainly for 3D graphics, you can do some nice 2D graphics on it too if you more or less just ignore the third dimension).

The emulator isn’t fully working yet… there’s no sound (I need to look at the sound example that came with the Pi but it shouldn’t be too hard), no way to actually control it (that screenshot is just showing the demo running on its own – I need to figure out how to get key presses in the right form), and there are a few other glitches (the graphics seem to extend slightly off the edges of the screen and the timing is a bit off). But overall I’m reasonably pleased with my first few hours with a Pi 🙂

Update: the Master System emulator is now closer to being finished and you can download it from here.

Projects, projects, projects…

This is heavily inspired by (read “ripped off from” 😉 ) a post on my brother’s blog.

I also have a bunch of creative projects on the go. Well actually, a lot of them are not quite so on-the-go as I would like, in fact some seem to be terminally stuck not going anywhere. Maybe talking about them a bit more publicly will inspire me to get them going again.

I’ve always been like this, I think. Ever since I was quite small I would come home from school and spend most of my free time writing stories, messing around making things on the computer, drawing maps of places I found interesting, or learning new music on the piano. I never saw the appeal of spending hours in front of the TV (I still don’t), and although I did play a lot of computer games, I must have spent at least as much time designing and writing my own as I did playing other people’s.

Now that I’ve got a full time job it’s a bit harder to find the time to do all that kind of stuff. But because it’s important to me, I still try. I’ve already blogged from time to time about my Android app making, my band, our film group, (on my other blog) one of my home-made computer games, and piano playing. To try and organise things a bit better and prioritise the stuff that’s really important to me, I decided to make a list and give them all codenames like Alex did in his blog. Here is the list, along with a little symbol of some kind for each one. Some of these overlap with Alex’s ones because they’re group projects of some kind – they have the same names that he gave them. Some of them are slightly ill-defined and are really catch-alls for a whole possible area of creativity that I might be interested in experimenting with later on. Some are much more specific. OK, on with the list!

Project Bubble – this is the codename for our next Sonic Triangle EP, which has been in production for quite a while now. Alex already wrote a whole post about it so I won’t say much here.

Project Hohoho – the Beyond Studios Advent Calendar! Alex and I have both already written whole posts about this so again I won’t say much here.

Project Everything – this is really Alex’s project and I don’t know if he wants to reveal what it is yet, so I won’t.

Project Chippy – Alex’s web series!

Project Noah – this is actually a work (as in paid work) project. I need to find out whether I’m allowed to blog about it or not. I probably will be able to, and I hope I am, because I think it’s really interesting.

Project Bits – this one’s computer related and probably way over-ambitious, but at least I’ve been managing to make some progress on it lately.

Project Buster – one of the sort of vague, catch-all ones.

Project IOM – this one has been coming along quite nicely, before I even decided to make it a Project with a defined end goal. It’s nice because unlike most of the others it involves leaving the house quite a lot.

Project X-ray – another of the vague, catch-all ones… including ideas that are probably also way over-ambitious, but might be fun to play around with anyway.

Project Megadroid – if you’ve paid attention to my previous blog posts you can probably work out exactly what this one is just from its symbol and name. But anyway… it’s one of the few that’s (a) got a well defined goal, and (b) probably isn’t too far from reaching it… yay! It’s been taking a bit of a hiatus recently but thinking about it is starting to tickle my interest again, so maybe I’ll finally get it finished (and release it on here).

Project History – this one is journaling-related. It probably deserves its own post at some point.

Project Classical – another one that’s probably quite obvious from the name and pic.

Project New Leaf – a nice, hopefully quick and simple but very rewarding little Project that will help with some of the others once it’s done. I won’t say more than that because I’m saving it for its own blog post.

Project Tridextrous – ambitious, probably slightly insane, may never happen.

Project Fantasy World – very broad, catch-all project… no definite plans in it yet but an area I’m still interested in returning to.

Project Bonkers – … um, yeah.

So that’s them. Some of them will hopefully get their own posts soon and hopefully having a place to write about progress will inspire me to actually make some progress to write about.