Glasgow to Kilsyth canal walk

At the end of my last post I mentioned that the Bowling to Glasgow walk was only the first of three and that I might write about the others when their 25th anniversaries came around. But since I probably won’t remember to do that (the dates of the other two walks aren’t so deeply emblazoned on my brain as the first is), and since I quite enjoyed writing the last post and a few people seemed to enjoy reading it, I decided to just write the next one now instead.

I think it was towards the end of our first walk that someone (probably Chris) mentioned the possibility of doing the rest of the canal at a later date. I was all in favour of this, of course, even though Chris hadn’t enjoyed the rest of the canal as much as the Bowling to Glasgow section when she and Ian had walked it before (she’d said that the middle section was quite long and boring, and by the time they reached the Falkirk end she was tired and just wanted to go home). It took us a while to get around to arranging the next part, and in the end it wasn’t until September of 1994 that we returned to the Forth and Clyde Canal.

Ian and Chris had walked the whole remainder of the canal in one go, a total distance of about 25 miles and more than double the length of our first walk! This time we weren’t being quite so ambitious and our plan was just to walk eastwards from Glasgow until we’d had enough, whenever that turned out to be. Apparently there were a few places along the way that were handy for public transport, so this seemed a good plan.

This walk was going to be different from the first in several ways. For one thing we wouldn’t have a dog with us: Ben had a bad toe and couldn’t manage such a long walk, so he was being left with my parents for the day. We wouldn’t have a car either; Ian and Chris’s was off the road so we would be completely reliant on buses and trains to get us to and from the canal. And I’d been researching the canal and poring over maps since our first walk, so I had a much better idea of what to expect this time.

(You might also be wondering why there are so few photos compared to last time. Well, my camera was still broken so I was borrowing my mum’s SLR, but I found it so complicated to use that I only ended up taking four photos all day, and two of those were identical ones of Firhill Bridge because I was worried the first one hadn’t worked properly!)

The day of the walk dawned, and Ian and Chris arrived bright and early to drop Ben off and pick me up. Ever the master of optimism and motivation, Ian’s first words to me were “You know, it’s highly unlikely that we’ll actually make it all the way today”. I showed him the Dextrosol tablets I was bringing to try and boost my energy, but he just said “Those’ll be handy if anyone suddenly starts suffering from low blood sugar levels”. The weather was looking pretty good as we walked to the bus stop, and I could feel the mounting sense of excitement that could only mean I was off to explore somewhere interesting. The Glasgow bus was just pulling up at the stop as we turned the corner and we had to run for it. Thankfully we made it, and Chris and I found three seats together up at the back while Ian paid for the tickets.

The journey through was uneventful and we talked about how much better for the environment it was to get the bus rather than driving through as we had done for our first walk. When we got to Glasgow the weather was still so nice that we decided to walk up the Glasgow branch of the canal instead of getting the bus up to Maryhill as we’d originally planned. After all, what’s an extra two miles on a walk that long? It meant that we were rejoining the canal at exactly the point we’d left it 8 months earlier, which made for some nice continuity, though it looked very different on a sunny morning from how it had on that cold November night. Eagerly we set off along the towpath, looking forward to a good day’s walking.

Firhill Bridge

I enjoyed seeing the Glasgow branch again, but of course I was most looking forward to seeing some new canal. We diverged from our previous path at Stockingfield Junction, where we had to go down and through a tiny tunnel-like aqueduct beneath the canal, then up a bank to get to the towpath on the mainline to the east. Although we were still very much in Glasgow, this part of the walk had a pleasantly rural feel to it, especially with the blackberries we were able to pick from some nearby brambles to keep us going. Chris pointed at a funny looking brick tower over to the north and said “I wonder what that is?” None of us knew, but from looking at maps since I think it was probably the chimney of the nearby crematorium.

Unlike most of the Bowling to Glasgow section, the canal we were now following had recently been reopened, so the bridges were mostly high enough for boats to sail beneath, and for us to walk under. The first one was an old metal bridge at Lambhill, and just beyond it was an original canal stables block as well as some weird underground tunnel entrances (both of which I would probably have tried to get inside if I’d been a bit older). The houses to the north gave way to open countryside and there was a little picnic area by the nature reserve at Possil Loch. We decided to stop there for a snack. Chris shared out some biscuits she had made and I took a swig from my large bottle of Irn Bru (a must for walking, in my opinion).

While we were having our break, something unexpected happened: it started raining. I suppose we shouldn’t have been surprised, we were still in the west of Scotland after all, no matter how nice it had looked first thing. I put my jacket on and sheltered the biscuits underneath while Ian and Chris struggled into their waterproof trousers. Chris was amused that my first impulse had been to save the biscuits from getting wet, but said she would have done the same. There didn’t seem much point sitting around in the rain so we decided we might as well walk on.

This walk certainly had quite a different feel from the previous one. The Bowling to Glasgow stretch of canal had had a constant succession of bridges, locks and other canal features to look at, not to mention all the buildings of the surrounding town. On our second walk we didn’t pass a single lock (we were entirely on the canal’s “summit” reach), the bridges were much more spaced out (more than a mile between the ones at Lambhill and Bishopbriggs) and the surroundings were far more rural (currently we had a golf course to the south of us and open country to the north). But I wasn’t sure I agreed with Chris’s comment that it was actually boring; it was certainly quiet and peaceful, but I was enjoying the tranquillity and found some of the countryside quite pretty.

The next bridge was Farm Bridge, next to the Leisuredrome at Bishopbriggs. This was a slightly notorious bridge because it was only about 5 or 6 feet above the water, which meant that bigger boats couldn’t go underneath it. It was supposed to be raised in the early 90s but the Glasgow Canal Project, which rebuilt all the other low bridges and culverts between Glasgow and Kirkintilloch, ran out of money before it got to Farm Bridge, leaving this annoying obstruction in the way. (Now, of course, it’s been replaced by the Millennium Link project along with all the other low bridges on the canal, and the new one has the full 10ft headroom).

But low bridge or no low bridge, I was glad to see that (a) there were trees by the canal after the bridge which would give us some shelter, and (b) the rain was easing off a bit anyway by this time. I found myself looking enviously at Ian and Chris’s waterproof trousers as I felt my own soaked trousers against my legs and made a mental note to definitely get some of my own before I next did a long walk.

The next little stretch, through the trees past Cadder, turned out to be really pretty. As the canal turned a corner, we climbed up onto a wooded bank and looked down over the valley (and yet more golf courses) below. The River Kelvin was down there, looking a lot smaller than it had been where we’d crossed it on the aqueduct at Maryhill the previous year. Apparently the bank we were standing on was probably part of the Antonine Wall. People think of the road building programmes of the 1960s and 70s as being pretty destructive as they bulldozed old buildings out of their path (and blocked canals), but things weren’t actually much better back in the canal age – the canal was cut right through the Antonine Wall here, and the navvies even quarried a nearby Roman fort to get stone to line the banks!

Blurry Glasgow Bridge

There were a couple more bridges to pass before we reached Kirkintilloch where we planned to stop for lunch. The second of these was quite interesting because it had recently been replaced with a modern concrete one so that boats could get under it again, and there were quite a few boats moored nearby. There was also a pub in a converted canal stables block (called, imaginatively enough, The Stables). I took a photo but it came out blurry unfortunately. The rain kept going on and off, so at least it wasn’t raining constantly, but it never stayed dry for long enough at a time for my trousers to properly dry out.

It was at this point that the walk started to drag a bit. Kirkintilloch seemed further away than we’d expected and we were all starting to get a bit hungry by this time, which may account for the slightly bizarre conversation that ensued. It started off innocently enough, with Chris telling us about one of her plants (she had no idea what it was, but she suspected it might be an African Lily, so she asked an expert who said “well I don’t know what it is but it’s definitely not an African Lily”, so from then on Chris just referred to the plant as “not an African Lily”), but moved into the realms of the weird when Ian mentioned a filing cabinet that had mysteriously appeared in his office at work and told us his theory that it might in fact be an alien from another planet in disguise. After that the effects of the hunger set in even more deeply and we started talking about how anything might in fact be anything else, which at least passed the time until we rounded a corner and reached Townhead Bridge.

(Well, I say “bridge”, but at this point in time it was actually just an embankment blocking the canal, with a horrible silted up submerged culvert in the middle. It was to be another six years after our walk before there was an actual bridge there again).

Eagerly we climbed up the steps to the main road. We’d been planning to find a fish and chip shop and get something from there, but with the weather having turned unreliable we decided to sit in at the nearby shopping centre’s food court instead (they had fish and chips so we didn’t feel like we were missing out). For some reason we ended up having a bit of an argument about religion, with Ian and me saying it was mostly a negative thing that had caused a lot of wars and so on, while Chris said without it we might not have ended up so civilised. That was what I liked about spending time with Ian and Chris, you could talk to them about anything at all, from plants and filing cabinets right up to big things like the effects of religion on society. (And aliens disguised as filing cabinets).

As we returned to the canal with fuller stomachs, I was interested to see that there was a bridge I hadn’t known about next to Townhead “Bridge”. Since the last walk I’d made notes from the Forth and Clyde Canal guidebook in a library and ended up memorising the table of bridges and locks in the back (not intentionally, I just found it so interesting that the information stuck in my head without me even having to try! I’m weird like that), but this new concrete flyover wasn’t on the list, so it must have been built after the guidebook was published. It looked a bit out of place, soaring overhead in a big sweeping curve to give lots of headroom over a disused, silted up canal that disappeared under an embankment only a few yards to the west, but I guess they were already planning ahead for the canal’s eventual reopening by the time this road was built.

Just beyond the new bridge (“Nicholson Bridge”, I believe it’s actually called) was a more interesting piece of infrastructure: the Luggie Aqueduct, the second biggest one on the canal after the Kelvin. We went down below to have a closer look. It was just a single arch, but unusually the Luggie Water which it had been built to cross wasn’t visible underneath – that had been culverted under the aqueduct so that a railway line could be built through the arch. The railway line had gone but the path in its place had been resurfaced with railway track patterns in the stonework. I took my fourth and final photo of the day, then we returned to the canal, where there was more rain waiting for us.

Luggie Aqueduct

Pretty soon we were out in the country again, following the canal through open fields, with very few features along the way. The next bridge, Twechar Bridge, was only a few miles east of Kirkintilloch but it seemed to take us ages to get to it. At times the towpath shared its course with a minor road, which was harder on the feet and meant we had to be on the lookout for cars. Eventually we started to suspect that the village of Twechar was actually getting further away from us the more we walked towards it. Then it suddenly “appeared” in front of Chris as she tried to unobtrusively relieve herself. She came back to where Ian and I were waiting and reported that she had found it.

I was starting to flag by this time. We’d already walked about 14 miles, further than I’d ever walked in one go before, and although the scenery was pleasant enough, there wasn’t really enough canal infrastructure on this section to spur me on to keep going. So when Ian suggested we leave the canal at the next bridge (Auchinstarry) and make our way home from Kilsyth, I agreed. The next suitable stopping point was several miles further on and I wasn’t sure if I could manage that, Dextrosol or no Dextrosol.

Kilsyth wasn’t too far away, just a short walk to the north along a B road. When we reached the main street, Ian sprinted across the road in front of a huge lorry to ask a passer-by what time the next Falkirk bus was due. (“I thought I was going to collect my insurance money there!” said Chris, grinning, as we followed Ian slightly more carefully). Apparently the bus was due soon after 5pm… that wasn’t too bad, it was nearly 5 already. We settled down on the bench in the bus shelter, glad to take the weight off our weary feet for a few minutes.

But it turned out to be a lot longer than a few minutes! 5pm came and went with no sign of the bus. By 5.30pm we were starting to get a little restless, but since it was 1994 and smartphones and bus trackers were yet to be invented, there wasn’t a lot we could do except continue to wait. By 6pm I was starting to wonder whether I would have to live out the rest of my life in this slightly grotty bus shelter, and whether the old woman who kept smiling out of the window of a nearby flat was laughing at us.

Finally, at twenty past six or so, a bus trundled round the corner. As we heaved ourselves onto it, not sure whether to be annoyed at the wait or glad it was here at last, Ian asked what had happened to the 5pm bus. Apparently it had broken down. So much for buses being better than cars.

We had to change buses at Falkirk bus station. We had half an hour or so before the Edinburgh bus was due, which meant there was time for Chris and me to make use of a funny looking automatic public loo (quite a novelty in those days), and then for us all to go to a nearby cafe while we waited. Chris and I just had hot drinks, but Ian was hungry and ordered a chip butty. I’d never heard of such a thing before and was quite amazed to find, when it arrived, that it was exactly like its name suggested. I decided I quite liked the look of it and half wished I’d ordered one myself.

Despite having had half an hour to spare, we still managed to nearly miss the Edinburgh bus. This was one of the more memorable bus journeys of my life; almost all the other passengers seemed to know each other and the driver and were chatting to each other the whole way, making us feel a bit like we were intruding on some private gathering. The only other person who didn’t appear to be part of this cosy little community was a middle aged man sitting near Ian, Chris and me. He spent most of the journey staring at us and laughing whenever one of us spoke. Luckily I was feeling pretty out of it after my long day and all the fresh air and exercise, so I was happy to just sit there and let it all wash over me. Even so it was a relief when we got off into the comparative sanity of my own neighbourhood.

Despite the rain and the travel difficulties, I think I actually enjoyed this walk the most out of the three. It was a nice picturesque stretch of canal and satisfying to walk so much of it in one go.

Letter to Theresa May

Dear Mrs May,

I heard you were in Scotland today trying to “sell” your Brexit deal to members of the public. I’m not in Scotland just now myself so I thought I would write down my thoughts instead.

First of all it seems pretty pointless talking to people about the deal since you’ve already made it clear that we’re not going to get any say on it. Why bother engaging in discussion if you’ve already decided you’re going to push ahead regardless?

Secondly, no, I will not “get behind” your deal. It’s clearly far inferior to remaining in the EU and today’s economic forecasts provide yet more evidence of that (as if any more were needed). In particular, I won’t support anything that removes rights from me and my family against our will. You go on about “ending free movement once and for all” as if that were something to be celebrated, but you can’t expect intelligent and well informed people to revel in having their rights stripped away, especially not when you’ve spent the last two years systematically alienating them almost every time you open your mouth.

You’re right about one thing: your deal is better than no deal, but that’s like saying being booted in the genitals is better than being shot in the head. I’d rather not have either, thanks.

I don’t even believe that you yourself think this is a good deal. When asked whether you think it will be good for Britain, you respond with weasel words like “I believe our best days are ahead of us”. Well, frankly that could mean anything. Maybe it means you believe our best days will be in twenty years’ time, after we’ve put this shambles behind us and rejoined the EU. Maybe you believe they’ll be in 1,000 years’ time, when we all live in a Star Trek-style utopia where no-one has to work anymore. But I don’t think those are the words you would choose if you genuinely believed this deal was in our best interests. I think you can’t quite bring yourself to say those words.

It’s the dishonesty that really gets me about the whole Brexit process. Whatever else you might be, Mrs May, I don’t think you’re stupid, or ignorant of the political realities. You know full well, as does everyone who’s been paying attention, that the referendum victory for Leave is on very shaky ground indeed. Leave only just scraped a majority even with Brexiters promising the earth. Do you think there is even the slightest chance it would have won if the campaign had been fought honestly? If Boris Johnson and Nigel Farage had said “Vote Leave! It’ll either be utter chaos with grounded flights and food and medicine shortages, or it’ll be a deal that leaves us having to comply with EU rules without any say over them, and we’ll be far worse off either way”? Of course it wouldn’t, and you know it, but like most other politicians you’re scared to say it, preferring to keep on pretending for as long as possible that the fantasies might yet come true.

Maybe you feel you can’t challenge the result or call another referendum because it would make millions of Leave voters feel betrayed. Well, OK, but how do you think they’re going to feel when they discover that Brexit is nothing whatsoever like what they were sold, that the problems that made them feel so discontented in the first place have only got worse? The truth is there’s no way out of this situation now that doesn’t make a substantial number of people feel betrayed, because they already HAVE been betrayed, by the motley collection of far right ideologues and chancers who told them Brexit was the answer to their problems. And they’re going to find out sooner or later. Reality can’t be avoided forever, so you can either risk alerting Leavers to the betrayal that’s already happened, or you can betray the entire country further. Those are the only choices left.

So, in summary: the only thing I’ll be “getting behind” is whatever movement emerges to reverse this idiocy, whether that’s via Scottish independence (which until Brexit I never thought I would support) or via one of the Westminster parties returning to sanity.

Bowling to Glasgow canal walk (with “vintage” photos)

When I checked the date this morning I realised it’s exactly 25 years since a day I’ll always look back on fondly. On Saturday 20th November 1993, I walked from Bowling to Glasgow along the Forth and Clyde Canal with my uncle and auntie, Ian and Chris. It was part of my 14th birthday present from them. I could also have chosen a geological expedition in the Pentlands, a meal out, or suggested something myself, all of which would have been fun, but it didn’t take me long to decide on the canal walk.

(I’ll just acknowledge right now that walking 12 miles along a derelict canal in the freezing cold wouldn’t have been most 14-year-olds’ idea of fun, and also that most 39-year-olds probably wouldn’t remember the exact date of said walk 25 years later. But then as you probably already know if you’re reading this blog, I’m not most people).

As soon as we’d arranged to do the walk, I couldn’t wait for the day to arrive. My mum and brother and I (well, mostly my mum and I, I think my brother was just dragged unwillingly along) had been walking the Union Canal in stages of a mile or two for a few years and had completed most of it by that time. I think we’d done all the way from Edinburgh to about Polmont and probably would have already finished the whole thing if I hadn’t broken my arm falling over a fence a few weeks earlier. I always enjoyed exploring each new section, seeing what it would be like… the forgotten mossy bridges, the overgrown reed-filled basins, the silted up culverts.

But this walk was going to be even better. For a start, we were walking a full 12 miles, further than I’d ever walked in one go before, and that meant a whole 12 miles of canal to explore rather than the usual one or two! Secondly, I barely knew the Forth and Clyde Canal at all so it would all be completely new to me. Thirdly, going right over to the other side of the country, especially to Glasgow, felt very adventurous to me back then. And fourthly, I always enjoyed spending time with Ian and Chris no matter what we were doing.

I’d got a new camera for my birthday so I was taking plenty of photos along the way. It wasn’t the best camera and I wasn’t the best photographer, but it’s still interesting to look back at what the canal was like in those days, as it’s changed quite a lot since then. So I’ve scanned the whole lot.

When Saturday 20th dawned, I was glad to see that it was perfect weather for walking – cold but clear, with no rain and little wind. Ian and Chris (and Ben, their very excitable black labrador, who was also coming with us) picked me up first thing in the morning and Ian drove us all through to Bowling on the Clyde, via the M8 and the Erskine Bridge. On the way through I looked with interest at the map they’d brought, trying to work out the route we were going to be walking. I liked maps and already had quite a few of my own, but I didn’t have that one yet.

Bowling basin

At Bowling we parked in the station car park (an ideal location as we planned to get the train back to Bowling at the end of the day) and found the canal quite easily. Although most of the canal had been closed for 30 years, the first part was still open so that boats from the Clyde could moor there, and there were lots of them in the terminal basin by the sea lock. The towpath that we would be following for the next 12 miles led invitingly off under a large disused railway bridge at the east end of the basin and, after I’d taken my first photo of the day, we were off!

Now that we’d actually started, the walk somehow felt even longer. Even the Erskine Bridge, which passed over the canal a mile or so ahead of us, looked a long way away to me now. The canal was very different to the one I was used to; unlike the Union Canal it was built to take sea going ships, so it was much wider, and the bridges were lift or swing bridges rather than stone arches. The first couple of miles of our walk had a pleasantly open and rural feel, the canal threading its way along the bank of the River Clyde next to a nature reserve and an overgrown disused railway line.

Dalmuir Bridge, where the drop lock is now

The character of the canal changed completely at Dalmuir, where it swung away from the river and was promptly blocked by the very busy Dumbarton Road. I didn’t know then that within a few short years the canal would be reopened, with Britain’s only “drop lock” installed here to take boats underneath the low road. At the time I was just glad to see that there was a pedestrian crossing to help us cross the busy road and get back onto the towpath again. The rural feel had gone completely and there were now houses on both sides.

By this time we were starting to get hungry, probably due to a combination of the cold and the exercise. Chris had originally suggested we have lunch at the Lock 27 pub, but that was still miles away. Thankfully the huge Clyde Shopping Centre was just beyond the next road crossing, so we left the canal there to go and find something to eat.

Clyde Shopping Centre

(Chris had told me before the walk that the canal went underneath a shopping centre at one point and I’d been looking forward to seeing that, but when we got there I found the reality of it a little disappointing. I’d pictured a huge building soaring high over the canal, with shoppers peering down at us from elevated walkways as we walked along beside it. In reality it was really just a covered footbridge across the canal, linking the two parts of the shopping centre on either side. But there was a cafe that served bacon and chips and I wasn’t complaining about that).

I wolfed down my lunch pretty quickly, then drew a rough map of what we’d walked so far while I waited for Ian and Chris to finish their coffee. I was enjoying myself immensely so far, and I knew that the longer and more interesting part of the walk was still to come – I couldn’t wait to get back to it! Once we’d retrieved Ben from where Chris had tied him up (someone had been feeding him dog biscuits but he obviously hadn’t liked the pink ones, which were still lying intact among the crumbs of the other colours), we returned to the towpath and to our walk. As we left the shopping centre behind I wondered about the huge “boat” sitting on the far side of the canal – I later found out that it was a fish and chip shop!

Linnvale Bascule Bridge (and Ben)

The next section of canal through Clydebank mostly flowed through quite an open, green area – it was nicer than I’d expected from Clydebank, at any rate! Some of the original little wooden lifting “bascule” bridges were still there, but the newer roads mostly just crossed on embankments, the water channelled into pipes or weirs underneath – there was no chance of getting a boat through this section any time soon.

Lock 35, with Ben and Chris

There were also quite a few locks, which I found interesting as I hadn’t seen many before, the Union Canal not having any. I lagged behind with Ben to have a closer look at them while Ian and Chris walked on. The old wooden gates mostly looked in a pretty bad state, and had been cut down to the minimum size needed to hold back the water now that the canal was no longer in use.

Great Western Road infill

At one point we had to climb up an embankment and cross a busy dual carriageway, Great Western Road. On the other side the canal was piped for about half a mile so we had a little canal-less walk through some trees and then up through two dry, half-buried lock chambers that had been made into a little park. Ian was amused by a heritage sign that had been put up by British Waterways or some similar organisation that said “Forth and Clyde Canal” with “in culvert” in very small letters underneath, because there was no sign of any canal!

Temple Lift Bridge

After the canal re-appeared, we passed a few more dilapidated locks, but at least these ones had water in them. We were well into the suburbs of Glasgow now and the towpath was quite busy with local people walking their dogs or using the towpath as a shortcut. Lock 27, with its eponymous pub, was just past a huge metal lifting bridge carrying Bearsden Road at Temple. I think it was about 3pm by the time we got here, so I was glad we hadn’t waited this long to have lunch! We stopped for a little rest on a convenient bench and I loaded a new film into my camera, having finished the previous one.

Cleveden Road culvert

I was surprised to see that the next bridge looked from a distance as if it was arched, though too low for anything bigger than a canoe to get under it. As we got closer it turned out to actually be a modern corrugated iron culvert, so again we had a road to cross (you can see Chris putting Ben’s lead on ready for the road in the photo above).

Ian, Chris and Ben at the Kelvin Aqueduct

Now we were nearly at the part of the walk I’d been looking forward to the most: the Kelvin Aqueduct! After walking across the top and admiring the views down the river valley, we went down underneath it so I could take some photos of it from below. It wasn’t as long or as tall as the Union Canal’s three aqueducts, but it was impressively huge and solid. (I think it was actually easier to get decent photos of it back then than it is now – there were fewer trees in the way!).

View from Kelvin Aqueduct

I also took a photo of the view from the top. I remember noticing the bridge piers a little way downstream and wondering what they were for. Little did I know they used to carry a railway line into the Kelvindale Tunnel, which I would explore nearly 19 years later.

Maryhill Locks

Just beyond the aqueduct was the final lock flight of our day. It was also the steepest climb, with 5 locks in quick succession here. They were a bit different from the other ones I saw that day since they’d recently been restored to working order, with new metal gates and smart black and white paint. (They might also look familiar to Still Game fans!).

Stockingfield Junction

It was starting to get dark by this time, but I’d still been hoping to take a few more photos. Imagine my annoyance when my camera decided the film was finished and it was going to rewind it, after I’d taken this picture of Stockingfield Junction (where a 2 mile branch into Glasgow city centre meets the main canal). I’d just put in what was supposed to be a 24 exposure film and I’d only got 9 photos out of it. I was hoping it was just a freak film but it turned out to be a recurring problem with that camera.

Anyway, after we’d finished discussing the annoying behaviour of my camera, we turned our attention to the rest of the walk. Ian said that if I was tired we could leave the canal here and get the bus the rest of the way to the station, but I was still feeling fine and was happy to carry on walking. (As luck would have it, I then did start to feel really tired a few hundred yards further on, but I didn’t like to say anything at that point!).

The Glasgow branch turned out to be pretty interesting. There were no more locks, but there were a couple of bridges which had been rebuilt in a modern concrete Charles Rennie Mackintosh-inspired style in order to reopen this section of the canal. (Previously they’d been low bridges or culverts that would have been impossible to sail under). There were also several aqueducts over roads, as this part of the canal was high up on an embankment above the surrounding city.

We passed the Partick Thistle football stadium at Firhill. There was a match on, and some people were sitting up on the embankment so that they could see the pitch. Ian wanted to stay and watch the game but Chris and I, who had no interest in football, wouldn’t let him. We also passed a basin filled with colourful boats, the first boats we’d actually seen since Bowling (other than the fish and chip boat at Clydebank). It was now starting to get properly dark.

Finally, we rounded a corner and at the end of a cobbled wharf was… the end of the canal! It was weird to see it suddenly just stop after following it all day. We stood there a while getting our breath back and looking out over the lights of the city below us. We’d made it.

Now it was time for the next challenge: finding Queen Street Station so that we could catch a train back to Bowling where we’d left the car. Fortunately this was easy enough, and soon we were sitting on one of the low level platforms waiting for the train and eating what was left of the food we’d brought. (After that night I had quite a vivid memory of the low level platforms at Queen Street, though I hadn’t realised there was also a high level station, so I was quite perplexed when I got the train through from Edinburgh the following year and the station looked nothing like I remembered. It was years before I finally found myself in the low level part again and thought “Yes! This is the place I remember!”). When the train came, it only took a few minutes to cover the distance it had taken us all day to walk.

It had been a great day out and had more than lived up to my expectations, as evidenced by the fact I still remember it so well 25 years later! It turned out to be the first of three walks, as I went on to walk the whole canal with Ian and Chris between 1993 and 1995. The other two walks were just as enjoyable so maybe I’ll write about them as well when their 25 year anniversaries come around… but for now, thanks for reading 🙂 .

In memory of Ian Ogilvy Morrison, 1950-2006.

 

Game Project part 6: My tools

Right now on the game project, I’m working on something that I think is going to be pretty cool once it’s done, but it will take me a while to finish, or even to get to the point where it’s worth writing about here. So I thought I would break up this interlude by writing a bit about the tools I’m using to develop the game, in case anyone is interested. I’ve mentioned some of these already in passing but in this post I’ll try and flesh out the full picture a bit more, and also say a bit about why I chose the tools I did (since they probably seem an odd choice to a lot of people).

As I said before, I am writing the game in JavaScript so that it runs in a browser environment, using WebGL for the graphics. The major advantages of this approach are:

  1. I don’t have to grapple with all the annoying differences between the various platforms I want to target (Windows, macOS and Linux to begin with); I can just target the web environment and the game will run on almost anything that has a browser.
  2. Once the game’s finished, people won’t have to download and install it in order to play, they can just navigate to a web page.

The major disadvantage is that the performance will be worse than “native” code in C++, but the game isn’t likely to be super demanding so this shouldn’t be too much of a problem in practice.

For testing the game while I develop it I’m mostly using Google Chrome, since that’s what I have on all my machines anyway. I’ll test it on other browsers (Firefox, Safari, etc.) at some point and check that everything works as it should, though I’m trying to steer clear of relying on obscure or browser-specific features as much as possible.

The only JavaScript library I’m currently relying on is glMatrix, a very useful library for the vector and matrix operations that crop up constantly in 3D graphics applications. I’ve used it before and there seemed no point in re-inventing the wheel for something so basic and ubiquitous. Other than glMatrix I’m targeting the browser directly and writing all the code myself, though bits of it are adapted from my previous projects rather than being written from scratch for this game.

XAMPP running an Apache server on Windows

As well as a browser, it’s also necessary to have a web server installed for testing purposes, because the game expects to load its data files from a server via HTTP rather than from a local disk. So I have Apache, the most widely used web server in the world, installed on all my machines. On Linux I installed it through the normal package manager; on Windows and Mac I’ve installed XAMPP, which does a great job of packaging up Apache plus some other useful software and making it easy to install and use.
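To illustrate why the server matters, here’s roughly how a browser game might pull a binary asset over HTTP. This is a generic sketch using the standard fetch API rather than the game’s actual loading code, and the path and function name are invented:

```javascript
// Fetch a binary asset (e.g. a model file in a custom format) over HTTP.
// The URL here is illustrative, not one of the game's real paths.
async function loadBinary(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to load ${url}: HTTP ${response.status}`);
  }
  // The engine's model/animation files are custom binary formats, so we
  // want the raw bytes rather than text or JSON.
  return response.arrayBuffer();
}

// e.g. loadBinary("data/models/character.bin").then((buffer) => {
//   const view = new DataView(buffer); /* parse the format here */ });
```

Opening the page straight from the local disk (a file:// URL) would make requests like this fail in most browsers, which is why a local Apache install is handy even just for testing.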

To edit the actual game code I’m using GNU Emacs. This is largely due to habit: Emacs has been my editor of choice for nearly 20 years now and its most useful shortcut keys are burned into my brain so deeply that I don’t even have to think about them anymore. It’s a pretty powerful editor and has some decent support for JavaScript, so it probably wouldn’t be a bad choice even if it wasn’t for my long history of using it for everything.

Editing the rendering code using GNU Emacs

I’m storing the code in a Fossil repository. I wrote a whole blog post about Fossil when I first discovered it, so I won’t say too much here. I still like its philosophy of storing all the code along with wiki pages and a bug tracker in a single compact file, while the Fossil executable itself is also a single file that can be dropped onto any system without going through a complex installation process. I probably wouldn’t use Fossil to co-ordinate a huge software project with multiple authors, but for a personal project like this one it’s ideal.

I’m using Fossil’s wiki facility to make notes as I go, covering things like the binary file formats used for models, and the exact process of authoring content for the game engine. Hopefully I’ll manage to keep this up going forwards as I know from past experience how useful it will be later on! I haven’t used the bug tracker system yet but I might later on if it becomes useful.

So that the code is automatically synchronised between all my machines, I keep the Fossil repository file in my Dropbox. I also keep many of the asset files in Dropbox, though I haven’t checked them into Fossil as it’s not really designed for working with large binary files, and putting them in there would just bloat the repository file and cause it to take longer to sync. I’ve used this workflow, for want of a better word, on various projects for the last few years and found it works pretty well for me.

Editing terrain in Blender

I think that pretty much covers the coding and testing side of things. The other major aspect to the game is creating the assets (3D models, landscapes, animations, textures, icons, and so on). My primary 3D software is Blender. As I said in the first post of this series, it’s free, it’s very powerful (people have made entire animated movies using it), it runs on almost anything, and it’s got a great community behind it. Its biggest downside is probably the steep learning curve, but since I’m already mostly over that after using it for my previous 3D dabblings, there was no reason not to use it for this.

I covered MakeHuman, which I’m using for the (you’ve guessed it) human models, in some depth in part 2, so I won’t go into it further here.

Even when making 3D games, some 2D image editing is still usually required for preparing textures, making heads-up displays, that kind of thing. Gimp has been my paint program of choice for years now. Apparently it’s got a horrible interface compared to Photoshop, but then I’ve never really used Photoshop, so I guess you can’t miss what you’ve never had. In any case, Gimp has never let me down so far in terms of features. I’ve only used it once so far for this particular game, for tiling the ground textures into a single image (the “snap to grid” feature was very helpful here), but I’m sure I’ll be breaking it out again before too long. My other favourite image editor is Inkscape, which handles vector graphics rather than bitmaps. In the past I’ve found it great for designing icons and stuff like that.

Editing the terrain texture map in Gimp

I think that’s pretty much all my tools covered now. If you’ve been following along, you’ll notice that all of them are free to use and the majority are open source, which is no accident. I’m not a free software zealot who demands that all software must be freely licensed, but there are some good practical reasons why I much prefer using open source tools wherever possible.

Firstly, I don’t have to pay for them. I’m not a complete cheapskate, but at the same time I have a family to support now and better things to do with my limited money than pour it into (for example) Adobe’s pockets for a Photoshop subscription when Gimp meets my needs perfectly well.

Secondly, they tend to be cross-platform. That’s important for me because I regularly use all three major operating system platforms (Windows, Linux and macOS) so I much prefer tools that work on all three of them, as all of the tools I’ve described in this post do. I like it this way; it means I’m not locked into one particular platform and am free to switch whenever I want without having to throw away the time I’ve invested in learning this stuff and start from square one with a whole new suite of software. For example, last year it made sense for me to get a MacBook (I needed a Mac for a project and didn’t own a decent laptop at the time) and, even though I hadn’t used one since high school, I was able to install all my preferred free tools and get up and running with it very quickly.

Thirdly, they’re not going anywhere. With commercial software there’s always the worry that the company will go out of business or will discontinue their product after I’ve come to rely on it. Sure, I could continue using an old version (unless it’s a subscription service *shudder* ) but it might not keep working forever, or might keep me locked into an older operating system and unable to upgrade. That’s much less likely with open source, because even if the original developer disappears, someone else from the community can step up and take over maintenance. (I could even do it myself in some theoretical world where I have time for such things 😉 ).

So yeah. While I could probably make my game more rapidly if I switched to some expensive all-singing-all-dancing commercial Windows-only solution, I’m happy with the approach I’m taking, and it also fits well with my desire to understand everything and be able to tinker with the low level code if I want to. I hope you found this at least somewhat interesting or informative. Next time I’ll be back with a proper progress report to share with you.

 

Game Project part 5: Billboards, Culling and Depth Cuing: not just a load of random words…

… though you might be forgiven for thinking that at first 😉 . Why do so many things in computing have such weird names?

In my post about trees, I mentioned that having too many trees in the scene can make the game engine run pretty slowly, because each one contains a lot of polygons and vertices. This could be a problem for me because some of the areas of my game are going to be quite big and contain quite a lot of trees, and I want the game to perform reasonably well even on quite modest computers. So in this post I’m going to talk about some of the tricks that can be used to speed up the rendering of complex 3D scenes, which I’ve been spending a lot of time lately coding up for my game engine.

Culling

The first trick is a pretty simple one: don’t waste time drawing things that aren’t going to be visible in the final scene. This might seem obvious but in fact it’s quite a common approach in simple 3D graphics applications just to throw everything onto the screen and let the GPU (Graphics Processing Unit) sort out what’s visible and what isn’t. (My game engine as described in the earlier posts used this method). This still works fine because the GPU is smart enough not to try and draw anything that shouldn’t be there, but it’s inefficient because we’ve wasted time on processing objects and sending them to the GPU when we didn’t need to. It would be better if we could avoid as much of this work as possible.

This is where culling comes in. It refers to the process of removing items from the graphics pipeline as early as possible so as not to waste time on them. There are various methods of doing this, because there are various reasons why items might not be visible:

  1. They’re behind the viewer.
  2. They’re too far to the side to be visible.
  3. From the viewer’s point of view they’re completely hidden behind other objects.

The first two cases aren’t too hard to deal with. We can imagine the area of the world that’s visible to the viewer as a big sideways truncated pyramid projecting out into 3D space (often called the view frustum), then we can immediately cull anything that falls completely outside of this pyramid, because it can’t be visible. The details of how this is done are quite complicated and involve projections and various different co-ordinate systems, but it’s reasonably efficient to do.

There are a couple of ways of making the clipping even more efficient:

  1. Instead of examining every vertex of an object to see if it’s in or out of the frustum, it’s common to work with the object’s bounding box instead. This is an imaginary cuboid that’s just big enough to contain all of the object’s 3D points within it. It’s much faster just to clip the 8 points of the bounding box against the frustum, and it still gives us nearly all the same benefits as clipping the vertices individually.
  2. If you arrange your 3D scene in a hierarchical form (often called a scene graph), then you can cull large parts of the hierarchy with very little effort. For example, if your scene graph contains a node that represents a house, and various nodes within that that represent individual rooms, and various nodes in each room that represent the furniture, then you can start by clipping the top level “house” node against the frustum. If it’s outside, you can immediately cull all of the room nodes and furniture nodes lower down the hierarchy and not have to spend any more time dealing with them.

(The view frustum only extends a limited distance from the viewer, so it’s also common to cull things that are too far away from the viewer. However, if this distance is too short it can cause far away objects that should be visible to disappear from the scene).
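The bounding-box test from point 1 can be sketched in a few lines of JavaScript. This assumes the six frustum planes have already been extracted from the combined view-projection matrix, with their normals pointing into the frustum; the data layout and names here are illustrative, not the engine’s real code:

```javascript
// Test whether an axis-aligned bounding box lies completely outside the
// view frustum. Each plane is {nx, ny, nz, d} (normal pointing inwards);
// the box is {min: [x, y, z], max: [x, y, z]}.
function boxOutsideFrustum(box, planes) {
  for (const p of planes) {
    // Pick the box corner furthest along the plane normal (the
    // "positive vertex"); if even that corner is behind the plane,
    // the whole box is outside and can be culled.
    const px = p.nx >= 0 ? box.max[0] : box.min[0];
    const py = p.ny >= 0 ? box.max[1] : box.min[1];
    const pz = p.nz >= 0 ? box.max[2] : box.min[2];
    if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0) {
      return true; // completely outside this plane
    }
  }
  return false; // inside, or straddling the frustum boundary
}
```

A box that straddles a plane is kept, so the test never culls anything visible; it can occasionally keep something invisible (near the frustum’s corners), but that just means a little wasted GPU work rather than a visual glitch.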

The case where an object is hidden behind another object is a bit trickier to deal with, because there’s usually no easy way to tell for sure whether this is the case or not, and we don’t want to have to get into doing complicated calculations to try and work it out because the whole point of culling things in the first place was to try and avoid doing too many calculations! However, there are exceptions; indoor scenes are a bit more amenable to this sort of optimisation because (for example) if you’ve got a completely solid wall separating one room of a building from another, you know straight away that when the viewer is in the first room, nothing in the second room is ever going to be visible (and vice versa).

Depth Cuing

Sometimes, though, even when we’ve culled everything we realistically can, things still run too slowly. For example, imagine a 3D scene looking down from a hill over a big city spread out down below. There could be hundreds or even thousands of buildings and trees and other objects visible to the viewer, and we can’t just start removing them without the player noticing, but on the other hand it’s a hell of a lot of work for the computer to render them all. What can we do?

One other option is depth cuing (also known as “level of detail”, or LOD). This involves using less detailed models for certain objects when they’re further away from the viewer. For example, I can instruct my tree generator code to use fewer vertices on the stems and trunks, and simpler shapes made up of fewer triangles for the leaves. This wouldn’t look good for trees close to the camera, because you’d notice the shapes looking less curved and more blocky, but for trees in the distance it’s not too bad.

MakeHuman can also use less detailed “proxy” meshes, which would be an option for adding depth cuing to human models.

Full detail MakeHuman model (left), and with low resolution proxy mesh (right)

Ideally it’s better if we can generate the less detailed models of the objects automatically, but it’s also possible to make them manually in Blender if necessary.
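Choosing which version of a model to draw can be as simple as a per-frame distance check. This is an illustrative sketch rather than the engine’s actual code, and the field names are invented:

```javascript
// Pick the appropriate mesh for an object given its distance from the
// camera. `levels` is sorted by ascending switch distance, e.g.
// [{maxDist: 10, mesh: fullMesh}, {maxDist: 25, mesh: lowMesh},
//  {maxDist: Infinity, mesh: billboardQuad}].
function selectDetailLevel(levels, distance) {
  for (const level of levels) {
    if (distance <= level.maxDist) return level.mesh;
  }
  // Past the last threshold, fall back to the coarsest version.
  return levels[levels.length - 1].mesh;
}
```

The last entry usually has an infinite switch distance so that something always gets drawn (or the object could simply be dropped beyond a cutoff, at the cost of visible disappearing).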

Billboards

In 3D graphics terms, billboards are a bit like depth cuing taken to the extreme. In this case, instead of replacing a 3D model with a less detailed 3D model, we replace it with a flat rectangle with the object “painted” onto it via a texture – just like a billboard!

Obviously this is quite a drastic step and it only really looks acceptable for objects that are pretty far away from the camera, but the speed improvement can be dramatic. We’re going from having to render a tree model that might contain thousands of vertices and polygons to rendering a single flat surface composed of four points and two triangles!
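Building that flat rectangle is mostly a question of making it face the camera. One common trick is to take the camera’s right and up vectors (the first two rows of the view matrix’s rotation part) and build the quad’s corners from them; the sketch below assumes those vectors are already to hand, and the names are mine:

```javascript
// Compute the four corners of a camera-facing quad ("billboard").
// center is the billboard's world position; right and up are unit
// vectors taken from the camera's view matrix, so the quad is always
// parallel to the screen.
function billboardCorners(center, right, up, width, height) {
  const hw = width / 2, hh = height / 2;
  const corner = (sx, sy) => [
    center[0] + sx * hw * right[0] + sy * hh * up[0],
    center[1] + sx * hw * right[1] + sy * hh * up[1],
    center[2] + sx * hw * right[2] + sy * hh * up[2],
  ];
  // Counter-clockwise order: bottom-left, bottom-right, top-right, top-left.
  return [corner(-1, -1), corner(1, -1), corner(1, 1), corner(-1, 1)];
}
```

The same effect can also be achieved in the vertex shader, which avoids recomputing corners on the CPU every frame.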

In fact, older 3D games used to make extensive use of “billboard sprites” – all of the enemies and power-ups in Doom were drawn this way, as were the trees and some other things in Super Mario 64. The downsides are that they can look quite pixellated and blocky close up, and also that (unless the game creators included images of the objects from different angles) they look the same no matter what angle you view them from.

Creating texture images for every object that we might want to turn into a billboard would be a lot of work, and the resulting images would take up a lot of space as well. Fortunately, we don’t have to do this; WebGL is quite capable of creating the billboard images on-the-fly when they’re required, using a technique called render-to-texture. Basically, this means that instead of drawing a 3D scene directly onto the screen like normal, we draw it into an image stored on the GPU, and that image can then be used as a texture when drawing future scenes.
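In WebGL terms, render-to-texture means attaching an empty texture to a framebuffer object and drawing into that instead of the canvas. The sketch below shows a minimal setup; `gl` is assumed to be an existing WebGL context, and the helper’s name and structure are my own rather than anything from the game engine:

```javascript
// Create an off-screen render target: a square texture we can draw the
// scene into, plus a depth buffer so the model renders correctly.
function createRenderTarget(gl, size) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Allocate an empty texture (data = null) to receive the rendering.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);

  // Depth testing still matters while rendering the model, so attach a
  // depth renderbuffer too.
  const depth = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                             gl.RENDERBUFFER, depth);

  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to drawing on the canvas
  return { framebuffer, texture };
}
```

To generate a billboard, you’d bind the framebuffer, set the viewport to the texture size, draw the model, then unbind and use `texture` when drawing the quad.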

That little pixellated tree was my very first attempt at a billboard sprite!

This is an incredibly useful technique. As well as making billboards, it can also be used for implementing things like display screens and mirrors in games, and some 3D systems use it extensively for doing multiple rendering passes so that they can do clever stuff with lights and shading. I’d never used it myself before, but once I’d coded it up for generating the billboards, I was pleased that it seemed to work pretty well.

Up close, it’s pretty obvious which tree is the 3D model and which is the billboard…

… but from a bit of a distance the billboard looks a lot more convincing

One potential problem with both depth cuing and billboards is known as “pop in”. This is the effect you sometimes see when you’re walking forwards in a game and you see a sudden visible “jump” in the scenery coming towards you, because you’ve now got close enough to it that the billboard (or less accurate model) being used for speed has been replaced by the proper 3D model. It’s difficult to get rid of “pop in” altogether, because no matter how good the billboard is, it’s never going to look exactly the same as the original model, even from quite a distance; but we can minimise it by using as good a substitute as possible and by only using it for objects a long way from the viewer.

Phew! That was pretty long and quite technical this time, but I’m really pleased to have got all of this stuff into the game engine and working. (It’s swelled the engine code up to a much larger 3,751 lines, but it’ll be worth it). I’ve tried to make it all as general as possible – there’s a mechanism in the code now for any object in the game world to say to the engine, “Hey, you can replace me with a 256×256 pixel billboard once I’m 20 metres away from the camera!” or “Here’s a less detailed model you can use once I’m 10 metres away!”, so it should be useful for speeding up all sorts of things in the future. Hopefully next time I should be back doing something a bit more fun… I haven’t quite decided what yet, but it’ll probably involve adding more elements to the game world, so stay tuned for that.

But why now?

You might reasonably ask why I chose to do all this optimisation work so early on in the project. After all, there were plenty of more interesting (to most people anyway!) things I could have been working on instead, like adding streets and buildings to my town. Also, the general advice given to programmers is not to get caught up in optimising code too early, because it complicates the code and because you might end up wasting your time if it turns out it would have run fast enough anyway. I had three main reasons for disregarding this advice:

  1. I already knew from similar projects I’d done recently that I was going to need these optimisations or the engine would be nowhere near fast enough.
  2. I also expected that the billboarding was going to be (along with the skeletal animation) one of the trickiest things to code, so I wanted to get them both out of the way as early as possible. If it turned out that they were beyond my coding ability, or beyond what JavaScript could realistically cope with, I’d much rather find that out now than after spending months perfecting the rest of the game only to discover I couldn’t actually finish it.
  3. In my experience it’s usually easier to build fast code from the start than it is to try and “retrofit” speed to slow code later on. Some optimisations require a certain code architecture to work properly, and it’s not ideal if you find you’ve already written 10,000 lines of code using a completely different architecture.

Anyway, I’m happy. It’s all working now and the coding difficulty should hopefully be mostly downhill from this point onwards.

 

Game Project part 4: Vegetation… that’s what you need

Oops. I didn’t mean to leave it quite so long after part 3 before posting this. I have been doing quite a lot on the game, it’s just either been dull (but necessary) rearranging of the code that I’m not going to bother writing about, or it’s been other coding stuff that I’ll discuss in part 5 (my coding has got a bit out of sync with my posts on here).

Having now got the basics of human models and animation working using WebGL, for the next several posts I’ll be shifting my attention back to creating the game world, and building the tools I need to give the characters a more interesting environment to explore. Starting, in this post, with trees!

As luck would have it, I already had a very useful chunk of tree-related code that I wrote for the Botanic Gardens Station model I made a couple of years ago, and I was able to integrate that code mostly unchanged into my new game engine, allowing me to place trees into the game world. But although the main core of the tree generation code was already there, I wanted to add three major things for this game:

  1. A nice tree editor. I always intended to make one for the Botanics project, but in the end I only used one type of tree for that model, so I did it by just editing numbers in the code until I got it to look vaguely like I wanted it to. For the game I want a more powerful way of editing tree types.
  2. An easy way of placing trees into the game world. I want the ability to place individual trees at specific points, but I also want to be able to generate areas of woodland without having to manually specify the exact location of every single tree. I already had something like this for the Botanics model which I could adapt.
  3. A way to keep the game running fast even with lots of trees in the world. The tree models contain quite a lot of polygons, and on less powerful systems (like the MacBook Air that I’m using for much of the game development) things can easily slow down to a crawl when there are lots of trees visible. I need to speed up the code so that it can cope better with this.

Altogether that added up to quite a lot of work, so much so that I’m going to split off the third item into a separate blog post about optimisation and just concentrate on 1 and 2 here.

No. 1: The Larch. The Larch *

I wrote most of the tree editor on the train down to London. All the people standing in the aisles who would have been on the previous train if it hadn’t been cancelled made it a bit hard to concentrate, but luckily this code didn’t require too much thought – it was mostly a case of adding edit controls to a web page and writing code to move values to and from them. The resulting editor isn’t particularly advanced or pretty, but it works and will be far better than trying to create tree types by editing numbers in the code like I was before.

It’s a bit like a very primitive MakeHuman for trees, in the sense that it lets you edit tree models by tweaking meaningful(ish) parameters like the lengths of the branches, the density of the leaves and the overall shape rather than having to worry about the individual vertices and faces like you would in traditional 3D editing. Once I’m done editing each tree, I can copy the text below the tree graphic and paste it into my JavaScript code to include that tree type in the game.

Placing trees using Blender

One downside to generating geometry such as trees within my code is that I can’t easily use a 3D editor like Blender to put them in the scene. If I modelled a tree in Blender and exported it as part of the scene, I’d end up with a full 3D mesh, which isn’t what I want – the tree meshes are supposed to be generated by my JavaScript code instead.

However, I still want to use Blender to build the game scenes. I’m already using it for my terrain, and it has lots of amazing editing tools that will be handy for future parts of the game. I could just leave the trees out of the Blender scenes and put them in some other way (either by building some editing tool of my own or just by tweaking the co-ordinates manually), but that would be more work, and I’d have to remember that there were going to be trees in certain places when editing the other stuff in Blender, so it’s not ideal.

“That’s not a cube, it’s a tree”. “Oh. I see you’ve played Cubey-Treey before!”

Instead I’m just putting placeholder objects in the Blender scene (I’m using cubes but they could be anything), and giving them names that make it obvious they’re meant to be trees, and also include their types. Then I’ve extended the terrain converter program that I wrote way back in post 1 to recognise the placeholders and spit out a list of their positions and types in a form that I can easily incorporate into the game code. That way I can move the trees around and change their types from within Blender like I wanted, but I still get the advantages that come with generating the tree models within my game engine.
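The converter’s placeholder recognition could be as simple as a name check. The `Tree_<type>` naming convention below is invented for illustration – it stands in for whatever convention the real converter looks for:

```javascript
// Recognise a Blender placeholder object by name and turn it into a
// tree record for the game. Returns null for non-tree objects so they
// can be processed normally.
function parsePlaceholder(name, position) {
  const match = /^Tree_([a-z]+)/i.exec(name);
  if (!match) return null; // not a tree placeholder
  return { type: match[1].toLowerCase(), position };
}
```

Blender appends suffixes like `.001` to duplicated objects, which is why the pattern only anchors the start of the name.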

But what about whole areas of woodland? As I said earlier on, I didn’t want to have to tediously place every tree manually for those. Once again I’m using the idea of a “placeholder” object in the Blender scene. This time it’s merely a base that trees will later sprout from. The code I wrote for the Botanic Gardens Station model can automatically place trees on this base according to some parameters. I can tell it what types of tree I want in this woodland, how densely they cover the ground, how close they’re allowed to be to each other, etc. and it will generate a woodland for me with very little effort on my part!

My terrain model in Blender. That yellow rectangle is an area that I’m marking out to become a woodland.

Of course, there’s always a risk that I won’t like what it comes up with. If so, I can change a “seed” number and it will place the trees differently, though still obeying the same constraints as before.
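A seeded woodland generator along those lines can be sketched as “dart throwing”: scatter candidate points over the base area, rejecting any that land too close to an existing tree. The parameters and the little mulberry32 PRNG here are stand-ins for whatever the real code uses:

```javascript
// Tiny seeded pseudo-random number generator (mulberry32), so the same
// seed always produces the same woodland layout.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Scatter `count` trees over a width x depth rectangle, keeping every
// pair at least minDist apart. Gives up after a bounded number of
// attempts in case the area is too crowded to fit them all.
function placeWoodland(width, depth, count, minDist, seed) {
  const rand = mulberry32(seed);
  const trees = [];
  let attempts = 0;
  while (trees.length < count && attempts < count * 50) {
    attempts++;
    const x = rand() * width, z = rand() * depth;
    const tooClose = trees.some(
      (t) => (t.x - x) ** 2 + (t.z - z) ** 2 < minDist * minDist
    );
    if (!tooClose) trees.push({ x, z });
  }
  return trees; // same seed => same layout
}
```

Changing the seed reshuffles the layout while the density and spacing constraints stay the same, which matches the behaviour described above.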

Back in the game engine, my character can now go for a walk in the woods.

So now the game has a slightly more interesting landscape! There are still many more elements to be added of course, but my next task is going to be to speed it up. It’s already slowing my laptops down noticeably when I add a woodland to the scene (though my nice big Linux desktop PC with proper graphics card doesn’t even break a sweat), so that needs to be addressed before I go too much further.

* may not actually be a larch

Game Project part 3: An animated discussion

… well, a blog post about animation, at least 😉 .

Last time I left my preliminary game characters to move around the preliminary game world with their legs gliding across the ground and their arms sticking out like scarecrows’. My task since then has been to get them animated properly.

There are a few different ways of handling animation in 3D. At the most basic level, if you want to make a character model appear to walk, you need to show the model with its arms and legs in different positions each frame to give the illusion of motion. So one method would be just to make a separate 3D model for each animation frame and show them in quick succession, one after another.

This is a pretty simple approach, but it has its drawbacks. Firstly, you’re going to have to make a lot of models! If you want a walking animation that runs at 30fps and is 2 seconds long, you need to edit and export 60 separate models, one for each frame. That’s a lot of work, and if you then want to make a walking animation for a second character, you need to do it all over again. Secondly, a lot of models means a lot of data, and in the case of a web-based game like mine a lot of data is bad news because it’s all got to be transferred over the internet whenever someone plays the game.

These drawbacks can be overcome by using skeletal animation instead. In this case you assign a skeleton (a hierarchy of straight line “bones”) to your character model, as well as a “skin” that describes how the character mesh deforms when the skeleton moves. Then you can create animations simply by determining the shape of the skeleton for each frame, and the mesh will automatically contort into the right shape. This means that you don’t need to store a complete new copy of the mesh for each frame, only the angles of each bone, which is a much smaller amount of data. Even better, as long as all your human models share the same skeleton structure, you can apply the same animations to all of them.

A skeleton “walking” in Blender:

Despite the big advantages of skeletal animation, I hadn’t been planning to use it at first for this game, because it requires some slightly complicated calculations that JavaScript isn’t ideally suited to. But once I’d thought about it a bit I realised that realistically I would have to use it to keep the amount of data involved (and the amount of manual effort to create the animations) manageable, so I coded it up. It wasn’t as bad as I feared and the core code to deform a mesh based on a skeleton only came to about 100 lines of JavaScript in the end.
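The heart of that deformation code is what’s usually called linear blend skinning: each vertex is transformed by every bone that influences it, and the results are blended by the skin weights. Here’s a stripped-down sketch with invented names, using column-major 4x4 matrices as WebGL does, and assuming each bone matrix already combines the current pose with the inverse bind pose:

```javascript
// Transform a 3D point by a column-major 4x4 matrix (w assumed to be 1).
function transformPoint(m, p) {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8] * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9] * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
  ];
}

// Deform one vertex: bones is a list of bone indices (e.g. [3, 7]) and
// weights the matching influence weights (e.g. [0.75, 0.25], summing
// to 1). boneMatrices holds one matrix per bone in the skeleton.
function skinVertex(position, bones, weights, boneMatrices) {
  const out = [0, 0, 0];
  for (let i = 0; i < bones.length; i++) {
    const p = transformPoint(boneMatrices[bones[i]], position);
    out[0] += weights[i] * p[0];
    out[1] += weights[i] * p[1];
    out[2] += weights[i] * p[2];
  }
  return out;
}
```

Run over every vertex each frame, that loop is essentially the whole mesh-deformation step; the remaining work is walking the bone hierarchy to build the matrices from the animation data.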

Even small bugs in the animation code can cause effects that surrealist artists would have loved

You can make animations using Blender’s “Pose Mode” and animation timeline, then save them to BVH (BioVision Hierarchy) files. I wrote (yup, you’ve guessed it) yet another converter tool to convert these into binary files for the game engine, as well as extending my previous Collada converter to include the skeleton and skin information from MakeHuman. MakeHuman has various built-in skeletons that you can add to your human. I use the CMU skeleton, partly because at 31 bones it’s the simplest one on offer, and partly because it works with some nice ready made animations that I’ll talk about in a bit.

Here’s a simple animation of a character’s head turning that I made in Blender. It probably won’t win me any awards, but it served its purpose of reminding me how to create animations:

I didn’t, however, make that “walking skeleton” animation that I showed earlier. It came from a great animation resource, the Carnegie Mellon University Motion Capture Database. This contains a huge library of animations captured by filming real people with ping pong balls* attached to them performing various actions, and they’re free to use for any purpose. I will probably use some of these in the game, though they probably won’t have all the animations I need and I’ll still have to make some myself, so I’m a bit worried that the CMU ones will show mine up as pretty rubbish! Still, we’ll get to that later.

Here’s one of my game characters walking with an animation from the CMU database:

With the animation in place it suddenly starts to look much more like an actual game, though admittedly a pretty dull one at this point!

Next I think I’ll turn my attention to re-organising the code a bit so that it can better manage all the assets required for an actual game – not the most glamorous of tasks but it needs to be done and will be well worth it. Then I’ll probably switch back to working on the game world and add something more interesting than a bare landscape for the characters to explore! Hopefully talk to you again soon.

(In case anyone’s interested, the current JavaScript game code runs to 1,558 lines).

* may not actually be ping pong balls

Game Project part 2: I guess this is character building

Last time I worked out a method of editing terrain using Blender, exporting it and then rendering it using JavaScript and WebGL. This time we’re going to add something a bit more interesting: namely a character!

Now, the characters in my game are going to be humans, and modelling 3D humans is not an easy thing to do unless you’re a pretty experienced 3D artist (which I’m not). Fortunately for me and others like me, there’s a great little free program called MakeHuman that does most of the difficult bits for you! In fact, I would go so far as to say that I probably wouldn’t be building this game at all if it wasn’t for MakeHuman, because I’d be worried about my character models looking so depressingly awful that they’d ruin the whole thing.

Making a male character in MakeHuman

MakeHuman is (to me, at least) one of those tools that’s so amazing that it’s almost difficult to believe it really exists. Basically, you start it up and you’re confronted with a 3D human model and a bunch of different controls (mostly sliders) that you can use to change various properties of the human. The sliders on the initial screen control very important properties that affect the shape of the entire human: here you can control gender (a continuum between male and female rather than a binary choice), age (from 1 to 90 years), height, weight, race (as a mix of African, Asian and Caucasian) and a handful of other things. But if you drill down into the other tabs, there are sliders to change just about every detail you can imagine (for example, there are 6 sliders just in the “Nose size” category, and there are also “Nose size detail” and “Nose features” categories!). The quality of the generated models is very good indeed.

As well as giving you a huge amount of control over the shape of your human model, MakeHuman also provides a wealth of other useful features. On the “Materials” tab you can assign skin and eye textures to the model, and there’s also a “Pose/Animate” tab that controls the character’s pose and allows you to add a skeleton for skeletal animation (more on that in a future post). And unless you’re exclusively making naked, hairless characters (not that there’d be anything wrong with that 😉 ), you’ll definitely want to visit the “Geometries” tab to add some hair and clothes to your model. If MakeHuman has a weakness it’s probably that the selection of hair, clothes and other accessories included is a bit sparse, but you can make your own using Blender or download ones other people have made from the MakeHuman Community site, which has a much better selection.

Making a female character in MakeHuman

As you’ve probably guessed by now, I rather like MakeHuman! I’m only really scratching the surface of it here as this is supposed to be a blog about my game rather than about MakeHuman, but there’s plenty more information on the MakeHuman Community website and elsewhere online if you’re interested.

The licensing of MakeHuman is set up so that as long as you use an unmodified official build of the software and export using the built-in exporters, the resulting model is under the Creative Commons Zero licence, allowing you to do anything you want with it, even incorporate it into a commercial product. In order to get my character models out of MakeHuman and into my JavaScript game engine, I decided to use a similar approach to that used for the terrain last time: export to a standard file format and then write a converter program to convert to a compact binary format for the engine.

MakeHuman supports the same OBJ format that I used for exporting the terrain from Blender, but I didn’t use it this time; it doesn’t support all the features of MakeHuman and although this limitation wouldn’t be a problem right now, it will become a problem in the near future. Instead I used Collada, a complex XML-based format that captures a lot more information than OBJ does. Loading Collada files is a lot more involved than loading OBJ, but luckily I already had some C++ code from a previous project that was capable of extracting everything I needed. I modified this to write out the important data (at this point just the basic 3D mesh for the human body as well as accessories like the hair and clothes) to a binary file that I could load into my engine.

I also had to make quite a lot of changes to the engine at this point. When I wrote my last blog post, all it could do was display a single terrain mesh with a single texture mapped onto it. I rewrote the code to include a scene graph system, allowing multiple models to be placed in the 3D world in a hierarchical fashion, and also to support multiple textures. Then I wrote a loader for the models I converted from MakeHuman and added my human to the scene.
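A scene graph boils down to nodes that combine their own transform with their parent’s as you walk down the tree. A minimal sketch of the idea, using translation-only transforms to keep it short (the real engine would use full 4×4 matrices, and this class is just an illustration):

```javascript
// Minimal scene graph node: each node has a local offset and children,
// and its world position is the parent's world position plus its own offset.
class SceneNode {
  constructor(x = 0, y = 0, z = 0) {
    this.local = [x, y, z];
    this.children = [];
  }
  add(child) {
    this.children.push(child);
    return child;
  }
  // Walk the tree, computing world positions and calling visit() on each node.
  traverse(visit, parentWorld = [0, 0, 0]) {
    const world = [
      parentWorld[0] + this.local[0],
      parentWorld[1] + this.local[1],
      parentWorld[2] + this.local[2],
    ];
    visit(this, world);
    for (const child of this.children) child.traverse(visit, world);
  }
}
```

The hierarchical part is what makes things like “hair attached to a head attached to a body” work: moving a parent automatically carries all its children along.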

Male character in the game world

(The meshes exported from MakeHuman contain a lot of faces and are very detailed, probably too detailed for my purposes. Fortunately MakeHuman gives you the option of using a less detailed but rougher-looking “proxy” mesh in place of the full mesh, so that’s what I’m doing for my characters to keep file sizes small and rendering reasonably fast. However, the clothes and hair meshes are still very detailed, so I suspect I’ll end up making my own, both to avoid bogging the game down with too many polygons and to make them look just how I want).

At this point (once I’d ironed out the inevitable bugs) I had a character in my game world, but all it could do was stand there. I wanted some movement, so I added code allowing me to move the human across the terrain using the keyboard, keeping the model at ground level at all times. I also added a very basic “floating camera” that would follow along behind the character. All of this (the models, the physics, the camera) is very preliminary and will need a lot of improvement in the future, but right now it’s quite cool to see the humans and the terrain working together in the game engine like this.
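Keeping the character glued to the terrain just means sampling the heightmap at the character’s position each frame; bilinear interpolation between the four surrounding grid points avoids visible stair-stepping as the character crosses grid lines. A sketch of the sampling function, assuming a simple row-major height grid (the function and layout are my illustration, not necessarily how the engine stores things):

```javascript
// Bilinearly interpolate a height from a row-major grid of heights.
// heights: array of size width * depth; (x, z) in grid units,
// assumed to lie inside the grid (0 <= x < width-1, 0 <= z < depth-1).
function sampleHeight(heights, width, x, z) {
  const x0 = Math.floor(x), z0 = Math.floor(z);
  const fx = x - x0, fz = z - z0;
  const h00 = heights[z0 * width + x0];
  const h10 = heights[z0 * width + x0 + 1];
  const h01 = heights[(z0 + 1) * width + x0];
  const h11 = heights[(z0 + 1) * width + x0 + 1];
  // Interpolate along x on both rows, then along z between the results.
  const top = h00 + (h10 - h00) * fx;
  const bottom = h01 + (h11 - h01) * fx;
  return top + (bottom - top) * fz;
}
```

Each frame the character’s y coordinate is simply set to the sampled height, and the floating camera can use the same function to avoid clipping through hills.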

Female character in the game world

“But”, you might say, “Why are their arms sticking out like scarecrows’?”. If I’d been arsed to make a video rather than just post still screenshots, you would probably also comment on the fact that they’re sliding along the ground without their legs moving at all. Fear not! My very next step will be to add some animation to the characters. I was originally planning to have that done for this post but once I realised how much work it would be I decided to split it off, so stay tuned for that.

 

Game Project part 1: Do you think it’s going terrain today?

Last time I talked about the new game project I’m planning to start. I was feeling quite enthusiastic about it and I had a bit of time while I was away in the caravan, so I decided it was time to actually start doing stuff on it!

I won’t say too much about the actual concept of the game yet, and in fact it might still change a bit before it’s finished, but it’s going to be set in a 3D town that you can wander round. So one of the first things to do will be to get the bare bones of a game engine running that can display the 3D world using WebGL, and also get an editing pipeline working so that I can create and edit the environment.

For most of the 3D editing I’m planning to use Blender. It’s free, it’s very powerful, it has a great community, it runs on almost everything and I already know how to use it, so what’s not to like? At some point I might want a more customised editing experience, maybe either writing a plugin or two for Blender or adding an editing mode to the game engine itself, but for the moment vanilla Blender will do.

The first element of the environment that I’m going to focus on is the actual ground, since it’s (literally) the foundation on which everything else will be built. I could model the ground as a standard 3D mesh, but it would be more efficient to treat it as a special case: it’s basically a single plane but with variations in height and texture across it, so we can store it as a 2D array of height values, plus another 2D array of material indices. My plan for the ground was as follows:

  • Add the ground as a “grid” object in Blender, and model the height variations using Blender’s extensive array of modelling tools
  • Export the geometry in OBJ format (a nice simple format for doing further conversions on)
  • Write a converter program in C++ to convert the OBJ file into a compact binary format containing the height values for each point and the material values for each square
  • Create a single texture image containing tiled textures for all the materials used
  • Write JavaScript code to parse the binary file and actually display the terrain!

(We could load and parse the OBJ file directly in JavaScript, but OBJ files are significantly larger than the binary format, and size matters when working in a browser environment, because every data file has to be downloaded over the internet when running the game).
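On the JavaScript side, a compact binary file is easy to read with a DataView. The layout below (a small header followed by the height array and the per-square material array) is a hypothetical illustration of the idea rather than my converter’s actual format:

```javascript
// Parse a hypothetical binary terrain file, little-endian:
//   uint16 width, uint16 depth,
//   width * depth float32 height values (one per grid point),
//   (width - 1) * (depth - 1) uint8 material indices (one per square).
function parseTerrain(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const width = view.getUint16(0, true);  // `true` = little-endian
  const depth = view.getUint16(2, true);
  let offset = 4;
  const heights = new Float32Array(width * depth);
  for (let i = 0; i < heights.length; i++, offset += 4) {
    heights[i] = view.getFloat32(offset, true);
  }
  // The material bytes can be viewed in place without copying.
  const materials = new Uint8Array(arrayBuffer, offset, (width - 1) * (depth - 1));
  return { width, depth, heights, materials };
}
```

The heights then feed straight into a vertex buffer, and the material indices decide which part of the texture image each square uses.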

Editing terrain in Blender

The terrain work went reasonably smoothly once I got started. Editing the heights as a Blender grid worked well, with the proportional editing tool being very useful. Writing the C++ converter tool didn’t take too long, and the binary terrain files it creates are about 10 times smaller than the original OBJ files exported from Blender, so it’s well worth doing the conversion.

Terrain rendered in WebGL

Writing a WebGL renderer for the terrain was a bit more involved. The main problem I ran into was an unexpected one: dark lines appeared along the edges of the terrain “tiles” where they should have joined up seamlessly. I eventually traced this to my decision to store all of the ground textures in a single image. This works fine for the most part, but I hadn’t foreseen that it would interact badly with the filtering WebGL uses to make textures look smooth at different scales. The filtering causes a slight blurring effect, and when multiple textures sit side by side in a single image, their edges “bleed” into each other slightly.

I solved this by putting 4 copies of each texture into the image, in a 2×2 layout, and mapping the centre section onto the terrain, so that the blurred edges are never used. This reduces the amount of texture data that can be stored in a single image, but it’s still better than storing each texture in a separate image and having to waste time switching between them when rendering.
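Because each texture tiles seamlessly, the centre section of a 2×2 block (from a quarter to three quarters of the way across) contains exactly one full copy of the texture, just shifted by half a period. Mapping into it is a small UV calculation; a sketch, assuming a square atlas with `tilesPerRow` duplicated tiles per row (names and layout are my illustration):

```javascript
// Given a material index and a UV coordinate in [0, 1] within one tile,
// compute the atlas UV that lands in the centre section of that material's
// 2x2-duplicated tile, so filtering never samples across a tile boundary.
function atlasUV(material, u, v, tilesPerRow) {
  const tileSize = 1 / tilesPerRow; // size of one 2x2-duplicated tile in the atlas
  const tileU = (material % tilesPerRow) * tileSize;
  const tileV = Math.floor(material / tilesPerRow) * tileSize;
  // The centre section spans from 1/4 to 3/4 of the duplicated tile.
  return [
    tileU + tileSize * (0.25 + u * 0.5),
    tileV + tileSize * (0.25 + v * 0.5),
  ];
}
```

These adjusted UVs are computed once when the terrain mesh is built, so there’s no extra per-frame cost.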

Now that it’s done I’m reasonably happy with how it looks. I was a bit worried that the height differences might distort the textures too much, but they actually don’t seem to. I do plan to add some additional texture images for variety, and it should also look a lot better once some of the other elements of the scene (buildings, roads, vegetation, etc.) are in place.

Another shot of the rendered terrain

The WebGL renderer is still in its very early stages; right now all it can do is render a single 3D terrain object with a plain sky blue background, illuminated by a single directional light source, and allow me to move the camera around for testing. Obviously it’ll need a lot of other stuff added to enable it to show everything else required for the game, as well as to make things look a bit nicer and run a bit faster – but we’ll get onto that next time.

(Incidentally, the texture images are all from textures.com, a great resource for anyone doing anything 3D related. You can get loads of textures of all sorts from there and they’re free to use for most purposes).

New game project

I’ve decided it’s time to start a new game project. I haven’t done one (well, not a proper one) in a few years but things now seem to be nudging me back in that direction.

A screenshot from the first full game I created. Yes, it’s a fan-made Dizzy game for the Spectrum.

Since the Union Canal Unlocked project finished a year or so ago, I’ve been working on a 3D project in my spare time, but I’ve become frustrated that it’s not really going anywhere, at least not very fast. I had very ambitious goals for it and maybe I’m just starting to realise how long it would realistically take for me to achieve them. I may go back to it at some point, but right now I’m getting tired of pouring time and effort into code that may not actually produce any interesting output for several months or even years.

At the same time, a few things have happened that reminded me how much I used to enjoy making games. I read through an old diary from the time when I was making my first one (well, the first one I actually finished), back in 1994, which seems impossibly long ago now. It brought back the feeling of achievement and progress I used to get from making another screen or another graphic. I’ve also recently played through a game that a friend made a few years ago, and another friend started studying game design just a few weeks ago. It feels like the right time to go back to it.

My second game, also for the Spectrum. This was going to have a ridiculously ambitious 56 levels but I only got around to making 6

Of course, it’s going to be challenging to find the time, especially with our new arrival in the household! But in a way that just makes me more determined to use my scarce time more effectively, on something that I’ll actually find rewarding, rather than trying to force myself to work on something I’ve lost interest in. Even if I only manage to do a little bit each week, I’ll get there eventually.

I have a rough plan for the new game, which will no doubt get refined and altered a lot once I get started on it. It’s going to be my first 3D game (except for a little joke one I made late last year), something I’ve shied away from in the past mainly due to the additional complexity of 3D asset creation, but after actually completing some 3D models in the last few years, I feel a bit more confident that I can do it, and I think it will fit my concept better.

My third game, this time a historical Scottish game for DOS PCs. I only finished one level of this one

I’ve decided to write it to run in a browser, using JavaScript and WebGL. This will have its pros and cons. On the plus side, it’s technology I already have quite a bit of experience of; the game will automatically work on pretty much every platform without much extra effort on my part; people won’t have to install it before playing it; and not using a ready-made game engine will give me freedom to do everything exactly the way I want (plus I find tinkering with the low level parts of the code quite fun!). On the minus side it likely won’t run as fast as it would have as a “native” app, though I don’t see this being a huge problem in practice as what I have in mind shouldn’t be too demanding; and building the engine from scratch will take quite a lot of work.

To begin with I’m just going to target computers rather than tablets and phones. The control system I have in mind will work with keyboards and mice but not so well with touchscreens. At some point later on I might add a touch control scheme since most of the rest of the code should work fine on touch devices.

“Return of the Etirites”, probably the best game I ever made. It’s basically a rip-off of Mystic Quest on the Gameboy

I’m intending to write a series of posts on here to chronicle my progress. Of course, it’s always a bit dangerous to commit to something like this publicly, but that’s part of my reason for wanting to do it… I hope it will encourage me to actually do some stuff and not just think about it! And it will give me something nice and constructive to write blog posts about, instead of Brexit 😉 . It might take me a while to get the first post up, because as anyone who’s used WebGL (or done any OpenGL coding without using the fixed pipeline) knows, it takes quite a lot of code to even display anything at all. But once the basics are done it should be possible to build on it incrementally and progress a bit more rapidly.

Wish me luck!