Making the Online Botanic Gardens Station Model (Part 2: The Viewer)

Last time, I talked about how the 3D model itself was made. In this post, I’ll discuss how I embedded it into a web page so it can be explored in a web browser.

Not so long ago, it was difficult or impossible to produce real-time 3D graphics in a web browser, at least if you wanted your page to work in a variety of browsers without requiring any special plug-ins. That’s all changed with the advent of WebGL, which allows the powerful OpenGL graphics library to be accessed from JavaScript running in the browser. WebGL is what’s used to render the Botanic Gardens Station model.
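
To give a sense of how directly WebGL is exposed to JavaScript, here’s a minimal sketch (not the viewer’s actual code) that grabs a WebGL rendering context from a canvas element and checks that the browser supports it:

```javascript
// Ask the browser for a WebGL context on a <canvas> element.
// Older browsers exposed it under the 'experimental-webgl' name.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) {
  // No WebGL support: fall back to a static image or an apology.
  console.error('WebGL is not supported in this browser');
}
```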

The finished WebGL viewer

There are already a number of frameworks built on top of WebGL that make it easier to use, but I decided to build on WebGL directly – I would learn more that way, and I would have as much control as possible over how the viewer looked and worked. But before I could get onto displaying any graphics, I needed to somehow get my model out of Blender and into the web environment.

I did this by exporting the model to Wavefront OBJ format (a standard 3D format that’s easy to work with), then writing a Python script to convert the important bits of it to JSON. Initially I had the entire model in a single JSON file, but it started to get pretty big, so I had the converter split it over several files. The viewer loads the central model file when it starts up, then loads the others in the background while the user is free to explore the central part. This (along with a few other tricks, like reducing the number of digits of precision in the coordinates, and omitting the vertex normals and having the viewer calculate them instead) reduces the initial page load time and makes it less likely that people will give up waiting and close the tab before the model even appears.
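
As an illustration of the normals trick, recomputing per-vertex normals in the browser only takes a short function. This is just a sketch, assuming an indexed triangle mesh with a flat position array; the real converter and viewer differ in the details:

```javascript
// Rebuild smooth per-vertex normals for an indexed triangle mesh whose
// normals were left out of the downloaded file to save bandwidth.
// `positions` is a flat [x, y, z, ...] array; `indices` lists triangle corners.
function computeVertexNormals(positions, indices) {
  const normals = new Float32Array(positions.length);
  for (let i = 0; i < indices.length; i += 3) {
    const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
    // Two edges of the triangle, both starting at vertex a.
    const e1x = positions[b] - positions[a];
    const e1y = positions[b + 1] - positions[a + 1];
    const e1z = positions[b + 2] - positions[a + 2];
    const e2x = positions[c] - positions[a];
    const e2y = positions[c + 1] - positions[a + 1];
    const e2z = positions[c + 2] - positions[a + 2];
    // Face normal = cross product of the edges; add it to each corner vertex.
    const nx = e1y * e2z - e1z * e2y;
    const ny = e1z * e2x - e1x * e2z;
    const nz = e1x * e2y - e1y * e2x;
    for (const v of [a, b, c]) {
      normals[v] += nx; normals[v + 1] += ny; normals[v + 2] += nz;
    }
  }
  // Scale each accumulated normal back to unit length.
  for (let i = 0; i < normals.length; i += 3) {
    const len = Math.hypot(normals[i], normals[i + 1], normals[i + 2]) || 1;
    normals[i] /= len; normals[i + 1] /= len; normals[i + 2] /= len;
  }
  return normals;
}
```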

How not to convert quads to triangles

Once the model is loaded and processed, it can be displayed. One feature of WebGL is that (in common with the OpenGL ES API used on mobile devices) it has no built-in support for lighting and shading – all of that has to be coded manually, in shader programs that are compiled onto the graphics card at start-up. While this increases the learning curve significantly, it also allows a lot of control over exactly how the lighting looks. This was useful for the Botanics model – after visiting the station in real life, one of my friends observed that photographing it is tricky due to the high contrast between the daylight pouring in through the roof vents and the dark corners that are in the shade. It turns out that getting the lighting for the model to look realistic is tricky for similar reasons.
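
The compile-at-startup step itself is standard WebGL boilerplate. Each of the shaders described below gets built by something along these lines (a sketch rather than the viewer’s exact code):

```javascript
// Compile a vertex/fragment shader pair and link them into a program.
// WebGL reports compile and link errors via info logs, which is about
// the only debugging help you get with shaders.
function buildProgram(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  }
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}
```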

The final model uses four distinct shader programs:

  1. A “full brightness” shader that doesn’t actually do any lighting calculations and just displays everything exactly as it is in the texture images. This is only used for the “heads-up display” overlay (consisting of the map, the information text, the loading screen, etc.). I tried using it for the outdoor parts of the model as well, but it looked rubbish.
  2. A simple directional light shader. This is what I eventually settled on for the outdoor parts of the model. It still doesn’t look great, but it’s a lot better than the full brightness one.
  3. A spotlight shader. This is used in the tunnels and also in some parts of the station itself. The single spotlight is used to simulate a torch beam coming from just below the camera and pointing forwards. There’s also a bit of ambient light so that the area outwith the torch beam isn’t completely black.
  4. A more complex shader that supports the torch beam as above, but also three other “spotlights” in fixed positions to represent the light pouring in through the roof vents. This is only used for elements of the model that are directly under the vents.

The full brightness shader in all its horrible glory

Although there’s no specular reflection in any of the shaders (I suspect it wouldn’t make a huge difference, as there aren’t many shiny surfaces in the station), the two with the spotlights are still quite heavyweight – for the torch beam to appear properly circular, almost everything has to be done per-pixel in the fragment shader. I’m not a shader expert, so there’s probably scope for making them more efficient, but for now they seem to run acceptably fast on the systems I’ve tested them on.
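
To give a flavour of the per-pixel work, here’s a simplified, hypothetical torch-beam fragment shader (GLSL source held in a JavaScript string, ready for compilation). The real shaders are more involved, and the uniform names here are made up:

```javascript
const torchFragmentSource = `
  precision mediump float;
  uniform sampler2D uTexture;
  uniform vec3 uTorchPosition;   // just below the camera
  uniform vec3 uTorchDirection;  // normalised, pointing forwards
  uniform float uCutoff;         // cosine of the beam's half-angle
  varying vec3 vWorldPosition;   // interpolated from the vertex shader
  varying vec2 vTexCoord;

  void main() {
    // How closely does the direction to this pixel line up with the beam axis?
    vec3 toFragment = normalize(vWorldPosition - uTorchPosition);
    float alignment = dot(toFragment, uTorchDirection);
    // Smoothly fade the beam edge instead of using a hard circular cutoff.
    float beam = smoothstep(uCutoff, uCutoff + 0.02, alignment);
    // A little ambient light so areas outwith the beam aren't pure black.
    float ambient = 0.15;
    vec4 texel = texture2D(uTexture, vTexCoord);
    gl_FragColor = vec4(texel.rgb * (ambient + beam), texel.a);
  }
`;
```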

Can’t see the wood or the trees

In Part 1, I mentioned that the trees weren’t modelled in Blender like the rest of the model was. I considered doing this, but realised it would make the already quite large model files unacceptably huge. (Models of organic things such as plants, animals and humans tend to require far more vertices and polygons to look any good than models of architecture do). Instead I chose to implement a “tree generator” in JavaScript – so instead of having to save all of the bulky geometry for the trees to the model file, I could save a compact set of basic parameters, and the geometry itself would be generated in the browser and never have to be sent over the internet.
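
The saving comes from the fact that a handful of numbers stands in for thousands of vertices. The sketch below uses made-up parameter names and a much cruder recursion than the real generator, but it shows the shape of the approach:

```javascript
// A compact, illustrative parameter set: this is all that would need to
// travel over the network for one species of tree.
const exampleTreeParams = {
  trunkLength: 7.0,      // metres, matching the model's one-grid-square-per-metre scale
  branchLevels: 3,       // how many times branches subdivide
  branchesPerParent: 4,  // child branches sprouting from each branch
  lengthRatio: 0.6,      // each level's branches are 60% as long as the parent's
};

// Grow an abstract list of branch segments from the parameters.
// A real generator would also compute positions, rotations and radii,
// and turn each segment into cylinder geometry.
function growBranch(params, level, length, out) {
  out.push({ level: level, length: length });
  if (level >= params.branchLevels) return;
  for (let i = 0; i < params.branchesPerParent; i++) {
    growBranch(params, level + 1, length * params.lengthRatio, out);
  }
}

const segments = [];
growBranch(exampleTreeParams, 0, exampleTreeParams.trunkLength, segments);
// `segments` now describes the whole tree, generated entirely in the browser.
```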

A Black Tupelo with no leaves

The generator is based on the well-known algorithm described in this paper. It took me weeks to get it working right, and by the end I never wanted to see another rotation matrix again as long as I lived. I wouldn’t be surprised if it fails for some obscure cases, but it now works for the example trees in the paper, and produces trees for the Botanics model that are probably better looking than anything I could model by hand. I didn’t mean to spend so much time on it, but hopefully I’ll be able to use it again for future projects, so it won’t have been wasted time.

A Black Tupelo with leaves

(Blender also has its own tree generator based on the same algorithm, called Sapling. I didn’t use it, as it would have caused the same file-size problem as modelling the trees manually in Blender would.)

Spurred on by my success at generating the trees programmatically (eventually!), I decided to apply a similar concept to generating entire regions of woodland for the cutting at the Kirklee end of the tunnel. Given a base geometry to sprout from and some parameters to control the density, the types of trees to include, etc., the woodland generator pseudo-randomly places trees and plants into the 3D world, again only requiring a compact set of parameters to be present in the model file.
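
The important detail is that the placement is pseudo-random but repeatable: given the same seed and parameters, every visitor’s browser grows exactly the same woodland. Here’s a sketch of the idea, with illustrative names (mulberry32 is a small, widely used seedable random number generator):

```javascript
// A tiny seedable PRNG, so the "random" tree placement is deterministic.
function mulberry32(a) {
  return function () {
    let t = (a += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Scatter trees over a rectangular patch of ground. The real generator
// sprouts from arbitrary base geometry; a flat rectangle keeps the sketch short.
function generateWoodland({ seed, treeCount, treeTypes, minX, maxX, minZ, maxZ }) {
  const rand = mulberry32(seed);
  const trees = [];
  for (let i = 0; i < treeCount; i++) {
    trees.push({
      type: treeTypes[Math.floor(rand() * treeTypes.length)],
      x: minX + rand() * (maxX - minX),
      z: minZ + rand() * (maxZ - minZ),
      scale: 0.8 + rand() * 0.4, // a little size variation
    });
  }
  return trees;
}
```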

The viewer also contains a texture overlay system, which is capable of adding graffiti, dirt, mineral deposits or whatever to a texture after it’s been downloaded. This is achieved by having a second, hidden HTML5 canvas on the page, on which the textures are composited before being sent to the GPU. (The same hidden canvas is also used for rendering text before it’s overlaid onto the main 3D view canvas, since the 2D text printing functions can’t be used directly on a 3D canvas).
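
In sketch form (the function and its arguments are illustrative), the compositing step looks something like this; conveniently, a 2D canvas can be handed straight to WebGL’s texImage2D as a pixel source:

```javascript
// Draw an overlay (graffiti, dirt, etc.) on top of a base texture image
// using a hidden 2D canvas, then upload the result as a WebGL texture.
function compositeTexture(gl, baseImage, overlayImage) {
  const canvas = document.createElement('canvas'); // the hidden canvas
  canvas.width = baseImage.width;
  canvas.height = baseImage.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(baseImage, 0, 0);
  // The overlay is mostly transparent, so it compresses well in transit.
  ctx.drawImage(overlayImage, 0, 0, canvas.width, canvas.height);

  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
  gl.generateMipmap(gl.TEXTURE_2D); // assumes power-of-two texture sizes
  return texture;
}
```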

Why not just have pre-overlaid versions of the textures and download them along with the other textures? That would work, but it would increase the amount of data that needs to be downloaded: if you transferred both graffiti’d and non-graffiti’d versions of a brick wall texture (for example), you’d be transferring all of the detail of the bricks themselves twice. If instead you create the graffiti’d version in the browser, you can get away with transferring the brick texture once, along with a mostly transparent (and therefore much more compressible) file containing the graffiti image. You also gain flexibility, as you can move the overlays around much more easily.

A selection of the station model’s many items of graffiti

The rest of the code is reasonably straightforward. Input is captured using standard HTML event handlers, and the viewpoint moves through the model along the same curve used to apply the curve modifier in Blender. Other data in addition to the model geometry (for example the information text, the parameters and positions for the trees, etc.) is incorporated into the first JSON model file by the converter script so that it can be modified without changing the viewer code.
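
The movement boils down to a single parameter along a sampled version of that curve. Something like this sketch (with made-up names and key bindings) captures the idea:

```javascript
// `curvePoints` stands in for [x, y, z] samples along the station curve
// (illustrative; the real data layout may differ).
const curvePoints = [/* [x, y, z] samples along the curve */];
let t = 0; // 0 = one end of the curve, 1 = the other

// Standard HTML event handlers nudge the viewpoint along the curve.
document.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowUp')   t = Math.min(1, t + 0.005);
  if (event.key === 'ArrowDown') t = Math.max(0, t - 0.005);
});

// Interpolate linearly between the two samples either side of t.
function cameraPosition() {
  const f = t * (curvePoints.length - 1);
  const i = Math.min(Math.floor(f), curvePoints.length - 2);
  const a = curvePoints[i], b = curvePoints[i + 1], u = f - i;
  return [a[0] + (b[0] - a[0]) * u,
          a[1] + (b[1] - a[1]) * u,
          a[2] + (b[2] - a[2]) * u];
}
```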

So that’s the viewer. Having never used WebGL and never coded anything of this level of complexity in JavaScript before, I’m impressed at how well it actually works. I certainly learned a lot in the process of making it, and I’m hoping to re-use as much of the code as possible for some future projects.


Making the Online Botanic Gardens Station Model (Part 1: The Model)

One of my “fun projects” this year has been to make an interactive model of the abandoned Botanic Gardens Station in Glasgow. Although I’ve dabbled in 3D modelling before, including making a documentary video about Scotland Street Tunnel last year, the Botanics project turned out to be by far the most complicated 3D thing I’ve made, and writing a viewer for it was by far the most complicated bit of web coding I’ve done. It’s been a lot of fun as well as a hell of a learning experience, so I thought I’d write it up here in case anyone is interested.

The finished model, viewed in Chrome for Linux

In Part 1, I’ll talk about making the actual 3D model. Part 2 will cover the viewer code that actually makes it possible to explore the model from the comfort of your web browser.

I made the station model using Blender, a very capable free and open-source 3D package. While software and hardware now exist that can generate a 3D model automatically from photographs or video, I didn’t have access to them (or knowledge of how to use them), and I’m not sure how well they would work in a confined and oddly shaped space like the Botanic Gardens Station anyway. So I did it the old-fashioned way instead, using the photos I took when I explored the station as a reference and crafting the 3D model to match using Blender’s extensive modelling tools.

The whole model in Blender

I tried to keep the dimensions as close to reality as possible, using one grid square in Blender per metre, referring to the published sizes of the station and tunnels where available, and estimating the scale of everything else as best I could.

It was actually surprisingly easy and quick to throw together a rough model of the station itself – most of the elements (the platforms, stairs, walls, roof, etc.) are made up of fairly simple geometric shapes and I had the basic structure there within a couple of hours. But as with a lot of these things, the devil is in the details and I spent countless more hours refining it and adding the trickier bits.

The beginnings of the station model

Because there’s quite a lot of repetition and symmetry in the station design, I was able to make use of some of Blender’s modifiers to massively simplify the task. The mirror modifier can be used for items that are symmetrical, allowing you to model only one side of something and have the mirror image of it magically appear for the other side. (In fact, apart from the roof the station is almost completely symmetrical, which saved me a lot of modelling time and effort). The array modifier is even more powerful: it can replicate a single model any number of times in any direction, which allowed me to model a single short section of roof or tunnel or wall and then have it stretch away into the distance with just a few clicks.

Tunnel, modelled with array modifier

Finally, the curve modifier was very valuable. The entire station (and much of the surrounding tunnel) is built on a slight curve, which would be a nightmare to model directly. But thanks to the curve modifier, I was able to model the station and tunnels as if they were completely straight, and then add the curve as a final step, which was much easier. (I still don’t find the curve modifier very intuitive; it took quite a lot of playing around and reading tutorials online to get the effect I wanted, and even now I don’t fully understand how I did it. But the important thing is, it works!).

Tunnel + curve modifier = curving tunnel

Texturing the model (that is, applying the images that are “pasted onto” the 3D surfaces to add details and make them look more realistic) turned out to be at least as tricky as getting the actual geometry right. The textures had been a major weak point of my Scotland Street model and I wanted much better ones for the Botanics. Eventually I discovered the great texture resource at textures.com, which had high quality images for almost everything I needed, and under a license that allowed me to do what I wanted with them – this is where most of the textures for the model came from. The remainder are either hand drawn (the graffiti), extracted from my photos (the tunnel portal exteriors and the calcite), or generated by a program I wrote a while ago when I was experimenting with Perlin Noise (some of the rusted metal).

The fiddly part was assigning texture co-ordinates to all the vertices in the model. I quickly discovered that it would have been much easier to do this as I went along, rather than completing all the geometry first and then going back to add textures later on (especially where I’d “applied” array modifiers, meaning that I now had to assign texture co-ordinates individually for each copy of the geometry instead of just doing it once). Lesson learned for next time. At first I found this stage of the process really difficult, but by the time I’d textured most of the model I was getting a much better feel for how it should be done.

The model in Blender, with textures applied

(The trees and bushes weren’t in fact modelled using Blender… more about them next time!).