Last time, I talked about how the 3D model itself was made. In this post, I’ll discuss how I embedded it into a web page so it can be explored in a web browser.
Not so long ago, it was difficult or impossible to produce real-time 3D graphics in a web browser, at least if you wanted your page to work in a variety of browsers without requiring any special plug-ins. That’s all changed with the advent of WebGL, which allows the powerful OpenGL graphics library to be accessed from JavaScript running in the browser. WebGL is what’s used to render the Botanic Gardens Station model.
There are already a number of frameworks built on top of WebGL that make it easier to use, but I decided I was going to build on WebGL directly – I would learn more that way, and have as much control as possible over how the viewer looked and worked. But before I could get on to displaying any graphics, I needed to somehow get my model out of Blender and into the web environment.
I did this by exporting the model to Wavefront OBJ format (a very standard 3D format that’s easy to work with), then writing a Python script to convert the important bits of it to JSON. Initially I had the entire model in a single JSON file, but it started to get pretty big, so I had the converter split it over several files. The viewer loads the central model file when it starts up, then loads the others in the background while the user is free to explore the central part. This (along with a few other tricks, like reducing the number of digits of precision in the files and omitting the vertex normals so the viewer can calculate them instead) reduces the initial page load time and makes it less likely that people will give up waiting and close the tab before the model even appears.
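To give a rough idea of how that loading strategy can look in JavaScript, here’s a minimal sketch – the file names, the JSON layout and the `addChunkToScene` helper are assumptions for illustration, not the viewer’s actual code:

```javascript
// Minimal sketch of the split-file loading strategy (file names, JSON layout
// and addChunkToScene are placeholders, not the viewer's actual format).
function loadJSON(url, onLoad) {
  var req = new XMLHttpRequest();
  req.open('GET', url);
  req.onload = function () { onLoad(JSON.parse(req.responseText)); };
  req.send();
}

// Load the central chunk first so the user can start exploring straight away...
loadJSON('model/central.json', function (chunk) {
  addChunkToScene(chunk);   // assumed helper: builds the WebGL buffers
  // ...then fetch the remaining chunks in the background.
  ['tunnel_west.json', 'tunnel_east.json', 'cutting.json'].forEach(function (name) {
    loadJSON('model/' + name, addChunkToScene);
  });
});

// Vertex normals are omitted from the files; the viewer can recompute a face
// normal per triangle with a cross product (flat shading, for brevity).
function faceNormal(a, b, c) {   // a, b, c: [x, y, z] vertex positions
  var u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  var v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  var n = [u[1] * v[2] - u[2] * v[1],
           u[2] * v[0] - u[0] * v[2],
           u[0] * v[1] - u[1] * v[0]];
  var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]) || 1;
  return [n[0] / len, n[1] / len, n[2] / len];
}
```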
Once the model is loaded and processed, it can be displayed. One feature of WebGL is that (in common with the OpenGL ES API used on mobile devices) it doesn’t have any built-in support for lighting and shading – all of that has to be coded manually, in shader programs that are compiled onto the graphics card at start-up. While this does increase the learning curve significantly, it also allows a lot of control over exactly how the lighting looks. This was useful for the Botanics model – after visiting the station in real life, one of my friends observed that photographing it is tricky due to the high contrast between the daylight pouring in through the roof vents and the dark corners that are in shade. It turns out that getting the lighting for the model to look realistic is tricky for similar reasons.
The final model uses four distinct shader programs:
- A “full brightness” shader that doesn’t actually do any lighting calculations and just displays everything exactly as it is in the texture images. This is only used for the “heads-up display” overlay (consisting of the map, the information text, the loading screen, etc.). I tried using it for the outdoor parts of the model as well, but it looked rubbish.
- A simple directional light shader. This is what I eventually settled on for the outdoor parts of the model. It still doesn’t look great, but it’s a lot better than the full brightness one.
- A spotlight shader. This is used in the tunnels and also in some parts of the station itself. The single spotlight is used to simulate a torch beam coming from just below the camera and pointing forwards. There’s also a bit of ambient light so that the area outwith the torch beam isn’t completely black.
- A more complex shader that supports the torch beam as above, but also three other “spotlights” in fixed positions to represent the light pouring in through the roof vents. This is only used for elements of the model that are directly under the vents.
Although there’s no specular reflection in any of the shaders (I suspect it wouldn’t make a huge difference, as there aren’t many shiny surfaces in the station), the two with the spotlights are still quite heavyweight – for the torch beam to appear properly circular, almost everything has to be done per-pixel in the fragment shader. I’m not a shader expert, so there’s probably scope for making them more efficient, but for now they seem to run acceptably fast on the systems I’ve tested them on.
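To give a flavour of what the per-pixel spotlight calculation involves, here’s a cut-down fragment shader embedded in a JavaScript string, plus the standard WebGL compile step. The uniform names, cone angles and ambient level are illustrative guesses rather than the viewer’s actual shaders, and the matching vertex shader (which would supply the varyings) is omitted:

```javascript
// Cut-down per-pixel spotlight fragment shader (uniform names, cone angles and
// ambient level are illustrative guesses, not the viewer's actual code).
var spotlightFragmentShader = [
  'precision mediump float;',
  'uniform sampler2D uTexture;',
  'uniform vec3 uSpotPosition;',    // torch position, just below the camera
  'uniform vec3 uSpotDirection;',   // pointing forwards from the camera
  'varying vec3 vWorldPosition;',   // interpolated per pixel by the GPU
  'varying vec2 vTexCoord;',
  'void main() {',
  '  vec3 toFragment = normalize(vWorldPosition - uSpotPosition);',
  '  float angle = dot(toFragment, normalize(uSpotDirection));',
  // smoothstep gives the beam a soft circular edge; the 0.15 term is the
  // ambient light so areas outwith the beam aren't completely black.
  '  float beam = smoothstep(0.85, 0.95, angle);',
  '  float light = 0.15 + 0.85 * beam;',
  '  vec4 texel = texture2D(uTexture, vTexCoord);',
  '  gl_FragColor = vec4(texel.rgb * light, texel.a);',
  '}'
].join('\n');

// Standard WebGL compile step (gl is an existing WebGLRenderingContext).
function compileShader(gl, type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}
```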
Can’t see the wood or the trees
In Part 1, I mentioned that the trees weren’t modelled in Blender like the rest of the model was. I considered doing this, but realised it would make the already quite large model files unacceptably huge. (Models of organic things such as plants, animals and humans tend to require far more vertices and polygons to look any good than models of architecture do.) Instead, I implemented a “tree generator” in JavaScript – rather than saving all of the bulky geometry for the trees to the model file, I could save a compact set of basic parameters, and the geometry itself would be generated in the browser and never have to be sent over the internet.
The generator is based on the well-known algorithm described in this paper. It took me weeks to get it working right and by the end I never wanted to see another rotation matrix again as long as I lived. I wouldn’t be surprised if it fails for some obscure cases, but it works now for the example trees in the paper, and produces trees for the Botanics model that are probably better looking than anything I could model by hand. I didn’t mean to spend so much time on it, but hopefully I’ll be able to use it again for future projects so it won’t have been wasted time.
(Blender also has its own tree generator, called Sapling, which is based on the same algorithm. I didn’t use it because it would have caused the same file size problem as modelling the trees manually in Blender would.)
Spurred on by my success at generating the trees programmatically (eventually!), I decided to apply a similar concept to generating entire regions of woodland for the cutting at the Kirklee end of the tunnel. Given a base geometry to sprout from and some parameters to control the density, the types of trees to include, etc., the woodland generator pseudo-randomly places trees and plants into the 3D world, again only requiring a compact set of parameters to be present in the model file.
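The placement side of this is conceptually quite simple. Here’s a heavily simplified sketch of the idea – the parameter names, the tiny seeded random number generator and the `generateTree`/`groundHeightAt` helpers are all stand-ins, not the viewer’s actual code. The important point is that the generator is seeded, so the same compact parameters always produce the same wood:

```javascript
// Small deterministic PRNG (mulberry32), so identical parameters always
// produce an identical layout of trees.
function mulberry32(a) {
  return function () {
    var t = a += 0x6D2B79F5;
    t = Math.imul(t ^ t >>> 15, t | 1);
    t ^= t + Math.imul(t ^ t >>> 7, t | 61);
    return ((t ^ t >>> 14) >>> 0) / 4294967296;
  };
}

// params might look like:
// { seed: 42, density: 0.02, species: ['birch', 'sycamore'],
//   area: { x: 0, z: 0, width: 80, depth: 40 } }
function generateWoodland(params, generateTree, groundHeightAt) {
  var rand = mulberry32(params.seed);
  var count = Math.floor(params.area.width * params.area.depth * params.density);
  var trees = [];
  for (var i = 0; i < count; i++) {
    var x = params.area.x + rand() * params.area.width;
    var z = params.area.z + rand() * params.area.depth;
    var species = params.species[Math.floor(rand() * params.species.length)];
    // generateTree and groundHeightAt stand in for the tree generator and the
    // base geometry lookup; both are assumed helpers here.
    trees.push(generateTree(species, x, groundHeightAt(x, z), z, rand));
  }
  return trees;
}
```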
The viewer also contains a texture overlay system, which can add graffiti, dirt, mineral deposits or whatever to a texture after it’s been downloaded. This is achieved by having a second, hidden HTML5 canvas on the page, on which the textures are composited before being sent to the GPU. (The same hidden canvas is also used for rendering text before it’s overlaid onto the main 3D view canvas, since the 2D text-drawing functions can’t be used directly on a 3D canvas.)
Why not just have pre-overlaid versions of the textures and download them along with the others? That would work, but it would increase the amount of data needing to be downloaded: if you transferred both graffiti’d and non-graffiti’d versions of a brick wall texture (for example), you’d be transferring all of the detail of the bricks themselves twice. If instead you create the graffiti’d version in the browser, you can get away with transferring the brick texture once, along with a mostly transparent (and therefore much more compressible) file containing the graffiti image. You also gain flexibility, as you can move the overlays around much more easily.
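Here’s a minimal sketch of the hidden-canvas compositing idea – the URLs are made up and `gl` is assumed to be an existing WebGL context, but the general shape (draw both images onto a 2D canvas, then hand the canvas to `texImage2D`) is the technique described above:

```javascript
// Composite an overlay onto a base texture on a hidden canvas, then upload
// the result to the GPU (URLs are placeholders; gl is an existing
// WebGLRenderingContext).
function loadImage(url) {
  return new Promise(function (resolve) {
    var img = new Image();
    img.onload = function () { resolve(img); };
    img.src = url;
  });
}

Promise.all([loadImage('textures/brick.jpg'), loadImage('overlays/graffiti.png')])
  .then(function (images) {
    var canvas = document.createElement('canvas');   // hidden compositing canvas
    canvas.width = images[0].width;
    canvas.height = images[0].height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(images[0], 0, 0);                                 // base brick texture
    ctx.drawImage(images[1], 0, 0, canvas.width, canvas.height);    // mostly transparent overlay

    // A canvas is a valid source for texImage2D, so the composited result can
    // be uploaded directly. LINEAR/CLAMP settings avoid the power-of-two
    // restrictions on mipmapped textures in WebGL 1.
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  });
```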
The rest of the code is reasonably straightforward. Input is captured using standard HTML event handlers, and the viewpoint moves through the model along the same curve used to apply the curve modifier in Blender. Other data in addition to the model geometry (for example the information text, and the parameters and positions for the trees) is incorporated into the first JSON model file by the converter script, so that it can be modified without changing the viewer code.
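As a rough illustration of how movement along a predefined curve might be wired up, here’s a short sketch – the `modelData.cameraPath` array and the key bindings are placeholders, not the viewer’s actual data, and it just interpolates linearly between the exported curve points:

```javascript
// Keyboard-driven movement along a predefined path (path data and key
// bindings are placeholders, not the viewer's actual code).
var pathPoints = modelData.cameraPath;   // assumed: array of [x, y, z] from the JSON file
var position = 0;                        // parameter along the path
var speed = 0;

document.addEventListener('keydown', function (e) {
  if (e.key === 'ArrowUp') speed = 0.05;      // move forwards along the curve
  if (e.key === 'ArrowDown') speed = -0.05;   // move backwards
});
document.addEventListener('keyup', function () { speed = 0; });

// Called once per frame: linear interpolation between successive path points
// gives the current camera position.
function cameraPosition() {
  position = Math.max(0, Math.min(pathPoints.length - 1.001, position + speed));
  var i = Math.floor(position);
  var t = position - i;
  var a = pathPoints[i], b = pathPoints[i + 1];
  return [a[0] + (b[0] - a[0]) * t,
          a[1] + (b[1] - a[1]) * t,
          a[2] + (b[2] - a[2]) * t];
}
```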
So that’s the viewer. Having never used WebGL and never coded anything of this level of complexity in JavaScript before, I’m impressed at how well it actually works. I certainly learned a lot in the process of making it, and I’m hoping to re-use as much of the code as possible for some future projects.