Lab Notes

Making Movies: Design and Code and the Web Browser

Posted by Helios on October 8, 2015.


There are often several stories within a story or about a story. Take the recently released interactive documentary The Deeper They Bury Me for example. This is a 20-minute online piece featuring the words and thoughts of the late Herman Wallace. One could tell a story about the miscarriage of justice and racial inequality, or speak of the creative collaborations that have gone on over the years to bring this story to light, or we could talk about web-browsers.

The Web Browser?

Perhaps it’s not the sexiest thing in the world to talk about the role the web-browser plays in the interactive story-telling and design process, but let’s strap on our Nerd Goggles and go for it.

We (Helios) see the web-browser as a platform for making movies. Maybe not the kind of movies that you might see in the theatre (yet), but an experience that is cinematic all the same. These are definitely NOT web-sites. Web-sites have buttons, forms, data, and users. The interactive movies we like to make have cameras, edits, pacing, directors, and, instead of users, an audience.

The Browser is a Visual Compositor.

The modern web-browser, i.e. one capable of dealing with a decent set of HTML5 APIs, is the Adobe After Effects of the internet, capable of real-time compositing. Granted, there's not much of a graphical interface for the designer; in this case, it's a good text editor, some nifty JavaScript libraries gleaned from GitHub, some mad Google search skills, and some like-minded nerds close at hand. Clunky perhaps, but with a certain amount of luck and perseverance, one can indeed pull off some pretty cool things.

In the case of The Deeper They Bury Me, a cool thing was to layer transparent video on top of interactive 360 panoramas. We started with one of the oldest tricks in the book: a video split into two halves, the top half as the colour or RGB channel, the bottom half as the mask or Alpha channel. You could use canvas to render the transparency. The operative code would look something like this:

				buffer.drawImage(video, 0, 0);
				// read the colour (top) and alpha (bottom) halves separately;
				// Firefox doesn't like it when the image is bigger than the canvas
				var	image = buffer.getImageData(0, 0, width, height),
					imageData = image.data,
					alphaData = buffer.getImageData(0, height, width, height).data;
				// copy the mask into the colour pixels' alpha channel
				for (var i = 3, len = imageData.length; i < len; i = i + 4) {
					imageData[i] = alphaData[i - 1];
				}
				output.putImageData(image, 0, 0, 0, 0, width, height);

But, alas, this adds a huge processing overhead, as getImageData is computationally expensive, and things grind to a halt when the videos approach full screen. But we can do it with WebGL. Most people who think about WebGL (a strangely small number, to be truthful) think of it as a means of providing 3D rendering in a web-browser, as popularized by the JavaScript library Three.js. But it's super useful for 2D rendering as well. WebGL uses "shaders", which are pieces of code executed on a computer's graphics card that define the position of colour on a shape. They usually work in pairs.

A vertex shader tells us where:

  <script id="2d-vertex-shader" type="x-shader/x-vertex">
    attribute vec2 a_position;
    attribute vec2 a_texCoord;
    uniform vec2 u_resolution;
    varying vec2 v_texCoord;
    void main() {
       vec2 zeroToOne = a_position / u_resolution;
       vec2 zeroToTwo = zeroToOne * 2.0;
       vec2 clipSpace = zeroToTwo - 1.0;
       gl_Position    = vec4(clipSpace * vec2(1, -.5), 0, 1);
       v_texCoord     = a_texCoord; // pass the texCoord to the fragment shader
    }
  </script>

A fragment shader tells us what colour goes where, as dictated by the vertex shader:

 <!-- fragment shader for split videos (16:18 colour on top, alpha on bottom) -->
  <script id="2d-fragment-shader-split" type="x-shader/x-fragment">
    precision mediump float;
    uniform sampler2D u_image;
    varying vec2 v_texCoord;
    void main() {
      vec4 origColour  = texture2D( u_image, vec2( v_texCoord.s, v_texCoord.t/2.0) );
      vec4 alphaColour = texture2D( u_image, vec2( v_texCoord.s, 0.5+v_texCoord.t/2.0) );
      gl_FragColor = vec4( origColour.r, origColour.g, origColour.b, ( alphaColour.r + 0.6) );
    }
  </script>

And then it's printed to screen with JavaScript via this library here. True, your laptop fan will go crazy. But it looks beautiful.
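For the curious, the per-frame part of that printing step looks roughly like this. This is a minimal sketch, not the library's actual code: it assumes a WebGL context (`gl`) and the shader pair above have already been compiled and linked, and the function names are ours.

```javascript
// Sketch only: assumes gl and the shader program are set up, with the
// video quad bound as two triangles (6 vertices).
function drawVideoFrame(gl, video) {
    // upload the current (double-height) video frame as a texture...
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    // ...and let the shaders split it into colour + alpha on the GPU
    gl.drawArrays(gl.TRIANGLES, 0, 6);
}

// run it once per screen refresh
function tick(gl, video) {
    if (!video.paused) drawVideoFrame(gl, video);
    requestAnimationFrame(function () { tick(gl, video); });
}
```

The win over the canvas version is that the per-pixel work now happens on the graphics card rather than in getImageData.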

The intent of this apparent technical hocus-pocus is to blur the lines between what is video, what is still imagery, what is text, and what is code. The important thing here is that together, all these things react to an audience's input, via mouse or touch, and the movement one sees on screen is a result of that interaction.

The Browser is an Audio Mixer.

A large part of the narrative drive of The Deeper They Bury Me is provided by the voice of Herman Wallace himself, as recorded over the course of a number of phone calls, his primary contact with the outside world. Most web-sites are silent, but our interactive movies rely almost as much on sound as they do on visuals to provide story and emotional impact. And quite simply, without Herman's voice, The Deeper They Bury Me would not make much sense.

But people will hear more than just a voice. The audio, like the visuals, is a layered composition of foreground and background elements: the phone call, sounds of outside and inside, scraps of jazz, and laughter and voices. Once again, the modern web-browser can help us out here in the form of our old friend, the Web Audio API, which allows us to load in and play any number of audio tracks, assign each track a volume and a stereo position, and loop it or end it. We can connect our audio mixer to the visuals and audience interaction. And you can too (steal this code).

Everything audio starts like this:

var Mixer = new heliosAudioMixer();

Mixer.createTrack('track1', { source: 'path/to/audio/file' });


This snippet doesn’t really do much, but the important thing to note here is that code likes to talk with other code. Our interactive 360 panorama is essentially a code camera, controlled by mouse or swipe, that provides a first-person point of view. It outputs tracking information to the audio mixer, which is made entirely out of code, and which then positions the specific track in the stereo mix accordingly. In effect: 3D, interactive, immersive sound. And once again, your little laptop, with its fan blazing away and its latest version of Chrome, is making movies.
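Stripped of library plumbing, that connection amounts to a little arithmetic. The sketch below is our own illustration, not the heliosAudioMixer API: it turns the difference between the camera's heading and a sound source's heading into a pan value between -1 (hard left) and 1 (hard right), ready to hand to a track's stereo position.

```javascript
// Sketch: map the panorama camera's heading (degrees) and a sound
// source's heading to a stereo pan position. Hypothetical names,
// not part of heliosAudioMixer.
function headingToPan(cameraHeading, sourceHeading) {
    // signed difference, wrapped into the range -180..180
    var delta = ((sourceHeading - cameraHeading + 540) % 360) - 180;
    // clamp to the ±90° window either side of straight ahead
    var clamped = Math.max(-90, Math.min(90, delta));
    return clamped / 90; // -1 = hard left, 0 = centre, 1 = hard right
}

// (hypothetical call) on every camera move:
// someTrack.pan( headingToPan(camera.heading, 45) );
```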

The Browser as Web App.

Let’s get really gone.

For a project like The Deeper They Bury Me, we want to distance ourselves from the “web-site” paradigm, where one selects an item from a menu in order to leave one page to navigate to another. This is where an application framework like AngularJS comes into play. Now our Nerd Goggles are glowing with fiendish intensity!

Angular describes itself as a “Superheroic JavaScript MVC Framework”. It goes well beyond the scope of this post to discuss Models and Views and Controllers; we’ll leave that to the superheroes. But the important thing to take away is that Angular allows us to create a user-journey through what is called a single-page application. This means, in effect, that stuff comes to the viewer, rather than the viewer going to the stuff. So “navigation” can become “transition”, the audience can stay inside the experience, with a distinct deep-linked URL for every stop along the way. We can also control time. One of the important aspects of The Deeper They Bury Me is that it lasts for exactly 20 minutes, with the intent of providing a story-arc in a non-linear story-telling experience (which often doesn’t have one).
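To make the time-control idea concrete, here's a sketch. It is entirely our illustration: the chapter names and timings are made up, not the project's actual URLs. A fixed 20-minute timeline where each stop gets its own deep link might look like:

```javascript
// Hypothetical timeline: elapsed seconds mapped to deep-linked routes.
var chapters = [
    { at: 0,    url: '/intro' },
    { at: 300,  url: '/the-house' },
    { at: 900,  url: '/the-calls' },
    { at: 1140, url: '/outro' } // the whole piece runs 1200 seconds
];

// which deep link should the address bar show at a given moment?
function chapterAt(seconds) {
    var current = chapters[0];
    for (var i = 0; i < chapters.length; i++) {
        if (chapters[i].at <= seconds) current = chapters[i];
    }
    return current.url;
}
```

In a single-page application, a route change to chapterAt(elapsed) swaps the view in place; the audience transitions rather than navigates.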

What Does It All Mean?

Code, like design, like story-telling, should never be an end in itself, but should be used as a support for the other elements that go into making an experience. And as creators we should learn from what we have created. A lot of what we put into The Deeper They Bury Me are the inelegant products of painfully learned lessons, but they also plant the seeds for other projects moving forward, and give us the groundwork to do other cool things in the browser, like facial recognition, virtual reality (webVR), encryption, file sharing, and astronomical data visualization.


At the end of this story, it all comes back to the world we live in, to a real person, with a real voice, with real things to say, without whom none of the above would mean anything.

Thank you Herman Wallace, wherever you are.