The first in a series of posts about some of our recent experimental work with WebVR and WebGL, working toward an idea of volumetric imagery.
Admixture: the action of adding an ingredient to something else.
Admixture describes much of the Helios work process. As visual designers, we have come to see design and code as something almost like a lens through which we can look into spaces, real or imagined, in potentially different ways than we have come to expect.
Maybe it’s perversity, but we’ve always been interested in creating motion with still imagery, and depth with flatness.
Often this work starts with experiments: visual essays that in and of themselves have no narrative or meaning beyond abstractions like: “How can we make dimensional spaces out of non-dimensional elements like still photos, particles, or even … math!!??”
Our attempts at answering questions like these might not pay the bills directly, an important consideration for design studios like ours, but they do help us make sense of technologies that impact the way we design and tell stories.
This is our take on “Hello World”, a classic coding scenario used by programmers to demonstrate the essential syntax and structure of a computing language. In our version, sheets of Perlin noise do nothing more than drift, collide, and stretch off into the distance.
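For the curious, the idea behind those drifting sheets can be sketched in a few lines. The version below uses 2D value noise, a simpler cousin of true Perlin noise, to displace a flat grid of vertices along z, and slides the field over time so the sheet drifts. All function and parameter names are illustrative assumptions, not the actual Helios source.

```javascript
// Deterministic pseudo-random value in [0, 1) for an integer lattice point.
// (A common sin-based hash; illustrative, not from the Helios code.)
function hash2(ix, iy) {
  const s = Math.sin(ix * 127.1 + iy * 311.7) * 43758.5453;
  return s - Math.floor(s);
}

// Smooth (Hermite) interpolation weight.
function smooth(t) {
  return t * t * (3 - 2 * t);
}

// 2D value noise: bilinear blend of the four surrounding lattice hashes.
function valueNoise(x, y) {
  const ix = Math.floor(x), iy = Math.floor(y);
  const fx = x - ix, fy = y - iy;
  const a = hash2(ix, iy),     b = hash2(ix + 1, iy);
  const c = hash2(ix, iy + 1), d = hash2(ix + 1, iy + 1);
  const ux = smooth(fx);
  const top = a + (b - a) * ux;
  const bottom = c + (d - c) * ux;
  return top + (bottom - top) * smooth(fy); // in [0, 1)
}

// Build one "sheet": a size x size grid of vertices whose z is driven by
// the noise field; advancing `time` slides the field so the sheet drifts.
function makeSheet(size, scale, time) {
  const vertices = [];
  for (let j = 0; j < size; j++) {
    for (let i = 0; i < size; i++) {
      const z = valueNoise(i * scale + time, j * scale + time);
      vertices.push({ x: i, y: j, z });
    }
  }
  return vertices;
}
```

In a WebGL scene these vertices would feed a plane geometry's position buffer, with `time` advanced each animation frame.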
The experience as a whole may not be much in the story-telling department, but it does introduce the glimmerings of how light, patterns, and movement can work together to create a sense of space filled with volume.
Put on your Cardboard or Gear VR if you have one, or just use your imagination if you don’t, and check it out here.
Let’s get representational and see what happens when we start mixing in sheets of photographic information to our clouds of Perlin noise.
Again, a simple and basic experiment in space and volume created entirely out of flat images and particles. Somehow it seemed boring to just create a 3D box and place a code camera inside. This example uses trig functions to build a composite photo and particle mosaic around our code camera. As we shift our point of view, so shift the image planes, re-focusing on a central point behind the camera.
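The trig placement described above reduces to a small amount of math: planes arranged on a circle around the camera with cosine and sine, each one yawed to face a shared focal point. The sketch below shows that arrangement under those assumptions; the names are ours, not the project's.

```javascript
// Place `count` image planes on a circle of radius `radius` around the
// origin (where the code camera sits), each yawed to face `focus`,
// a point given as { x, z } in the horizontal plane.
function placePlanes(count, radius, focus) {
  const planes = [];
  for (let i = 0; i < count; i++) {
    const angle = (i / count) * Math.PI * 2;
    const x = Math.cos(angle) * radius;
    const z = Math.sin(angle) * radius;
    // Yaw so the plane's normal points from its position toward the focal point.
    const yaw = Math.atan2(focus.x - x, focus.z - z);
    planes.push({ position: { x, y: 0, z }, yaw });
  }
  return planes;
}

// As the point of view shifts, re-aim every plane at the (possibly moved)
// focal point -- the "re-focusing" effect described above.
function refocus(planes, focus) {
  for (const p of planes) {
    p.yaw = Math.atan2(focus.x - p.position.x, focus.z - p.position.z);
  }
}
```

In a scene-graph library this yaw would simply set each plane's rotation about the vertical axis (or be replaced by a look-at call).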
From this still abstract, primitive and somewhat fractured reality, possibilities start to present themselves. There is a recognizable space one can move through, but how does that space become more recognizable?
One easy solution would be to use 360-degree imagery, in the form of panoramic spherical projections, either as stills or video. We’ve done this already in the OffshoreVR project, where we created a dark, ominous oil rig by mixing CGI imagery, 3D meshes, and live-action film. Yet for all the immersion offered by the experience, there remains a sense of separation from one’s surroundings, of being inside a sphere (which of course the viewer is), of a lack of volume.
Another, not-so-easy, solution would be to build an entire scene as a 3D model, but a well-known visual paradox arises in this direction: the harder one strives for reality, the less real the intended experience becomes.
Enter photogrammetry: the combination of photo imagery and particles, or rather, a process through which photo imagery becomes particles, and the code that places these particles, and the volume they create, onto a mobile screen.
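To make the "imagery becomes particles" idea concrete, here is a minimal sketch of the final unprojection step. It assumes a per-pixel depth estimate is already available (recovering depth from overlapping photos is the hard part photogrammetry solves) and pushes each pixel through a simple pinhole-camera model, yielding one colored particle per pixel. The `image` interface, `fov` parameter, and function name are all our own assumptions, not the Helios pipeline.

```javascript
// Unproject an image into a particle cloud.
// image: { width, height, depthAt(x, y), colorAt(x, y) }
// fov: vertical field of view of the assumed pinhole camera, in radians.
function imageToParticles(image, fov) {
  const particles = [];
  const f = (image.height / 2) / Math.tan(fov / 2); // focal length in pixels
  const cx = image.width / 2, cy = image.height / 2;
  for (let y = 0; y < image.height; y++) {
    for (let x = 0; x < image.width; x++) {
      const d = image.depthAt(x, y); // distance of this pixel from the camera
      particles.push({
        // Pinhole unprojection: pixel offset from center, scaled by depth / focal length.
        x: ((x - cx) * d) / f,
        y: -((y - cy) * d) / f, // flip so +y is up
        z: -d,                  // camera looks down -z
        color: image.colorAt(x, y),
      });
    }
  }
  return particles;
}
```

Each resulting particle would become one point in a WebGL point cloud, which is where the "volume" of the next posts comes from.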