Shooting for New Technical Horizons
Making-of Report for the VR Project "Volcanos"
A text by Sebastian Scherrer (ZDF) and Philipp Clermont (Faber Courtial)
Even the bravest volcanologist will never get as close to a volcano as viewers do in the VR film "Volcanos". The purely computer-generated film shows, photorealistically and in extraordinary detail, what happens during a volcanic eruption, and takes the viewer on a completely safe journey into the world of volcanism.
Visualization of the work in progress for a computer-generated volcanic eruption.
Working in 3D Space
The VR film was entirely created with the software 3DS Max.
Animations created for the two-part documentary "The Power of Volcanoes" were further developed to suit the VR film, which breaks new ground in many areas. The aim was not only to establish a "virtual reality" all around the viewer (a 360° film), but at the same time a 3D film (stereoscopic images) with a spatial effect.
"Stereoscopic images" is the technical term for images with a depth effect, commonly referred to as "3D". In the animation industry this separate term is necessary in order to distinguish spatial films from 2D films that merely happen to be created with 3D animation software. Early computer-animated films such as Pixar's "Toy Story", for example, are 3D-animated movies but not "stereoscopic" movies with a depth dimension: they are generated in 3D, but ultimately appear as a flat, two-dimensional image on a TV or cinema screen.
Planning Scenes with Spatial Impression
The additional effort begins in the concept phase: not every subject is suitable for a spatial impression. With objects as large as a volcano, it is the wide shot over the summit that is stereoscopically difficult to realize. To give an example: anyone who has ever looked out over a landscape from a mountaintop knows how hard distances are to estimate. To achieve an impression of depth, clearly separated foregrounds and backgrounds are required to make the depth perceptible. Out of this need the temple scene was created, which provides a distinctive foreground and makes the volcano appear all the more powerful.
Another issue is the enormous effort required for the scenery surrounding the viewer. 360° videos demand different staging than "normal" film productions. Instead of thinking in isolated shots guided by cuts and camera movement, you have to create complex spaces in which the viewer can look around freely. Scenes must therefore be staged not on a screen but in an entire room, and the viewer must be encouraged to look around. A particularly good example of such staging is the sequence in which the viewer encounters a flock of birds: they draw the viewer's gaze toward new perspectives. But this free view considerably increases the burden on the graphic designers, because the scene must be designed and maintained for every viewing direction at all times.
Elaborate Particle Simulations
Of course, a concept this ambitious brought significant technical challenges for the team at Faber Courtial. A volcanic eruption consists of fire and lava, and above all of smoke and ejected rock. In 3D animation, something like this is created with particle simulations. Anyone who knows a little about the world of rendering knows how costly particles are. Usually you resort to tricks to keep the elaborate particle-physics simulation within limits. With the volcano surrounding the viewer, however, this was not possible. The animation could not be limited to parts of the image; the eruption clouds outside the current line of sight had to be depicted as well. For this, the stream of particles had to be simulated physically over the entire length of the animation. Memory usage, rendering time and the level of detail of the simulation increased enormously.
To be able to calculate the particle clouds on the workstations at all, the entire volcanic eruption had to be divided into twelve individual simulations. Smoke, big bombs, small bombs, lumps and sparks were each simulated separately. Each of these simulations was in turn divided into roughly ten render layers. Despite this enormous computational effort, the physics simulation was not short-changed. Philipp Clermont, who was responsible for the project at Faber Courtial, explains: "Lava and ash interact with the environment, and the high temperature of the crater interior generates lift and turbulence. The smoke particles split into smaller particles, which further increases the complexity but promises even more detail, and thus gives a better impression of the enormous size of the ash cloud."
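The sheer number of render passes this layering implies can be tallied in a quick back-of-the-envelope script. This is a minimal sketch using the figures stated above (twelve simulations, roughly ten render layers each); only five of the simulations are named in the text.

```python
# Back-of-the-envelope tally of the render passes described above.
# Twelve individual simulations with roughly ten render layers each
# are figures from the text; only five simulations are named there.

named_simulations = ["smoke", "big bombs", "small bombs", "lumps", "sparks"]
total_simulations = 12          # stated in the text
layers_per_simulation = 10      # "roughly ten render layers"

total_passes = total_simulations * layers_per_simulation
print(total_passes)  # on the order of 120 render passes per frame
```

Every one of those passes has to be rendered, stored and composited for each frame, which puts the memory and render-time figures below into perspective.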
Supersampling: Optimizing Detail Sharpness and Size
Another factor had to be considered to preserve the level of detail at a streaming-capable resolution: a vastly larger memory budget was required to enable supersampling. Here, an algorithm compares adjacent pixels during the rendering process, resulting in much smoother transitions between objects. Lower settings render faster, but flickering pixels along edges are likely to occur, creating distracting artifacts in the image. For standard films a medium quality setting is sufficient, but for VR it must be turned up even further, since the viewer always sees a magnified section of the total image.
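The principle can be illustrated with a minimal sketch: render at a multiple of the target resolution, then average each block of subpixel samples down to one output pixel. This is classic supersampling (SSAA); the article does not specify which variant the renderer used, so the sketch is a generic illustration, not the production setup.

```python
# Minimal supersampling (SSAA) sketch: the scene is rendered at
# k times the target resolution, then each k x k block of subpixel
# samples is averaged down to one output pixel. The averaging turns
# the hard, flicker-prone edges into smooth transitions.

def downsample(hires, k):
    """Average k x k blocks of a 2-D grid of grayscale samples."""
    h, w = len(hires), len(hires[0])
    out = []
    for y in range(0, h, k):
        row = []
        for x in range(0, w, k):
            block = [hires[y + dy][x + dx] for dy in range(k) for dx in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

# A jagged black/white edge rendered at 2x resolution ...
hires = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
]
# ... becomes a softened edge at the target resolution.
print(downsample(hires, 2))  # [[0.25, 1.0], [0.25, 1.0]]
```

The cost is exactly what the text describes: k² times more samples, hence the vastly larger memory and compute budget.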
The team wanted to retain maximum control over every frame in order to fine-tune all details – even if a single frame thus swelled to an incredible 250 MB. That is at 4K resolution – and multiplied by two, because each eye sees a slightly different perspective that had to be calculated separately to create the spatial impression. The high resolution, the stereoscopic format and the particle simulations added up to enormous masses of data for the final animation. The individual layers store not only the visible image content but also color channels, channels for image depth, the position of individual pixels in space and the pixel velocity. In total, each frame required about three hours of computing time. Only by distributing the work across several computers could these masses of data be handled within the relatively short production window before publication. Overall, the animation occupied about 5.5 terabytes of storage, spread across multiple disks.
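These figures can be put in relation with a little arithmetic. The sketch below takes the 250 MB per eye and frame, the roughly three hours of computation per frame and the 5.5 TB total from the text; the frame count derived from them and the assumed 30 fps playback rate are illustrative estimates, not figures from the production.

```python
# Back-of-the-envelope check of the production figures above.
# 250 MB per frame per eye, ~3 h per frame and 5.5 TB total are from
# the article; the 30 fps playback rate is an assumption.

MB = 10**6
TB = 10**12

frame_per_eye = 250 * MB
stereo_frame = 2 * frame_per_eye          # both eye views: ~500 MB
total_storage = 5.5 * TB

frames = total_storage / stereo_frame     # ~11,000 stereo frames
minutes = frames / 30 / 60                # ~6 minutes at 30 fps
machine_hours = frames * 3                # ~3 h of compute per frame

print(round(frames), round(minutes, 1), round(machine_hours))
```

Some 33,000 machine-hours on a single workstation would have been several years of rendering, which makes clear why distributing the work across many computers was unavoidable.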
Research Project with the University of Glasgow
There was one last hurdle to overcome. As it turned out during production, no virtual camera existed for the final rendering of the scene that could cope with the task. Success was only possible through a co-development with the University of Glasgow, creating a custom camera shader. The problem: to output a perfect stereoscopic image in VR, you need a very complex ray function. In a normal film, where you cannot look around freely, it is enough to calculate two cameras a few centimeters apart for the two eyes; the depth effect then emerges on its own. Since in VR you can look in all directions, this does not work here: the further you deviate from the originally calculated viewing direction, the weaker the spatial impression becomes. Unfortunately, at the beginning of the project there was no camera in 3DS Max that could solve this problem.
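One common approach to such a ray function, known in the literature as omni-directional stereo (ODS), does not fix two camera positions but rotates the eye offset with the viewing direction: for every column of the panorama, rays start from a point on a circle whose diameter is the eye distance, offset perpendicular to that column's viewing direction. The sketch below illustrates the geometry; it is a generic illustration of the technique, not the actual shader developed for the film, whose details the article does not disclose.

```python
import math

# Omni-directional stereo (ODS) geometry sketch: instead of two fixed
# camera positions, the ray origin for each panorama column is offset
# perpendicular to that column's viewing direction, so the eye
# separation "rotates" with the gaze and the depth effect survives in
# every viewing direction.

IPD = 0.064  # interpupillary distance in meters (typical value, assumed)

def ods_ray(azimuth, eye):
    """Ray origin and direction for one panorama column.

    azimuth: viewing direction in radians (0 = +x axis, counter-
             clockwise in the horizontal plane); eye: 'left'/'right'.
    """
    direction = (math.cos(azimuth), math.sin(azimuth))
    sign = -1.0 if eye == "left" else 1.0
    # Offset perpendicular to the viewing direction, on a circle of
    # radius IPD / 2 around the viewer's head position.
    origin = (sign * (IPD / 2) * math.sin(azimuth),
              -sign * (IPD / 2) * math.cos(azimuth))
    return origin, direction

# Looking along +x (azimuth 0): the right eye sits at y = -IPD/2.
origin, direction = ods_ray(0.0, "right")
print(origin, direction)  # origin (0.0, -0.032), looking along +x
```

With a pair of fixed cameras, the perpendicular offset is only correct for one azimuth; making it a function of the viewing direction, as above, is what a panoramic stereo camera has to bake into its ray generation.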
An existing contact at the University of Glasgow brought the breakthrough. The novel approach of generating a new camera for stereoscopic panoramic images inside a 3D software package fascinated the Scots. An official research project was born, funded jointly by Faber Courtial as a company and the university. This collaboration finally delivered a solution to the problem – one which, incidentally, remains unsolved to this day in the field of live-action VR film. Ultimately it is clear that the thrilling minutes in the finished video leading up to a volcanic eruption caused quite a headache in production.