Tibrogargan Dreamtime story retold with virtual reality

Creative animation of Tibrogargan Giant.

Creative collaborators

We’ve interviewed Chris, James, Nicky, and Jack, who collaborated to deliver the Tibrogargan Dreamtime VR experience showcased at the Affectors Mythology Exhibition last June.

In our latest work, we’ve reimagined the Dreamtime story of Tibrogargan through virtual reality, casting gigantic moving mountains as main characters struck by a tsunami larger than you can imagine. We focused on the user’s perspective relative to the scale of the characters to elicit empathy from an audience that has never felt so small.

Here’s a quick preview:

Creative direction in virtual reality

What was your role?

I was fortunate enough to play the role of the director on this one.

What were the reasons and motivations behind this project?

James Briscoe, an artist from Drawing Book Studios, and our team were lucky enough to be invited to collaborate and create a story for the Affectors Mythology Exhibition in June. We had never worked together or submitted to the exhibition before, but we were all excited to try something new.

Where was the story from?

It’s a unique Dreamtime story that originates from the Jinibara, Gubbi Gubbi, and Kabi Kabi peoples, traditional owners of the Glass House Mountains, which hold a deep spiritual importance.

Why was this story chosen?

It’s a great story, from a fantastic time when the mountains are giants and waves are tsunamis that cast the world in shadow. But it’s also a very human story about family, courage, and betrayal.

These days, Western and Northern mythologies are fairly well represented in popular culture, and they’ve always caused me to go away and dig deeper into the stories, who they’re about, and what their people were like. We don’t see the same representation of our own Indigenous history here, and we hope sharing our interpretation of a story like Tibrogargan’s might have the same effect on other people.

Why was VR the technology chosen?

VR has an amazing ability to hijack your senses and become an empathy machine, which gives you enormous potential to engage an audience.

Storyboard sketches of Tibrogargan Giant.

To make a story about giants seem larger than life, VR gives us the ability to make the audience feel like an ant. With the latest production pipeline tools, it’s been more achievable than ever. I think the immersive nature of VR really helps an audience suspend disbelief, which helps the narrative immensely and creates a more powerful message – something we’re looking to explore further in future work.

Animation brings characters to life

  • James Briscoe, Animator

What’s your expertise? What was your role in this project?

My expertise is as an illustrator, character designer and animator. My role included the character design, character modelling, and animation of the three characters, as well as establishing the colour palette for the lighting of the environment.

What was your motivation to start this project?

The project began as a joint piece of work with S1T2 for the Affectors exhibition. Chris and I talked about different mythologies and what appealed to each of us. He suggested Aboriginal mythology, at which point I mentioned the legend I had heard years ago about the Glass House Mountains. The project was born from there, with the added incentive of creating a virtual reality experience.

What was the inspiration for the appearance of the characters? Can you describe each character?

The inspiration for the appearance of the characters was taken from ancient Aboriginal artwork, then influenced by the mass of the mountains themselves:

Sketches of Virtual Reality characters.
  1. Coonowrin, the teenage son, has a profile reminiscent of the top of Mt Coonowrin. He’s also taller, leaner and more gangly than the other characters.
  2. Tibrogargan I wanted to have the look of an older warrior. He’s getting on in years but is still powerfully built, as befits his position as head of the family, and strong enough to put his son in his place as the story develops.
  3. Beerwah was the easiest character to develop and was again inspired by the shape of the mountain. She’s heavily pregnant in the animation, so she struggles to move quickly and depends on her family for help.

How do you bring character to life?

I bring a character to life by thinking about their relationships with the other characters, their motivation and how they might be feeling. I enjoy trying to put myself in the character’s shoes.

Screenshots of Tibrogargan Virtual Reality.

Lighting + environment set the scene

What was your role in this project?

Usually, I’m an animator at S1T2, but this time I had a chance to work on environment and lighting.

What were the references that you used?

We had various references for specific aspects of this project. For the modelling style, we used Monument Valley to create a low-poly 3D look. For lighting, we looked to the game Journey to shape the mood and tone of the story.

Animated Virtual Reality modelling moodboard.
  1. 1st Scene – The warmth and closeness of the family gives way to a dramatic disjoint, moving from a soothing light into darkness to elicit drama.
  2. 2nd Scene – Since the characters didn’t have any dialogue, the environment changed to red to reflect the characters’ rage and anger.
  3. 3rd Scene – The story ends in a sad peacefulness, because the family would never be complete again. The lighting is soft and gentle, but a bit dull.
Animated Virtual Reality lighting moodboard.

How did you make sure that the characters were the right size?

It was a tedious process of back and forth between everyone. We used different programs, such as Maya and Unity, and the conversion became a challenge because of the file size limitations for mobile performance.

With different people collaborating alongside each other, making everything cohesive was tougher.

Here’s how the process went: I would get the models from James (who animated the characters) to put in my scene, then scale my scene with the characters to get the proper size, then Jack (the lead developer) would come in with a scene scale guideline. There were times when, after putting everything together, I realised Tibrogargan was nowhere near big enough. This led Jack to create an illusion, faking the characters to make them look bigger, while I modelled everything in the scene in miniature to make the characters look massive.
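As a rough illustration of what a shared scale guideline can look like in Unity – a hypothetical sketch, not the project’s actual code, and the 300 m target height is an invented number – a small editor-time check can flag assets that drift from the agreed size:

```csharp
using UnityEngine;

// Hypothetical sanity check for an agreed scene scale guideline.
// Attach to a character root; warns in the editor when the rendered
// height drifts from the target. The 300 m figure is illustrative only.
public class ScaleGuideline : MonoBehaviour
{
    public float intendedHeightMeters = 300f; // how tall the giant should read

    void OnValidate()
    {
        Renderer r = GetComponentInChildren<Renderer>();
        if (r == null) return;

        // Unity's convention is 1 unit = 1 metre, so bounds give height directly.
        float actual = r.bounds.size.y;
        if (Mathf.Abs(actual - intendedHeightMeters) > 1f)
            Debug.LogWarning(string.Format(
                "{0} is {1:F1} m tall; the guideline says {2} m",
                name, actual, intendedHeightMeters));
    }
}
```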

How did you manage to put the elements of your work together?

This project was enabled by collaboration. We were lucky enough to have Jack (the developer) as the main man who glued all the elements together. After my work on the environment model, colour, and lighting, I passed those assets over to him. He then used his magic to compile my work together with the rest. I came in during the final week to touch up the lighting and models for the final project.

How important is the lighting in cinematography for this project?

In my opinion, the lighting in this project is crucial. Without staging the light for each particular scene, it wouldn’t set the proper mood and effect to portray the emotions of the story. But it’s also the combination of other elements – textures, music, voiceover, and camera angles – that wrapped everything together.

Lighting process for Tibrogargan animated Virtual Reality.

Are there any problems translating the project across different VR headsets?

Initially, the Tibrogargan Dreamtime VR experience was developed for the Oculus Rift DK2. We wanted to make it more mobile-friendly, so we converted the files over to suit the Gear VR.

The problem is, we found the lighting results on the Gear VR and the Oculus Rift DK2 to be quite different – not only the colours, but also the sharpness and quality. After the Mythology exhibition, we went back to the project and redid the lighting and textures across every scene so it would tell the same story seamlessly across both devices.

Developing story-driven VR

  • Jack Condon, Real-time developer

What’s your expertise? What was your role in this project?

I’m a game developer and visual artist from Sydney, and I was the lead developer on the Tibrogargan Dreamtime VR experience.

I was an early adopter of the Oculus Rift DK2, with the intention of exploring both its application from a gaming perspective and ways to integrate it into my art practice. This project was really exciting for me because I got to focus primarily on integrating all the artistic elements of the story and having them realised in first-person stereoscopic 3D.

How did you manage multiple people working to a certain timeline?

Working with people through correspondence can be tricky at times, especially with bleeding-edge technology, where a lot of the ground you’re stepping on is undocumented. The way the art assets were put together (more in a movie-like format, which made sense for the project) limited some choices if we wanted to keep a really easy pipeline. I think in the future I’d want to work much more closely, side by side, with artists on projects like these.

How did you translate the size of the characters into scale in VR? Did you come across any limitations in technology?

When working with VR, one of the key things that really seems to work is experimentation with scale. One of the main inspirations for this kind of gigantic scale was Shadow of the Colossus, where the player is dwarfed by titans and moves slowly through vast landscapes.

Translating that feeling into first-person virtual reality had a lot of challenges. Because of the nature of stereoscopic VR, displaying ideas of scale is somewhat easier than in traditional media. However, stereoscopic viewing only really conveys depth for objects up to about 3 metres away, so in the end we had to rely on all sorts of ‘perception hacks’ – some unique to the VR experience, others old classics – to really push the idea of scale.

Performance tuning is part of any real-time rendering; it’s always a balancing act of compromising on effects and constant testing. Working with VR introduced some interesting new challenges that were more about how the visual cortex and the human eye deal with information, and finding balances that would achieve the needed effect.

For one, scale is only really relevant with a visual reference. Shadow of the Colossus has a native advantage here: because it’s a third-person game, your avatar is at all times being compared to the scale of the titans. We were working from a first-person VR perspective, which meant we had to rely on other environmental factors to convey the scale.

It’s tricky to render gigantic scenes in real time, especially on the Oculus Rift DK2, where you need to render twice (once for each eye), essentially halving your performance. Creating characters at the scale we wanted meant employing some clever tricks.

I drew inspiration from a recent release called Among the Sleep, where you play in VR as a baby walking around an empty house. The game really plays with scale, as everything is so much taller than you, making chairs and tables somehow very, very spooky.

The technique used there was scaling up all the models to make the player feel smaller. For the scale we wanted in the Tibrogargan Dreamtime VR experience, scaling everything up that far risked running into performance issues. So I also attempted to shrink the player. However, stereoscopic rendering essentially places two cameras in the world, one for each eye, and changing the scale of this ‘virtual head’ can lead to some strange depth perception problems.

Through a lot of experimentation, we found a ‘sweet spot’ between reducing the size of the player and increasing the scale of the models.
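As a minimal sketch of the ‘shrink the player’ side of that trade-off – a hypothetical component rather than the project’s code, with 0.25 as an invented value – uniformly scaling the parent of the VR camera rig in Unity also scales the head-tracking offsets and apparent eye separation, so the whole world reads proportionally larger:

```csharp
using UnityEngine;

// Sketch of the 'shrink the player' trick: scale the VR camera rig's
// parent so tracked head movement (and apparent eye separation) shrink
// with it, making the world feel gigantic. 0.25 is an illustrative value.
public class PlayerScale : MonoBehaviour
{
    [Range(0.05f, 1f)]
    public float playerScale = 0.25f; // < 1 makes the world feel larger

    void Start()
    {
        // Attach to the rig's parent object; child cameras inherit the scale.
        transform.localScale = Vector3.one * playerScale;
    }
}
```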

To get a sense that the wave was growing and getting closer, we just played with the scale of the wave rather than its actual position in the game world. By changing its vertical height, we were able to make the wave look like it was building far in the distance. From a different perspective, you could see that the wave never actually moves. The shifting vertical scale, combined with the sound and a dynamic texture on the wave, becomes a pretty neat illusion of movement and depth.
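In Unity terms, that illusion might look something like the following hypothetical behaviour (the growth and scroll rates are invented values): the wave’s position is never touched; only its vertical scale and a scrolling texture offset change each frame.

```csharp
using UnityEngine;

// Sketch of the wave illusion described above: the wave mesh never moves;
// its vertical scale grows while the surface texture scrolls, so it appears
// to build and rush toward the viewer. Rates are illustrative only.
public class WaveIllusion : MonoBehaviour
{
    public float growthPerSecond = 0.05f; // vertical scale added per second
    public float scrollSpeed = 0.1f;      // UV scroll speed for surface motion

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Build the wave upward without touching its world position.
        Vector3 s = transform.localScale;
        s.y += growthPerSecond * Time.deltaTime;
        transform.localScale = s;

        // Scroll the texture so the surface appears to be in motion.
        rend.material.mainTextureOffset = new Vector2(0f, Time.time * scrollSpeed);
    }
}
```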

Were there pain points working across different programs and platforms?

When is there not? The art pipeline into a complex engine like Unity can always be tricky, especially when you want to port to both desktop and Android in a short amount of time. Using a popular engine for the project was key.

I wanted to work with Unreal Engine 4 to extend my skills with its VR pipeline, but that would have made it impossible to work with James, the lead animator. Instead, all the files were designed in Maya to give him tighter control (which made sense considering the project was primarily art focused). Unity has a much more lenient way of working with assets, and the key to this project was a good art pipeline, so we could experience things quickly through the Oculus Rift DK2 and work out what worked and what didn’t.

Were there any problems translating the project across different VR headsets?

The first thing we noticed when we moved the project to the Gear VR was that our colours were really out. The Oculus Rift DK2 screen was not ready for consumer release, and its colours are pretty far from calibrated. We designed around this, and with constant testing Nicky was able to achieve some really beautiful colours. When we moved to the Gear VR, the environments had to be completely redone.

Early on in the project we decided to use dynamic lighting to suggest the movement of time, with the environment starting dark and growing brighter. We really did push the limits of scale for real-time rendering, and it meant performance trade-offs; certain things were not going to work, especially if we wanted to port to the Gear VR later in the project (and we did!).
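That dark-to-bright progression could be driven by something as simple as the following hypothetical sketch – durations and colours are invented, not production values – lerping a directional light and the ambient colour over the length of the scene:

```csharp
using UnityEngine;

// Sketch of a dark-to-bright lighting progression: lerp the sun's intensity
// and the ambient colour over the scene's duration. Values are illustrative.
public class SceneLighting : MonoBehaviour
{
    public Light sun;                 // the scene's directional light
    public float sceneDuration = 60f; // seconds from dark to bright
    public Color darkAmbient = new Color(0.05f, 0.05f, 0.10f);
    public Color brightAmbient = new Color(0.50f, 0.45f, 0.40f);

    void Update()
    {
        float t = Mathf.Clamp01(Time.timeSinceLevelLoad / sceneDuration);
        sun.intensity = Mathf.Lerp(0.1f, 1.2f, t);
        RenderSettings.ambientLight = Color.Lerp(darkAmbient, brightAmbient, t);
    }
}
```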

The biggest sacrifice when we moved to the Gear VR was shadow rendering and dynamic GI, but the main priority of the project was to achieve an intense sense of scale and really play to the advantages of VR.

The other big thing, of course, was that the resolution of the Gear VR made some of our environmental elements look flat. My performance compromises for the DK2 didn’t translate to the Gear VR, so different choices needed to be made. For example, it was very hard to implement the dynamic lighting on the Gear VR, since phones have a lot less power. The biggest visual loss was post-processing: some of the effects in the DK2 version really helped deliver the idea of a dream-like time, whereas the Gear VR felt sharp by comparison.
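A hypothetical way to express those per-platform trade-offs in Unity – the specific settings are examples, not the choices actually shipped – is to branch on the build target at startup:

```csharp
using UnityEngine;

// Sketch of per-platform quality trade-offs: the Android (Gear VR) build
// drops realtime shadows and per-pixel lights that the desktop (DK2) build
// can afford. The specific numbers are examples only.
public class PlatformQuality : MonoBehaviour
{
    void Awake()
    {
#if UNITY_ANDROID
        // Mobile: disable realtime shadows, keep per-pixel lighting minimal.
        QualitySettings.shadows = ShadowQuality.Disable;
        QualitySettings.pixelLightCount = 1;
#else
        // Desktop: keep full shadows and more per-pixel lights.
        QualitySettings.shadows = ShadowQuality.All;
        QualitySettings.pixelLightCount = 4;
#endif
    }
}
```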

If you’d like to see Tibrogargan Dreamtime VR or to talk about an interesting virtual reality project idea that you have, shoot us an email at: creative@s1t2.com.
