This is a companion blog post to the presentation I gave, “Welcome to the World of Mixed Reality”. There’s a link to a video of that presentation and the slides at the end of the post.
There has always been a gap between the Virtual and the Physical, between what we imagine and what exists. Storytelling is how we brought the virtual to life. We used our imagination to realise what we could dream and express, and to turn it into experience. And we've used technology to seek ever higher-fidelity representations of the virtual – painting, text, photography, film, audio and special effects.
Constructing these virtual realities is something we have always tried to do because it provides a way to contextualise our experiences, to expand our thinking, to go beyond the mundane.
Computing has been part of our future imagining. Since the 50s computing has been melded into our science fiction, shaping the way we think about what the future could (and should) be. Science fiction has often gone way ahead of contemporary technology to imagine the possible futures that computing opens up. It took a long time before digital environments were capable of doing anything beyond text. But that didn't stop us imagining. And then it happened: the technology got to a point where we could create the virtual by wholly digital means.
Computer-generated 3D graphics have allowed us to create purely digital objects that can be merged with the real world. At first we could only use digital means to add what was too expensive or time-consuming to create physically. But as processing power increased, animation was added, and eventually whole worlds could be created and explored. These began as low-fidelity, polygon-based worlds, but fidelity has steadily increased. Today 3D worlds are ubiquitous in gaming and film, and the technology has reached a point where it is not only realistic but stunningly beautiful.
How did all this happen? The underlying technology is 3D: the ability to map and create objects in space across the X, Y and Z axes – the three dimensions. Once computing reached this point we had the ability to create space by mimicking our physical reality. We could place objects in space and give them volume and dimensions.
Once we had the computing power we could connect digital objects, placing them together and in relation to one another. We could create context and situate objects within a larger environment. The number of items that can co-exist has gone up considerably, increasing the fidelity of what we see. We can add complexity to our models, give them physical properties, and provide them with data on how they interact, move, change and flow. We can animate computationally rather than physically.
The main convention, however, is that we have been stuck viewing these visuals on a screen – looking at a flat plane onto the imagined reality.
The virtual has only ever lived on the screen – behind the glass. These worlds may be beautiful, but they are unattainable, something we see through another medium. But technology is catching up.
Today we can strap on some goggles and immerse ourselves in a virtual world. We're not looking at a screen, but through the screen. The first wave of VR was pretty disappointing; we just didn't have the computing power to display a virtual space with any fidelity. What's possible today is a far cry from that. For the first time we can replace our viewing experience of the virtual world with something more natural. We can look around, physically. If you turn your head with a screen, you simply end up looking away from the screen. With the goggles on, the system tracks your field of view. You can change that viewpoint, move through the space, and see things from a different angle.
And these 3D environments don't have to be purely imagined. 360 video allows you to record the real world and view it in 3D. This ability to change viewpoints, to see the world from a different aspect, to look around and beyond what's simply in front of you, provides a new experience.
Virtual Reality has a long history, and we are a long way off being able to replace the physical with something entirely imagined like The Matrix. What we do have now is a way to visually immerse ourselves and explore 3D environments. By strapping on a pair of goggles we replace our field of vision with an entirely constructed environment. Sure, the controls are a bit weird and it lacks the physical sensations to provide feedback on your interactions within this space, but those are the next wave of technologies.
But strapping on a set of goggles isn't always practical or useful. There are other ways to access the virtual and to bring it into physical space – Augmented Reality.
The mobile phone provides us with a digital object in a physical space. Its array of sensors – gyroscopes, GPS, microphones and cameras – allows it to sense the physical world around us, map our physical space and translate that data into a 3D environment. This is where things get interesting. Once we can map the physical and the digital we can begin to mix the two realities and attach virtual objects to physical ones. AR has been around for quite a while now, but its recent inclusion in both the iOS and Android operating systems means it is now supported at the hardware level, making it more accurate and better supported than ever before.
In AR the mobile screen acts as a window – we peer through it and see the interaction between the physical and the virtual. We can use this functionality in a variety of ways.
The most famous and widely known use of Augmented Reality is using it to display 3D objects on top of a physical plane, like a floor or table top. This allows objects to be displayed to scale, or purposely out of scale, so they can be viewed from multiple angles and directions. You can use this to display objects in your home, play 3D games in a real environment and even hijack a space.
Perhaps the more interesting use of AR is not bringing in virtual objects but displaying information within the physical space. You can overlay physical objects with additional, contextual information – like points of interest, business cards, device instructions or items in a shop. But perhaps more interesting still is the ability for AR to extend a physical object. In this way we can turn static, 'dumb' objects into dynamic, plastic ones that can reshape and reform themselves for different purposes. Printed objects can come alive, newspapers can deliver multimedia content and devices can extend beyond their physical constraints.
Mixed Reality is just that – a blend of both virtual and augmented technologies. Combining the physical mapping capabilities of a mobile device with the visual apparatus of a set of goggles, Mixed Reality allows virtual objects to be anchored within the physical world. Instead of looking at the screen of a mobile, you look through the screen and see the virtual world overlaid on top of the physical one. Microsoft's HoloLens demonstrates how this could look and work. The technology is still embryonic, but it has great potential, and as display technologies progress it may become more discreet and built in.
We want to explore how these technologies could be used in an educational setting for a number of reasons. They have the ability to be immersive, allowing people to go beyond physical constraints and possibilities. They have the potential to provide practical experiences in a safe environment, one where students can make mistakes without consequence – much like the flight simulators that have been an embedded part of pilot training for the last couple of decades.
There's also the ability to provide greater context for students, and to do this in their learning not as feedback but live and in situ. Virtual spaces allow students to think beyond the constraints of place and time, to expand their thinking, doing and being. Finally, the technology can provide a way to excite learners and get them more involved in their learning. What can we do when students want to engage?
The rest of the WeImagine Mixed Reality Season aims to explore this space. A range of events, discussions and presentations will provide insight into what's possible and give our learning community a way of engaging with the technology, to test drive it and play. Come and join us!
Further readings are available and will be added to throughout the season.
The slides from this presentation:
The video from this presentation:
Tim Klapdor, Online Learning Technology Leader, uImagine, Division of Learning & Teaching, Charles Sturt University