When I was a kid, I read about virtual reality... the promise of this technology and its ability to carry us off into a wonderland of hitherto unknown experiences, where we could simply reach out and *touch* a different world, was wondrous... I grew up with Star Trek and the Holodeck, where our intrepid adventurers would enter a small room that, on command, would instantly be transformed into a completely different virtual space, sometimes the size of an entire world. For a small kid, this simply transported me from reality, and in my mind the amazing adventures continued after the TV program ended. Zoom forward to today, where we have Virtual Reality, Mixed Reality, Augmented Reality... oh my! So what's the difference between each of these, and where do developers start?
When we look through our human eyes, we generally see what's in front of us (depending on whether we've had a beer or not, of course!)... the concept behind these other technologies is that we can overlay other information in our sight-line, and either merge something artificial with the real world, or replace the real world with something generated by a computer. The objective of each technology is different - sometimes it's to assist us in doing something, other times the real trick is to make us believe we are truly looking into a different world and *are there*, immersed in the experience. No matter the end goal, it seems that each iteration of these different forms of technology brings us closer and closer to an authentic Holodeck experience!
Augmented Reality
Enhanced, or augmented, reality was my first real-life introduction to this family of technologies. I had a mobile app called 'house finder', and the concept was that I could hold my phone up in camera mode, point it in a particular direction, and it would show me small flags filled with information on houses for sale, overlaid on the screen/camera image. We now have many other examples of this where, for example, you can hold your phone up and see what's around the area: places to eat, buy a newspaper, or get cash from an ATM. The concept behind these apps is simple - take a database of known objects (say, shops) and their geo-locations, get the current geo-location of the user's phone, use this to query nearby objects in the database, and then overlay the text of the objects of interest on the screen to show what's in the view of the camera.
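To make that concrete, here is a minimal sketch of the idea in Python. The POI list, field of view, screen width and range are made-up illustrative values, and a real app would pull the location and compass heading from the device's sensors rather than passing them in.

```python
# A minimal sketch of the geo-overlay idea described above. The POI list,
# field-of-view value, and screen width are illustrative assumptions.
import math

POIS = [  # (name, latitude, longitude) - a stand-in for the real database
    ("3-bed semi for sale", 53.3501, -6.2660),
    ("Corner shop",         53.3498, -6.2673),
]

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) from point 1 to point 2."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def overlay_positions(phone_lat, phone_lon, heading_deg,
                      fov_deg=60, screen_width=1080, max_range_m=500):
    """Return (name, x_pixel, distance_m) for each POI currently inside the camera's view."""
    visible = []
    for name, lat, lon in POIS:
        dist, bearing = distance_and_bearing(phone_lat, phone_lon, lat, lon)
        # Angle between where the camera points and where the POI lies
        offset = (bearing - heading_deg + 540) % 360 - 180
        if dist <= max_range_m and abs(offset) <= fov_deg / 2:
            x = int((offset / fov_deg + 0.5) * screen_width)  # map angle to a screen column
            visible.append((name, x, round(dist)))
    return visible

print(overlay_positions(53.3499, -6.2667, heading_deg=45))
```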
There are many other examples of this kind of technology in the gaming world, including Pokémon Go and Ingress.
The objective of this technology is, as its name states, to augment your reality (duh!)... this means it enhances and adds value to it by making information available to you in real time, overlaid on your current view of the world. This is generally achieved by harnessing various mobile and backend technologies on a mobile device. There are many uses for augmented reality, not only in games but also, as in the example I gave above, to enable localised objects of interest to be overlaid on a live view/map of your current location. Imagine landing in a city, wanting a quiet place to work, and being able to hold up your phone and have it show you, overlaid on the real live view of the street, where you should go. This could be done quite easily by harnessing a backend technology like the Bing or Google Maps API to give you information about objects of interest (cafes!) in the immediate geo-location, filtered down according to the user's requirements. While virtual and mixed reality (which mostly involve headsets) are the sexy, hot technologies right now, augmented reality remains a largely untapped market for developers.
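As a sketch of that cafe-finding idea, here's a hedged example that assumes the Google Places "Nearby Search" endpoint; the API key is a placeholder, and the minimum-rating filter just stands in for whatever the user's requirements might be.

```python
# A rough sketch of the "quiet place to work" idea: ask a places backend for
# cafes near the phone's location and filter them down. Assumes the Google
# Places Nearby Search endpoint; YOUR_API_KEY is a placeholder.
import requests

def nearby_cafes(lat, lng, radius_m=400, min_rating=4.0):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": f"{lat},{lng}",
            "radius": radius_m,
            "type": "cafe",
            "key": "YOUR_API_KEY",   # placeholder - supply a real key
        },
        timeout=10,
    )
    results = resp.json().get("results", [])
    # Filter by the user's requirements (here, a minimum rating), then hand
    # the survivors to overlay code like the sketch in the previous section.
    return [p["name"] for p in results if p.get("rating", 0) >= min_rating]

print(nearby_cafes(53.3499, -6.2667))
```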
Virtual Reality
I've read stories over the years of advances in Virtual Reality, but the nearest most of us ever came to being able to 'reach out and touch' was while wearing 3D glasses. Virtual Reality is meant to be a completely immersive experience, and it has presented itself in various guises over the past number of years. The ultimate goal of Virtual Reality is that you should be able to be released (at least visually/mentally) from your normal real world and transported into another reality that has been virtually constructed. The first major recent advance in the area seems to have been popularised by the Oculus Rift. This was originally a crowd-funded project that became very successful and was eventually purchased by Facebook.
3D glasses may be old school, but they allowed us to perfect concepts that help underpin virtual reality technology today.
3D is based on the way we see the world: to let us gauge depth and perspective, our eyes are set apart, giving a slightly different, shifted view of the world to each eye. These images are combined by our mind to give us a sense of depth, perspective, etc. When we look at a photograph, or a movie, we are seeing things on a flat two-dimensional plane. In order to trick our minds into thinking we are seeing three dimensions, a 3D camera takes two pictures of an object, one slightly offset from the other (the same as our eyes), and presents the two images to us shaded in two different colours. When we put on our 3D glasses, the colour of each lens acts as a filter, and our mind combines the two images into a three-dimensional image - this is known as stereoscopic vision.
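Here's a tiny Python illustration of that red/cyan trick (an 'anaglyph'): take the red channel from the left-eye image and the green/blue channels from the right-eye one. The filenames are placeholders, and it assumes the Pillow and NumPy libraries are installed.

```python
# A tiny illustration of the red/cyan ("anaglyph") trick described above.
from PIL import Image
import numpy as np

def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    out = right.copy()
    out[..., 0] = left[..., 0]           # red channel comes from the left eye's view
    Image.fromarray(out).save(out_path)  # red/cyan glasses then give each eye its own image

make_anaglyph("left_eye.jpg", "right_eye.jpg")  # placeholder filenames
```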
Virtual Reality, which currently requires us to wear a headset of some kind, is based around the same concept, albeit at a far more advanced level. Typically, 3D is a guided viewing experience - we can generally only view what is projected on a screen. When coupled with a smartphone, however, we can harness the phone's gyroscope readings on the X/Y/Z axes to determine how the device is oriented in space, and automagically adjust the image/video presented accordingly, giving a more immersive experience where the user is in control and not necessarily guided. It is important to note that the quality of the experience you get is dictated by the quality of the smartphone you are using to drive your VR app.
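As a back-of-the-envelope sketch, this is roughly what "adjusting the image to the device orientation" boils down to: turn a yaw/pitch reading into the direction the virtual camera should look. A real app would read these values from the phone's sensor-fusion API every frame and hand the resulting vector to its renderer; the numbers here are only for illustration.

```python
# Turn a yaw/pitch orientation reading (degrees) into a camera look direction.
import math

def look_direction(yaw_deg, pitch_deg):
    """Unit vector for a camera rotated yaw degrees left/right and pitch degrees up/down."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

print(look_direction(90, 0))   # facing right -> roughly (1, 0, 0)
print(look_direction(0, 45))   # tilted up    -> roughly (0, 0.71, 0.71)
```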
We can currently break virtual reality technology into two different levels:
Low-end involves sticking a smartphone in landscape orientation in front of your face and using it to show two images/videos, each slightly offset from the other. The smartphone is typically slotted into some kind of head-strap device to hold it still on your head. Both Microsoft and Google have brought out cheap-as-can-be cardboard holders for your phone that you can use to experiment with the technology... it's a great entry point! Google have also brought out a more advanced headset for the phone called Daydream.
If you have Visual Studio 2017, then you can use the Xamarin platform to get started. Here is a great video introduction to the technology used: https://www.youtube.com/watch?v=HLFn7Y_y_HI
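To make the "two slightly offset images" idea concrete, here's a minimal sketch of the geometry: offset a left and a right eye camera from the head position by half the interpupillary distance (IPD), and give each one half of the landscape screen to render into. The 64 mm IPD and 1920x1080 resolution are illustrative assumptions, not values from any particular headset.

```python
# Work out the two eye camera positions and viewports for side-by-side phone VR.
def stereo_cameras(head_pos, right_dir, ipd_m=0.064, screen_w=1920, screen_h=1080):
    hx, hy, hz = head_pos
    rx, ry, rz = right_dir                  # unit vector pointing to the user's right
    half = ipd_m / 2
    left_eye  = (hx - rx * half, hy - ry * half, hz - rz * half)
    right_eye = (hx + rx * half, hy + ry * half, hz + rz * half)
    # Each eye renders into one half of the landscape screen
    left_viewport  = (0, 0, screen_w // 2, screen_h)
    right_viewport = (screen_w // 2, 0, screen_w // 2, screen_h)
    return (left_eye, left_viewport), (right_eye, right_viewport)

print(stereo_cameras((0.0, 1.6, 0.0), (1.0, 0.0, 0.0)))
```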
Mid/high end usually translates to higher cost (seems obvious :P), with more features. In this range the devices move away from having to mix it up with smartphones, and have dedicated headsets. While the concept of using a smartphone for VR is great and very convenient, we have to remember that it's first and foremost a phone and Internet access device, and therefore will never be as good as something dedicated completely to the task at hand. People are different: their eyes can be spaced wide or narrow, and some may have better vision than others. Higher-end dedicated devices take this into account with onboard methods for adjusting focus and interpupillary distance.
The main downside of dedicated VR headsets is that, as of the time of writing this article (June 2017), they require a lot of computing power, usually in the form of a reasonably powerful computer attached to the headset via HDMI and USB cables. As you can guess, you are restricted in how far you can move from your PC by the length of your VR cables. There are two solutions to this. The first is wireless communication with the headset - this is problematic due to the huge volumes of video and control data that need to be exchanged between the host PC and the headset. The second solution is to have a slim but light and powerful PC in a backpack that you can move around with. Both are emerging at the moment, so it will be interesting to see how they evolve.
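To get a feel for why the wireless route is hard, here's a rough back-of-the-envelope calculation of the uncompressed video bandwidth for a Rift/Vive-class display (2160x1200 pixels at 90 Hz, 24 bits per pixel - the commonly quoted 2017 panel specs):

```python
# Uncompressed video bandwidth for a 2160x1200 @ 90 Hz, 24-bit VR display.
width, height, fps, bits_per_pixel = 2160, 1200, 90, 24
gbits_per_second = width * height * fps * bits_per_pixel / 1e9
print(f"~{gbits_per_second:.1f} Gbit/s uncompressed")   # roughly 5.6 Gbit/s
```

Roughly 5.6 Gbit/s - well beyond what the Wi-Fi of the day can comfortably carry without aggressive compression, which is why wireless VR is such a hard problem.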
Arguably the best-known dedicated VR solution is the Oculus Rift; others include the HTC Vive and Sony PlayStation VR.
Dedicated VR hardware backed by a powerful PC (or a gaming device such as an Xbox or PlayStation) can offer a very immersive experience. The quality of video motion and rendering can be superb, and you can really see where the extra money you spend on these devices is going. In addition to the headset, a high-end 'VR rig' usually includes a means of very accurately tracking the movement of the user's head while wearing the device. This allows for very precise alignment between the user's head movement and the video being played - critical for not getting 'motion sick', something you can experience easily on lower-end solutions.
Early experiences with VR involved the user sitting at their desk with the headset on, using their keyboard/mouse or perhaps a game controller to interact in virtual space. Recently, however, we have seen the emergence of peripheral devices/controllers that fit into the hand and are also tracked in virtual space, so that when we move a hand in real life, for example, we can see this reflected in the image on the screen.
Current controllers use additional hardware; however, Leap Motion has produced a device that attaches as an accessory to common headsets and detects the hands without the need for a physical controller.
The final thing to note about current VR is that experiences split mostly into three categories... sitting, standing and 'room scale'. With the first two, you are more or less restricted to using the device in the immediate vicinity of your desk. With room-scale technology, however, the devices you wear are tracked using diagonally placed 'beacons', which are used to track your X/Y/Z position in the space allocated. The HTC Vive uses this system to great effect.
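As a simplified illustration of how a position can be recovered from fixed reference points, here's a small trilateration sketch. Real room-scale systems (the Vive's Lighthouse base stations, for instance) measure swept-laser angles rather than plain distances, and use two base stations rather than the four beacons assumed here, but the geometry problem is similar: known reference positions in, the headset's X/Y/Z position out.

```python
# Simplified room-scale tracking: estimate a position from distances to
# beacons at known positions (illustrative layout, metres).
import numpy as np

BEACONS = np.array([[0.0, 0.0, 2.5],
                    [4.0, 3.0, 2.5],
                    [4.0, 0.0, 0.5],
                    [0.0, 3.0, 0.5]])

def trilaterate(distances):
    """Least-squares position estimate from distances to the known beacons."""
    p0, d0 = BEACONS[0], distances[0]
    # Subtracting the first sphere equation from the others gives a linear
    # system A x = b in the unknown position x.
    A = 2 * (BEACONS[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(BEACONS[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

true_pos = np.array([1.5, 2.0, 1.7])
dists = np.linalg.norm(BEACONS - true_pos, axis=1)
print(trilaterate(dists))   # ~ [1.5, 2.0, 1.7]
```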
Here are some great places to get started with VR as a developer:
Unity is a fantastic platform for building games in general, and it also covers VR on most platforms.
Finally, we shouldn't leave the VR world without talking about OSVR. This is an open-source VR community that claims a 'universal open source VR ecosystem for technologies across different brands and companies'. The OSVR 'kit' is designed to be developer friendly, hackable, pluggable, and more. It's an interesting option for developers who want to play with more than a single manufacturer provides.
Mixed Reality
Microsoft came out with the term Mixed Reality to describe its HoloLens and other related headset devices. Where Augmented Reality can be seen as mostly residing (now, commercially, leaving aside Google Glass) on mobile devices, and Virtual Reality goes fully immersive and excludes the outside world, Microsoft's view is that Mixed Reality pushes beyond both and brings the best of both worlds. There are two major ways in which the HoloLens differs from current VR headsets. First, it is self-contained and does not require a connected separate computer; second, the headset does not block your vision of the outside world - instead, it interacts with the environment that you are seeing. The device projects a 'holographic image' into your view and is aware of your surroundings. In this way your perception of reality is 'mixed'.
The best way to understand this is to look at a video... here's a good one to start with.
HoloLens is showing immediate benefits in the industrial/commercial world, not just gaming - and this is particularly interesting. I personally like playing computer and VR games, but am not interested in developing them. However, I am a commercial developer, and I can see many applications I would like to develop for the HoloLens.
This video shows some of the uses for HoloLens in the medical sector.
There is no doubt that these technologies can be quite expensive. If you factor in the cost of a high-end PC alongside an HTC Vive, it gets close to the price of a HoloLens. However, you don't need to own a HoloLens in order to develop for one. Development organizations could, for example, buy one for testing, and developers can schedule time on it.
The HoloLens emulator runs on Hyper-V. It allows you to interact with a virtual HoloLens and emulate position, hand gestures, etc., by replicating these with the keyboard, a controller, or the mouse. This Unity blog post on HoloLens gives a great short intro to using the emulator with Unity to make a game and to sharing hardware with your team.
I hope this has been helpful in clearing up the somewhat blurred lines between these very interesting technologies. The market in this area is growing very fast, and if you are at all interested in it, you should consider setting some time aside to roll up your sleeves and crank out some code!