Everyday-AR: The emergence and resurgence of Augmented Reality
Seemingly prevalent in the childhood of the last generation and a half, from the Nintendo 3DS to Pokémon GO to Snapchat, AR has, on paper, succeeded in its endeavor to attract early adopters. Granted, the Nintendo 3DS was easier on the eye with the 3D switch turned off, and Pokémon GO ran more smoothly on the 2D map, but we'll give face filters a solid pass for accessibility.
However, a key element is still lost in translation from the creative industry to the home environment: practical use. We too have built various prototypes and client projects exploring the facets of AR, from retro gamification, build assistance, retail merchandising, homeware shopping, and automotive demonstration through to pure art. With so much potential for innovative use cases, AR feels only a step away from seamless use in our daily lives, yet it still hasn't evolved beyond the 'gimmick'.
The emergence of spatial recognition
Let's take a quick rewind to 2014, when Google's Project Tango became the first to introduce innovative 3D motion tracking based on infrared depth sensors. Software engineers and Medium articles rejoiced, prototypes boomed, and AR moved one step forward. Apple remained a little quiet until the grand release of ARKit (and the farewell of Google Tango) in late 2017: Apple proceeded to bring Augmented Reality to the masses, with hit-testing technology built into every new device and a free SDK for developers.
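For the technically curious, the hit-testing that shipped with those early ARKit releases looked roughly like this - a minimal Swift sketch, assuming a `sceneView` (an ARSCNView) that is already running a session:

```swift
import ARKit

// Minimal sketch of iOS 11-era ARKit hit-testing.
// Assumes `sceneView` is an ARSCNView with a running AR session.
func placeAnchor(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    // Cast a ray from the screen point into the tracked scene,
    // looking for an intersection with a detected plane.
    let results = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return }

    // Anchor virtual content at the real-world intersection point.
    let anchor = ARAnchor(transform: hit.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```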
Fast forward to today, and it looks like Apple is shaking things up again by revisiting the idea of 3D scanning. The new iPad Pro dropped overnight with Apple's first emphasized mention of LiDAR (Light Detection and Ranging) technology: a camera feature with true depth-sensing technology “so advanced that NASA will use it on the next Mars mission,” according to Apple themselves. It is worth noting, though, that NASA's LiDAR techniques actually date even further back than AR's origins - to the laser altimeter flown on the Apollo 15 mission to the Moon in 1971.
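As a rough illustration of what that depth sensing exposes to developers, here's a minimal Swift sketch using ARKit's sceneDepth frame semantic (the `DepthReader` class is our own naming, and it assumes LiDAR hardware running iOS 14 or later):

```swift
import ARKit

// Minimal sketch: enabling ARKit's sceneDepth frame semantic and
// reading the per-frame depth map the LiDAR scanner produces.
// Assumes a LiDAR-equipped device on iOS 14+.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // sceneDepth is only available on LiDAR hardware, so gate on support.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // depthMap is a CVPixelBuffer of metric distances from the
        // camera to the nearest surface, per pixel.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        _ = depthMap // feed into occlusion, meshing, or custom effects
    }
}
```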
The resurgence of contextual AR
LiDAR itself is nothing new: fields such as archeology, agriculture, and architecture have long used it for topographical scanning, and autonomous driving relies almost exclusively on the laser technology - although Elon Musk isn't buying into it for Tesla. Regardless, depth-based room scanning truly is a step up from iOS 11's plane detection. It could eliminate the disturbance of unstable anchoring and pseudo-believable occlusion (virtual objects failing to convincingly hide behind real ones) - that's one giant leap for mankind!
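To make that contrast concrete, here's a hedged Swift sketch of the two approaches side by side: classic plane detection next to the LiDAR-backed scene reconstruction mesh, with occlusion switched on (the `arView` parameter is an assumed RealityKit ARView):

```swift
import ARKit
import RealityKit

// Minimal sketch contrasting iOS 11-style plane detection with the
// LiDAR-backed scene reconstruction mesh (ARKit 3.5+).
func runSceneUnderstanding(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    // The old approach: detect flat horizontal/vertical planes only.
    configuration.planeDetection = [.horizontal, .vertical]

    // The LiDAR step up: a full triangle mesh of the surrounding room,
    // which enables stable anchoring and believable occlusion.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
        // Let real-world geometry hide virtual objects behind it.
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }
    arView.session.run(configuration)
}
```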
Speaking of "leap", this brings us much closer to the spatial computing efforts of Magic Leap. With the release of their contextually aware sensors, some of us jumped (and maybe overstepped a little) at the opportunity to create fully interactive stories and worlds that communicate with the user's physical environment. Through the prototyping process, we soon found that even though the Magic Leap 1 headset does blend the virtual with the physical, the device's narrow field of view remains a limitation, even if it is far superior to Microsoft's first HoloLens. Not to mention, an ongoing bug in Unity3D still causes the VFX Graph to render in one eye only - quite the immersion deal breaker.
The next level of immersive technology?
With Apple's latest built-in camera optics, we can essentially carry real-time world sensing in our pockets - or at least in our backpacks until the next iPhone release - and let's not forget the potential of smart glasses circa 2022. The instant accessibility of AR-enabled mobile devices to 'the people' right now is undeniable. Apple has even broken its AR use cases down into Productivity, Play, and Learning, which is the right experience benchmark for combating what we could coin the 'proactive versus practical' dilemma.
Now that everyday AR feels closer than ever, only time will tell whether this is the moment we've all been waiting for, or whether Augmented Reality will remain in NASA's sci-fi realm for the foreseeable future.