Advanced 3D augmented reality technology offers transformative ways to access digital content. The third Insight in our digital future series (adapted from an article written for the 2019 Bristol Technology Showcase) looks at how the way we access and interact with digital content could change profoundly.
Huge amounts are being invested by both large tech businesses and well-funded start-ups in new ways for humans to interact with the digital world. Advanced 3D augmented reality technology, also called “mixed reality” or “spatial computing”, offers a hands-free, voice- or gesture-based way to access and interact with digital content.
Currently, state-of-the-art devices are bulky, with processing units either integrated into the headset or carried/worn by the user and wired to it. Wireless connections are possible, but powerful real-time processing is constrained by the latency of the wireless link. If the visual images transmitted through the headset do not refresh and update faster than the human eye can perceive the lag, the effect of the digital content being integrated into reality is compromised (and the user may experience motion sickness).
Next generation augmented reality/spatial computing
The latency problem disappears with next-generation connectivity. 5G offers very low latency, which will enable augmented reality headsets to be physically separated from the computer processing units, which may be in the cloud or on the edge (i.e. still separated from the device but in much closer physical proximity). This brings the potential for “headsets” to become barely more bulky or remarkable than a pair of ordinary glasses.
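To make the latency point concrete, a rough "motion-to-photon" budget can be sketched: the total delay between a head movement and the updated image must stay below the threshold at which lag becomes perceptible. All figures below (the ~20 ms perceptibility threshold, and the stage timings for sensing, network round trip, rendering and display) are illustrative assumptions, not measurements:

```python
# Illustrative motion-to-photon latency budget for a wirelessly
# tethered AR headset. All figures are assumed round numbers.

PERCEPTIBLE_LAG_MS = 20.0  # assumed threshold above which lag is noticed


def motion_to_photon_ms(sensor_ms, network_rtt_ms, render_ms, display_ms):
    """Sum the delay stages between a head movement and updated pixels."""
    return sensor_ms + network_rtt_ms + render_ms + display_ms


# A 4G-class round trip (~50 ms, assumed) blows the budget...
lag_4g = motion_to_photon_ms(sensor_ms=2, network_rtt_ms=50, render_ms=8, display_ms=5)
# ...while a 5G edge round trip (~5 ms, assumed) keeps the total within it.
lag_5g = motion_to_photon_ms(sensor_ms=2, network_rtt_ms=5, render_ms=8, display_ms=5)

print(lag_4g, lag_4g <= PERCEPTIBLE_LAG_MS)  # 65.0 False
print(lag_5g, lag_5g <= PERCEPTIBLE_LAG_MS)  # 20.0 True
```

The point of the sketch is that the network round trip dominates the budget: only when it drops to the single-digit milliseconds promised for 5G edge computing can the rendering be moved off the headset without breaking the illusion.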
Next-generation augmented/smart glasses will be equipped with sensors capable of reading the movements of the wearer’s eyes, the gestures of their hands and the layout of their immediate surroundings. They will have incredibly sophisticated lenses capable of conveying 3D, coloured images into the wearer’s field of vision that are integrated with the wearer’s surroundings and responsive to their gestures. The potential is amazing.
AR as an application for industry…
Early advances in this field are less about the consumer market (in which early iterations of smart glasses have not been very successful) and more about enterprise applications. Imagine an engineer working on bespoke or complex machinery. Checking technical drawings or manuals means putting down tools to consult a laptop or a paper copy before picking them back up again. Time is wasted.
With augmented reality, even with 2D smart glasses, the technical information can be conveyed directly into an engineer’s field of vision and consulted by a verbal instruction to turn the page, a tap on the headset or a gesture of the hands. The advantages are obvious. Applications of augmented reality headsets can also involve connected tools so that the movements of the engineer can be measured and correct lengths, torque, or other measurements can be indicated through the headset. A manager in a digital manufacturing facility can have real-time data about the productivity, performance and output of each unit relayed directly into their line of vision as they move around the facility.
…and for enterprise more generally
Wearing a bulky headset is perhaps less noticeable in an industrial environment where health and safety considerations may already require hard hats and safety goggles. It might be less readily accepted in an office environment. On the other hand, there are strong benefits for designers or architects to be able to see, adjust or discuss a design with colleagues or clients also using connected augmented reality devices. It is potentially transformative to be able to use 3D digital design tools via a 3D interface, not a 2D screen.
Once augmented reality glasses are possible, ever-greater avenues open up. In a professional services environment, if each person's vision is augmented and connected to the office systems, the need for screens and physical keyboards falls away. As people walk around the office, shared virtual team notices would be visible on the walls; personal photos might be virtually pinned above a desk (visible only to that desk's user); and the data, document or design that a team is working on could be shared as it is discussed, updating in real time as someone works on it.
A benefit of augmented reality glasses would of course be that you can take them off and disconnect. This would not be possible with digital connections made directly into the human brain. Such devices are in their very early stages and regulatory requirements currently limit them to therapeutic medical applications. But some see direct computer/brain interfaces as the future for augmented humans, boosted by interconnected digital processing. This would undoubtedly be transformative, but embedded neural interfaces would be a much greater step than augmented reality wearables, and would clearly raise considerable ethical and societal challenges.
Transformative technology often gives rise to issues on which there is not yet a societal consensus as to the rights and wrongs of the tech. The ethics of neural interfaces are one such example, as are some applications of artificial intelligence, such as real-time facial recognition in closed circuit TV or the generation of "synthetic reality" content which appears real but never actually happened (for example, deepfakes). Regulation can set the parameters for what is acceptable and permissible, but where technology gives rise to broader societal issues of ethics and morals, there is a recognition that wider dialogue and engagement are needed than the usual public consultation processes for new regulation (as acknowledged in the White Paper on Regulation for the Fourth Industrial Revolution issued by the Department for Business, Energy and Industrial Strategy).
This article is part of a series, “a digital and transformed future”. For an overview of the series, click here.