By Michael Thomas, SAS Systems Architect, SAS
(Excerpted from IIC Tech Brief.)
AR is the digital augmentation of the physical world. The term is most popularly applied to wearable headgear that projects the digital augmentation directly onto the wearer’s view of reality. Commonly, this is achieved by projecting the augmentation onto glass that the wearer looks through, which is called optical see-through. But devices can also use digital see-through, in which the wearer sees proximate reality as low-latency, high-definition video and their eyes receive no natural light; the augmentation is then a digital overlay on top of a digital stream of frames.
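To make the digital see-through pipeline concrete, here is a minimal sketch (assuming Python with OpenCV; the camera index, marker position, and label are illustrative stand-ins for what a real AR runtime would compute from pose estimation). Each captured frame is augmented before it reaches the wearer’s eyes:

```python
import cv2

# Minimal digital see-through loop: the wearer's view is a video stream,
# and the augmentation is drawn onto each frame before display.
capture = cv2.VideoCapture(0)  # stand-in for the headset's forward camera

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Hypothetical augmentation: highlight a region of interest with a
    # marker and a label, as an AR runtime would after pose estimation.
    cv2.circle(frame, (320, 240), 12, (0, 0, 255), thickness=2)
    cv2.putText(frame, "valve 7: 82 psi", (340, 245),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("digital see-through", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```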
Three types of displays can support AR. Typically, AR is displayed through a special headset; however, it can also be delivered through a smartphone, whose screen serves as a window onto the augmented world. This form of AR is neither heads-up nor hands-free, but the user does not need to wear or buy a special headset. Spatial AR is a third form that requires neither a headset nor a smartphone. Instead, light is projected onto physical reality.
A simple use case is to project a red dot onto an assembly in a workspace to draw a worker’s attention. Spatial AR could include holograms too. While wearable and totable computers are not required for spatial AR, projectors must be built into the physical environment, so spatial AR is not mobile. These three forms of AR are shown in Figure 1, “An Intelligent Reality System for Proximate Workers and Remote Experts.” Headsets are wearables. Totables include smartphones and tablets. Ambient computing can drive spatial AR. AR applications of any type can access both edge and cloud services.
The edge meets augmented reality in three main ways:
- AR devices are IoT “things” on the edge. AR headsets are loaded with cameras and other sensors, so they can fit into IoT architectures like any other digital thing. For example, a remote expert could receive the video feed of a field technician to help fix a problem (see the first sketch after this list).
- AR directly consumes edge analytics. An AR device can connect over Wi-Fi to edge gateways that are receiving data directly from assets in the worker’s view. Streaming analytics and AI can process the incoming data and pass the results to the AR device. Depending on the workload, the AR augmentation could be rendered with no noticeable latency to the user (see the second sketch after this list).
- AR devices utilize the edge for computing resources. Wearable and totable computers are constrained devices. Wearables especially need to be lightweight, and both are constrained by battery life. While such AR devices are themselves edge devices, they can benefit from the computing power of other, less encumbered edge devices. Those devices can provide raw computing power as well as act as a cache for contextualized content retrieved from the cloud (see the third sketch after this list). For spatial AR, the direct augmentation of physical reality is performed by computers that live at the edge.
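For the first bullet, here is a rough sketch of an AR headset behaving as an ordinary IoT thing (assuming Python with the paho-mqtt 2.x client; the broker address, topic, and payload fields are illustrative assumptions):

```python
import json
import time

import paho.mqtt.client as mqtt

# An AR headset publishing its own telemetry to an edge broker,
# exactly like any other digital "thing" in the IoT architecture.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, "ar-headset-042")
client.connect("edge-broker.local", 1883)
client.loop_start()

telemetry = {
    "device": "ar-headset-042",
    "battery_pct": 73,
    "pose": {"x": 1.2, "y": 0.4, "z": 1.7},  # wearer position in meters
    "timestamp": time.time(),
}
info = client.publish("site/floor2/ar/telemetry", json.dumps(telemetry), qos=1)
info.wait_for_publish()  # ensure the message leaves before disconnecting

client.loop_stop()
client.disconnect()
```

A video feed to a remote expert would travel over a streaming protocol such as WebRTC rather than MQTT, but the device plays the same role: a sensor-rich thing on the edge.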
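For the second bullet, a minimal sketch of edge streaming analytics feeding an AR overlay (standard-library Python; the window size, threshold, and transport stand-in are assumptions):

```python
import statistics
import time
from collections import deque

WINDOW = deque(maxlen=50)  # rolling window of recent sensor readings

def score(reading: float) -> str:
    """Flag readings that stray beyond three sigma of the recent window."""
    if len(WINDOW) >= 10:
        mean = statistics.fmean(WINDOW)
        stdev = statistics.stdev(WINDOW)
        verdict = "ALERT" if stdev and abs(reading - mean) > 3 * stdev else "normal"
    else:
        verdict = "warming up"
    WINDOW.append(reading)
    return verdict

def push_to_ar(reading: float, verdict: str) -> None:
    # Stand-in for the Wi-Fi hop to the headset (e.g. WebSocket or MQTT);
    # only the small verdict payload travels, not the raw stream.
    print(f"{time.time():.3f}  reading={reading:.1f}  status={verdict}")

# Simulated asset feed; in practice readings arrive at the edge gateway
# directly from equipment in the worker's view.
for value in [20.1, 20.3, 19.8, 20.0, 20.2, 19.9, 20.1, 20.0, 20.3, 20.2, 35.7]:
    push_to_ar(value, score(value))
```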
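For the third bullet, a sketch of an edge node caching contextualized content for constrained AR devices (standard-library Python; fetch_from_cloud() and the TTL are hypothetical placeholders for a real content service):

```python
import time

# Edge cache for contextualized content: fetched from the cloud once,
# then served locally to nearby AR devices without a round trip.
CACHE: dict[str, tuple[float, bytes]] = {}
TTL_SECONDS = 300  # assumed freshness window

def fetch_from_cloud(asset_id: str) -> bytes:
    # Placeholder for a cloud call (e.g. HTTPS to a content service).
    return f"maintenance-manual-for-{asset_id}".encode()

def get_content(asset_id: str) -> bytes:
    """Serve from the edge cache, refreshing from the cloud on a miss."""
    now = time.time()
    hit = CACHE.get(asset_id)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]  # cache hit: no round trip to the cloud
    content = fetch_from_cloud(asset_id)
    CACHE[asset_id] = (now, content)
    return content
```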
Implementation Considerations
As is usually the case, the expected use cases should drive the architecture of AR at the edge. With specific goals in mind, architects should ask:
- What are the latency requirements for users? If you want a signal from a piece of equipment to be rendered in an AR headset without noticeable latency, all the processing and transmission cannot take more than a few tens of milliseconds. Human perception is not as rigorous as machine-to-machine communication, but the feeling of real-time rendering can still be difficult to achieve. Using the motion-picture standard of 24 frames per second, one frame lasts roughly 42 milliseconds, so the entire pipeline has about 40 milliseconds for the rendering to be no more than one frame behind the origination of the signal (see the budget sketch after this list). That does not leave much time for a round trip to the cloud. But if the use case can tolerate several seconds, minutes, or even hours of latency, the architecture will be easier to implement.
- How will security concerns be handled? User Interfaces (UIs) need to meet enterprise security standards, but when AR devices connect directly to edge devices, their UIs differ from desktop UIs connecting to secured cloud services. Authentication of AR users may prove more difficult when they lack a keyboard for typing in a password. For spatial AR, the devices projecting light onto physical reality perform indiscriminately for all users in the physical space and have no straightforward way of checking user authorization levels. Along with latency, security concerns can constrain AR architectural possibilities.
- Must the AR application function independently of remote services? If mission-critical AR applications depend on cloud services for authentication, authorization, or data, then overall operations also become dependent on those remote services. A manufacturing line might have to stop if a service fails or even if there are simple password problems. Proper use of edge computing can provide backup mechanisms so that mission-critical AR continues to function even when normal operation expects some interaction with the cloud (see the fallback sketch after this list).
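The latency arithmetic in the first question above can be made explicit. A back-of-the-envelope budget in Python (the per-stage costs are illustrative assumptions, not measurements):

```python
# One frame at the motion-picture rate of 24 fps:
FRAME_BUDGET_MS = 1000 / 24  # ~41.7 ms

# Assumed costs for each stage between signal and rendered overlay:
pipeline_ms = {
    "sensor sampling": 2.0,
    "edge analytics": 8.0,
    "Wi-Fi hop to headset": 5.0,
    "render and display": 16.0,
}

spent = sum(pipeline_ms.values())
print(f"budget {FRAME_BUDGET_MS:.1f} ms, spent {spent:.1f} ms, "
      f"slack {FRAME_BUDGET_MS - spent:.1f} ms")
# budget 41.7 ms, spent 31.0 ms, slack 10.7 ms
# A 50-100 ms cloud round trip would blow the budget on its own,
# which is why latency-sensitive processing stays at the edge.
```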
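For the third question, a minimal sketch of a degraded-mode fallback (all names and the cache contents are hypothetical; a production design would use proper secret storage and expiry, not a plain dict):

```python
import hashlib

# Credentials synced to the edge while the cloud was reachable:
# username -> SHA-256 digest of the secret.
EDGE_CREDENTIAL_CACHE = {
    "technician7": hashlib.sha256(b"correct horse").hexdigest(),
}

def cloud_authenticate(user: str, secret: str) -> bool:
    # Placeholder for the real cloud identity call; simulate an outage.
    raise ConnectionError("cloud identity service unreachable")

def authenticate(user: str, secret: str) -> bool:
    """Prefer the cloud, degrade to the edge cache so work can continue."""
    try:
        return cloud_authenticate(user, secret)
    except ConnectionError:
        digest = hashlib.sha256(secret.encode()).hexdigest()
        return EDGE_CREDENTIAL_CACHE.get(user) == digest

print(authenticate("technician7", "correct horse"))  # True, via the cache
```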
AR can provide an active Human-Machine Interface (HMI) connecting users to IIoT services. For example, in remote coaching for maintenance, an experienced expert can instantly pinpoint the exact spot a technician should attend to, as sketched below. These experiences can be further enhanced when edge intelligence is added on a real-time or near-real-time basis.
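As a small illustration of that coaching exchange (plain Python; the message fields are assumptions), the expert’s mark can travel as a compact annotation that the technician’s headset renders as an overlay:

```python
import json
from dataclasses import asdict, dataclass

# An expert's mark on the technician's video frame, expressed as a small
# message rather than a video stream; the headset draws it as an overlay.
@dataclass
class Annotation:
    x_norm: float  # position as a fraction of frame width
    y_norm: float  # position as a fraction of frame height
    label: str
    author: str

mark = Annotation(x_norm=0.62, y_norm=0.41,
                  label="loosen this fitting", author="remote-expert-3")
payload = json.dumps(asdict(mark))  # sent back over the shared channel
print(payload)
```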
The IIC Tech Brief was designed to help manufacturing leaders keep pace with the rapid emergence of new technology. It highlights advancements driven by the IIC’s Manufacturing Industry Leadership Council (MILC), IIC working groups, and IIC members.