Augmented reality promises to bring the power of the Internet to the physical world, overlaying computer-generated content on real-world experiences.
Smartphones began this process, with users leveraging the technology’s portability and connectivity to access contextually relevant information whenever and wherever they needed it. Augmented reality, however, takes this process one step further, raising contextualization to a new level and changing the way information is presented and consumed.
This transformation is being driven by the convergence of augmented reality and head-worn computing devices. The blending of these two technologies moves the computer to where many of the senses are located, enabling rich context-dependent experiences that engage the wearer without monopolizing the use of hands or line of sight.
“Up until now, data has always been viewed and presented on its own,” said Soulaiman Itani, founder and chief technology officer of Atheer. “Smartglasses and augmented reality change that because now everything that is displayed has to contend with and integrate with the real world behind it. What an application doesn’t display — leaves blank for the real world to come through unobstructed — is as important as what it displays. Of course, having stereoscopic displays also means that 3D data can now be viewed much more realistically than before.”
To complete this transition and take the technology into the mainstream, developers must first create content that resonates with enterprise users and consumers. “Engaging content and applications will drive the adoption of augmented reality,” said Soren Harner, chief product officer at Meta. “First, developers will create applications to solve specific workplace problems — such as medical training, remote assistance, and 3D visualization.”
This initial step provides practical experiences that assist or simply adapt to the user’s reality, providing information via a fluid and intuitive interface. Ultimately, proponents of the technology expect that smartglasses will become part of consumers’ everyday lives.
“Augmented reality not only will assist users with tasks, but actually supplement, expand, and improve their day-to-day experiences by providing a digital reality when desired — and then disappear when the technology is not wanted or needed,” said John Haddick, chief technology officer at the Osterhout Design Group. “While this end goal is emotionally compelling to augment human experiences, it isn’t required for the early markets, where productivity and safety lead the near-term value proposition.”
Another issue impacting the evolution of the technology is the availability of investment capital—significant funding has only just begun. When more talented people pursue augmented reality research and the financial community provides a steady influx of capital, the market will see a broader assortment of applications, which in turn will accelerate adoption.
“Even today, if you go to graduate and undergraduate schools, you won’t find the needed buzz about this emerging industry,” said Haddick. “It is coming, but one of the main reasons you aren’t seeing more academic focus yet is because the augmented reality jobs are just beginning to materialize, and that is because investment funding has been sparse and unfocused.”
All this may boil down to the need for a “killer app” to drive interest and adoption, opening the door for people to explore and create additional applications that integrate the system deeper into their lives. To reach this point, developers will have to optimize both the hardware and software to give users a reliable and engaging experience.
A Broader View
One of the areas of augmented reality in need of improvement is the field of view (FOV). Experience has proven that a narrow FOV diminishes the experience with smartglasses and limits the applications they can support.
The problem is that as the FOV increases, more pixels are required to achieve the same effective resolution, because each pixel is stretched over a wider angle. Broader FOVs therefore typically require larger displays and optics. Non-traditional optics can bend these rules of thumb, but such alternative systems impose serious tradeoffs in image quality, confining their use to specific situations and limited resolution densities.
The good news is there are several promising technologies on the horizon, including new optical engines and display techniques. For instance, Meta uses a wider reflective surface to create the image, boasting a 90-degree FOV. One of the broadest FOVs demonstrated so far has been a prototype developed by the Osterhout Design Group. This device sports a cinema-wide FOV and promises to deliver 1080p resolution in each display.
In addition to these developments, the industry has seen advances in the sensing paradigm, including RGB cameras, depth sensors, motion and location sensors, and even biosensors. These improvements significantly increase accuracy and power efficiency while reducing latency.
But even with these advances, developers still have a ways to go before the FOV issue is addressed and smartglasses are ready for prime time.
In Search of Resolution
As with FOV, smartglasses developers still struggle with resolution. To understand the challenges, consider the relationship between FOV and resolution: as the FOV increases, the system requires greater resolution to keep the visual experience at an acceptable level. If the system cannot maintain resolution within this threshold, the display becomes pixelated.
But the issues involved in improving resolution go beyond simple resolution measurements. For example, the point at which poor resolution bothers a user depends largely on the type of content displayed. The effective resolution as seen by the eye plays as big a role as the resolution of the display.
To better understand what “resolution” means within the context of smartglasses, think in terms of angular resolution, which is similar to how the eye sees the world. “The angular resolution of a system is the total FOV divided by the number of pixels,” said Haddick. “Your eye can resolve roughly one arc-minute of resolution, or 1/60th of a degree. For most large FOV systems, the current maximum resolution is about 1080 pixels wide per eye, which — depending on the system — gives about 4 arc-minutes per pixel — about four times larger than what the eye can see, or 1.4 arc minutes per color sub-pixel.”
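The arithmetic in Haddick’s rule of thumb can be sketched in a few lines of Python. The 72-degree FOV and 1,080-pixel width below are illustrative figures chosen to match his “about 4 arc-minutes per pixel” estimate, not the specifications of any particular product:

```python
# Per-pixel angular resolution of a head-worn display, following
# Haddick's rule of thumb: total FOV divided by the number of pixels.
# The eye resolves roughly one arc-minute (1/60th of a degree).
EYE_ACUITY_ARCMIN = 1.0

def angular_resolution_arcmin(fov_degrees: float, pixels: int) -> float:
    """Arc-minutes spanned by each pixel (60 arc-minutes per degree)."""
    return fov_degrees * 60.0 / pixels

# Illustrative figures: a 72-degree FOV spread across 1,080 pixels.
per_pixel = angular_resolution_arcmin(72.0, 1080)
print(f"{per_pixel:.1f} arc-min per pixel")          # 4.0 arc-min per pixel
print(f"{per_pixel / EYE_ACUITY_ARCMIN:.0f}x coarser than the eye can resolve")
```

At a wider 90-degree FOV, the same 1,080 pixels stretch to 5 arc-minutes each, which is why broader fields of view demand proportionally more pixels to hold the line on perceived sharpness.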
Another challenge is that smartglasses render content against the backdrop of the real world. This side-by-side comparison makes it almost impossible for anything but high resolution to be acceptable.
As with efforts to expand the FOV, the providers of smartglasses have been making steady improvements in the resolution of their systems. But the remaining issues in this area still keep smartglasses from playing a bigger role in the market.
Providing high-end display performance represents only one of the challenges confronting developers of smartglasses. Most providers see ungloved hand technology as the key to ensuring natural interactions with head-worn systems. “Coupled with a 3D display, hand interactions tap our sense of proprioception to give us a much deeper sense of movement and space than 3D vision alone,” said Harner.
In fact, many argue that the technology will play a big part in moving smartglasses into the mainstream. “Having the correct, human-centric user interface is very important,” said Itani. “A big part of that would be a natural, accurate, ungloved hand interaction.”
While a lot of the groundwork has been done to enable this augmented reality technology, there is more still to be accomplished. Current hand-tracking systems are powerful for certain niche use cases, but they still require specific action gestures to perform events, like a click. The technology does not yet enable a fully natural interaction. Poor resolution and limited frame rates are only part of the problem. The biggest obstacle seems to be the lack of understanding of the environment around the user, which is required for a fully natural experience.
Tethered vs. Untethered
Another factor that affects the natural experience of smartglasses is whether the device is tethered or untethered. Tethered smartglasses connect to a separate computer, which ensures plenty of compute capability and an almost limitless power supply. The second type is a stand-alone, untethered device, which relies solely on local processing and power for its operation.
The differences between the two types of designs greatly impact the capabilities provided and the applications supported. “Battery performance has a direct effect on run times when using an untethered system, but very little effect on performance, which is much more limited by the type of processor that can be used in a mobile form factor,” said Haddick. “As battery capacity improves, the number of practical applications for mobile augmented reality will grow, but compact systems will benefit most from more efficient processors that do more in a smaller package while generating less heat.”
The contrasts in the designs also reflect distinctly different market strategies and design philosophies.
“There is an experience versus portability tradeoff,” said Harner. “Immersive experiences require high-powered GPUs, which need power and generate heat. We [Meta] chose first to create an immersive experience and then to optimize size, heat, and power.”
Other providers see the untethered approach as fundamental to the augmented reality experience. “We are passionate that the only way to help people actually engage with the world through technology is to provide the most compact stand-alone systems possible,” said Haddick.
Growing Into the Mainstream
Augmented reality smartglasses technology will mature and deliver increasingly compelling experiences over the next five years. During the early stages of its adoption, workers will use the technology to access relevant information to better perform their jobs.
In the later stages, consumers will grow to expect a seamless and refined experience.
The sensors, algorithms, processors, and display technology will have to evolve to meet those needs. “Display brightness and resolutions will increase dramatically, as will optical technologies,” said Haddick. “Embedded processors will become far more efficient and task-optimized while at the same time faster streaming connectivity will enable a rich global connection between people, information, and resources that can be processed remotely and delivered efficiently to the user in real time.”