There’s no way around it. Context matters. Data without context for a specific situation or condition provides little value. The emergence of Big Data makes this particularly clear. Users now have access to mountains of raw data in a broad assortment of media. The challenge for users and technology providers alike is to find ways of deriving meaning—and thus value—from the data. One of the primary ways of doing this is by using context-aware computing.
This technology uses situational and environmental information to anticipate the user’s immediate needs and to offer enriched, relevant content, services and experiences. In doing so, context-aware computing adds value to consumer, enterprise and healthcare systems.
While the technology has proven its worth in such applications as Amazon’s recommendation engine and Google’s search engine, context-aware computing really hit its stride with the global adoption of mobile devices. These platforms not only injected mobility into the equation, but also provided access to a range of sensors and to repositories of individual preferences and personal data. Armed with these resources, context-aware computing devices can serve users in many ways, from helping them avoid traffic congestion to finding a nearby restaurant or locating friends in the area. The rise of augmented reality-enabled devices takes context-aware computing one step further by superimposing relevant, real-time digital information over the user’s view of the physical world.
A Convergence of the Senses
The greatest current enabler of context-aware computing is location. System designers rely on such technologies as GPS, cell tower triangulation and Wi-Fi localization to deliver location-based services.
Relatively new to the mix, Bluetooth beacons provide location data for micro-location settings (for example, indoors), where traditional means often fall short. Context-aware systems can use beacons to determine proximity for context. The technology can be effective even with moving objects, adding an extra dimension to context awareness.
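One common way beacon-based systems turn a radio signal into proximity is the log-distance path-loss model, which maps received signal strength (RSSI) to an approximate distance and then to a coarse zone. The sketch below is illustrative only; the calibrated transmit power and zone thresholds are assumptions, not values from any particular beacon vendor.

```python
import math

def estimate_distance(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance (meters) from a beacon's RSSI using the
    log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    tx_power is the calibrated RSSI at 1 m; n is the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def proximity_zone(distance_m: float) -> str:
    """Map an estimated distance to the coarse zones typically used
    for beacon-based context (immediate / near / far)."""
    if distance_m < 0.5:
        return "immediate"
    if distance_m < 4.0:
        return "near"
    return "far"
```

In practice, RSSI fluctuates with reflections and obstructions, so real systems smooth readings over a window before classifying the zone.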
These systems can also combine proximity and location data with point-of-interest information. For example, one of these intelligent systems could determine if someone was watching a movie by knowing the person was next to a beacon inside a movie theater and that a movie was scheduled to be shown at that time.
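The movie-theater example above amounts to a simple rule: join a beacon sighting against a point-of-interest registry and a schedule. A minimal sketch, in which the beacon IDs, registry structure, and screening times are all hypothetical:

```python
from datetime import datetime
from typing import Optional

# Hypothetical point-of-interest registry keyed by beacon ID.
POI_BY_BEACON = {
    "beacon-42": {"venue": "movie theater", "screenings": [(19, 22)]},  # 7-10 p.m.
}

def infer_activity(beacon_id: str, when: datetime) -> Optional[str]:
    """Combine beacon proximity with point-of-interest data and a
    schedule to infer what the user is likely doing."""
    poi = POI_BY_BEACON.get(beacon_id)
    if poi is None:
        return None
    if poi["venue"] == "movie theater":
        for start_hour, end_hour in poi["screenings"]:
            if start_hour <= when.hour < end_hour:
                return "watching a movie"
    return None
```

Real systems would weight many such signals probabilistically rather than firing a single hard rule.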
In addition to these wireless-positioning systems, mobile devices can determine context by leveraging motion sensors, such as accelerometers and gyroscopes, and position sensors, such as magnetometers. There are, however, drawbacks to using these sensors.
“These devices are complicated to work with,” says Chas Wurster, chief technology officer at Gimbal. “But the recently introduced features in some phones leveraging sensor fusion in a low-power way to give more high-level services like motion or activity estimation allow developers to leverage signals like walking, running, biking and driving. These systems offer a lot of promise.”
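The activity signals Wurster describes (walking, running, driving) are typically produced by trained models running on low-power coprocessors, but the core idea can be illustrated with a crude heuristic: the variance of the accelerometer magnitude separates stillness from rhythmic motion. The thresholds below are illustrative assumptions, not values from any shipping phone.

```python
import statistics

def classify_motion(accel_magnitudes: list) -> str:
    """Very rough activity estimate from a window of accelerometer
    magnitude samples (in g). Low variance suggests a resting device;
    higher variance suggests walking or running. Thresholds are
    illustrative only; production pipelines use trained models."""
    var = statistics.pvariance(accel_magnitudes)
    if var < 0.01:
        return "stationary"
    if var < 0.5:
        return "walking"
    return "running"
```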
Also present on mobile phones, environmental sensors—such as barometers, photometers and thermometers—can enrich context awareness. For instance, the system can use a thermometer to determine whether someone is outside by looking at the temperature differential. The drawback to environmental sensors is that their readings are easily compromised: if the phone is being carried in a pocket, for example, the temperature reading will be misleading.
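The temperature-differential idea can be sketched as a comparison of the device's reading against the reported outdoor temperature versus a typical indoor temperature. The 21 °C indoor baseline is an assumption for illustration:

```python
def likely_outdoors(device_temp_c: float, outdoor_temp_c: float,
                    indoor_temp_c: float = 21.0) -> bool:
    """Guess whether the user is outdoors by checking whether the
    device's thermometer reads closer to the outdoor temperature
    than to a typical indoor one. As noted in the text, a phone
    carried in a pocket defeats this check."""
    return abs(device_temp_c - outdoor_temp_c) < abs(device_temp_c - indoor_temp_c)
```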
Context-aware technology can also leverage other sensors residing on phones, such as the camera and microphone. Unfortunately, engineers encounter issues when they use these devices. “While these sensors provide rich information, there is concern about the privacy of users and of those around them,” says Wurster.
Another important source of contextual information that triggers privacy concerns is the personal information that resides on mobile devices. Context-aware systems can mine email content, browsing history and business transaction records, acquiring granular details about a person’s context unavailable through other means. For instance, knowing that the user has to attend an early-morning meeting can trigger the system to find the best route to the office, factoring in rush-hour traffic.
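The early-morning-meeting example boils down to working backward from a calendar event to a departure time, inflating the travel estimate when the trip falls in rush hour. The rush-hour window and inflation factor below are hypothetical:

```python
from datetime import datetime, timedelta

def suggest_departure(meeting_start: datetime, base_travel_min: int,
                      rush_hour_factor: float = 1.5,
                      buffer_min: int = 10) -> datetime:
    """Work backward from a calendar event to a suggested departure
    time, inflating the travel estimate for meetings that fall in an
    assumed 7-9 a.m. rush-hour window."""
    travel = base_travel_min
    if 7 <= meeting_start.hour < 9:
        travel = int(base_travel_min * rush_hour_factor)
    return meeting_start - timedelta(minutes=travel + buffer_min)
```

A deployed system would of course query live traffic data rather than apply a fixed multiplier.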
Activity on social media websites like Facebook also offers rich contextual information, and developers may use these sources to a greater extent in the future. “Upcoming apps will incorporate social-signal-processing—users' social-site activity—as well as other trends around the activity,” says Raj Tumuluri, chief executive officer at Openstream.
It is important, however, to keep in mind that before developers can tap these information sources, they will not only have to overcome privacy concerns but also provide the computing power and software required to process these types of data.
Making Sense of It All
Industry watchers currently place a lot of emphasis on sensors and their contribution to context awareness, but the “secret sauce” of these systems really lies in the analytics. This software comes in a variety of flavors, including predictive computing, predictive reality and predictive analytics.
“Advances in software algorithms are making the context-aware computing model much more robust,” says James Redfield, senior vice president of engineering at CrowdOptic.
One area where these advances can be seen is in sensor fusion. This technology allows the combination of diverse sensor inputs to provide a more accurate model. Typically, in mobile devices, sensor fusion implemented at the DSP level enables higher-level models to be computed in a power-efficient way.
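A classic, lightweight example of sensor fusion is the complementary filter, which blends a gyroscope's fast but drift-prone rate signal with an accelerometer's noisy but drift-free tilt estimate. This sketch shows the idea for a single pitch angle; the blending coefficient is a typical illustrative value, not one from any specific device.

```python
import math

def complementary_filter(angle_prev: float, gyro_rate: float,
                         accel_x: float, accel_z: float,
                         dt: float, alpha: float = 0.98) -> float:
    """Fuse a gyroscope rate (deg/s) with an accelerometer tilt
    estimate into one pitch angle (degrees). The gyro term tracks
    fast motion; the small accelerometer term corrects long-term
    drift. alpha controls the blend."""
    gyro_angle = angle_prev + gyro_rate * dt          # integrate the rate
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # gravity-based tilt
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

On phones, equivalent filtering typically runs on the DSP precisely so the main processor can sleep, which is what makes the fused output power-efficient.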
“Sensory-input fusion becomes critically important in interpreting intent and providing ease of interaction in context-aware systems,” says Tumuluri.
Room for Improvement
While hardware and software advances have enabled context-aware computing to gain momentum in a number of industries, developers must still overcome key challenges before the technology can come close to reaching its full potential. These include refining the process of federating data gathered from multiple sources and improving the reliability of indoor navigation.
In terms of federating data, the challenge centers on the need for standardization. To open the door for more companies to take advantage of context-aware computing, vendors, government agencies and industry will have to work together to establish clear location and context information taxonomies. Once all parties agree on how information should be described, the work of federating data will become much more manageable.
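To make the taxonomy point concrete, a shared context record might look like the sketch below. Every field name here is hypothetical; the point is only that federation becomes tractable once all parties serialize location and context the same way.

```python
from dataclasses import dataclass, asdict

@dataclass
class ContextRecord:
    """Hypothetical shared schema for federated context data.
    Field names are illustrative, not from any published standard."""
    source: str        # e.g. "gps", "beacon", "wifi"
    latitude: float
    longitude: float
    place_type: str    # agreed-upon taxonomy term, e.g. "retail.department_store"
    timestamp: str     # ISO 8601, UTC

def to_wire(record: ContextRecord) -> dict:
    """Serialize a record to a plain dict for exchange between
    federated systems."""
    return asdict(record)
```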
Developers must also come to grips with security issues that arise when personal information is incorporated into the federating process. This will require the introduction of privacy controls to ensure that users’ information is protected and anonymized. “Even a request to a federated server can leak information about the requestor,” says Wurster.
As for improving indoor navigation, the problem has long been inadequate sensor coverage. To rectify this situation, developers need better indoor sensors. These include interior GPS and more robust environmental sensors for physical and chemical variables.
Beacons, on the other hand, present different challenges. One of the main barriers to beacon-enabled indoor navigation is the absence of an information infrastructure. For example, indoor positioning via beacons can tell an application that someone is standing in front of the Levi’s display at a department store. But for that to work, someone must define which beacon is at the display. While some enterprises have begun this process, much more remains to be done.
“One other aspect of this centers on business value,” says Wurster. “A department store may be interested in their mobile application and their partner’s being able to leverage this information to provide value to their shoppers, but they would not like to have their competitor be able to use their infrastructure. The issue becomes one of not just federating data but also managing who has access to the data.”
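The access-management problem Wurster raises reduces to an allow list per beacon owner: partners can resolve a beacon into context, competitors cannot. A minimal sketch, with hypothetical owner and application names:

```python
# Hypothetical access policy: each beacon owner lists the partner
# applications allowed to resolve its beacons into context.
BEACON_ACL = {
    "store-owner": {"store-app", "partner-app"},
}

def may_resolve(owner: str, requesting_app: str) -> bool:
    """Return True only if the requesting application is on the
    beacon owner's allow list; competitors are simply absent."""
    return requesting_app in BEACON_ACL.get(owner, set())
```

A production system would also address Wurster's earlier point that the request itself can leak information about the requestor.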
Leaders in the context-aware computing field contend that developers will soon enhance the technology by giving systems the ability to acquire relevant information from nearby devices. This capability is technically feasible today, but the supporting technology has not yet been deployed widely enough. Providers have already begun to set this shift in motion; soon, for example, almost everyone will be able to use a smartphone to remotely interact with the environmental control systems in buildings and homes.
One thing is certain: this type of functionality will find a home in a broad spectrum of industries. Currently, technology providers are focusing much of their effort on enterprise and healthcare applications.
Where Does the Cloud Fit in?
As with many other technologies that try to impart intelligence to electronic devices, context-aware computing represents a work in progress. Currently the cloud plays a critical role in the technology’s support infrastructure, but that may change. “The cloud is a necessary component of context-aware computing, but in the future, it is possible that devices will be able to contain enough information to make the cloud unnecessary,” says Redfield.
The demand for better indoor navigation and technology that can adapt to real-time changes in context make the cloud the “only game in town.” For now, only a combination of local sensors and cloud-based data can provide the granularity and flexibility required to meet the demands of richer use cases. But as the novelty of the technology wears off, consumers will increasingly call for reduced latency and greater security and privacy. This will shift the focus back to device-resident intelligence.