Thursday 6 April 2017

Towards Wearable Cognitive Assistance

Overview:

This paper develops an architecture for offloading computation and processing from wearable devices to cloudlets. The authors built a prototype cognitive-assistance system, Gabriel, based on Google Glass, aimed at people with cognitive decline. The system assists patients with conditions such as Alzheimer's disease, as well as people with mild cognitive impairments from traumatic brain injury, in tasks like recognising people and objects.

The main challenges they see in developing such a system are:
1. meeting tight end-to-end latency bounds on wearables with limited battery capacity,
2. enabling easy back-end customisation for different applications, and
3. degrading service gracefully under network failures.

Contributions:

1. The authors propose a fully featured cognitive-assistance system for users with cognitive decline, performing real-time scene interpretation with Google Glass.
2. A VM-based (cloudlet) scalability model that can support a wide variety of operating systems and applications.
3. The prototype provides cognitive assistance in totally unmodified environments: it relies on computer vision from the wearable computer rather than on sensors embedded in the environment, which makes the assistance available anywhere, at any time.
4. A temporary fallback mode for when offloading services are unavailable, during which the user perceives a graceful degradation of service instead of an abrupt disruption.

Design:

1. Google Glass devices, equipped with multiple sensors such as cameras and accelerometers, capture information from the user's environment.
2. The sensor streams from the wearable device are offloaded to cloudlets, where compute-intensive computer-vision and machine-learning techniques perform tasks such as object and face recognition.
3. The Glass offloads over Wi-Fi, either directly to a nearby cloudlet or to a laptop or netbook that handles the offload.
4. Running each cognitive engine in a separate VM lets the various sensor streams be handled independently, exposing coarse-grained parallelism that can be exploited to speed up processing by increasing the number of cores.
5. A control VM receives and preprocesses the various input streams; it feeds cognitive VMs that run different computer-vision algorithms; finally, a user-guidance VM integrates the results from the cognitive engines to understand context and give better cognitive assistance to the user.
6. A PubSub model enables inter-VM communication in the system. The Glass discovers the Gabriel infrastructure via UPnP queries; the UPnP server in the control VM responds with the public IP address of its DeviceComm server.
7. To manage the many queues along the stack, they use a token-based filtering technique to limit the ingress of items into each data stream.
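To make item 6 concrete, here is a minimal in-process sketch of the PubSub pattern used for inter-VM communication. The topic names and handler signatures are my own illustrative assumptions, not Gabriel's actual wire protocol:

```python
# Hedged sketch of a PubSub bus: the control VM publishes sensor items to
# topics, and each cognitive VM subscribes to the topics it can process.
# Topic names ("sensor/video") are illustrative assumptions.

from collections import defaultdict

class PubSub:
    def __init__(self):
        # topic name -> list of subscriber callbacks
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Fan the message out to every subscriber of this topic.
        for handler in self.subscribers[topic]:
            handler(message)

bus = PubSub()
results = []
bus.subscribe("sensor/video", results.append)   # a cognitive VM subscribes
bus.publish("sensor/video", {"frame_id": 1})    # control VM fans out a frame
```

In the real system the bus would cross VM boundaries over the network, but the decoupling is the same: publishers need not know which cognitive engines are attached.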
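The token-based filtering of item 7 can be sketched as follows. This is my reconstruction of the idea, assuming one token pool per sensor stream, where a frame is sent only if a token is free and the token is returned when the cloudlet acknowledges processing; the class and method names are hypothetical:

```python
# Hedged sketch: bound the number of in-flight items per stream so queues
# along the offload path cannot grow without limit.

import threading

class TokenFilter:
    """Limits in-flight items for one sensor stream to `max_tokens`."""

    def __init__(self, max_tokens=2):
        self.tokens = max_tokens
        self.lock = threading.Lock()

    def try_send(self, frame, send_fn):
        # Drop the frame when no token is free; this bounds queuing latency
        # at the cost of skipping some frames.
        with self.lock:
            if self.tokens == 0:
                return False
            self.tokens -= 1
        send_fn(frame)
        return True

    def on_ack(self):
        # The cloudlet finished processing one item; return its token.
        with self.lock:
            self.tokens += 1
```

With `max_tokens=2`, a third frame offered while two are still in flight is simply dropped, which is why fresh frames keep low latency even when a cognitive engine is slow.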

Comments:

1. The paper addresses a very interesting and useful application, providing cognitive assistance to users with cognitive decline, by combining edge offloading (cloudlets) with the rich sensing capabilities of wearable devices (Google Glass).
2. The prototype includes support for various computer-vision tasks through different cognitive engines that could be used for a wide variety of applications.
3. The architecture is evaluated extensively from different perspectives, on metrics such as latency, processing overhead, energy, response time, fidelity and accuracy.
4. The prototype is tightly focused on Google Glass hardware and the evaluation is based solely on it. A comparison with other available hardware, such as Microsoft HoloLens, would have strengthened the evaluation.
5. The control VM may become a bottleneck when it has to serve many wearables at the same time, since it processes data streams in parallel on both sides (Glass and cognitive VMs). A scalability analysis stressing the control VM would have been a useful addition.
6. The token-based filtering scheme itself adds latency to the system. However, the authors mention that they are working on a feedback-based adaptation mechanism to optimise the performance of the filtering strategy.
7. The cognitive VMs discussed are all video-processing engines that require complex computation. If multiple data streams from the device are processed in parallel by the cloudlet, their results will arrive at different times, so an aggregation engine is needed at the user-guidance VM to combine the results from all the cognitive VMs; this is not discussed in the paper.
8. I feel the idea in itself is simple: they essentially applied the cloudlet framework to Google Glass.

1 comment:

  1. Very good analysis. Could help users indoors, but will cloudlets be available in live outdoor settings?