Thursday 6 April 2017

Odessa: Enabling Interactive Perception Applications on Mobile Devices

Summary

Mobile device capabilities are becoming increasingly powerful, enabling a new class of interactive mobile applications. These applications use cameras and other sensors to perform perception tasks like object recognition, augmented reality, and natural language processing (think Snapchat and Siri). Of course, these applications require good performance to be practical at all.
The solution is Odessa -- a runtime that automatically and adaptively makes offloading and parallelism decisions for mobile interactive perception applications.

There are a few properties/requirements that come with these applications:

  1. They require a fast response time
  2. They require continuous sensor processing
  3. The algorithms employed are compute intensive
  4. The performance is data-dependent


The solution?

  • Offloading
  • Parallelism (which comes in two flavors)
    • Data Parallelism
    • Pipelining


Performance is measured by makespan (the time taken to execute all stages of a data flow for a single frame) and throughput (rate at which frames are processed). Makespan is a measure of responsiveness (thus we want it low) and throughput is a measure of accuracy (thus we want it high).
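The two metrics can be illustrated with hypothetical per-stage timings (numbers are made up, not from the paper): makespan sums the stage times for one frame, while with pipelining the throughput is limited by the slowest (bottleneck) stage.

```python
# Hypothetical per-stage execution times (seconds) for one frame of a
# three-stage data-flow graph; numbers are made up for illustration.
stage_times = {"capture": 0.02, "features": 0.30, "classify": 0.08}

# Makespan: time to push a single frame through every stage (responsiveness).
makespan = sum(stage_times.values())          # 0.40 s

# Throughput: with pipelining, frames leave at the rate of the slowest
# (bottleneck) stage, not at 1 / makespan.
throughput = 1.0 / max(stage_times.values())  # ~3.3 frames/s
```

This is why the two metrics can pull in different directions: offloading a stage may shrink the bottleneck (raising throughput) while adding network latency to the makespan.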

Design goals, in decreasing order of importance:

  1. Simultaneously achieve low makespan and high throughput
  2. React quickly to changes in input complexity, device capability, or network conditions
  3. Achieve low computation and communication overhead


Odessa is built on Sprout, a distributed framework that makes developing and executing parallel applications easy. Odessa is essentially the brain, deciding when and how much to offload and parallelize, while Sprout is the muscle, performing the actual offloading and parallel execution.

Odessa's decision making is powered by metrics gathered by its application profiler. Decisions are based on estimates and are acted upon only when both makespan and throughput would improve. This conservative rule may leave some performance on the table, since decisions that improve only one metric are never taken -- the rationale being that single-metric decisions are more susceptible to errors in the estimates.
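The conservative rule can be sketched in a few lines (function and tuple layout are my own, not the paper's API): a candidate change is applied only when the profiler's estimates predict that both metrics improve.

```python
# Sketch of Odessa-style conservative adaptation. A candidate offloading or
# parallelism change is applied only if the *estimated* post-change metrics
# beat the current ones on both axes.
def should_apply(current, estimated):
    """current and estimated are (makespan, throughput) tuples."""
    makespan_better = estimated[0] < current[0]    # lower is better
    throughput_better = estimated[1] > current[1]  # higher is better
    return makespan_better and throughput_better   # AND rule, not OR
```

For example, a change estimated to take makespan from 0.40 s to 0.35 s while raising throughput from 2.5 to 3.0 fps would be applied; one that trades throughput away for makespan would not.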

Strengths

  • Odessa runs across different mobile platforms, and applications can keep running locally even when the mobile is disconnected from the server
  • Has self-correcting mechanisms to maintain stability (although the authors contend that Odessa's decisions rarely need to be reversed)
  • Does not require prior information on application performance (in contrast with MAUI) or a set of feasible data partitions for offloading decisions. (Sidenote: more and more I'm realizing how influential MAUI must be as it's come up in so many of the papers we've read. I guess it's nice to be first.)
  • The greedy heuristic the authors employ for makespan partitioning has performance comparable to that of an optimal decision made with complete profiling information -- pretty impressive!
  • The adaptability is one of the key selling points: Odessa is able to adapt to changes in scene complexity, resource availability, and network bandwidth.


Shortcomings

  • The authors state that Odessa only increases data parallelism until makespan and throughput changes are marginal. I wish they would go into a bit more detail on this. How much is marginal? Clearly there is a baseline, but does this baseline apply to every application?
  • What application are they testing when measuring adaptability to network conditions? I suppose we'll have to assume the resulting performance applies to the other applications as well, since they didn't test them.
  • I know this runtime is specifically for interactive applications, but the only algorithms they test are computer-vision based. Testing a broader range of applications might reveal further benefits.
  • It's possible that an offloading or parallelism decision may need to be reversed, which can lead to a stage bouncing between mobile and server. (Note: this doesn't seem too detrimental, as they do have a solution for it and it appears to be a rare occurrence.)
  • I wish they had at least tested a more aggressive decision-making strategy that acts when only one metric improves. It would be interesting to compare all four strategies: act only when makespan decreases, only when throughput increases, when either improves (OR), and when both improve (AND).
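The four candidate rules I'd like to see compared are easy to state (my sketch, not from the paper), taking the estimated change in makespan (dm, negative is better) and throughput (dt, positive is better):

```python
# Four candidate decision rules over estimated metric deltas:
# dm = change in makespan (negative is an improvement),
# dt = change in throughput (positive is an improvement).
RULES = {
    "makespan_only":   lambda dm, dt: dm < 0,
    "throughput_only": lambda dm, dt: dt > 0,
    "either_or":       lambda dm, dt: dm < 0 or dt > 0,
    "both_and":        lambda dm, dt: dm < 0 and dt > 0,  # Odessa's choice
}
```

Running the same workloads under all four and plotting makespan/throughput over time would show how much the conservative AND rule actually costs.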


Discussion

  • What other applications might be interesting to test? The authors stick to computer vision applications. What about voice-based systems?
  • Can we extend the competing strategies that Odessa is being compared to?



1 comment:

  1. Good summary of strengths and weaknesses. I agree a wider range of applications would be needed to see the wider applicability of the approach.
