Monday 30 January 2017

Parametric Analysis for Adaptive Computation Offloading


SUMMARY
This paper frames the task of computation offloading as an optimization problem which can be solved using parametric analysis.
Computation offloading is useful only when it leads to an overall improvement in performance and energy savings. Therefore, when deciding whether or not a particular computation should be offloaded, there is a tradeoff between the communication cost and the computation cost. The premise of this paper is that for most applications these costs depend on the input parameters of the specific application instance, and hence an optimal program partitioning cannot be completely determined a priori. The paper associates with the program a cost formula that uses the runtime values of the parameters to efficiently determine an optimal partitioning.


The solution presented by this paper is as follows -
  1. Program Partitioning -
    1. The program is partitioned into schedulable tasks.
    2. A Task Control Flow Graph (TCFG) is used to represent the program. Each node is a task and the directed edges denote the associated data transfer between these tasks.
    3. ‘Data Validity States’ are maintained on each host to avoid redundant data transfers.
    4. ‘Memory Abstraction’ is used to resolve the issue of unknown data dependencies.
  2. Cost Analysis -
    1. Computation, Communication, Scheduling and Data Registration (Validity) costs are expressed as functions of program input parameter values.
    2. These are then used to derive a cost formula which must be optimised.
    3. This optimisation problem is modelled as a ‘min-cut network flow’ problem.
    4. A parametric partitioning algorithm is used to solve this problem and the result is an optimal program partitioning, one for each possible range of input parameters.
    5. At runtime, the program’s tasks schedule themselves on either host based on the current parameter values and the corresponding partitioning solution from the parametric analysis in step 4 (a small sketch of this pipeline follows after this list).
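To make the pipeline concrete, here is a minimal sketch of how a TCFG, the per-range results of the parametric analysis, and the runtime lookup might fit together. All task names, cost expressions and parameter ranges below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: task names, transfer expressions and parameter
# ranges are invented, not taken from the paper.

# Task Control Flow Graph: each node is a task; edges carry the data
# transferred to the successor task, expressed in terms of the input
# parameter n (e.g. the length of an audio file).
TCFG = {
    "read":   {"encode": "n"},       # "read" passes n bytes to "encode"
    "encode": {"write": "n / 4"},
    "write":  {},
}

# Offline result of the parametric analysis: one partitioning (the set of
# tasks to offload) per range of the input parameter n.
PARTITIONS = [
    (0,      1_000,  set()),                  # small inputs: everything local
    (1_000,  50_000, {"encode"}),             # medium inputs: offload the hot task
    (50_000, None,   {"encode", "write"}),    # large inputs: offload more
]

def offloaded_tasks(n):
    """Runtime lookup: pick the precomputed partitioning for the current n."""
    for lo, hi, remote_tasks in PARTITIONS:
        if n >= lo and (hi is None or n < hi):
            return remote_tasks
    return set()

def run_program(n):
    """Each task schedules itself on the host chosen for the current parameters."""
    remote_tasks = offloaded_tasks(n)
    for task in TCFG:
        host = "server" if task in remote_tasks else "client"
        print(f"n={n}: running {task!r} on the {host}")

run_program(200)     # all three tasks stay on the client
run_program(5_000)   # "encode" is offloaded to the server
```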


STRENGTHS OF THE PAPER
  1. The Program Partitioning Model could be an important takeaway from this paper. Most computation offloading schemes require restructuring of the program, and the details discussed in section 2 may easily be applied to other offloading techniques. It clearly outlines the task granularity and breakpoints, and the TCFG provides a good visualisation.
  2. Most of the program partitioning and cost analysis work is done beforehand, which means little latency is added at runtime, unlike frameworks that make offloading decisions entirely at runtime. At the same time, these decisions are not agnostic to the program execution state: at runtime the program refers to a predetermined chart to find its optimal partitioning, which means better decisions are made than with a purely static system.
  3. The authors provide a lot of detail about the experimental setup, which makes their results easier to understand and accept. The experiments are run with four different applications that differ in their complexity, their number of input parameters, and the results of the parametric analysis.
  4. The results of this approach, as discussed in the paper, are impressive. There is a 37% performance improvement with offloading compared to running the whole application locally.
  5. The writing style of the paper makes the fairly involved math and the many details relatively easy to understand, thanks to the abundance of examples, edge cases and explanations. A few instances of this are -
    1. The authors state that different input parameters can require different optimal partitionings, and they put this point across well with a concrete example of an audio file encoder in section 1.1.
    2. The authors provide reasoning for their decisions where possible; for example, they explain why the granularity of a task must be smaller than a function in section 2.
    3. Edge cases are handled where possible; for example, section 2.2 provides the ‘Data Validity States’ to keep the estimated communication costs as close to the real ones as possible.


WEAKNESSES OF THE PAPER
  1. Program Partitioning -
    1. The program partitioning discussed in this paper analyses the code line by line to find task headers and branches, as discussed in section 2.1. This means that considerable programmer effort is required to restructure the program and break it into tasks. An even deeper understanding may be required to construct the correct TCFG.
    2. A point discussed in [2] is that such methods can only be applied to specific kinds of applications (multimedia). I believe this is true because it may be impossible to model very complex applications as a simple graph.
  2. Cost Analysis -
    1. Extensive experiments are needed to determine the values of the constants in the cost formula. These experiments must be re-run each time the application logic changes and for every new application.
    2. For the more complex applications, user annotations (section 3.4) may be required to express the costs as functions of the input parameters.
    3. The results of the parametric analysis are different partitionings for different ranges of input parameters. The range of an input parameter can be very large, and more details are necessary to understand how these results are stored, evaluated and searched at runtime.
  3. Experiments -
    1. Experiments are only conducted with multimedia applications. As discussed in [1], experiments with simpler applications such as text editors could also have been included.
    2. The experiments do not provide comparisons with other similar frameworks or any exact numbers on energy consumption.
  4. Scheduling -
    1. The paper talks about scheduling only very briefly in section 2.
    2. Only one host is active at any point, which unnecessarily adds delay to the application execution. While the overall runtime is reduced by offloading, such scheduling does not fully exploit the benefit of a distributed architecture.
  5. There is a lot of additional bookkeeping required in terms of the Data Validity States, the Abstract Memory Locations, the Mapping Tables, etc., which need to be maintained on both the client and the server (a small sketch of what such bookkeeping might look like follows after this list).
  6. This paper does not address the question of ‘how’ to offload and hence has no discussion of fault tolerance, security, or the details of the message passing architecture.
  7. Weaknesses discussed in the paper itself (sections 5.2 & 5.3) -
    1. The degeneracy problem, which exists due to the nature of the linear systems themselves.
    2. The path sensitivity problem: the order in which tasks are executed is not considered while making offloading decisions.
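As a rough illustration of the kind of bookkeeping involved, here is a minimal sketch of per-host data validity tracking. The class and function names are my own invention, under the assumption that each host records which abstract memory locations it currently holds a valid copy of; none of this is taken from the paper.

```python
# Hypothetical illustration (names invented): each host tracks which abstract
# memory locations it currently holds a valid copy of, so data is shipped
# only when the receiving host's copy is missing or stale.
class HostState:
    def __init__(self, name):
        self.name = name
        self.valid = set()   # abstract memory locations valid on this host

def transfer_if_needed(data_id, dst):
    """Return 1 if a transfer to dst is actually needed, 0 otherwise."""
    if data_id in dst.valid:
        return 0             # already valid on dst: no communication cost
    dst.valid.add(data_id)
    return 1                 # one (abstract) transfer performed

def record_write(data_id, writer, all_hosts):
    """A write on one host invalidates every other host's copy."""
    for h in all_hosts:
        h.valid.discard(data_id)
    writer.valid.add(data_id)

client, server = HostState("client"), HostState("server")
record_write("encoded_buffer", client, [client, server])
print(transfer_if_needed("encoded_buffer", server))   # 1: first transfer needed
print(transfer_if_needed("encoded_buffer", server))   # 0: copy is still valid
```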


DISCUSSION POINTS
  1. For the more mathematically minded - the paper frames the task of finding an optimal partitioning as an optimization problem and then models it as a ‘min-cut network flow’ problem. The paper does not explain why this approach was selected. So why this approach? Could another technique be used just as well? (A small sketch of the min-cut formulation follows after this list.)
  2. Is the 37% improvement in performance really worth all this extra effort? Do other techniques provide comparable improvements with less effort?
  3. How does this approach compare to MAUI, CloneCloud and COMET in terms of -
    1. The complexity of applications targeted
    2. Being fully automatic
    3. The offloading decision (dynamic or static)
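For the first discussion point, here is a minimal sketch of the classic min-cut formulation of two-processor task assignment, using networkx and an invented three-task example. The task names and cost numbers are hypothetical; the construction only illustrates why a minimum cut corresponds to a minimum-cost partitioning.

```python
import networkx as nx

# Invented per-task costs for one concrete set of input parameters.
local  = {"read": 1, "encode": 20, "write": 2}    # cost of running on the client
remote = {"read": 8, "encode": 4,  "write": 9}    # cost of running on the server
comm   = {("read", "encode"): 3, ("encode", "write"): 3}  # transfer cost if split

G = nx.DiGraph()
for t in local:
    # Cutting CLIENT->t assigns t to the server, so that edge carries the
    # remote execution cost; cutting t->SERVER keeps t local and costs local[t].
    G.add_edge("CLIENT", t, capacity=remote[t])
    G.add_edge(t, "SERVER", capacity=local[t])
for (u, v), c in comm.items():
    # Communication cost is paid only if u and v end up on different hosts.
    G.add_edge(u, v, capacity=c)
    G.add_edge(v, u, capacity=c)

cut_value, (client_side, server_side) = nx.minimum_cut(G, "CLIENT", "SERVER")
print("total cost: ", cut_value)                          # 13 for this example
print("run locally:", sorted(client_side - {"CLIENT"}))   # ['read', 'write']
print("offload:    ", sorted(server_side - {"SERVER"}))   # ['encode']
```

One appeal of this formulation is that minimum cuts can be computed exactly in polynomial time, which is presumably part of the reason the authors chose it; the discussion question of whether another optimisation technique would do just as well still stands.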

