Wednesday 22 March 2017

Medusa: A programming framework for Crowd-Sensing Applications

This study proposes a novel approach to crowd-sensing by leveraging the large number of smartphone users. Crowd-sensing is the idea of retrieving sensor data by distributing work to smartphone users. This paper is an innovative integration of two research directions: I) supporting incentives and worker mediation (involvement of human workers in the tasks) and II) enabling requestors (job recruiters) to gather sensor data from mobile users with minimal human interference.

The major contributions of this paper are a high-level programming language called MedScript for crowd-sensing tasks, which can be used even by non-technical requestors with relative ease, and Medusa, a runtime engine for both the cloud and the smartphone. The way it works is that a requestor writes a task in MedScript, divided into stages with associated incentives; workers can then join in (using the Medusa runtime engine on their smartphone) to complete those stages and earn the incentives. The workers then have the option to choose which parts of the data should be sent to the cloud for the requestors to observe and verify. Requestors can specify which stages of a task require human intelligence. This approach also supports reverse incentives, meaning that workers can pay requestors in order to nominate themselves for tasks.
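The workflow above can be sketched as a pipeline of stages attached to an incentive. This is an illustrative Python model, not real MedScript (which has its own syntax); the task, stage names, and fields are assumptions made up for this example.

```python
# Illustrative sketch (NOT actual MedScript): a crowd-sensing task as a
# sequence of stages with an incentive, in the spirit described above.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Stage:
    name: str
    needs_human: bool        # True if the stage requires human intelligence
    run: Callable            # the sensing or processing step itself

@dataclass
class Task:
    name: str
    incentive_usd: float     # payment offered to a worker on completion
    stages: List[Stage] = field(default_factory=list)

    def execute(self, worker_input):
        # Run each stage in order, feeding one stage's output to the next.
        data = worker_input
        for stage in self.stages:
            data = stage.run(data)
        return data

# A hypothetical task resembling the paper's video-documentation example.
task = Task(
    name="video_documentation",
    incentive_usd=0.50,
    stages=[
        Stage("record_video", needs_human=True,
              run=lambda _: {"video": "clip.mp4"}),
        Stage("extract_summary", needs_human=False,
              run=lambda d: {**d, "summary": "thumbnail.jpg"}),
    ],
)
result = task.execute(None)
```

The `needs_human` flag mirrors the requestor's ability to mark stages that require human intelligence; in Medusa those stages are routed to workers rather than run automatically.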

In designing these, the authors followed three core architectural principles:
I) Partitioned Services: providing a collection of services for both the cloud and smartphones,
II) Dumb smartphones: minimizing the amount of task execution state kept on smartphones, to prevent data loss when a smartphone fails (or is simply turned off), and
III) Opt-in Data Transfers: before any data is uploaded to the cloud, the user's permission is required; users can opt out of sending some (or all) of the data.
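The opt-in data transfer principle amounts to filtering everything collected on the phone through the worker's explicit approval before upload. A minimal sketch, assuming made-up function and field names (not from the Medusa implementation):

```python
# Sketch of "opt-in data transfers": nothing leaves the phone unless the
# worker explicitly approved it. All names here are illustrative.
def filter_for_upload(collected, approved_keys):
    """Keep only the items the worker opted in to sharing."""
    return {k: v for k, v in collected.items() if k in approved_keys}

collected = {
    "video": "clip.mp4",
    "location": (40.7, -74.0),   # sensitive: worker may withhold this
    "summary": "thumb.jpg",
}
# The worker approves the video and summary but withholds their location.
upload = filter_for_upload(collected, approved_keys={"video", "summary"})
```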

Strengths:

1) The paper in general is well written with good, detailed examples. Moreover, it was really helpful to have a single running example throughout the paper to explain so many different components.

2) The evaluation metrics chosen are appropriate to justify the importance of Medusa. Moreover, it showed a two-orders-of-magnitude reduction in lines of code compared to corresponding standalone applications, which is quite impactful. It was also impressive to see that the authors didn't hesitate to mention the bottlenecks their approach has and where it can be improved. For example, the authors identified sending SMS notifications to workers as a major bottleneck.

3) The stage libraries (stages are the elemental operations required to complete a task) provided are extensible; that is, if a requestor needs more functionality, they can implement it themselves.
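That extensibility point could look like a registry that requestor-supplied stages plug into alongside the built-in ones. The registry API below is an assumption for illustration; the paper's actual stage library is exposed differently.

```python
# Illustrative sketch of an extensible stage library: a requestor adds a
# new stage without modifying the runtime. The registry is hypothetical.
STAGE_LIBRARY = {}

def register_stage(name):
    """Decorator that adds a stage implementation to the shared library."""
    def wrap(fn):
        STAGE_LIBRARY[name] = fn
        return fn
    return wrap

# A built-in-style stage shipped with the library.
@register_stage("take_photo")
def take_photo(ctx):
    return {"photo": "img.jpg"}

# A requestor-supplied stage, registered the same way.
@register_stage("blur_faces")
def blur_faces(ctx):
    return {**ctx, "faces_blurred": True}

# Stages compose: the output of one feeds the next.
out = STAGE_LIBRARY["blur_faces"](STAGE_LIBRARY["take_photo"]({}))
```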

4) The high-level language MedScript is really flexible, and it makes it easier for non-technical requestors to implement a task.

5) The failure handling is explained really well and sounds reasonable for each component and its functionality.

Discussions:

1) Worker acquisition cost: One thing the authors have not discussed much is the acquisition cost of workers (smartphone users). It was sometimes confusing to understand how workers will be acquired and at what cost. For example, the authors mentioned that a requestor now needs to hire 50 people in the hope of getting 15 right. However, it is also worth noting that the requestor then needs to spend on curating the jobs of 50 workers instead of 15.

2) Scalability: The authors have discussed scalability in terms of the number of task instances running, which was really nice. It would also be interesting to see how the system performs in a many-worker scenario, where a huge number of workers is required to achieve a common goal.

3) Liability: This system can quickly turn into a liability nightmare if not handled properly. The reason is that the requestor no longer has direct control over the tasks performed by the workers until the data is uploaded to the cloud. For example, in the video documentation task, what if a worker plagiarizes the video from somewhere else and the requestor has no way of verifying that?

4) Frequent failures: What if a task has a deadline associated with it, but that deadline is not met because of frequent failures (turning off) of smartphones?

1 comment:

  1. Thank you for blogging on this paper on short notice.

    Good points about liability/plagiarism. Accuracy is another issue.

    Is AMT payment enough of an incentive?
    Random thought: could we create a game that would require users to take video? (This would apply to the CMU paper as well.)

