Wednesday 26 April 2017

CrowdAtlas: Self-Updating Maps for Cloud and Personal Use

Overview:


This paper presents a solution for keeping GPS-based navigable digital maps accurate, covering not only roads but also the paths used for off-road driving, cycling, hiking, and skiing. This is achieved through crowdsourcing: people (fleet management, telematics, and smartphone users) share their traces, which serve as training samples. In this way, the authors solve the problem of automatically updating maps at regular intervals, where an update may add unexplored new roads or remove roads closed due to construction. They also provide a CrowdAtlas app for users without an internet connection and for those who want to create and use customized maps.

Key Points:


  1. CrowdAtlas employs a Hidden Markov Model (HMM) based map-matching algorithm (offline) that detects discrepancies between GPS samples and roads, and applies a clustering-based map inference algorithm to update the maps.
  2. Unmatched segments are used to infer new roads via the map inference algorithm, which clusters unmatched traces using single-linkage clustering to identify the most heavily travelled missing roads.
  3. CrowdAtlas invokes the polygonal principal curve algorithm to extract the road centerline from each cluster, selecting appropriate values of the support threshold. The Douglas-Peucker algorithm is then used to remove unnecessary intermediate nodes and to find the start and end points of the connecting path.
  4. It also extends the map inference algorithm to infer missing features such as intersections, new turn possibilities, and one-way road directions and corrections, by iteratively updating the maps and detecting changes in the roads of a specific area.
  5. CrowdAtlas uses the traces that match the map to monitor for road closures and to fix road geometry. It uses tight clusters of trace segments from many vehicles that do not match the map to infer missing roads that connect to existing ones. The existing roads provide good segmentation of the traces, producing high-quality clusters and enabling the automated (and even unsupervised) addition of missing roads.
  6. CrowdAtlas applies different error radius thresholds to matched and unmatched segments, since the goal is both to capture all unmatched segments and to match GPS samples to the right roads. This is also crucial for detecting walking/cycling trails, which often run closely parallel to driving roads.
  7. Dynamic sampling is implemented in the CrowdAtlas app (for the data transmitted to the server only): a high sampling rate for unmatched segments and a low rate for matched ones.
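To make point 3 concrete, here is a minimal sketch of the Douglas-Peucker simplification step in Python. This is an illustration, not the authors' implementation; the plain x/y coordinates and the `epsilon` tolerance are simplifying assumptions (real GPS traces would use projected coordinates and a metric threshold):

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    # Parallelogram area divided by base length gives the height.
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Drop intermediate nodes that lie within epsilon of the chord."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]
    dists = [perpendicular_distance(p, start, end) for p in points[1:-1]]
    split = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[split - 1] > epsilon:
        # Keep the farthest point and simplify each half recursively.
        left = douglas_peucker(points[:split + 1], epsilon)
        right = douglas_peucker(points[split:], epsilon)
        return left[:-1] + right  # avoid duplicating the split point
    return [start, end]

track = [(0, 0), (1, 0.01), (2, 0), (3, 0.01), (4, 0)]
print(douglas_peucker(track, 0.1))  # [(0, 0), (4, 0)]
```

In CrowdAtlas, a simplified polyline like this would become the node sequence of an inferred road centerline.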

Strengths:

  1. The main strength of this paper is its motivation. The authors use unsupervised learning to solve navigation problems that are frequently encountered these days.
  2. The evaluation on a specific area of Beijing is extensive, and the approach seems very useful if extended to other parts of the world.
  3. The authors address the challenges/limitations quite well; these are mostly intrinsic to GPS-based navigation.
  4. The CrowdAtlas app can be very useful for personal use in particular, since it covers paths for cycling, skiing, off-road driving, and even walking.

Weaknesses/Discussion Points:


  1. As identified by the authors, the map-matching algorithm is fairly compute-intensive and is the bottleneck of the whole process. Parallelization techniques are also discussed. One thought here: can we use a local edge server (or fog computing) architecture like DeepCham, where these computations are brought closer to the people using the service?
  2. Not many users would be willing to share data about where they travel, so updates would be based only on certain areas (the support threshold value plays an important role here).
  3. GPS coverage in some areas (even heavily populated ones) can also be a limitation, since many developing countries still don't have it.

Customizable and Extensible Deployment for Mobile/Cloud Applications

Summary:

This paper focuses on the challenge faced by modern applications that work in heterogeneous, distributed environments. Apart from providing application features, the developer needs to worry about issues like fault tolerance, code offloading, and caching. This creates a dependency between application requirements and deployment decisions, leading to complicated application logic in the code and consequently inhibiting the application's evolution. Sapphire, the system presented in this paper, is a distributed programming platform that removes the need to incorporate complex logic into the application code for managing a multi-platform environment, and provides a simpler way to handle it. This is achieved by segregating deployment logic from the application code.

Design:

There are two main components of Sapphire:

1. Deployment Kernel (DK): It stitches together heterogeneous mobile devices and cloud servers using best-effort RPC communication, failure detection, and location finding.
2. Deployment Manager (DM): It provides the deployment kernel's functionality to the application layer based on the deployment needs. For example, decisions about how many replicas to keep, when to create them, and when to delete them are taken by the DM, but the actual work of creating and deleting is done by the DK.

Sapphire's design of separating deployment logic from the application code gives programmers the choice to change deployment options easily as and when the requirement changes.

Steps for building a Sapphire based application:

1. Build the whole application logic as a single object.
2. Break the application down into objects, a subset of which (the distributed components) are declared as Sapphire objects.
3. Apply deployment managers to the Sapphire objects as needed.
Communication between SOs (Sapphire objects) is location-independent.
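The separation that Sapphire enforces can be sketched in a few lines of Python. This is only an analogy for the pattern (Sapphire itself targets Java-like object languages); the class and method names here are hypothetical illustrations, not Sapphire's actual API:

```python
class DeploymentManager:
    """Base DM: interposes on calls to a Sapphire-style object."""
    def __init__(self, target):
        self._target = target

    def invoke(self, method, *args, **kwargs):
        return getattr(self._target, method)(*args, **kwargs)

class CacheDM(DeploymentManager):
    """Caller-side caching DM, in the spirit of Sapphire's DM library."""
    def __init__(self, target):
        super().__init__(target)
        self._cache = {}

    def invoke(self, method, *args, **kwargs):
        key = (method, args, tuple(sorted(kwargs.items())))
        if key not in self._cache:
            self._cache[key] = super().invoke(method, *args, **kwargs)
        return self._cache[key]

class TextEditor:
    """Application logic only -- no deployment concerns."""
    def word_count(self, text):
        return len(text.split())

editor = CacheDM(TextEditor())  # deployment choice made here, not in app code
print(editor.invoke("word_count", "hello sapphire world"))  # 3
```

Swapping `CacheDM` for, say, a replication DM changes the deployment behavior without touching `TextEditor`, which is exactly the flexibility the paper argues for.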

Strengths of the Paper:

1. Separating application logic from deployment logic greatly improves the programmer's ease as far as distributed resource management is concerned.
2. The use of inheritance for creating a more complicated DM (required by some applications) is very useful and will make it easy to expand the DM library.
3. With the help of DMs, different deployment issues like caching, code offloading, and replication can be solved separately, but combining all of them gives a unified solution that is more extensive than systems like MAUI and COMET, which are specific to code offloading, or Bayou and Simba, which focus only on client-side caching.
4. The paper showcases how concisely full-fledged applications like a shared text editor or a Twitter clone can be implemented. The object-oriented way of handling DMs helps here.

Weaknesses and Discussion Points:

1. The deployment policy for each SO is currently static, but with modern applications whose requirements change almost in real time, dynamic assignment of deployment policies to each SO would be better.
2. There is no comparison between specialized systems and Sapphire running only the corresponding DM. The paper compares different applications for code offloading but doesn't compare against other systems like MAUI and COMET.
3. Is it possible to manage multiple SOs from the same DM? This would help reduce the overhead of re-instantiating all the DMs whenever a server hosting them fails.
4. It would have been interesting to see how an application can be broken down into Sapphire objects. Is there a rule of thumb? How does a programmer determine the division?

Customizable and Extensible Deployment for Mobile/Cloud Applications

What is Sapphire?

Sapphire is a general-purpose distributed programming platform that greatly simplifies the design and implementation of applications spanning mobile devices and clouds. Its key concept is the separation of application logic from deployment logic. It also introduces a new architecture that supports pluggable, extensible deployment managers.

Motivation

Modern applications must implement difficult distributed deployment tasks. Application programmers must coordinate data and computation across different nodes and platforms, hide performance limitations and failures, and manage different programming environments and hardware resources.

Architecture:

The system has a three-layer architecture, consisting of the Sapphire application layer, the deployment manager layer, and the deployment kernel layer. The OTS is the object tracking service. The main architecture is shown in the following picture:
1.     The Sapphire application is partitioned into Sapphire objects, each of which runs in a single address space and communicates with others via RPC. A Sapphire object can be executed anywhere and moved transparently; it also provides the unit of distribution for deployment managers.
2.     The deployment kernel provides best-effort distribution services, including Sapphire object tracking, mobility, and replication; making and routing RPCs to Sapphire objects; and managing, distributing, and running deployment managers.
3.     The deployment management layer consists of deployment managers, which extend the functions and guarantees of the deployment kernel and interpose on Sapphire object events.



The deployment manager is the key part of this system. It consists of three components, which are created, deployed, and invoked by the deployment kernel:
1.     Instance manager: co-located with the Sapphire object; responsible for callee-side tasks.
2.     Proxy: co-located with remote references; responsible for caller-side tasks.
3.     Coordinator: co-located with the fault-tolerant object tracking service; responsible for centralized tasks.
How are the instance manager, proxy, and coordinator created by the deployment kernel?

The DK first creates the Sapphire object and its instance manager. Then it creates stubs that call the instance manager through the proxy using RPC; every reference to the Sapphire object goes through the proxy. Finally, the DK creates the coordinator.

Advantages:

1.     From the previous papers, we know some methods to deploy applications on cloud or edge servers, for example MAUI and CloneCloud. However, they do not focus on how to separate an application into different objects and dispatch those objects to the cloud.
2.     This paper gives a clear account of how to separate application logic from deployment logic. This is a little like Mesos or YARN.
3.     The DM layer is the key part of this paper. The authors implement several libraries for this layer, and these libraries benefit many applications.

Disadvantages & discussion:

1.     MAUI considers energy efficiency, but this paper doesn't mention it.
2.     If the application programmers do not know what functionalities they will use at the beginning of the project, how should they choose DM layer for each Sapphire object?
3.     Scalability: Mesos and YARN can handle many simultaneous jobs in the same cluster; can this system do the same? If it can, what is the maximum number of jobs that can run simultaneously?
4.     Can this system handle fluctuations in SO migration? Nowadays, many resource managers use machine learning algorithms, such as Microsoft's Apollo.

Tuesday 25 April 2017


Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing 

Overview

As its name suggests, this paper provides a comprehensive definition of fog computing and the challenges fog computing brings. The emergence of ubiquitous or IoT devices is one of the main motivations behind fog computing, with close to 50 billion IoT devices expected by 2020. Moving away from the centralized model of cloud computing, fog computing focuses on migrating the cloud to the edge of the network; this evolution is termed the fog. A more comprehensive definition: fog computing is a scenario in which a huge number of ubiquitous, decentralized devices communicate and cooperate among themselves to provide compute, storage, and network services without the intervention of third parties. The cooperating devices that host these services receive incentives for doing so.


Important Ideas

  • Device ubiquity is one of the main driving forces behind fog computing. Advances in IoT devices and research into device size and battery lifespan are creating venues where more and more devices will be part of the edge.
  • Discussion of major challenges, such as the service and network management of billions of heterogeneous devices, which will be a major issue in fog computing. Many technologies are evolving to cope with these challenges, for example:
    • Softwareisation of network management, which can provide a more generic way to handle the network and service management of heterogeneous devices through the use of NFV, SDN, and emerging technologies like IOx.
    • Fog computing subsumes the idea of a cloud at the edge, where edge devices and a subset of the network communicate and cooperate to act as a virtualization or service platform, catering to the needs of other edge devices.
    • Another idea is distributed rather than centralized management of edge computing. Ideas from P2P networks can inspire such distributed management.
  • Challenges related to the physical and network connectivity of edge devices: with billions of edge devices, networks can become a bottleneck. Advances in network technologies such as 4G LTE, Bluetooth 5.0, and better LAN/WAN networks will be needed. Fog computing will also benefit from ongoing research on IoT protocols, which are designed for low resource consumption and resilience to failures.
  • Challenges related to scenarios where a centralized cloud is still needed. On one hand, users can benefit from fog computing by keeping their data more secure, but trust and privacy issues will be major research areas, since the model allows any device to run any code. Also, there are no industry standards yet, nor general programming models that fit the realm of fog computing.

Advantages of fog computing

Fog computing will bring some capabilities beyond those of cloud or centralized computing:

  • Applications which require low latency can make use of fog computing devices.
  • Applications which require data locality and geographical context information can make use of fog computing.
  • Its distributed architecture can be highly scalable.
  • More agile network and service manageability.
  • Users can earn incentives through this model by joining the network and letting other users utilize the computing capabilities of their devices.
  • It can provide more security, since data will not be forwarded to a centralized location for processing; instead, edge devices will be employed. Privacy can be a great incentive to adopt fog technologies.
  • Full utilization of edge devices becomes possible.

Strengths:

  • Presents a comprehensive definition of fog computing.
  • Presents a broad overview of why fog computing is emerging.
  • Provides a good list of challenges with their possible solutions in the context of fog computing.
  • Highlights the better security model that fog computing enables.

Weakness:

  • The paper misses one challenge: evaluating fog computing itself can be difficult. To fully understand its advantages, a thorough evaluation is necessary, which would require a huge number of devices.
  • The paper presents a broad overview of fog computing but doesn't cover a single existing system or implementation that could show the practicality of fog computing in terms of availability, scalability, and fault tolerance. Although many existing technologies would enable fog computing, the paper discusses neither existing systems nor an implementation of its own; the main reason is probably that fog computing is still in its emerging phase.
  • Not all applications will benefit from fog computing; latency-tolerant applications, compute-intensive tasks, and industrial workloads will still use cloud computing or other forms of centralized computing. This is not a general weakness, though, more an application-specific consideration.

Discussion Points

  • Use cases of fog computing
  • Tradeoffs between fog and cloud computing

Mobile Fog: A Programming Model for Large-Scale Applications on the Internet of Things

Increasingly ubiquitous IoT devices are bringing new challenges and driving emerging technologies like fog computing. They enable a wide range of novel, large-scale, latency-sensitive applications, often termed Future Internet applications. One of the major challenges is a programming model that can meet the demands of IoT programming. The main demands are:
  • As IoT devices form geo-distributed systems, applications should be built to make use of geo-context.
  • The large scale of systems comprising IoT devices.
  • Latency-sensitive applications.
Older programming models, such as existing PaaS offerings that meet the needs of web services deployed in centralized data centers, can't fulfill the demands of IoT applications. Mobile Fog is a PaaS programming model for large-scale situational awareness that provides a simplified programming abstraction, supports applications that scale dynamically at runtime, and offers a programming model suited to IoT applications. Mobile Fog has two design goals:
  1. Provide a high-level programming model that simplifies development on a large number of heterogeneous devices distributed over a wide area.
  2. Allow applications to dynamically scale using resources from the fog or the cloud.

Application Model of Mobile Fog

In Mobile Fog, each application consists of distributed Mobile Fog processes mapped onto distributed edge devices and cloud instances, each handling a certain application task. Also, as the devices are geo-distributed, they are often utilized to handle workloads with close geographic affinity.

API's of Mobile Fog

Application code consists of a set of predefined event handlers and functions that applications can call. This generic API model in Mobile Fog enables application developers to write applications that can run on any heterogeneous device. The set of functions and event handlers lets developers query the underlying network resources, communicate with other resources on the network, and perform application management operations.

Mobile Fog also provides application lifecycle management features, so developers can compile and generate a Mobile Fog process image that can be deployed and run anywhere. The management interface provided by Mobile Fog facilitates the application lifecycle.
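The event-handler style described above can be sketched as follows. The handler names (`on_create`, `on_message`) and the upstream-send primitive are hypothetical stand-ins, not Mobile Fog's actual API:

```python
class MobileFogProcess:
    """Illustrative skeleton of an event-driven Mobile Fog style process.
    The handler and function names here are hypothetical."""

    def on_create(self):
        # Called when the runtime instantiates this process on a node.
        self.readings = []
        self.sent = []

    def on_message(self, sensor_id, value):
        # Aggregate sensor data locally on the edge node; forward only
        # a summary up the hierarchy, reducing network traffic.
        self.readings.append(value)
        if len(self.readings) >= 10:
            self.send_up(sum(self.readings) / len(self.readings))
            self.readings.clear()

    def send_up(self, payload):
        # Placeholder for the runtime's upstream-communication primitive;
        # here we simply record what would be sent to the parent node.
        self.sent.append(payload)
```

A process like this could run unchanged on whatever node the platform places it on, which is the portability property the paper emphasizes.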


Evaluation Insights

  • The fog-based approach shows better latency and less network traffic relative to the cloud.
  • Fog has a significant advantage when the query range is within 0.5 km, while the cloud-based approach is better when the query range is large.

Strengths

  • Programming model for large scale, geo-distributed, and latency sensitive applications.
  • Mobile Fog provides a comprehensive set of APIs that enable writing IoT applications that can dynamically scale.
  • Mobile Fog gives developers an interface through which the same application can run on different heterogeneous devices, so they don't have to worry about each device's differing requirements.
  • Mobile Fog provides an interface for application deployment and management.


Weakness

  • The programming model doesn't address any security and trust mechanisms, e.g., in the case of dynamic scaling, which resources may be used for auto-scaling?
  • The evaluation is pretty weak and is done in a simulated environment.
  • Details of the experimental setup are missing. Are the devices all located in a central place, or geo-distributed?
  • Latency and performance would be better measured in real distributed fog systems, which have high churn and are unreliable and insecure; their evaluation doesn't account for this.
  • Evaluation doesn't use dynamic placement strategies.

Monday 24 April 2017

SDFog

Summary:

This paper proposes a service-oriented middleware, SDFog, that distributes service hosting throughout the fog environment. All participating nodes, from the edge to the cloud, are capable of hosting services. The system provides an interface through which applications submit service orchestration tasks that can be bound by QoS constraints, and it performs QoS-aware deployment by scheduling flows between services so that the specified constraints are satisfied.
Virtual network functions (VNFs) are used for maintaining a good user experience; they run on the same infrastructure used by user applications and access resources of the underlying devices through a hypervisor. Distributed service discovery is performed, followed by QoS-aware flow creation and installation by the SDFog controller. The article extends the concept of SDN to the application layer, into a "Software-Defined Fog" (SDFog) that can execute data-plane functions dealing with compute, storage, and network resources on fog nodes.
It describes an exemplary use case, built on a prototype framework called Health Smart Home (HSH), where user experience is improved by installing VNFs at appropriate points in the network.

Strengths:

1) The paper extends the concept of SDN to the application layer, into a "Software-Defined Fog".
2) The service metadata for services hosted on a fog node allows service selection under constraints imposed by applications.
3) Applications have the advantage of being able to specify QoE parameters for their execution.
4) Because physical network bandwidth is highly unpredictable, flow creation builds an overlay network of orchestrated services over the physical network topology in a QoS-satisfying fashion. Network bottlenecks such as jitter, delay, and packet loss can be mitigated either by shaping traffic or by changing the forwarding path.
5) Network function virtualization allows flexible placement of network functionality at different points in the network topology, which can be leveraged to deploy QoE-enhancing functions like traffic shapers or WAN accelerators at points where congestion can deteriorate user experience.
6) A great illustration of the paradigm using the prototype Health Smart Home (HSH).
7) A good portion of this research is built on top of well-researched areas such as SDN, VNF, and fog computing.

Weaknesses/Discussion points:

1) There should be robust mechanisms for service orchestration that ensure consistency across services. Multiple applications might send conflicting actions to the same actuator service; the actuator service should have a way to determine the best action without heavily impacting application performance.
2) Currently, the authors have only explored the networking side of SDN; that is, they only utilize the SDN networking API via NFV. Considering resources such as compute and storage as well would help extend this paradigm into a framework that controls the entire fog infrastructure.
3) Since the environment is dynamic and heterogeneous, there should be robust service discovery, deployment, and dynamic reconfiguration mechanisms in place.
4) Applications should be able to specify QoS parameters at an abstract level, which should then be translated into low-level network decisions by the SDFog controller.

Dynamic Resource Provisioning Through Fog Micro Datacenter

Summary:

In the present scenario, many heterogeneous devices constitute the IoT. It is highly unpredictable how many resources will be consumed and whether the requesting node, device, or sensor will fully utilize the resources it has requested. Due to this uncertainty, the authors incorporate the relinquish probability (the probability of a user releasing a resource) when performing resource estimation in their model. The proposed model presents user-characteristic-based resource management for the fog, taking into account the type of service, the overall service relinquish probability, and the service-oriented relinquish probability. They also consider the variance of the relinquish probability to capture the deviation and irregularity in give-up behaviour. This methodology helps determine the right amount of resources required, avoiding resource wastage and profit cuts for the CSP as well as for the fog itself.

Strengths:

1) This study is the first of its kind to explore resource management in the fog; previous studies mostly focus on resource management in the cloud.
2) The framework not only does resource allocation, it also predicts resource utilization based on the user's profile, which includes past usage trends and the probability of using those resources in the future.
3) The variance of service-oriented relinquish probabilities is considered to account for fluctuations in resource utilization by CSCs. This gives better results, as it captures the actual behavior of each customer.
4) The amount of resources allocated to a CSC also depends on the type of service. For each customer, a virtual resource value (VRV) is calculated, which is then mapped to actual resources.
When a CSC has already been a customer of the CSP but requests a particular service S for the first time, resources are estimated differently. In this case, the fog allocates resources based on the available record, while assuming the CSC will be somewhat loyal in utilizing the current service S. The main idea is to incorporate as much historical data as possible, so that the CSC is dealt with fairly and the CSP and fog carry the minimum possible risk.
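As a rough numerical illustration of the idea, the sketch below scales a request by historical loyalty (1 minus the mean relinquish probability) and discounts further when that probability fluctuates. The linear discount is an assumption made for illustration; the paper's actual formulas are more involved:

```python
from statistics import mean, pvariance

def estimate_allocation(requested, relinquish_history):
    """Hypothetical estimator: scale the request by customer loyalty,
    penalizing both a high mean relinquish probability and high
    variance (irregular give-up behaviour)."""
    p = mean(relinquish_history)         # overall relinquish probability
    var = pvariance(relinquish_history)  # irregularity factor
    loyalty = max(0.0, 1.0 - p - var)
    return requested * loyalty

# A steady customer who historically gives up ~20% of what they request:
print(estimate_allocation(100, [0.2, 0.2, 0.2, 0.2]))  # 80.0
# An erratic customer with the same mean gets a smaller allocation:
print(estimate_allocation(100, [0.0, 0.4, 0.0, 0.4]))
```

The second customer is allocated less despite the same average, which mirrors the paper's use of variance to penalize irregular behaviour.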

Weaknesses/Discussion Points:

1) One enhancement to this framework would be to base resource allocation on additional parameters, such as whether the CSC is a premium customer. In that case, more sophisticated measures should be put in place instead of considering only past usage.
2) There should also be support for moving resources across resource pools based on the usage in each pool, a form of load balancing within resource pools.
3) CSCs should have the option of stating whether they want the amount of resources predicted by the CSP or have special needs for a particular request. This would help predetermine any aberrations in a user's usage pattern and eventually lead to a better user experience.
4) The overall performance of this framework will deteriorate if CSC usage patterns fluctuate heavily, as resources would have to be requested and relinquished at high frequency, causing overhead.

Thursday 20 April 2017

HomeCloud: An Edge Cloud Framework and Testbed for New Application Delivery

This paper aims to solve a problem that cloud computing will face in the future, when there will be a great many devices running computationally intensive tasks. An edge cloud framework is presented and its architectural details are discussed. The framework integrates two complementary technologies, SDN and NFV, to enable open and efficient application delivery in edge computing.
Summary:
The authors foresee the following challenges for conventional centralized cloud computing solutions:
1.     Networking among a massive number of devices (far more than exist today) would require transferring huge data volumes from the edge to remote data centers.
2.     Latency issues surround the transfer of this data between edge devices and remote data centers.
3.     Applications today are usually not portable across platforms, hindering innovation and leading to monopolies.
The following points are discussed in the paper:
1.     The paper claims that an open edge cloud framework is necessary to break monopolies and handle the IoT paradigm shift.
2.     The most important challenge in this is improving edge cloud orchestration and application delivery approaches.
3.     It integrates two emerging and complementary technologies:
a.     Network function virtualization (NFV)
b.     Software-defined networking (SDN)
4.     The proposed framework, HomeCloud, is discussed with primary focus on the orchestration and application delivery framework.
5.     A particularly new and innovative technique uses NFV and SDN to enable the framework.
6.     Two use cases show the necessity of such a framework.
7.     A proof-of-concept demonstrates the entire process of cloud orchestration and application delivery.
Strengths
1.     The paper provides a very neat and concise presentation of the proposed framework.
2.     The idea of an open-source framework will encourage smaller ASPs and help advance research in this area.
3.     The paper foresees a very realistic future of increasing data-transfer volumes and network latency issues, and provides a low-cost and flexible solution for adding more devices to the network.
4.     HomeCloud utilizes SDN and NFV to implement orchestration and improve efficiency, and discusses how these complementary technologies can be used to benefit edge computing.
5.     A fair idea of the framework's strengths is demonstrated with the chat application, and the use cases discuss the implementation of the framework at a higher level along with application portability.
Weakness
1.     The test results are not backed up by performance or cost details.
2.     There is no comparison done between the existing centralized cloud platform and the proposed framework.
3.     Securing the network is an area not addressed by the paper.
4.     The chat application, although a good example to demonstrate that the framework works, does not help answer questions of scalability or of handling computationally intensive tasks.
Discussion
1.     The cost of replacing the existing infrastructure with this new framework, and the time it would take to implement it.
2.     Many other use cases need to be discussed.
3.     How far can this idea break the existing monopoly that large corporations enjoy today, and would these corporations even be interested in it?