MLP for IoT Edge Computing

Why is edge computing required?

IoT data volume is often very large at the collection point where the devices and sensors are situated. When the data is too big to transport to the cloud or a local cluster, edge computing can reduce it to its essentials through resampling, signal processing, aggregation, machine learning, and similar techniques, without losing significant information. The resulting insights, features, and aggregates can then be sent to the local cluster or the cloud for further processing.

Near-real-time decisions often have to be made at the edge. For example, closed-loop control at the edge requires edge computing: the round trip to the cloud introduces too much latency for it to be part of a feedback control loop.

Edge-based collaboration may also be realized using edge computing, for example, coordinating how a swarm of drones should work together in a remote location.

MLP for Edge Computing

MLP uses a micro-service architecture with more than 15 micro-services. The core micro-services of MLP are independent of open-source data-analysis products such as Spark, Hadoop, and Kafka, although they can interoperate with them seamlessly. MLP has its own stream-processing engine written in C++, with optional integration with open-source tools.

MLP core micro-services are lightweight because they are written and optimized in C++. They handle most functions, such as cluster management, fault tolerance, algorithms, data-flow execution, DC2 (the new data-collection system) management, direct data input via tunnels, some data integrations, the HTTP API for the data-flow GUI, and self-monitoring. The algorithmic and data-analytics capabilities of these core services are provided as .so plug-in libraries, which are also lightweight.

The edge only needs the MLP core service and the .so plug-ins developed in C++, both of which have a small footprint. The .so plug-ins implement the components that run algorithms and the data-stream engine.

The MLP core service has a small footprint (several megabytes), modest memory requirements, and low stream-processing latency. The total latency of processing and transport together ranges from 1 ms to a few milliseconds, depending on the network and other factors. This makes MLP well suited for edge computing, since it can handle local decisions such as real-time feedback control.

An added advantage of the C++-based core services is that no Java Virtual Machine (JVM) is needed on the edge computing device. The JVM not only demands substantial resources, but can also freeze real-time stream processing when the Java garbage collector (GC) kicks into action. MLP, in contrast, processes data streams in real time without such interruptions.

The expanded micro-services of MLP also allow interoperation and plug-in integration with Java-based programs and big-data tools such as Spark Streaming, Kafka, and Hadoop. This integration is often codeless, requiring only drag-and-drop programming within the MLP data-flow GUI.

This architecture allows MLP to be deployed seamlessly at the edge, on-premises, or in the cloud.

Copyright © 2018 Yosemei. All rights reserved