Machine learning is the art and science of using example data to create models of systems. These systems may be IT operations, factory machines, chemical or energy plants, or even sets of users of IT applications. Once a model has been created from example data, it can answer many questions. For instance, it can identify occurrences of events in the system that do not conform to the model; these are called anomalies. Examples include a user attacking an IT system, a power distribution transformer that is failing, or an IT data storage system with a failed network interface. One can also use this "learned model" to predict what various entities will do; this is called predictive analytics. Examples include a customer who is about to close a bank account, a machine that will fail next week, a CNC cutting tool that will soon drift out of specification, or a server CPU that will exceed 70% utilization around noon on a Wednesday when business is busy. Machine learning can also correlate events in a system to determine which events may have caused others. For example, if a blackout occurred in a power grid, it can help determine the root cause; similarly, it can trace a slow or failed bank transaction back to a cause such as a network router failure.
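The anomaly-detection idea described above can be sketched in a few lines of Python: "learn" a model of normal behavior from example data (here, simply the mean and standard deviation of past readings), then flag new observations that do not conform to it. The data values and threshold below are illustrative assumptions, not taken from any real system.

```python
import statistics

# Illustrative example data: hourly CPU utilization (%) of a server
# under normal load. These values are made up for the sketch.
history = [32, 35, 31, 40, 38, 33, 36, 34, 39, 37]

# "Learn" a trivial model of normal behavior: its mean and spread.
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomaly(value, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the learned mean."""
    return abs(value - mean) / stdev > threshold

print(is_anomaly(36))  # typical reading -> False
print(is_anomaly(95))  # spike far outside normal behavior -> True
```

Real systems use far richer models than a single mean and standard deviation, but the pattern is the same: the model encodes what "normal" looks like, and anomalies are the observations the model cannot explain.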
Machine learning algorithms fall into several categories: regression, classification, clustering, anomaly detection, time-series prediction, text mining, and so on. Hundreds of algorithms have been invented for a wide variety of purposes, including neural networks, support vector machines, and decision trees. These algorithms share many general concepts, and each has its advantages and disadvantages: an algorithm well suited to one class of problems may perform poorly on another. All of them require data, or features derived from that data, as input. Features are usually numerical quantities extracted and transformed from the raw data so that the algorithms can digest it easily. Feature extraction is a critical step that makes the data friendly to the algorithm. For IT and IoT problems in particular, features must be extracted and formed carefully in order for machine learning to give good results.
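To make feature extraction concrete, here is a minimal sketch that turns raw event records, such as parsed IT application log entries, into numerical feature vectors an algorithm can consume. The record fields (`timestamp`, `status`, `latency_ms`) are hypothetical names chosen for this example.

```python
from datetime import datetime

# Hypothetical raw event records, e.g. parsed from an IT application log.
raw_events = [
    {"timestamp": "2023-05-10T09:15:00", "status": "OK",    "latency_ms": 120},
    {"timestamp": "2023-05-13T23:40:00", "status": "ERROR", "latency_ms": 870},
]

def extract_features(event):
    """Transform one raw record into numbers an algorithm can digest."""
    ts = datetime.fromisoformat(event["timestamp"])
    return [
        ts.hour,                                      # time of day
        1.0 if ts.weekday() >= 5 else 0.0,            # weekend indicator
        1.0 if event["status"] == "ERROR" else 0.0,   # error flag
        event["latency_ms"] / 1000.0,                 # latency in seconds
    ]

features = [extract_features(e) for e in raw_events]
# features[0] -> [9, 0.0, 0.0, 0.12]
# features[1] -> [23, 1.0, 1.0, 0.87]
```

Note how each transformation encodes domain knowledge: categorical statuses become indicator values, timestamps become time-of-day and weekend features, and units are normalized. Choosing these transformations well is what makes the data "friendly to the algorithm."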
MLP implements several algorithms that perform regression, classification, prediction, anomaly detection, root cause analysis, and more. What distinguishes MLP's algorithms is that they are built for stream-based machine learning. Ordinarily, data is fed to algorithms in batches; MLP instead processes streams of data, using specially designed streaming algorithms. It learns the data and its patterns from history, and its algorithms can be configured to learn continuously from incoming data and constantly self-tune, while performing anomaly detection and prediction on an ongoing basis. MLP's algorithms also know whether it is a weekday or a weekend, and what time of day it is; such information is valuable both for detecting anomalies and for making predictions. MLP also provides an environment to design, develop, test, tune, and deploy algorithms entirely through a graphical drag-and-drop interface, with no software programming required. Its data flow modeling environment includes components for creating features from streaming data. MLP's designers have extensive experience in feature extraction and machine learning on streaming data, and this experience is packaged into the MLP product itself, where both MLP users and machine learning developers can benefit from it.
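MLP's own algorithms are not published here, but the general idea of a stream-based, continuously learning, time-aware anomaly detector can be sketched as follows. This illustrative detector keeps a separate running mean and variance for each (weekend, hour-of-day) context, updates them exponentially with every new observation so it never stops learning, and flags values that fall far outside the learned behavior for that context. All parameter values are assumptions chosen for the sketch.

```python
from collections import defaultdict

class StreamingDetector:
    """Illustrative online anomaly detector: keeps a running mean/variance
    per (weekend, hour) context and folds in every new observation, so it
    continuously learns and self-tunes as the stream evolves."""

    def __init__(self, alpha=0.05, threshold=3.0, warmup=10):
        self.alpha = alpha          # learning rate for exponential updates
        self.threshold = threshold  # anomaly threshold in std deviations
        self.warmup = warmup        # observations before flagging anomalies
        self.mean = defaultdict(float)
        self.var = defaultdict(lambda: 1.0)
        self.seen = defaultdict(int)

    def observe(self, value, hour, is_weekend):
        """Return True if `value` is anomalous for this time context,
        then update the model with it."""
        key = (is_weekend, hour)
        if self.seen[key] == 0:
            self.mean[key] = value  # seed the context with its first value
        mu, var = self.mean[key], self.var[key]
        delta = value - mu
        flagged = (self.seen[key] > self.warmup
                   and abs(delta) / var ** 0.5 > self.threshold)
        # Exponentially weighted updates: the model never stops learning.
        self.mean[key] = mu + self.alpha * delta
        self.var[key] = (1 - self.alpha) * (var + self.alpha * delta * delta)
        self.seen[key] += 1
        return flagged

detector = StreamingDetector()
# Feed a stream of normal weekday-noon CPU readings, then a spike.
for reading in [40, 42, 39, 41, 43, 40, 38, 41, 42, 40, 39, 41]:
    detector.observe(reading, hour=12, is_weekend=False)
print(detector.observe(95, hour=12, is_weekend=False))  # spike -> True
```

Because the model is keyed by time context, a CPU level that is normal at noon on a weekday can still be flagged as anomalous at 3 a.m. on a Sunday, which is the kind of time awareness described above.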