Understanding the machine learning cycles that turn factory data into AI for improving the manufacturing process

As manufacturers increasingly develop artificial intelligence (AI) that uses daily factory data flows to enhance and refine their production processes, many team members find themselves learning the basics of machine learning, a branch of computer science. As you will see, it comes in several flavors.

Active learning is a supervised or semi-supervised machine learning technique in which a model actively learns an important pattern within an initial sample of manually labeled data and then selects the next batch of observations carrying that key pattern for annotation. This step reduces the amount of data that must be labeled to a more manageable level: unwanted observations are ignored, and only the relevant ones, which reveal important hidden patterns in the data, are used to train the model.

In computer science research articles, machine learning is broadly classified into four learning methods: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning (Chollet, 2018). These methods are distinguished by their learning principles, and each is briefly explained as follows:

Supervised Learning: In the supervised learning method, the model is trained with a label provided for each observation. These labels help the model learn the relationship between the label (the predicted, or target, variable) and the predictor variables; they are also used to validate and improve the model's performance. Supervised learning is commonly used for classification and regression models.
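A minimal sketch of supervised classification, using a simple nearest-centroid classifier rather than any particular library. The feature values and class names ("ok"/"defect") are made-up illustrations, not real factory data:

```python
# Minimal supervised learning sketch: a nearest-centroid binary classifier.
# All feature values and labels below are made-up illustration data.

def train_centroids(features, labels):
    """Compute one centroid (mean feature vector) per class label."""
    centroids = {}
    for label in set(labels):
        rows = [f for f, l in zip(features, labels) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training observations: [feature1, feature2] -> class label
X = [[1.0, 1.2], [0.9, 1.0], [3.0, 3.1], [3.2, 2.9]]
y = ["ok", "ok", "defect", "defect"]

model = train_centroids(X, y)
print(predict(model, [1.1, 0.9]))   # near the "ok" centroid -> "ok"
print(predict(model, [3.1, 3.0]))   # near the "defect" centroid -> "defect"
```

The labels drive everything: the classifier only knows which points belong together because a human told it so during training.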

Unsupervised Learning: In contrast to supervised learning, an unsupervised learning method trains a model to discover hidden patterns in the dataset without looking at observation labels. Here, the relationships among the variables and their magnitudes determine the outcome. Clustering is a well-known example: the model forms clusters based on the relative positions of the data points (observations) in the given data dimensions, without assigning any extra labels or marks to the observations.
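Clustering can be sketched with a tiny two-means procedure over one-dimensional readings; the sensor values below are made-up illustrations, and no labels are supplied anywhere:

```python
# Minimal unsupervised learning sketch: 2-means clustering of 1-D sensor
# readings. Groups emerge from the values alone; no labels are used.

def two_means(values, iterations=10):
    """Cluster values into two groups around iteratively refined centers."""
    c1, c2 = min(values), max(values)       # initial cluster centers
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1)              # recenter on each group's mean
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

readings = [0.2, 0.3, 0.25, 5.1, 5.3, 4.9]  # made-up illustration data
low, high = two_means(readings)
print(low)    # the values that cluster near 0.25
print(high)   # the values that cluster near 5.1
```

The algorithm never sees a class label; the two groups fall out of the geometry of the data points themselves.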

Semi-Supervised Learning: In semi-supervised learning, the model is trained on a small labeled sample and then applied to an unlabeled set. Observations that the model predicts with high confidence are added to the training set, and the cycle repeats until the model's performance reaches an acceptable level.
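This cycle can be sketched as a self-training loop. The 1-D data, the centroid classifier, and the confidence threshold below are all made-up illustrations of the idea, not a production recipe:

```python
# Self-training sketch of semi-supervised learning: train on a few labeled
# points, pseudo-label the unlabeled points predicted with high confidence,
# absorb them into the training set, and retrain.

def centroid_fit(points, labels):
    """One centroid (mean) per class label."""
    cents = {}
    for lab in set(labels):
        vals = [p for p, l in zip(points, labels) if l == lab]
        cents[lab] = sum(vals) / len(vals)
    return cents

def predict_with_conf(cents, x):
    """Return (label, confidence): confidence is the gap between x's
    distances to the two centroids (bigger gap = more confident)."""
    ranked = sorted(cents, key=lambda lab: abs(x - cents[lab]))
    best, second = ranked[0], ranked[1]
    return best, abs(x - cents[second]) - abs(x - cents[best])

labeled = [(0.0, "low"), (10.0, "high")]   # tiny labeled seed set
unlabeled = [0.5, 9.5, 5.2]                # pool of unlabeled points

for _ in range(3):                         # a few self-training rounds
    cents = centroid_fit([p for p, _ in labeled], [l for _, l in labeled])
    remaining = []
    for x in unlabeled:
        lab, conf = predict_with_conf(cents, x)
        if conf > 4.0:                     # high-confidence threshold
            labeled.append((x, lab))       # pseudo-label joins the training set
        else:
            remaining.append(x)            # ambiguous points stay unlabeled
    unlabeled = remaining

print(sorted(l for _, l in labeled))
```

Note that the ambiguous midpoint (5.2) never clears the confidence bar, so it is never absorbed; only confident predictions feed the next cycle.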

Reinforcement Learning: In this learning method, the model uses an agent to decide the outcome of a task. The agent is a mathematical function configured from the dataset, and it is rewarded or penalized based on its performance: a positive outcome (improving task performance) earns a reward, while a negative outcome (reducing task performance) incurs a penalty. In other words, reinforcement learning trains self-learning models from the available dataset and an environment of unknown observations that must be understood and/or predicted.
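A compact sketch of the reward-and-penalty idea is tabular Q-learning on a toy environment. Everything here (the four-state line, the reward placement, the learning rate) is a made-up illustration, and for determinism the sketch sweeps every state-action pair instead of using randomized exploration:

```python
# Tabular Q-learning sketch of reinforcement learning: an agent on a short
# line of states earns a reward only at the rightmost state, and learns from
# that feedback that moving right pays off. All numbers are illustrative.

N_STATES = 4                 # states 0..3; the reward lives at state 3
ACTIONS = [-1, 1]            # move left or move right
ALPHA, GAMMA = 0.5, 0.9      # learning rate and discount factor

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: clamp to the line, reward at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(20):                      # training sweeps
    for s in range(N_STATES - 1):        # update every non-terminal state
        for a in ACTIONS:
            nxt, r = step(s, a)
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            # reward/penalty signal nudges the value estimate for (s, a)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# The learned policy: the best action in each non-terminal state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # the agent has learned to always move right
```

No observation ever carries a label; the agent's behavior is shaped entirely by the reward signal from the environment.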

Active learning, then, is a supervised or semi-supervised machine learning technique used mainly to train models with a smaller amount of labeled training data. How data is annotated, or labeled, depends on the application and the machine learning task to be performed; real-world data is often unstructured and unannotated. Figure 1 below depicts an active learning cycle.

Training a computer model requires a significant amount of data; that is why the term big data is so popular. Whether a given amount of data is sufficient depends on the task to be performed. For example, training a binary classification model requires that the class distribution and balance of the training set match those of the overall population, which in turn increases the sample size needed to cover all class distributions and maintain class balance. As the number of observations in the training set grows, so does the labeling task, which can become labor-intensive and expensive, especially when annotation requires human involvement.
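Class balance is easy to check before committing to a labeling effort. A quick sketch, using made-up labels in place of real annotated factory data:

```python
# Quick check of class balance in a labeled training set.
# The labels below are made-up illustration data.
from collections import Counter

labels = ["machinable"] * 80 + ["non-machinable"] * 20
counts = Counter(labels)
total = sum(counts.values())
shares = {cls: n / total for cls, n in counts.items()}
print(shares)   # e.g. {'machinable': 0.8, 'non-machinable': 0.2}

# A large gap between class shares signals an imbalanced training set,
# meaning more (or better targeted) labeling is needed for the rare class.
imbalanced = max(shares.values()) - min(shares.values()) > 0.5
print(imbalanced)
```

A skewed result like this is exactly the situation where blindly labeling more data is wasteful, and where targeted selection of observations pays off.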

For example, labeling each machine-part pair as machinable (if the part can be produced on the specified machine) or non-machinable (if it cannot) requires human involvement. To label these observations, an experienced machinist must read the part dimensions and machine specifications for each one, a task that is time-consuming and adds cost. Further, if new data flows in continuously, labeling becomes a never-ending process that often turns out to be infeasible. Adopting an advanced technique such as active learning helps overcome these challenges in manufacturing machine learning projects.

An active learning model proactively learns the important patterns in a sample of labeled data and then selects the next batch of observations carrying that key pattern information for annotation, reducing the volume of data that must be labeled. Unwanted observations are discarded, and only those needed to learn the key hidden data patterns are used to train the model. The result is high-quality, actionable business intelligence that manufacturers can leverage to optimize production processes, schedules, and output.

Figure 1. Active Learning Cycle (Settles, B. (2010). Active Learning Literature Survey. University of Wisconsin–Madison)

As shown in Figure 1, an initial labeled dataset is created by manually annotating a sample of observations. The sample can be of any size, depending on the application, the machine learning model used, and the nature of the dataset. Using this labeled dataset, an initial model is trained to predict or classify the outcome, and the trained model is then applied to a small sample of the unannotated set. In this prediction, the model performs better on observations similar to those present in the training set, since it has already learned that data pattern.

On the other hand, if an observation in the test set differs from those in the training set, the model performs poorly on it. Such unlearned data patterns are still new to the model because they were not included in the training set. To expose the model to unlearned patterns, the observations for which the model's predictions are least confident are sent to the user in a query for manual labeling. Once manually labeled, these observations are added to the training set and the model is retrained. This cycle is repeated until all, or nearly all, of the important data patterns are learned.
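The cycle just described can be sketched as an uncertainty-sampling loop. The 1-D data, the centroid classifier, and the `oracle` function (a stand-in for the human annotator) are all made-up illustrations:

```python
# Sketch of the active learning cycle: train on a small labeled seed set,
# score the unlabeled pool, send the least-confident observation to an
# oracle (standing in for the human expert), retrain, and repeat.
# All data and the oracle's rule below are made-up illustrations.

def fit(points, labels):
    """One centroid per class (mean of its points)."""
    return {lab: sum(p for p, l in zip(points, labels) if l == lab)
                 / labels.count(lab)
            for lab in set(labels)}

def confidence(cents, x):
    """Gap between x's distances to the two centroids; small = uncertain."""
    d = sorted(abs(x - c) for c in cents.values())
    return d[1] - d[0]

def oracle(x):
    """Stand-in for the human annotator: true label by a hidden threshold."""
    return "high" if x >= 5 else "low"

labeled_x = [0.0, 10.0]                  # tiny manually labeled seed set
labeled_y = ["low", "high"]
pool = [4.8, 0.5, 9.5, 5.2]              # unlabeled observations

for _ in range(2):                        # two rounds of the cycle
    cents = fit(labeled_x, labeled_y)
    query = min(pool, key=lambda x: confidence(cents, x))
    pool.remove(query)                    # least-confident point is queried
    labeled_x.append(query)
    labeled_y.append(oracle(query))       # manual label joins the training set

print(sorted(labeled_x))   # the queried points were the ambiguous ones
```

Notice which points get queried: the easy points far from the decision boundary (0.5 and 9.5) are never sent for labeling, while the ambiguous ones near the boundary (4.8 and 5.2) are, which is exactly how active learning spends annotation effort where it matters.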