The MLP network is a type of feedforward artificial neural network (ANN) composed of three primary layers: an input layer, one or more hidden layers, and an output layer (Fig. 4) [48]. The hidden layers consist of neurons whose behavior is defined by activation functions. Inputs enter through the input layer, whose number of nodes matches the number of input features, and are passed to the first hidden layer [49]. There, the weighted sum of the inputs, adjusted by bias values, is calculated as in the equation below [50].
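Although the source does not reproduce the equation, the weighted sum it refers to conventionally takes the form (notation assumed here, not taken from the source):

$$z_j = \sum_{i=1}^{n} w_{ji} x_i + b_j$$

where \(x_1, \dots, x_n\) are the inputs to the node, \(w_{ji}\) are the connection weights, and \(b_j\) is the bias of hidden node \(j\).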
Fig. 4 Architecture of the multilayer perceptron artificial neural network (MLP-ANN) [48]
Within each hidden-layer node, an activation function such as the sigmoid or ReLU is applied to determine the node's output, which is then forwarded to the subsequent layer. The output layer, equipped with an activation function tailored to the desired output type, produces the final result through a process known as forward propagation [47]. The obtained output is evaluated by calculating the error rate, i.e., the difference between the expected target and the actual output; minimizing this error rate is the goal of training. A sketch of this forward pass is given below.
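The following is a minimal illustrative sketch of one forward pass through a two-layer MLP; the layer sizes, random weights, activation choices, and the example target are assumptions for demonstration, not values from the source:

```python
import numpy as np

def relu(z):
    """ReLU activation: max(0, z) applied element-wise."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid activation, mapping values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # 4 input features (assumed)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 8 nodes (assumed)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer: 1 node (assumed)

h = relu(W1 @ x + b1)         # weighted sum plus bias, then hidden activation
y_hat = sigmoid(W2 @ h + b2)  # output activation suited to a binary target
target = 0.0                  # assumed example target
error = (y_hat - target) ** 2 # squared-error measure of the error rate
```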
To refine the MLP's performance, backpropagation is applied in each epoch, adjusting the network weights according to the previously computed error rate [50] (see the sketch below). MLP networks are designed to handle problems that are not linearly separable; notably, they are widely used in pattern recognition and play a significant role in predicting and diagnosing diseases [51].
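A minimal sketch of one backpropagation update for the two-layer network above, using hand-derived gradients of the squared error; the learning rate is an assumed illustrative value, and the source does not specify these details:

```python
# One gradient-descent weight update via backpropagation
# (continues the forward-pass sketch above).
lr = 0.1  # assumed learning rate

# Output layer: d(error)/d(z2) for squared error with a sigmoid output.
delta2 = 2 * (y_hat - target) * y_hat * (1 - y_hat)  # shape (1,)
grad_W2 = np.outer(delta2, h)                        # shape (1, 8)
grad_b2 = delta2

# Hidden layer: propagate the error back through W2 and the ReLU.
delta1 = (W2.T @ delta2) * (h > 0)                   # ReLU derivative is 1 where h > 0
grad_W1 = np.outer(delta1, x)                        # shape (8, 4)
grad_b1 = delta1

# Step the weights opposite to the gradient, shrinking the error each epoch.
W2 -= lr * grad_W2; b2 -= lr * grad_b2
W1 -= lr * grad_W1; b1 -= lr * grad_b1
```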