The data set was divided into training (80%) and test (20%) subsets, following the train/test split published in Vabalas et al. (2019). The algorithm was evaluated with the following metrics (Rose, 2018): (a) sensitivity, the probability that, given a positive observation, the neural network classifies it as positive (Eq. 50); (b) specificity, the probability that, given a negative observation, the neural network classifies it as negative (Eq. 51); (c) accuracy, the overall percentage of correct classifications by the neural network (Eq. 52); and (d) the ROC curve, obtained by plotting sensitivity (true-positive rate) against the false-positive rate (1 − specificity) at various threshold settings. Other studies have likewise used sensitivity, specificity, and AUC as performance statistics on independent datasets (Le, Ho & Ou, 2017; Do, Le & Le, 2020; Le et al., 2020).
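As a minimal sketch (not the authors' code), the three metrics can be computed directly from confusion-matrix counts; the counts below are hypothetical and only illustrate the definitions:

```python
# Illustrative sketch: sensitivity, specificity, and accuracy from
# confusion-matrix counts. The counts used here are made up.

def sensitivity(tp, fn):
    """P(classified positive | actually positive) = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """P(classified negative | actually negative) = TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Overall fraction of correct classifications."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts for a 100-observation test set.
tp, tn, fp, fn = 40, 45, 5, 10

print(sensitivity(tp, fn))        # 0.8
print(specificity(tn, fp))        # 0.9
print(accuracy(tp, tn, fp, fn))   # 0.85
```

An ROC curve is then traced by sweeping the classification threshold over the model's output scores and plotting each resulting (1 − specificity, sensitivity) pair.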

Where TP, TN, FP, and FN denote the number of true positives, true negatives, false positives, and false negatives, respectively. To assess the stability of the results obtained, an analysis of variance (Eq. 53) was performed to establish whether there were significant differences between them. In this analysis, with the response represented by the variables, Ti is the effect caused by the i-th treatment and εi the i-th experimental error. The data must satisfy the independence and normality requirements. The analysis of variance was performed at a 99.5% confidence level (Rodriguez, 2007).
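The variance analysis above can be sketched as a one-way ANOVA F statistic; the code below is an illustrative implementation over made-up treatment data, not the authors' procedure:

```python
# Illustrative sketch: one-way ANOVA F statistic for comparing results
# across treatments. The treatment data below are hypothetical.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way analysis of variance."""
    k = len(groups)                          # number of treatments
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-treatment sum of squares (treatment effect, Ti)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-treatment sum of squares (experimental error, eps_i)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical results for three treatments.
groups = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
print(one_way_anova_f(groups))  # 7.0
```

The resulting F statistic is then compared against the critical value of the F distribution with (k − 1, n − k) degrees of freedom at the chosen significance level (here, α = 0.005).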

The efficiency of the PST-NN approach was compared with previously published techniques (Mosquera, Parra-Osorio & Castrillón, 2016; Mosquera, Castrillón & Parra, 2018a; Mosquera, Castrillón & Parra, 2018b; Mosquera, Castrillón & Parra-Osorio, 2019), which had been applied to the original data included in the present work. Accuracy was the metric used to compare PST-NN against Decision Tree J48, Naïve Bayes, Artificial Neural Network, Support Vector Machine Linear, Hill Climbing-Support Vector Machine, K-Nearest Neighbors-Support Vector Machine, Robust Linear Regression, and Logistic Regression models.
