A classification deep neural network61,62, which we refer to as C-SBINN, was developed to predict the patient response phenotype based on the results of the feature selection techniques. For the transfer learning protocol, the C-SBINN was first trained on an imbalanced simulated patient data set, which was randomly sampled from the simulated clinical trial (see Systems biology approach: simulated clinical trials) to have a distribution of 22% responders and 78% non-responders, reflecting the class imbalance in the ex-vivo data set (see Supplementary Section F). During training, the Python scikit-learn compute_class_weight function with the “balanced” argument was used to mitigate the class imbalance, and binary cross-entropy was used as the loss function. For this training step, the simulated clinical data were split into training and testing sets with a ratio of 99:1, and 10% of the training data were reserved for validation. Next, transfer learning28 was implemented: the pre-trained C-SBINN was re-trained on a subset of the ex-vivo data to improve its prediction accuracy on those data. The ex-vivo training and testing sets were randomly sampled from the entire ex-vivo data set, subject to the constraint that the class imbalance of the entire data set was approximately preserved (training sets consisted of 28 patients; testing sets consisted of 9 patients). To quantify the improvement in prediction accuracy resulting from transfer learning, a classification deep neural network was also trained on the ex-vivo data alone, using training and testing sets identical to those used in the transfer learning approach. 10-fold cross-validation was used to validate the results. In all cases, the input features were standardized using the Python scikit-learn StandardScaler. To prevent data leakage, for each cross-validation fold the StandardScaler was fit only to the training set and then applied to the testing set.
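The class-weighting and leakage-free standardization steps described above can be sketched with the scikit-learn calls named in the text. This is a minimal illustration on synthetic data (the patient data, the 22%/78% split, and the six-feature design here are stand-ins; the C-SBINN itself is omitted), not the authors' implementation:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulated patient data: 22% responders (1),
# 78% non-responders (0), with six illustrative input features.
y = rng.choice([0, 1], size=1000, p=[0.78, 0.22])
X = rng.normal(size=(1000, 6)) + y[:, None]

# 99:1 train/test split, as in the pre-training step.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.01, stratify=y, random_state=0)

# "balanced" weighting: n_samples / (n_classes * count per class),
# so the minority responder class receives the larger weight.
class_weights = compute_class_weight(
    class_weight="balanced", classes=np.array([0, 1]), y=y_train)

# Fit the scaler on the training set only, then apply it to the test set,
# so no test-set statistics leak into the preprocessing.
scaler = StandardScaler().fit(X_train)
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)
```

The resulting class weights would typically be passed to the network's loss (e.g., as per-class weights in the binary cross-entropy) during training.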

In this work, we compared the performance of classification neural networks for several different sets of input features determined by the feature selection techniques described in Machine learning protocol: feature selection. Using the simulated clinical data, the feature selection techniques identified a set of features that predominantly characterized the tumor micro-environment, as well as a set of features that characterized the response dynamics to anti-PD-1 immunotherapy. For comparison, we also used input features determined directly from the ex-vivo data set. In the latter case, we performed 10-fold cross-validation, and for each fold we randomly selected training and testing sets that preserved the distribution of responders and non-responders exhibited by the entire ex-vivo data set. For each fold, we then performed Fisher discriminant analysis on the training set, selected the top six experimental features, and used those features as inputs to the neural network.
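The per-fold feature selection just described can be sketched as follows. This uses the univariate Fisher discriminant ratio as the ranking criterion, which is one common form of Fisher discriminant analysis but an assumption here; the cohort size (37 patients, 28/9 split) follows the text, while the feature count and mean shifts are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

def fisher_scores(X, y):
    """Per-feature Fisher discriminant ratio:
    (mean_1 - mean_0)**2 / (var_1 + var_0)."""
    X0, X1 = X[y == 0], X[y == 1]
    return ((X1.mean(axis=0) - X0.mean(axis=0)) ** 2
            / (X1.var(axis=0) + X0.var(axis=0)))

rng = np.random.default_rng(1)

# Toy stand-in for the 37-patient ex-vivo cohort: 9 responders,
# 28 non-responders, 20 candidate features, of which the first three
# are made informative by a mean shift.
y = np.array([1] * 9 + [0] * 28)
X = rng.normal(size=(37, 20))
X[y == 1, :3] += 3.0

# Ten random stratified draws (28 train / 9 test) that approximately
# preserve the responder fraction in each split.
splitter = StratifiedShuffleSplit(n_splits=10, test_size=9, random_state=1)
selected = []
for train_idx, test_idx in splitter.split(X, y):
    # Rank features on the training split only, then keep the top six;
    # these become the network inputs for this fold.
    scores = fisher_scores(X[train_idx], y[train_idx])
    top6 = np.argsort(scores)[::-1][:6]
    selected.append(top6)
```

Scoring on the training split only keeps the feature selection itself free of test-set information, consistent with the leakage precautions described for the StandardScaler.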

Importantly, the learning parameters and network hyperparameters were tuned separately for each learning approach and each set of input features to identify an optimal neural network architecture for each case. The tuning method and the optimal network architectures for all sets of input features and learning approaches, including network hyperparameters and learning parameters, are discussed in Supplementary Section E. For each case, the optimal network was used, ensuring a fair comparison between approaches and input features.
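As a generic illustration of per-case hyperparameter tuning (the actual tuning method is given in Supplementary Section E), one could run a cross-validated grid search over architecture and learning-rate choices; the grid values and the use of scikit-learn's MLPClassifier as a stand-in for the C-SBINN are assumptions of this sketch:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 6)) + y[:, None]  # synthetic, separable classes

# Placing the scaler inside the pipeline means each CV fold re-fits it
# on its own training split, mirroring the no-leakage protocol above.
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=500, random_state=0))

# Illustrative grid over hidden-layer architecture and learning rate.
param_grid = {
    "mlpclassifier__hidden_layer_sizes": [(8,), (16,), (16, 8)],
    "mlpclassifier__learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(pipe, param_grid,
                      cv=StratifiedKFold(5), scoring="accuracy")
search.fit(X, y)
best = search.best_params_  # the per-case "optimal" configuration
```

Repeating such a search independently for each learning approach and input-feature set yields one tuned configuration per case, which is the sense in which the comparison between approaches is kept fair.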
