Interpretability has become increasingly important in the machine learning community, particularly for applications in biological research [46]. We adopt SHAP [26], a game-theory-based method, to interpret the trained neural networks of iMAP by scoring each gene's contribution to the network outputs for a given cell. We provide two kinds of importance scores per gene: one for building representations in stage I and one for removing batch effects in stage II. Specifically, a three-layer neural network is attached to the output of the stage I encoder and trained to classify external cell-type labels, while the encoder itself is kept frozen. SHAP is then used to evaluate each gene's importance for the classification outputs (Additional file 1: Fig. S1c). A second three-layer network is trained to discriminate the batch of origin of each cell, and SHAP again assigns each gene an importance value for the batch classification (Additional file 1: Fig. S1d); this value serves as a surrogate for the gene's importance in removing batch effects. We expect these two scores to offer initial heuristics about the working mechanisms of iMAP, and about the roles of individual genes in representing biological variation versus measurement noise.
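The scoring procedure above can be sketched as follows. This is a minimal, self-contained illustration, not the iMAP implementation: the encoder, classifier head, dimensions, and weights are all toy assumptions, and a finite-difference gradient-times-input saliency is used as a lightweight stand-in for the SHAP values computed in the actual protocol. What it shows is the structure of the approach: a frozen encoder, a classifier head on top, and a per-gene importance score derived from the classifier's output for one cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration): 50 genes, 8-dim latent space, 3 cell types.
n_genes, n_latent, n_types = 50, 8, 3

# Stage I "encoder", sketched as a fixed nonlinear map; it is frozen (never updated).
W_enc = rng.normal(size=(n_genes, n_latent))

def encode(x):
    return np.tanh(x @ W_enc)

# Three-layer classifier head on top of the frozen encoder
# (weights are random here; in the protocol they would be trained on cell-type labels).
W1 = rng.normal(size=(n_latent, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, n_types))

def classify(x):
    h1 = np.tanh(encode(x) @ W1)
    h2 = np.tanh(h1 @ W2)
    return h2 @ W3  # class logits

def gene_importance(x, target, eps=1e-4):
    """Per-gene saliency for the target class logit, via finite-difference
    gradient x input -- a crude stand-in for per-gene SHAP values."""
    base = classify(x)[target]
    grads = np.empty(n_genes)
    for g in range(n_genes):
        xp = x.copy()
        xp[g] += eps
        grads[g] = (classify(xp)[target] - base) / eps
    return np.abs(grads * x)

cell = rng.normal(size=n_genes)           # one cell's expression vector
scores = gene_importance(cell, target=0)  # importance of each gene for class 0
top_genes = np.argsort(scores)[::-1][:5]  # the five most influential genes
```

The batch-effect score from stage II has the same shape: only the classification target changes, from cell type to batch of origin, so the same scoring loop applies with a batch-discriminator head in place of the cell-type classifier.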
