The interpretability of the optimal ML model

Cong Jiang, Yuting Xiu, Kun Qiao, Xiao Yu, Shiyuan Zhang, Yuanxi Huang

ML models are often regarded as 'black boxes' because it is difficult to explain why they predict accurately for a specific cohort of patients. We therefore used SHAP values to interpret the optimal ML model in this research. SHAP (SHapley Additive exPlanations) is a method for explaining the contribution of each variable in any ML model (14), and its interpretability has been validated in many cancers (28-31). In contrast to other methods, SHAP rests on a sound theoretical foundation and provides both local and global interpretability (32). We used SHAP values to assess the probability of LNM for the whole cohort and for individual patients.
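As a minimal sketch of how such an analysis can be run, the snippet below computes SHAP values with the `shap` library for a tree-based classifier. The XGBoost model and the synthetic feature matrix are placeholders, not the study's actual cohort or model; the summary plot illustrates global interpretability across all patients, and the force plot illustrates local interpretability for a single patient.

```python
# A hedged sketch of SHAP-based interpretation, assuming a tree-based
# classifier (XGBoost here); the data and model are synthetic stand-ins
# for the study's LNM-prediction cohort.
import shap
import xgboost
from sklearn.datasets import make_classification

# Hypothetical stand-in for the feature matrix X and binary LNM labels y.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global interpretability: ranks features by mean |SHAP value|
# across the whole cohort.
shap.summary_plot(shap_values, X)

# Local interpretability: decomposes one patient's prediction into
# per-feature contributions relative to the baseline expected value.
shap.force_plot(explainer.expected_value, shap_values[0, :], X[0, :],
                matplotlib=True)
```

For tree ensembles, `TreeExplainer` is typically preferred over model-agnostic explainers because it computes exact Shapley values efficiently; other model types would require, for example, `shap.KernelExplainer`.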
