Whole-Slide Image Inference Using Deep Learning

Mustafa Nasir-Moin
Arief A. Suriawinata
Bing Ren
Xiaoying Liu
Douglas J. Robertson
Srishti Bagchi
Naofumi Tomita
Jason W. Wei
Todd A. MacKenzie
Judy R. Rees
Saeed Hassanpour

All the slides were scanned at a magnification of 40× using a Leica Aperio AT2 scanner (Leica Biosystems). The resulting whole-slide images were fed to a ResNet-18 neural network,27 which had been developed to classify colorectal polyps into 4 classes (ie, tubular adenoma, tubulovillous or villous adenoma, sessile serrated polyp, and hyperplastic polyp), validated on an independent set of 508 slides from DHMC, and previously validated on 238 external slides from 24 different institutions.24,28 The model used a sliding-window approach in which predictions were made on patches of 224 × 224 pixels. These patch-level predictions were then used to calculate the percentage of patches, a proxy for the percentage of area, attributed to each class in the whole-slide image. The percentage of patches for each class was then fed into a decision tree to determine the overall class of the whole-slide image.24 For our digital system, we extracted the percentage of patches attributed to each class, the coordinates of the regions of interest highlighted by the classifier for each class, and the whole-slide image prediction.
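The sliding-window aggregation described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `predict_patch` callback stands in for the trained ResNet-18, and the final thresholded rule is a hypothetical stand-in for the paper's decision tree, whose actual structure and thresholds are not given here.

```python
import numpy as np

# Class labels from the paper; the ordering used by the final rule below is an assumption.
CLASSES = ["tubular", "tubulovillous_villous", "sessile_serrated", "hyperplastic"]
PATCH = 224  # patch size in pixels, per the sliding-window approach

def classify_whole_slide(slide, predict_patch, threshold=0.05):
    """Tile the slide into 224x224 patches, tally per-class patch counts,
    and convert counts to percentages (a proxy for area per class)."""
    h, w = slide.shape[:2]
    counts = {c: 0 for c in CLASSES}
    total = 0
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = slide[y:y + PATCH, x:x + PATCH]
            counts[predict_patch(patch)] += 1
            total += 1
    percentages = {c: counts[c] / total for c in CLASSES}
    # Hypothetical rule standing in for the paper's decision tree:
    # scan classes in a fixed priority order and return the first whose
    # area share exceeds a threshold; otherwise fall back to the largest share.
    for c in CLASSES:
        if percentages[c] >= threshold:
            return c, percentages
    return max(percentages, key=percentages.get), percentages
```

In practice the window would be run at the scanned magnification over tissue regions only, and the per-class region-of-interest coordinates mentioned in the text would be collected alongside the counts.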
