2.2. Deep Learning HNN Method
This protocol is extracted from research article:
Calibration of the EBT3 Gafchromic Film Using HNN Deep Learning
Biomed Res Int, Jan 31, 2021; DOI: 10.1155/2021/8838401

A hierarchical neural network (HNN) was built using the Keras deep learning Application Programming Interface (API), written in Python and running on top of the machine learning platform TensorFlow. The input parameters for HNN training are: R-NOD; the red-channel irradiated PV (R-IPV) extracted from the postscan image, paired with the red-channel background PV (R-BPV) extracted from the prescan image; the green-channel irradiated PV (G-IPV) paired with the green-channel background PV (G-BPV); the blue-channel irradiated PV (B-IPV) paired with the blue-channel background PV (B-BPV); and the red-, green-, and blue-channel inverse transmittances (R-IT, G-IT, and B-IT). The inverse transmittance (IT) of channel W, T_W, can be written as
T_W = (W-BPV) / (W-IPV),
where W represents one of the R, G, and B channels.

Some of the input parameters may depend on each other; however, all have been used for film calibration with different techniques [24, 36, 37], since each has its own advantages. The red-channel PV has the highest sensitivity over the dose range of daily treatment, while the green-channel and blue-channel PVs have higher dynamic responses to higher delivered doses [37, 48]. R-NOD, the earliest and most widely published parameter, has gradually been replaced by the RGB inverse transmittances used in the three-channel calibration technique [36, 37]. The three-channel background PV is intended to compensate for the film aging effect. These ten inputs were organized into five input groups: (1) R-NOD, (2) R-IPV/R-BPV, (3) G-IPV/G-BPV, (4) B-IPV/B-BPV, and (5) R-IT/G-IT/B-IT, as shown in Figure 1.

Figure 1. Simplified deep learning HNN frame using the Keras functional API for film-dose calibration.
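As a concrete illustration, the five input groups can be assembled from the per-channel pre- and postscan pixel values. This is a minimal sketch, not the authors' code: the function name and dictionary layout are hypothetical, and it assumes the standard film-dosimetry forms R-NOD = log10(R-BPV/R-IPV) and T_W = W-BPV/W-IPV.

```python
import math

def film_inputs(bpv, ipv):
    """Build the five HNN input groups from the prescan (background) and
    postscan (irradiated) pixel values of one film region.

    bpv, ipv: dicts mapping channel "R"/"G"/"B" to a mean pixel value.
    (Hypothetical helper; names and layout are illustrative only.)
    """
    # Inverse transmittance of each channel, assumed as T_W = BPV_W / IPV_W
    it = {w: bpv[w] / ipv[w] for w in "RGB"}
    # Red-channel net optical density; note R-NOD = log10(R-IT) under the
    # assumed definitions, which is why the parameters partly overlap.
    r_nod = math.log10(bpv["R"] / ipv["R"])
    return {
        "group1_R_NOD": [r_nod],
        "group2_R": [ipv["R"], bpv["R"]],
        "group3_G": [ipv["G"], bpv["G"]],
        "group4_B": [ipv["B"], bpv["B"]],
        "group5_IT": [it["R"], it["G"], it["B"]],
    }
```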

This structure can be described with the functions O1, O2, …, O7: O1 = f(R-NOD), O2 = f(R-IPV, R-BPV), O3 = f(G-IPV, G-BPV), O4 = f(B-IPV, B-BPV), O5 = f(R-IT, G-IT, B-IT), O6 = f(O2, O3, O4), and O7 = f(O1, O5, O6). Here, O1(·) is approximated with one input, 20 neurons in the 1st hidden layer, 10 neurons in the 2nd hidden layer, 7 neurons in the 3rd hidden layer, and one output (i.e., model 1-20-10-7-1); O2(·) with a model 2-10-7-2-1; O3(·) with a model 2-10-7-1; O4(·) with a model 2-10-7-1; O5(·) with a model 3-15-7-1; O6(·) with a model 3-10-7-1; and O7(·) with a model 3-20-6-1. Figure 2 illustrates the detailed structure.

Figure 2. Detailed structure of the deep learning HNN.
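The layer structure above can be sketched with the Keras functional API. This is a minimal sketch, not the published implementation: the layer sizes follow the models listed in the text, but the per-layer assignment of the activation functions is not specified in this excerpt, so "relu" for hidden layers and "linear" for outputs is an assumption, and all layer/input names are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

def mlp(x, widths, name):
    """Stack Dense layers of the given widths onto tensor x.
    Hidden activations are assumed "relu"; the output is "linear"."""
    for i, w in enumerate(widths[:-1]):
        x = layers.Dense(w, activation="relu", name=f"{name}_h{i + 1}")(x)
    return layers.Dense(widths[-1], activation="linear", name=name)(x)

# One Input per group; shapes follow the five input groups in the text.
in1 = Input(shape=(1,), name="R_NOD")      # group 1
in2 = Input(shape=(2,), name="R_IPV_BPV")  # group 2
in3 = Input(shape=(2,), name="G_IPV_BPV")  # group 3
in4 = Input(shape=(2,), name="B_IPV_BPV")  # group 4
in5 = Input(shape=(3,), name="RGB_IT")     # group 5

o1 = mlp(in1, [20, 10, 7, 1], "O1")  # model 1-20-10-7-1
o2 = mlp(in2, [10, 7, 2, 1], "O2")   # model 2-10-7-2-1
o3 = mlp(in3, [10, 7, 1], "O3")      # model 2-10-7-1
o4 = mlp(in4, [10, 7, 1], "O4")      # model 2-10-7-1
o5 = mlp(in5, [15, 7, 1], "O5")      # model 3-15-7-1
o6 = mlp(layers.concatenate([o2, o3, o4]), [10, 7, 1], "O6")  # 3-10-7-1
o7 = mlp(layers.concatenate([o1, o5, o6]), [20, 6, 1], "O7")  # 3-20-6-1

# The hierarchical model maps the five groups to a single calculated dose.
hnn = Model(inputs=[in1, in2, in3, in4, in5], outputs=o7)
```

The intermediate outputs O2–O4 are merged into O6, and O6 is merged with O1 and O5 into the final dose output O7, mirroring the hierarchy of Figure 2.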

“Selu,” “elu,” “relu,” “softplus,” and “linear” are the activation functions used. The initial weights were drawn from a uniform distribution with random number generator seed 435. The “Adam” optimization algorithm, an extension of classical stochastic gradient descent, was used in its place to update the network weights more efficiently and stably. Since the training deals with a multiple-regression problem, a mean squared error (MSE) objective function was optimized through the “Adam” optimizer; MSE is also a suitable metric for evaluating model performance. The other two metrics used in this HNN are mean absolute error (MAE) and accuracy. The fitting process was then executed with a batch size of 20 for 500 epochs. The validation split was 0.45; that is, 45% of the training data was held back for validation.
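The compile-and-fit settings above can be sketched as follows. This is a hedged sketch, not the authors' script: `hnn`, `x_groups`, and `doses` are placeholder names, and applying seed 435 as a global Keras seed is an assumption (the text seeds the uniform weight initializer, whose exact mechanism is not given in this excerpt).

```python
import tensorflow as tf

# Seed 435 from the text; set globally *before* the model is built so that
# the uniform weight initializers draw reproducibly (assumed mechanism).
tf.keras.utils.set_random_seed(435)

def compile_and_fit(hnn, x_groups, doses):
    """Apply the fitting settings stated in the text.

    hnn: the hierarchical Keras model (placeholder name);
    x_groups: list of five arrays matching the five input groups;
    doses: delivered doses used as the regression target.
    """
    # Adam optimizer, MSE loss, MAE and accuracy as additional metrics.
    hnn.compile(optimizer="adam", loss="mse", metrics=["mae", "accuracy"])
    # Batch size 20, 500 epochs, 45% of the training data held back.
    return hnn.fit(x_groups, doses, batch_size=20, epochs=500,
                   validation_split=0.45, verbose=0)
```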

The numbers of hidden layers and neurons, and the activation functions, were systematically adjusted so that the calculated doses converged to the delivered doses, which can be examined through the MSE and MAE values and through plots of the delivered dose against the calculated dose. The training results using portion I films are shown in Figure 3, where the red line is one calibration data set of portion I.

Figure 3. Training results and one training-data curve for verification.
