A comparison of sequential transfer learning and all-data transfer learning

In the sequential transfer learning experiment, the two transfer learning models trained in the previous experiment were used as starting points for the whole-network fine-tuning method and the layer-freezing method, respectively. The second day's data were fed to the models for retraining, and the models were tested with the third day's data. This process was then repeated with the 3rd- and 4th-day data and with the 4th- and 5th-day data (retraining on the earlier day and testing on the later day in each case).
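
To make the retraining schedule concrete, the following is a minimal sketch in PyTorch. The U-Net and the daily CBCT datasets are stood in by a toy network and random tensors, and the names `make_model`, `freeze_encoder`, `retrain`, and `daily_data` are hypothetical placeholders of ours, not code from the original study.

```python
# Minimal sketch of the sequential transfer-learning loop (assumptions ours).
import copy
import torch
import torch.nn as nn

def make_model() -> nn.Sequential:
    """Placeholder for the trained U-Net carried over from the previous experiment."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),  # stands in for the encoder
        nn.Conv2d(8, 1, 3, padding=1),             # stands in for the decoder
    )

def freeze_encoder(model: nn.Sequential) -> None:
    """Layer-freezing method: keep the early (encoder) weights fixed."""
    for p in model[0].parameters():
        p.requires_grad = False

def retrain(model: nn.Module, images: torch.Tensor, targets: torch.Tensor,
            epochs: int = 5) -> None:
    """Fine-tune only the parameters that still require gradients."""
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        opt.step()

# Synthetic stand-ins for the five days of CBCT/target image pairs.
daily_data = [(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
              for _ in range(5)]

finetune_model = make_model()           # whole-network fine-tuning branch
frozen_model = copy.deepcopy(finetune_model)
freeze_encoder(frozen_model)            # layer-freezing branch

# Retrain on day n, test on day n+1 (days 2->3, 3->4, 4->5; indices are 0-based).
for n in range(1, 4):
    train_x, train_y = daily_data[n]
    test_x, test_y = daily_data[n + 1]
    for model in (finetune_model, frozen_model):
        retrain(model, train_x, train_y)
        with torch.no_grad():
            test_loss = nn.MSELoss()(model(test_x), test_y)
        print(f"day {n + 2} test loss: {test_loss.item():.4f}")
```

The point of the loop is that the same two model instances persist across iterations, so each day's retraining builds on the weights produced by the previous day; this is what distinguishes the sequential scheme from the all-data variant described next.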

To evaluate the effects of the sequential method, we also compared sequential transfer learning with all-data transfer learning. In this experiment, all the data from the previous days were pooled to retrain the basic U-Net model. For example, to test the model with the 3rd-day data, the data from the first two days were fed to the network together for retraining. The augmented CBCT images produced by the two methods were then compared.
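
Under the same assumptions, the all-data variant can be sketched by continuing the example above; `make_model`, `retrain`, and `daily_data` are the hypothetical helpers defined in the previous sketch. For each test day, the data from all preceding days are concatenated and the basic model is retrained once on the pooled set.

```python
# Continuation of the sketch above (reuses make_model, retrain, daily_data).
import torch
import torch.nn as nn

for t in range(2, 5):                          # test on days 3, 4, and 5
    # Pool all data from the preceding days (indices 0..t-1).
    pooled_x = torch.cat([daily_data[d][0] for d in range(t)])
    pooled_y = torch.cat([daily_data[d][1] for d in range(t)])
    model = make_model()                       # restart from the basic U-Net
    retrain(model, pooled_x, pooled_y)
    test_x, test_y = daily_data[t]
    with torch.no_grad():
        test_loss = nn.MSELoss()(model(test_x), test_y)
    print(f"all-data, day {t + 1} test loss: {test_loss.item():.4f}")
```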
