2.4.2. Implementation Details

Junwei Yang, Thomas Küstner, Peng Hu, Pietro Liò, Haikun Qi

The number of unrolled iterations in the proposed GRDRN is set to 4. The weighting factors α and λ were optimized to 0.05 and 1, respectively, via a limited grid search. Network performance reaches a plateau within 60 epochs. The training samples are shuffled at the beginning of each epoch, and the undersampling masks are generated on-the-fly during training to reduce overfitting. We train the network with the Adam optimizer at an initial learning rate of 1e-4, which is halved every 20 epochs. The network is trained on a single NVIDIA GeForce RTX 3090 graphics card. With a batch size of 1, training took around 12 hours and 19 GB of GPU memory.
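The training schedule described above can be sketched in stdlib-only Python. This is a minimal illustration, not the authors' code: the function names (`lr_at_epoch`, `train`), the number of samples, and the placeholder mask are assumptions; the constants (initial rate 1e-4, halving every 20 epochs, 60 epochs, per-epoch shuffling, fresh masks each pass) come from the paragraph.

```python
import random

def lr_at_epoch(epoch, initial_lr=1e-4, decay_every=20):
    """Step-decayed learning rate: halved every `decay_every` epochs."""
    return initial_lr * 0.5 ** (epoch // decay_every)

def train(num_epochs=60, num_samples=100, seed=0):
    """Training-loop skeleton matching the described schedule (illustrative)."""
    rng = random.Random(seed)
    indices = list(range(num_samples))
    for epoch in range(num_epochs):
        lr = lr_at_epoch(epoch)   # learning rate used by Adam this epoch
        rng.shuffle(indices)      # reshuffle samples at the start of each epoch
        for i in indices:
            # On-the-fly undersampling mask (placeholder random pattern;
            # the actual mask distribution is not specified here).
            mask = [rng.random() < 0.25 for _ in range(256)]
            # ... forward pass, loss weighted by alpha=0.05 and lambda=1,
            # Adam update with learning rate `lr` ...
    return lr
```

With this schedule the learning rate takes the values 1e-4 for epochs 0-19, 5e-5 for epochs 20-39, and 2.5e-5 for epochs 40-59.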
