Implementation

Jiajian Huang, Jinpeng Li, Qinchang Chen, Xia Wang, Guangyong Chen, Jin Tang

We use the Adam optimizer with a weight decay of 0.01 and an initial learning rate of 1e-3 to optimize the parameters for 30 epochs in the first stage and 5 epochs in the second stage. The batch size is 1. Our method is implemented on the PyTorch platform and trained on one NVIDIA A100 GPU.
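The following is a minimal PyTorch sketch of this training setup. The optimizer hyperparameters (Adam, weight decay 0.01, learning rate 1e-3), batch size, and two-stage epoch counts come from the text above; the model, loss, and dataset are hypothetical placeholders, and reusing the same optimizer across both stages is an assumption, since this section does not state what changes between stages.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: the actual model, loss, and data are not
# specified in this section and are used here for illustration only.
model = nn.Linear(16, 1)
criterion = nn.MSELoss()
dataset = TensorDataset(torch.randn(8, 16), torch.randn(8, 1))
loader = DataLoader(dataset, batch_size=1, shuffle=True)  # batch size 1

# Adam with weight decay 0.01 and initial learning rate 1e-3, as stated.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)

def run_stage(num_epochs):
    """Train for the given number of epochs (30 in stage 1, 5 in stage 2)."""
    for epoch in range(num_epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

run_stage(30)  # first training stage
run_stage(5)   # second training stage
```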
