For training we used a p3.2xlarge instance from Amazon Web Services with a single V100 GPU, while inference was performed on a Lenovo P1 Gen 2 laptop. In addition, we used the Scientific Data Storage (SDS) service of Heidelberg University. Training and inference were run inside a Singularity container image based on the TensorFlow Docker container image. For random augmentation we used the respective function of the image Python module. The code is available at https://heidata.uni-heidelberg.de/privateurl.xhtml?token=366931ac-50a2-43f9-880f-88d63e07d493.
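The protocol does not name the exact augmentation function; since the TensorFlow container ships the Keras image module, a plausible reading is its ImageDataGenerator. The sketch below is illustrative only, with hypothetical augmentation parameters, and is not necessarily the configuration used by the authors.

```python
# Minimal sketch of random image augmentation with the Keras image module
# (assumed here; the protocol does not specify the exact function or parameters).
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings, chosen for illustration only.
augmenter = ImageDataGenerator(
    rotation_range=20,        # random rotations up to +/- 20 degrees
    width_shift_range=0.1,    # random horizontal shifts (fraction of width)
    height_shift_range=0.1,   # random vertical shifts (fraction of height)
    zoom_range=0.1,           # random zoom of up to 10 %
    horizontal_flip=True,     # random left-right flips
)

# Apply a random transformation to a single placeholder image.
image = np.random.rand(256, 256, 3).astype("float32")
augmented = augmenter.random_transform(image)
print(augmented.shape)  # (256, 256, 3)
```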