To evaluate the added value of cGAN-aided motion correction, this methodology was compared with a standard PET frame-based motion correction. PET image frames were aligned using the same multiscale MI-based coregistration routine described above (Greedy module, ITK 1.2.4). This routine starts the alignment at a coarse scale, uses the result to initialize registration at the next finer scale, and repeats this process until the finest scale is reached. Because the multiscale MI coregistration failed on the early images (<3 min after injection) due to insufficient count statistics, we summed the first 3 min of the dynamic sequence to create a reference frame with sufficient statistics. All later frames (>3 min after injection) were then rigidly aligned to this summed frame. Summing of early frames is a strategy frequently implemented in dynamic studies when individual low-count images do not contain enough information to extract an accurate motion vector. Because the robustness of this coregistration procedure can be improved by low-level smoothing (The ITK Software Guide; Kitware Inc.), our standard registration approach included a heuristically chosen 4-mm Gaussian filter applied to the images before registration. However, to assess the performance of the cGAN methodology on the original (low-count) images, this smoothing step was omitted when testing cGAN-processed images.
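As a rough illustration of this frame-based pipeline, the sketch below sums the early frames into a reference, optionally applies the pre-registration Gaussian smoothing, and rigidly aligns each later frame using a multiscale mutual-information registration. It is written with SimpleITK rather than the Greedy/ITK module used in the study, and the FWHM-to-sigma conversion (treating 4 mm as FWHM), the registration parameters, and the helper names are assumptions introduced for illustration only.

```python
# Illustrative sketch only: SimpleITK stands in for the Greedy/ITK module used
# in the study, and all parameter values below are generic assumptions.
import SimpleITK as sitk

FWHM_MM = 4.0
SIGMA_MM = FWHM_MM / 2.355  # assumes the 4-mm filter is specified as FWHM


def smooth(img, sigma_mm=SIGMA_MM):
    """Low-level Gaussian smoothing applied before registration (standard pipeline only)."""
    return sitk.SmoothingRecursiveGaussian(img, sigma_mm)


def register_rigid(fixed, moving):
    """Multiscale rigid registration driven by Mattes mutual information."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    # Coarse-to-fine pyramid: each level initializes the next finer one.
    reg.SetShrinkFactorsPerLevel([4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([2, 1, 0])
    reg.SetSmoothingSigmasAreSpecifiedInPhysicalUnits(True)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)


def correct_frames(early_frames, late_frames, presmooth=True):
    """Sum the early (<3 min) frames into a reference and rigidly align all later frames to it."""
    reference = early_frames[0]
    for frame in early_frames[1:]:
        reference = sitk.Add(reference, frame)

    corrected = []
    for frame in late_frames:
        fixed = smooth(reference) if presmooth else reference
        moving = smooth(frame) if presmooth else frame
        transform = register_rigid(fixed, moving)
        # Resample the original (unsmoothed) frame with the estimated rigid transform.
        corrected.append(sitk.Resample(frame, reference, transform,
                                       sitk.sitkLinear, 0.0, frame.GetPixelID()))
    return reference, corrected
```

In this sketch, calling `correct_frames(..., presmooth=False)` mirrors the configuration in which the smoothing step is omitted for cGAN-processed images.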