MRI data preparation and preprocessing

Lennart Wittkuhn, Nicolas W. Schuck

Results included in this manuscript come from preprocessing performed using fMRIPrep 1.2.2 (Esteban et al.95,96; RRID:SCR_016216), which is based on Nipype 1.1.5 (Gorgolewski et al.97,98; RRID:SCR_002502). Many internal operations of fMRIPrep use Nilearn 0.4.2 (RRID:SCR_001362)99, mostly within the functional processing workflow. For further details of the pipeline, see the section on workflows in fMRIPrep's documentation (https://fmriprep.readthedocs.io/en/1.2.2/workflows.html).

The majority of the steps involved in preparing and preprocessing the MRI data employed recently developed tools and workflows aimed at enhancing the standardization and reproducibility of task-based fMRI studies (for a similar preprocessing pipeline, see100). Following successful acquisition, all study data were arranged according to the BIDS specification101 using the HeuDiConv tool (version 0.6.0.dev1; freely available from https://github.com/nipy/heudiconv) running inside a Singularity container102,103 to facilitate further analysis and sharing of the data. DICOM images were converted to the NIfTI-1 format using dcm2niix (version 1.0.20190410 GCC6.3.0)104. In order to make identification of study participants unlikely, we eliminated facial features from all high-resolution structural images using pydeface (version 2.0; available from https://github.com/poldracklab/pydeface). The data quality of all functional and structural acquisitions was evaluated using the automated quality assessment tool MRIQC (for details, see105 and the MRIQC documentation at https://mriqc.readthedocs.io/en/stable/). The visual group-level reports of the estimated image quality metrics confirmed that the overall MRI signal quality of both anatomical and functional scans was highly consistent across participants and runs within each participant.
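As a minimal sketch of what the BIDS arrangement implies for file organization (the subject, session, task, and run values below are made-up examples; the actual conversion was handled by HeuDiConv, not by this code):

```python
# Illustrative sketch of BIDS-style naming for a functional run.
# The entity values (sub, ses, task, run) are hypothetical examples.

def bids_func_name(sub, ses, task, run, suffix="bold", ext=".nii.gz"):
    """Compose a BIDS-style relative path for one functional run."""
    return (f"sub-{sub:02d}/ses-{ses:02d}/func/"
            f"sub-{sub:02d}_ses-{ses:02d}_task-{task}_run-{run:02d}_{suffix}{ext}")

print(bids_func_name(sub=1, ses=1, task="rest", run=1))
# sub-01/ses-01/func/sub-01_ses-01_task-rest_run-01_bold.nii.gz
```

Arranging every modality under this fixed key-value naming scheme is what allows tools such as fMRIPrep and MRIQC to discover the data automatically.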

A total of two T1-weighted (T1w) images were found within the input BIDS dataset, one from each study session. Both were corrected for intensity non-uniformity (INU) using N4BiasFieldCorrection (Advanced Normalization Tools (ANTs) 2.2.0)106. A T1w reference map was computed after registration of the two T1w images (after INU correction) using mri_robust_template (FreeSurfer 6.0.1)107. The T1w reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as the target template. Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, RRID:SCR_001847)108, and the brain mask estimated previously was refined with a custom variation of the method of Mindboggle (RRID:SCR_002438)109 to reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray matter. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c110 (RRID:SCR_008796) was performed through nonlinear registration with antsRegistration (ANTs 2.2.0, RRID:SCR_004757)111, using brain-extracted versions of both the T1w volume and the template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9, RRID:SCR_002823)112.
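The core idea behind INU correction can be illustrated on a toy 1-D intensity profile: a smooth multiplicative bias field is estimated from the log-intensities and divided out. This is only a schematic analogue, not N4 itself, which fits a far more flexible B-spline bias model; the simulated signal, bias field, and polynomial degree below are all invented for illustration.

```python
import numpy as np

# Toy INU-correction sketch: a smooth multiplicative bias field corrupts a
# piecewise-constant "tissue" signal; fitting a low-order polynomial to the
# log-intensities recovers (a scaled version of) the underlying signal.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
true_signal = 100.0 + 20.0 * (rng.random(200) > 0.5)  # two "tissue" classes
bias = np.exp(0.4 * x + 0.3 * x**2)                   # smooth bias field
observed = true_signal * bias

log_obs = np.log(observed)
coeffs = np.polyfit(x, log_obs, deg=2)   # low-order fit = smooth component
est_bias = np.exp(np.polyval(coeffs, x))
corrected = observed / est_bias          # bias divided out (up to a constant)
```

After correction, the intensity profile again separates the two tissue classes, which is exactly what downstream tissue segmentation (e.g., with fast) relies on.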

For each of the BOLD runs per participant (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. The BOLD reference was then co-registered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration113. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9)114. BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207115 (RRID:SCR_005927). The BOLD time-series (including slice-timing correction) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were also resampled to MNI152NLin2009cAsym standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Several confounding time-series were calculated based on the preprocessed BOLD: frame-wise displacement (FD), DVARS, and three region-wise global signals. FD and DVARS were calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al.116). The three global signals were extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors was extracted to allow for component-based noise correction (CompCor)117.
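The FD confound can be sketched as follows. This is a simplified reimplementation of the Power et al. definition (sum of absolute backward differences of the six rigid-body parameters, with rotations converted to millimetres on a 50 mm sphere), not fMRIPrep's own Nipype code, and the motion parameters are invented for illustration.

```python
import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    """Frame-wise displacement following Power et al.

    motion_params: (T, 6) array; columns are three translations (mm)
    followed by three rotations (radians).
    """
    diff = np.abs(np.diff(motion_params, axis=0))
    diff[:, 3:] *= radius                    # rotation -> arc length on sphere
    fd = diff.sum(axis=1)
    return np.concatenate([[0.0], fd])       # FD of the first volume is zero

# Hypothetical two-volume example: small translations plus tiny rotations.
params = np.array([
    [0.00,  0.00, 0.00, 0.000, 0.000,  0.000],
    [0.10, -0.05, 0.02, 0.001, 0.000, -0.001],
])
fd = framewise_displacement(params)
```

For the second volume this gives 0.1 + 0.05 + 0.02 + 50 * (0.001 + 0.001) = 0.27 mm.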
Principal components were estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with a 128 s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). Six tCompCor components were then calculated from the top 5% most variable voxels within a mask covering the subcortical regions. This subcortical mask is obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For aCompCor, six components were calculated within the intersection of the aforementioned mask and the union of the CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. The BOLD time-series were resampled to surfaces in the following spaces: fsnative and fsaverage. All resamplings can be performed with a single interpolation step by composing all the pertinent transformations (i.e., head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and template spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels118. Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer). Following preprocessing with fMRIPrep, the fMRI data were spatially smoothed with a Gaussian kernel with a full-width at half-maximum (FWHM) of 4 mm, using an example Nipype smoothing workflow (see the Nipype documentation for details) based on the SUSAN algorithm as implemented in the FMRIB Software Library (FSL)119.
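The aCompCor step can be sketched schematically: regress a discrete cosine drift basis (128 s cut-off) out of the noise-ROI voxel time-series, then take the top principal components of the residuals as confound regressors. This is not fMRIPrep's implementation; the TR, matrix sizes, random data, and the particular DCT order formula (a common formulation, floor(2 * T * TR / cutoff)) are assumptions for illustration.

```python
import numpy as np

def dct_basis(n_vols, tr, cutoff=128.0):
    """Discrete cosine drift basis with periods longer than `cutoff` seconds."""
    t = np.arange(n_vols)
    order = int(np.floor(2 * n_vols * tr / cutoff))
    return np.column_stack(
        [np.cos(np.pi * (t + 0.5) * k / n_vols) for k in range(1, order + 1)]
    )

def acompcor(roi_ts, tr, n_comps=6, cutoff=128.0):
    """Schematic aCompCor. roi_ts: (T, V) time-series from the CSF/WM mask."""
    T = roi_ts.shape[0]
    X = np.column_stack([np.ones(T), dct_basis(T, tr, cutoff)])  # drift model
    beta, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
    resid = roi_ts - X @ beta                    # high-pass filtered data
    u, s, _ = np.linalg.svd(resid, full_matrices=False)
    return u[:, :n_comps] * s[:n_comps]          # top component time-series

# Hypothetical run: 120 volumes, TR = 1.25 s, 500 noise-ROI voxels.
rng = np.random.default_rng(1)
comps = acompcor(rng.standard_normal((120, 500)), tr=1.25)
print(comps.shape)   # (120, 6)
```

The resulting six component time-series are what would be written to the confounds file alongside FD, DVARS, the global signals, and the head-motion estimates.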
