For a two-dimensional matrix A, the convolution operation at the matrix coordinates (x, y) could be expressed as Eq (1):

S(x, y) = (A * K)(x, y) = \sum_{m} \sum_{n} A(m, n) K(x - m, y - n)   (1)

In the equation, K was the convolution kernel; S was the convolution result; and m and n were the summation indices, which corresponded to the numbers of neurons in the input layer and the output layer, respectively.
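As an illustration, the discrete convolution of Eq (1) can be implemented directly. The following NumPy sketch uses a deliberately naive double loop and a "valid" output size; both are illustration choices, not part of the original method:

```python
import numpy as np

def conv2d(A, K):
    """Direct 2D convolution S(x, y) = sum_m sum_n A(m, n) K(x - m, y - n).

    Minimal 'valid'-mode sketch of Eq (1): A is the input matrix,
    K the convolution kernel.
    """
    kh, kw = K.shape
    H, W = A.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    Kf = K[::-1, ::-1]  # flip the kernel: true convolution, not correlation
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(A[x:x + kh, y:y + kw] * Kf)
    return out
```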
a ∈ C^N was assumed to be a vectorized MRI image with complex pixel values. Accelerated imaging had to restore the clear MRI image a from the observed vectorized K-space data b ∈ C^M, where the K-space observation could be calculated as b = F_u a, with F_u ∈ C^{M×N} (M ≪ N) being the undersampled Fourier encoding matrix. This matrix could be written as F_u = M_m F, where M_m ∈ C^{M×N} referred to the sampling template, and F ∈ C^{N×N} was the two-dimensional discrete Fourier transform. For a general image matrix a_f of size M×N, its Fourier transform F could be expressed as Eq (2) below:

F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} a_f(x, y) e^{-i 2\pi (ux/M + vy/N)}   (2)
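The observation model b = M_m F a can be sketched in a few lines. The image size, sampling rate, and random test data below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# 'image' stands in for the MRI image a, 'mask' for the sampling template M_m.
image = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25          # keep ~25% of k-space (M << N)

kspace = np.fft.fft2(image)                 # F a: the full k-space
b = kspace[mask]                            # b = M_m F a = F_u a, the M observed samples

# Zero-filled reconstruction: the artifact-bearing image a_mu used as network input
zero_filled = np.zeros_like(kspace)
zero_filled[mask] = b
a_mu = np.fft.ifft2(zero_filled)
```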
When only the K-space observation b was available, a could not be recovered directly from the above relation, and it was necessary to impose constraints on a, which could be expressed as Eq (3):

\min_a \; \beta(a) + \lambda \| F_u a - b \|_2^2   (3)

In the above equation, β was the constraint term imposed on a, and λ was the regularization parameter. The constraint based on the deep learning model could then be expressed as β(a) = \| a - f(a_\mu; \theta) \|_2^2, where f was the deep learning model and a_μ was the artifact-bearing MRI image reconstructed directly from b.
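A minimal sketch of evaluating the constrained objective of Eq (3), assuming a boolean sampling mask laid out as above and any callable f standing in for the trained model:

```python
import numpy as np

def objective(a, a_mu, f, b, mask, lam):
    """Value of Eq (3) with the deep-learning constraint beta(a) = ||a - f(a_mu)||^2.

    Sketch under assumed shapes: 'f' is any callable standing in for the
    trained model, 'mask' the boolean sampling template, 'lam' the
    regularization weight.
    """
    prior = np.sum(np.abs(a - f(a_mu)) ** 2)        # constraint term beta(a)
    fa = np.fft.fft2(a)
    fidelity = np.sum(np.abs(fa[mask] - b) ** 2)    # ||F_u a - b||^2
    return prior + lam * fidelity
```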
In order to obtain a clear MRI image, the data fidelity term was combined with the CNN. The known information was mainly obtained from the reconstruction of the K-space observation b, which could be expressed as Eq (4) below:

a_{rec} = \arg\min_a \; \| a - f(a_\mu; \theta) \|_2^2 + \lambda \sum_{k \in \Omega} | (Fa)(k) - b(k) |^2   (4)

In the above equation, Ω referred to the index set of the acquired samples in the K-space observation b, θ represented the parameters of the CNN model, and a_rec was the prediction result of the CNN model. A training pair (a_μ, a_g) was formed by the MRI image a_μ with artifacts and a clear MRI image a_g; then Eq (5) could be adopted for training:

\hat{\theta} = \arg\min_\theta \sum_i \varepsilon\big( f(a_\mu^{(i)}; \theta), \, a_g^{(i)} \big)   (5)

where ε was the loss function used to train the network.
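A toy training loop in the spirit of Eq (5). The two-channel real/imaginary layout, the network depth, the MSE loss, and the random data are all assumptions for illustration, not the authors' architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of Eq (5): the CNN f(.; theta) maps the artifact image
# a_mu to a prediction that is compared against the clean image a_g.
model = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),
)
loss_fn = nn.MSELoss()                      # the loss epsilon
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

a_mu = torch.randn(8, 2, 64, 64)            # batch of artifact images (real, imag channels)
a_g = torch.randn(8, 2, 64, 64)             # matching clean ground truth

for step in range(100):
    optimizer.zero_grad()
    a_rec = model(a_mu)                     # prediction a_rec = f(a_mu; theta)
    loss = loss_fn(a_rec, a_g)              # epsilon(f(a_mu; theta), a_g)
    loss.backward()
    optimizer.step()
```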
In order to integrate the fidelity term into the network model, the network model parameters θ were assumed to be fixed; then the final K-space result output after the fidelity term was added to the network model had the closed-form solution:

\hat{s}_{rec}(k) = \hat{s}_{cnn}(k), \quad k \notin \Omega
\hat{s}_{rec}(k) = \frac{\hat{s}_{cnn}(k) + \kappa s_0(k)}{1 + \kappa}, \quad k \in \Omega   (6)

In the above equation, k indexed the K-space and κ was a constant. ŝ_cnn was the Fourier transform of the network model output a_rec; s_0 was the Fourier transform of the network model input a_μ, i.e., the K-space observation b; and ŝ_rec was the final output K-space result after the fidelity term was added, to which the inverse Fourier transform was applied to obtain the MRI image finally output by the network model. k ∈ Ω indicated that the predicted MRI image, once transformed into K-space, fell on an observed sample.
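The closed-form correction of Eq (6) amounts to blending predicted and observed k-space samples. A NumPy sketch, with the mask/observation layout assumed as above:

```python
import numpy as np

def data_consistency(a_rec, b, mask, kappa):
    """Closed-form k-space correction of Eq (6).

    Keeps the CNN prediction where no data were acquired and blends it
    with the observation where they were; as kappa grows, the measured
    samples replace the prediction exactly.  Shapes/names are assumptions.
    """
    s_cnn = np.fft.fft2(a_rec)                            # F a_rec
    s_out = s_cnn.copy()
    s_out[mask] = (s_cnn[mask] + kappa * b) / (1 + kappa) # k in Omega
    return np.fft.ifft2(s_out)                            # back to the image domain
```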
For the K-space matrix corresponding to the two-dimensional matrix a_in, the final output K-space result could be expressed as \hat{s}_{rec} = \Lambda F a_{in} + \frac{\kappa}{1+\kappa} s_0, where s_0 was the Fourier transform of a_μs, and a_μs was the matrix form of the artifact-bearing MRI image a_μ input to the network. In addition, Λ was the diagonal matrix, whose entries could be calculated as Λ_kk = 1 for k ∉ Ω and Λ_kk = 1/(1+κ) for k ∈ Ω. The corrected K-space values were transformed back into the image domain to obtain the forward transfer of the K-space correction layer. Then, the result of the forward transfer was given as follows:

f_L(a_{in}) = F^H \Big( \Lambda F a_{in} + \frac{\kappa}{1+\kappa} s_0 \Big)   (7)
The back-propagation gradient was calculated further. Since the two-dimensional discrete Fourier transform matrix F acted as a linear transformation of a_in, the derivative of f_L with respect to a_in was \partial f_L / \partial a_{in} = F^H \Lambda F.
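Because the s_0 term is constant, the forward transfer of Eq (7) is affine in a_in and its Jacobian is F^H Λ F. The sketch below implements f_L and checks linearity of its homogeneous part; all shapes and the test setup are assumptions:

```python
import numpy as np

def dc_forward(a_in, s0_full, mask, kappa):
    """Forward transfer f_L(a_in) = F^H (Lambda F a_in + kappa/(1+kappa) s0).

    'Lambda' is realized as an element-wise factor: 1 where k is not in
    Omega, 1/(1+kappa) where it is; s0_full is b embedded at the sampled
    positions of the k-space grid.
    """
    lam_diag = np.where(mask, 1.0 / (1.0 + kappa), 1.0)
    s = lam_diag * np.fft.fft2(a_in) + (kappa / (1.0 + kappa)) * s0_full
    return np.fft.ifft2(s)

# With s0 = 0 only the homogeneous part F^H Lambda F remains, so the map
# must be linear -- a quick numerical check of the stated Jacobian:
rng = np.random.default_rng(1)
mask = rng.random((32, 32)) < 0.3
x1, x2 = rng.standard_normal((2, 32, 32))
g = lambda x: dc_forward(x, np.zeros((32, 32)), mask, 2.0)
assert np.allclose(g(x1 + x2), g(x1) + g(x2))
```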
The original K-space data were complex-valued and could be expressed as y = c + di, where c was the real part and d was the imaginary part, both real-valued. With a complex weight W = C + Di as input, the complex convolution could be written as Eq (8) below:

W * y = (C * c − D * d) + i (D * c + C * d)   (8)
After complex convolution, the corresponding complex-valued feature map was obtained, whose real and imaginary parts could be expressed as follows:

Re(W * y) = C * c − D * d
Im(W * y) = D * c + C * d

Thus, for N feature maps, the first N/2 feature maps were the real part and the latter N/2 feature maps were the imaginary part. With M output feature maps in the next layer and a weight of size m × m, both the real part and the imaginary part of each weight were of size m × m, so the number of parameters to learn from the current layer to the next was 2 × (N/2) × (M/2) × m × m = NMm²/2.
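Eq (8) reduces one complex convolution to four real convolutions. A sketch using scipy.signal.convolve2d, where the 'valid' mode and the SciPy backend are implementation assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(C, D, c, d):
    """Complex convolution of Eq (8): W * y with W = C + Di and y = c + di.

    Implemented as four real 2D convolutions.  Each complex kernel carries
    2 * m * m real parameters (its real and imaginary parts).
    """
    real = convolve2d(c, C, mode="valid") - convolve2d(d, D, mode="valid")  # Re(W*y)
    imag = convolve2d(c, D, mode="valid") + convolve2d(d, C, mode="valid")  # Im(W*y)
    return real, imag
```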