3.3. The Chambolle–Pock Algorithm

Elena Loli Piccolomini
Elena Morotti

The CP algorithm (originally proposed in [36]) solves the constrained minimization problem stated in (9) by considering an equivalent unconstrained formulation:

where

and the $B_2$ ball is the set $B_2(\epsilon) = \{ y \in \mathbb{R}^{N_d} \ \text{such that} \ \|y\|_2 \le \epsilon \}$, whose indicator function $\delta_{B_2(\epsilon)}$ is defined as

$$\delta_{B_2(\epsilon)}(y) = \begin{cases} 0 & \text{if } \|y\|_2 \le \epsilon, \\ +\infty & \text{otherwise.} \end{cases}$$
In addition, we set

$$G(x) = \delta_{\Omega}(x),$$

the indicator function of the convex set $\Omega = \{ x \in \mathbb{R}^{N_v} \ \text{such that} \ x \ge 0 \}$, to embed the non-negativity constraint in the unconstrained formulation. Equation (17) allows us to fully exploit the linearity of the $K$ transform, which combines here both the X-ray projector $M$ and the discrete gradient operator, if we build $K$ as the following four-block matrix:

where $\nabla_x$, $\nabla_y$ and $\nabla_z$ are the forward difference operators acting along the X, Y and Z axes, respectively.
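Assuming the volume is stored as a 3D NumPy array, the forward-difference blocks of $K$ can be sketched as follows; the function names, the zero boundary convention at the last slice, and the placeholder projector `M_apply` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def grad_x(v):
    """Forward difference along the X axis (axis 0); zero at the last slice."""
    g = np.zeros_like(v)
    g[:-1, :, :] = v[1:, :, :] - v[:-1, :, :]
    return g

def grad_y(v):
    """Forward difference along the Y axis (axis 1)."""
    g = np.zeros_like(v)
    g[:, :-1, :] = v[:, 1:, :] - v[:, :-1, :]
    return g

def grad_z(v):
    """Forward difference along the Z axis (axis 2)."""
    g = np.zeros_like(v)
    g[:, :, :-1] = v[:, :, 1:] - v[:, :, :-1]
    return g

def K_apply(M_apply, v):
    """Apply the four-block operator K = (M; grad_x; grad_y; grad_z) to a volume v.

    M_apply is a placeholder for the X-ray projector."""
    return M_apply(v), grad_x(v), grad_y(v), grad_z(v)
```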

Considering the convex conjugate $F^*$ of $F$, defined as $F^*(y) = \max_x \{ x^T y - F(x) \}$, and the proximal mappings of $F^*$ and $G$, i.e.,

$$\mathrm{prox}_\sigma[F^*](y) = \arg\min_u \left\{ F^*(u) + \frac{\|u - y\|_2^2}{2\sigma} \right\}, \qquad \mathrm{prox}_\tau[G](x) = \arg\min_u \left\{ G(u) + \frac{\|u - x\|_2^2}{2\tau} \right\},$$
the $k$-th CP iteration can be summarized in the following three steps:

compute $y^{(k+1)} = \mathrm{prox}_\sigma[F^*](y^{(k)} + \sigma K \bar{x}^{(k)})$;

compute $x^{(k+1)} = \mathrm{prox}_\tau[G](x^{(k)} - \tau K^T y^{(k+1)})$;

define $\bar{x}^{(k+1)}$ with an extrapolation step: $\bar{x}^{(k+1)} = x^{(k+1)} + \theta (x^{(k+1)} - x^{(k)})$, with $\theta > 0$.

We denote by $(\nabla_x; \nabla_y; \nabla_z)$ a matrix of size $3N_v \times N_v$, and we use Matlab-like notation ($.*$ and $./$) to indicate element-by-element matrix multiplication and division. In particular, the proximal mapping $\mathrm{prox}_\sigma[F^*]$ can be computed as the sum of two independent blocks, as in lines 5–6 and 7–9 of Algorithm 3; its detailed derivation for 2D imaging applications can be found in [37]. We have here extended the method to 3D imaging. For the sake of clarity, we remark that the weights $w^{(k)}$, $\bar{w}^{(k)}$ and $\bar{\bar{w}}^{(k)}$ in lines 7–9 are stored in $3N_v$-dimensional variables $w = (w_1, w_2, w_3)$, where $w_i \in \mathbb{R}^{N_v}$ for $i = 1, 2, 3$, whereas $|\bar{w}|$ is a vector of size $N_v$ with components

$$|\bar{w}|_j = \sqrt{(\bar{w}_1)_j^2 + (\bar{w}_2)_j^2 + (\bar{w}_3)_j^2}, \qquad j = 1, \ldots, N_v.$$
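The element-wise shrinkage of lines 8–9 can be sketched in NumPy as follows; the function name and the storage of the three gradient blocks as separate 1D arrays are illustrative assumptions:

```python
import numpy as np

def prox_tv_dual(wbar1, wbar2, wbar3, lam):
    """Element-wise shrinkage of lines 8-9 of Algorithm 3: each per-voxel
    3-vector (wbar1_j, wbar2_j, wbar3_j) is rescaled by lam / max(lam, |wbar|_j),
    i.e. projected onto the ball of radius lam."""
    mag = np.sqrt(wbar1**2 + wbar2**2 + wbar3**2)  # |wbar|, one value per voxel
    scale = lam / np.maximum(lam, mag)
    return wbar1 * scale, wbar2 * scale, wbar3 * scale
```

Voxels whose gradient magnitude already lies below $\lambda$ are left unchanged (the scale factor is 1); larger ones are shrunk onto the $\lambda$-ball.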
Algorithm 3

1: Compute $\Gamma$ as an approximation of $\|K\|_2$
2: Initialize $\tau = \sigma = \frac{1}{\Gamma} > 0$ and $\theta \in [0, 1]$
3: Initialize $x^{(0)} \in \mathbb{R}^{N_v}$ (with $x^{(0)} \ge 0$) and $\bar{x}^{(0)} \in \mathbb{R}^{N_v}$; set $y^{(0)} \in \mathbb{R}^{N_d}$ and $w^{(0)} \in \mathbb{R}^{3 N_v}$ to zero vectors
4: for $k = 0$ to $\mathrm{maxiter} - 1$ do
5:  $\bar{y}^{(k)} = y^{(k)} + \sigma (M \bar{x}^{(k)} - b)$
6:  $y^{(k+1)} = \max(\|\bar{y}^{(k)}\|_2 - \sigma \epsilon,\, 0) \, \bar{y}^{(k)} / \|\bar{y}^{(k)}\|_2$
7:  $\bar{w}^{(k)} = w^{(k)} + \sigma (\nabla_x; \nabla_y; \nabla_z) \, \bar{x}^{(k)}$
8:  $\bar{\bar{w}}^{(k)} = (|\bar{w}^{(k)}|, |\bar{w}^{(k)}|, |\bar{w}^{(k)}|)$
9:  $w^{(k+1)} = \bar{w}^{(k)} \mathbin{.*} \big(\lambda \mathbin{./} \max(\lambda, \bar{\bar{w}}^{(k)})\big)$
10: $x^{(k+1)} = x^{(k)} - \tau \big(M^T y^{(k+1)} + (\nabla_x; \nabla_y; \nabla_z)^T w^{(k+1)}\big)$
11: $x^{(k+1)} = P_+(x^{(k+1)})$
12: $\bar{x}^{(k+1)} = x^{(k+1)} + \theta (x^{(k+1)} - x^{(k)})$
13: end for
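A minimal NumPy sketch of the iteration above, with the projector $M$, its adjoint and the discrete gradient passed in as placeholder callables; the function name, calling conventions and the flat-vector storage are assumptions, not the authors' code:

```python
import numpy as np

def chambolle_pock(M, MT, D, DT, b, eps, lam, Gamma, Nv, maxiter=100, theta=1.0):
    """Sketch of Algorithm 3.

    M, MT : projector and its adjoint (callables on flat vectors of length Nv)
    D, DT : discrete gradient (Nv -> (3, Nv) array) and its adjoint
    """
    tau = sigma = 1.0 / Gamma                      # line 2
    x = np.zeros(Nv)                               # line 3
    x_bar = x.copy()
    y = np.zeros_like(b)
    w = np.zeros((3, Nv))
    for _ in range(maxiter):                       # line 4
        # lines 5-6: prox of the data-fidelity conjugate (B2-ball part)
        y_bar = y + sigma * (M(x_bar) - b)
        ny = np.linalg.norm(y_bar)
        y = max(ny - sigma * eps, 0.0) * y_bar / ny if ny > 0 else np.zeros_like(y_bar)
        # lines 7-9: prox of the TV conjugate (element-wise shrinkage)
        w_bar = w + sigma * D(x_bar)
        mag = np.sqrt((w_bar ** 2).sum(axis=0))    # |w_bar|, one value per voxel
        w = w_bar * (lam / np.maximum(lam, mag))
        # lines 10-11: gradient step, then projection P_+ onto x >= 0
        x_new = x - tau * (MT(y) + DT(w))
        x_new = np.maximum(x_new, 0.0)
        # line 12: extrapolation
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```

With a toy 1D problem ($M$ the identity, a single forward-difference direction, $\epsilon = 0$ and a small $\lambda$) the iterates converge to the non-negative data vector, which is a quick sanity check of the loop.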

The proximal mapping of $G$ is defined as

$$\mathrm{prox}_\tau[G](x) = \arg\min_u \left\{ \delta_{\Omega}(u) + \frac{\|u - x\|_2^2}{2\tau} \right\};$$

hence, it is exactly the projection $P_+(x)$ of $x$ onto the feasible set $\Omega$ (line 11 of Algorithm 3). The extrapolated iterate $\bar{x}^{(k+1)}$ is then computed as in line 12 of Algorithm 3. The convergence of the algorithm is proved in [36].

We finally observe that the algorithm needs to compute the value $\Gamma$ (line 1 of Algorithm 3): to estimate the matrix 2-norm $\Gamma \simeq \|K\|_2 = \sqrt{\rho(K^T K)}$ (where $\rho$ denotes the spectral radius of a matrix), we perform two iterations of the power method for the computation of the maximum eigenvalue in modulus [38].
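This estimate can be sketched by applying the power method to $K^T K$; the operators are placeholder callables and the seeding of the start vector is an illustrative choice:

```python
import numpy as np

def estimate_K_norm2(K_apply, KT_apply, n, iters=2, seed=0):
    """Estimate ||K||_2 = sqrt(rho(K^T K)) with a few power-method iterations.

    K_apply, KT_apply : callables applying K and K^T to flat vectors
    n                 : dimension of the domain of K
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        u = KT_apply(K_apply(v))   # one application of K^T K
        lam = np.linalg.norm(u)    # current estimate of rho(K^T K)
        v = u / lam
    return np.sqrt(lam)
```

Two iterations give only a rough (lower) estimate of $\|K\|_2$, which suffices here because $\Gamma$ merely sets the step sizes $\tau = \sigma = 1/\Gamma$.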
