
# Weight optimization

This protocol is extracted from the research article:
*Optimal network topology for responsive collective behavior*

## Procedure

The optimal distribution of weights A (or, equivalently, W) for the collective frequency response can be obtained by solving

$$\frac{\delta H_2}{\delta W} = 0 \tag{6}$$

subject to the conditions $\sum_{j=0}^{N} w_{ij} = 0$ for all $i$. This gradient can be written as

$$\frac{\partial H_2}{\partial w_{ij}} = \left[W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-2}\right]_i (W_L)_j + \text{h.c.}$$

$$\frac{\partial H_2}{\partial w_{i0}} = \left[W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-1}\right]_i + \text{h.c.}$$

$$\frac{\partial H_2}{\partial w_{0j}} = 0 \tag{7}$$

where h.c. stands for the Hermitian conjugate of the preceding expression.
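These gradients are straightforward to evaluate numerically once $W$ is split into its follower-follower block $W_F$ and its follower-to-leader column $W_L$. Below is a minimal sketch; the function name, the block convention, and the response definition $H_2(\omega) = \|(i\omega - W_F)^{-1} W_L\|^2$ are assumptions of this sketch, since the excerpt does not restate $H_2$ explicitly.

```python
import numpy as np

def h2_response(W, omega, leader=0):
    """Frequency response at angular frequency omega.

    Assumes H2 = ||(i*omega*I - W_F)^{-1} W_L||^2, where W_F is the
    follower-follower block of W and W_L the follower-to-leader column;
    this definition is an assumption of the sketch.
    """
    followers = [i for i in range(W.shape[0]) if i != leader]
    W_F = W[np.ix_(followers, followers)]   # follower-follower block
    W_L = W[followers, leader]              # follower-to-leader coupling
    M = np.linalg.inv(1j * omega * np.eye(len(followers)) - W_F)
    v = M @ W_L
    return float(np.real(np.vdot(v, v)))

# Tiny example: 4 agents, w_ii = -1, each follower splits its weight
# between the leader (agent 0) and one other follower, so rows sum to 0.
N = 4
W = -np.eye(N)
for i in range(1, N):
    W[i, 0] = 0.5
    W[i, (i % (N - 1)) + 1] = 0.5
print(h2_response(W, omega=0.3))
```

The matrix inverse exists for any real $\omega \neq 0$ whenever $W_F$ has no eigenvalue equal to $i\omega$, which holds in particular for symmetric $W_F$.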

If no constraints are imposed on the weights, a trivial solution is obtained in which the response is maximized by having all the agents connected only to the leader. For this study to be applicable to cases where the leader is not known in advance and where it may even change with time, one needs to impose some additional symmetries either in H2 or in W directly.

One option to introduce this symmetry in $H_2$ is to compute the frequency response not with the leader fixed to a particular agent, but by averaging over all possible leaders instead. This option is appropriate for small systems; for relatively large systems, it is more suitable to impose symmetries on W directly, which reduces the number of free variables (which otherwise grows as $N^2$).
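The leader-averaged option can be sketched as follows; the helper names and the form of $H_2$ are assumptions carried over from the sketch above, not code from the article.

```python
import numpy as np

def h2_response(W, omega, leader):
    # Assumed response H2 = ||(i*omega*I - W_F)^{-1} W_L||^2.
    f = [i for i in range(W.shape[0]) if i != leader]
    M = np.linalg.inv(1j * omega * np.eye(len(f)) - W[np.ix_(f, f)])
    v = M @ W[f, leader]
    return float(np.real(np.vdot(v, v)))

def h2_leader_averaged(W, omega):
    """Symmetrized response: average H2 over every possible choice of
    leader, for when the leader is not known in advance (or changes)."""
    N = W.shape[0]
    return sum(h2_response(W, omega, l) for l in range(N)) / N

# On a fully symmetric topology every leader is equivalent, so the
# average coincides with any single-leader response.
N = 5
W = np.full((N, N), 1.0 / (N - 1))
np.fill_diagonal(W, -1.0)
```

Averaging multiplies the cost of each evaluation by $N$, which is why this route is only recommended above for small systems.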

To impose arbitrary symmetries on the system, one can constrain the weights with a given parametrization

$$w_{ij} = F(i, j, \{c_k\}) \tag{8}$$

where $\{c_k\}$ is a set of free parameters. The gradient of $H_2$ with respect to these parameters is

$$\frac{\partial H_2}{\partial c_k} = \sum_{ij} \frac{\partial H_2}{\partial w_{ij}} \frac{\partial w_{ij}}{\partial c_k} = W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-2} \frac{\partial W_F}{\partial c_k} W_L + W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-1} \frac{\partial W_L}{\partial c_k} + \text{h.c.} \tag{9}$$

We consider the case where the connection between agents depends only on the topological distance between them. Given a measure of topological distance $d(i, j)$, this condition can be written as a linear parametrization of the form

$$w_{ij} = \sum_k c_k\, m_{ij}^k \tag{10}$$

where

$$m_{ij}^k = \begin{cases} 1 & \text{if } d(i, j) = k \\ 0 & \text{otherwise} \end{cases} \tag{11}$$
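The masks $m_{ij}^k$ can be built from the interaction graph with a breadth-first search over hops; this is a sketch with hypothetical helper names, assuming the indicator form of Eq. 11.

```python
import numpy as np
from collections import deque

def hop_distances(adj):
    """All-pairs topological (hop) distance d(i, j) via BFS."""
    N = len(adj)
    d = np.full((N, N), -1, dtype=int)
    for s in range(N):
        d[s, s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(adj[u])[0]:
                if d[s, v] < 0:
                    d[s, v] = d[s, u] + 1
                    queue.append(v)
    return d

def distance_masks(adj, kmax):
    """Masks M^k with entries m_ij^k = 1 if d(i, j) = k, else 0."""
    d = hop_distances(adj)
    return [(d == k).astype(float) for k in range(1, kmax + 1)]

def weights_from_parameters(adj, c):
    """w_ij = sum_k c_k m_ij^k (Eq. 10), with the diagonal fixed to -1."""
    W = sum(ck * Mk for ck, Mk in zip(c, distance_masks(adj, len(c))))
    np.fill_diagonal(W, -1.0)
    return W

# 4-agent ring: neighbors (distance 1) get c[0], the opposite agent c[1].
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
W = weights_from_parameters(adj, [0.5, 0.25])
```

Because $d(i, j) = d(j, i)$ on an undirected graph, the resulting W is symmetric, and the number of free parameters drops from $O(N^2)$ to the graph diameter.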

With a linear parametrization, we obtain a closed form for the gradient:

$$\frac{\partial H_2}{\partial c_k} = W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-2} M^k W_L + W_L^\dagger \left[(i\omega - W_F)^{-1}\right]^\dagger (i\omega - W_F)^{-1} M_L^k + \text{h.c.} \tag{12}$$

where $M^k = \{m_{ij}^k\}$ and $M_L^k = \{m_{i0}^k\}$.
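Eq. 12 can be transcribed more or less directly; the block conventions here ($W_F$ follower-follower, $W_L$ follower-to-leader, with $M^k$ and $M_L^k$ restricted accordingly) are assumptions of this sketch and should be checked against the article.

```python
import numpy as np

def grad_h2_ck(W_F, W_L, Mks, MLks, omega):
    """Closed-form gradient of H2 w.r.t. each c_k (transcription of Eq. 12).

    Mks[k]  : follower-follower mask M^k = {m_ij^k}
    MLks[k] : follower-to-leader column M_L^k = {m_i0^k}
    """
    Nf = W_F.shape[0]
    M1 = np.linalg.inv(1j * omega * np.eye(Nf) - W_F)   # (i*omega - W_F)^-1
    M2 = M1 @ M1                                        # (i*omega - W_F)^-2
    grad = []
    for Mk, MLk in zip(Mks, MLks):
        term = (W_L.conj() @ M1.conj().T @ M2 @ Mk @ W_L
                + W_L.conj() @ M1.conj().T @ M1 @ MLk)
        grad.append(2.0 * np.real(term))   # a scalar plus its h.c. is 2*Re
    return np.array(grad)

# Small random symmetric example (symmetry guarantees invertibility).
rng = np.random.default_rng(1)
A = 0.2 * rng.standard_normal((3, 3))
W_F = -np.eye(3) + (A + A.T) / 2
W_L = rng.standard_normal(3)
Mks = [rng.integers(0, 2, (3, 3)).astype(float) for _ in range(2)]
MLks = [rng.integers(0, 2, 3).astype(float) for _ in range(2)]
g = grad_h2_ck(W_F, W_L, Mks, MLks, omega=0.7)
```

Each gradient component is a scalar of the form $z + \bar{z}$, so the returned array is real, one entry per parameter $c_k$.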

The normalization of the weights during the optimization can be imposed through a Lagrange multiplier. Thus, instead of maximizing $H_2$ directly, we define a cost function of the form

$$\mathcal{L} = H_2 - \frac{\lambda}{2} \sum_{i=0}^{N} \left| \sum_{j=0}^{N} w_{ij} \right|^2 \tag{13}$$

Using this cost function, the optimization problem over the $w_{ij}$ with $i \neq j$ (the diagonal terms are fixed to $w_{ii} = -1$) can be written as

$$\frac{\partial \mathcal{L}}{\partial w_{ij}} = \frac{\partial H_2}{\partial w_{ij}} - \lambda \sum_{l=0}^{N} w_{il} = 0$$

$$\frac{\partial \mathcal{L}}{\partial \lambda} = -\frac{1}{2} \sum_{ij} (W^\dagger W)_{ij} = 0 \tag{14}$$
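The penalty in Eq. 13 and the sum over the entries of $W^\dagger W$ in Eq. 14 are the same quantity, via the identity $\sum_{ij}(W^\dagger W)_{ij} = \sum_i \left|\sum_j w_{ij}\right|^2$ (expand $(W^\dagger W)_{jl} = \sum_i \bar{w}_{ij} w_{il}$ and sum over $j, l$). A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))

lhs = np.sum(W.conj().T @ W)               # sum of all entries of W^dagger W
rhs = np.sum(np.abs(W.sum(axis=1)) ** 2)   # sum_i |sum_j w_ij|^2
assert np.isclose(lhs, rhs)
```

The identity holds for complex weights as well, since both sides reduce to $\sum_i |\sum_j w_{ij}|^2$.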

For the linear parametrization of Eq. 10, the gradient of the cost function with respect to $c_k$ can be written as

$$\frac{\partial \mathcal{L}}{\partial c_k} = \frac{\partial H_2}{\partial c_k} - \lambda \sum_{ij} (W^T M^k)_{ij} \tag{15}$$
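Putting the pieces together, the optimization can be sketched as gradient ascent on $\mathcal{L}$ over the parameters $c_k$. For brevity this sketch uses finite-difference gradients in place of the closed forms of Eqs. 12 and 15, a fixed $\lambda$ in place of solving for the multiplier, and the assumed response $H_2 = \|(i\omega - W_F)^{-1} W_L\|^2$; all names are hypothetical.

```python
import numpy as np
from collections import deque

def build_W(adj, c):
    """w_ij = c_k at topological distance k (Eq. 10), w_ii = -1."""
    N = len(adj)
    d = np.full((N, N), -1, dtype=int)
    for s in range(N):                          # BFS hop distances
        d[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(adj[u])[0]:
                if d[s, v] < 0:
                    d[s, v] = d[s, u] + 1
                    q.append(v)
    W = np.zeros((N, N))
    for k, ck in enumerate(c, start=1):
        W[d == k] = ck
    np.fill_diagonal(W, -1.0)
    return W

def cost(c, adj, omega=0.5, lam=10.0, leader=0):
    """L = H2 - (lam/2) * sum_i |sum_j w_ij|^2   (Eq. 13)."""
    W = build_W(adj, c)
    f = [i for i in range(len(W)) if i != leader]
    M = np.linalg.inv(1j * omega * np.eye(len(f)) - W[np.ix_(f, f)])
    v = M @ W[f, leader]
    return float(np.real(np.vdot(v, v)) - 0.5 * lam * np.sum(W.sum(axis=1) ** 2))

# 6-agent ring; one parameter per topological distance class (1, 2, 3).
adj = np.zeros((6, 6), dtype=int)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1

c0 = np.array([0.3, 0.2, 0.1])
c = c0.copy()
eps, step = 1e-6, 1e-3
for _ in range(200):
    g = np.array([(cost(c + eps * e, adj) - cost(c - eps * e, adj)) / (2 * eps)
                  for e in np.eye(len(c))])
    c += step * g
```

With the penalty active, the ascent drives the row sums of W toward zero while increasing the response; in a production implementation the closed-form gradients of Eqs. 12 and 15 would replace the finite differences.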

