We compared EPI to two alternative simulation-based inference techniques, since the likelihood of these eigenvalues given the connectivity is not available. Approximate Bayesian computation (ABC) (Beaumont et al., 2002) is a rejection sampling technique for obtaining sets of parameters z that produce activity x close to some observed data x0. Sequential Monte Carlo approximate Bayesian computation (SMC-ABC) is the state-of-the-art ABC method, which leverages SMC techniques to improve sampling speed. We ran SMC-ABC with the pyABC package (Klinger et al., 2018) to infer RNNs with stable amplification: connectivities having eigenvalues within an ϵ-defined l-2 distance of the observed data x0, d(x, x0) = ||x − x0||2 < ϵ.
SMC-ABC was run with a uniform prior over the connectivity parameters z, a population size of 1000 particles with simulations parallelized over 32 cores, and a multivariate normal transition model.
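For concreteness, the following is a minimal sketch of this setup with pyABC. The simulator, the N = 2 parameterization, the prior bounds, and the target statistics x0 are illustrative assumptions, not the exact values used in this comparison.

```python
import numpy as np
import pyabc

# Hypothetical simulator for N = 2: map connectivity entries to the two
# eigenvalue statistics (max real part of eig(W), max eigenvalue of the
# symmetric part of W).
def simulate(params):
    W = np.array([[params["w00"], params["w01"]],
                  [params["w10"], params["w11"]]])
    real_lam1 = np.max(np.real(np.linalg.eigvals(W)))
    lam1_s = np.max(np.linalg.eigvalsh((W + W.T) / 2.0))
    return {"eigs": np.array([real_lam1, lam1_s])}

# Uniform prior over each connectivity entry (bounds are an assumption).
prior = pyabc.Distribution(
    **{w: pyabc.RV("uniform", -1.0, 2.0) for w in ["w00", "w01", "w10", "w11"]}
)

# l-2 distance between simulated and observed eigenvalue statistics.
def distance(x, x0):
    return np.linalg.norm(x["eigs"] - x0["eigs"])

abc = pyabc.ABCSMC(
    simulate,
    prior,
    distance,
    population_size=1000,
    transitions=pyabc.MultivariateNormalTransition(),
    sampler=pyabc.sampler.MulticoreEvalParallelSampler(n_procs=32),
)
abc.new("sqlite:///smc_abc.db", {"eigs": np.array([0.5, 1.5])})  # x0 is illustrative
history = abc.run(minimum_epsilon=0.5, max_nr_populations=20)
```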
SNPE, the next approach in our comparison, is far more similar to EPI. Like EPI, SNPE approximates distributions over the parameters of mechanistic models with deep probability distributions, yet the two learning algorithms are categorically different. SNPE uses a two-network architecture to approximate the posterior distribution of the model parameters conditioned on observed data x0. The amortizing network maps observations x to the parameters of the deep probability distribution. The weights and biases of the parameter network are optimized by sequentially augmenting the training data with additional pairs (z, x) based on the most recent posterior approximation. This sequential procedure is important for bringing the training parameters z closer to the true posterior and the simulated data x closer to the observed data x0. For the deep probability distribution architecture, we chose a masked autoregressive flow with affine couplings (the default choice), three transforms, 50 hidden units, and a normalizing flow mapping to the support as in EPI. This architectural choice closely tracked the size of the architecture used by EPI (Figure 2—figure supplement 1). As in SMC-ABC, we ran SNPE with observed data x0. All SNPE optimizations were run for a limit of 1.5 days, or until two consecutive rounds resulted in a validation log probability lower than the maximum observed for that random seed. It was always faster to run SNPE on a CPU with 32 cores rather than on a Tesla V100 GPU.
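The sequential loop described above can be sketched with the sbi package, one common implementation of SNPE-C. This is not necessarily the exact implementation used here; the simulator and x0 mirror the pyABC sketch above, and the round count and simulations per round are placeholders.

```python
import numpy as np
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Batched simulator: connectivity entries -> eigenvalue statistics
# (same mapping as the pyABC sketch, here for N = 2).
def simulator(theta: torch.Tensor) -> torch.Tensor:
    Ws = theta.reshape(-1, 2, 2).numpy()
    eigs = [[np.max(np.real(np.linalg.eigvals(W))),
             np.max(np.linalg.eigvalsh((W + W.T) / 2.0))] for W in Ws]
    return torch.tensor(eigs, dtype=torch.float32)

prior = BoxUniform(low=-torch.ones(4), high=torch.ones(4))  # bounds assumed
x_o = torch.tensor([0.5, 1.5])                              # illustrative x0

inference = SNPE(prior=prior, density_estimator="maf")
proposal = prior
for _ in range(10):                    # number of rounds is hypothetical
    theta = proposal.sample((1000,))   # simulations per round is hypothetical
    x = simulator(theta)
    density_estimator = inference.append_simulations(
        theta, x, proposal=proposal
    ).train()
    posterior = inference.build_posterior(density_estimator)
    proposal = posterior.set_default_x(x_o)  # next round samples near x0
```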
To compare the efficiency of these algorithms for inferring RNN connectivity distributions producing stable amplification, we develop a convergence criterion that can be used across methods. While EPI has its own hypothesis-testing convergence criterion for the emergent property, it would not make sense to use this criterion on SNPE and SMC-ABC, which do not constrain the means and variances of their predictions. Instead, we consider EPI and SNPE to have converged after completing their most recent optimization epoch (EPI) or round (SNPE) in which the distance d(x, x0) is less than 0.5. We consider SMC-ABC to have converged once the population produces samples within the ϵ-ball ensuring stable amplification.
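One concrete reading of these criteria is sketched below. The stable-amplification check assumes the standard eigenvalue conditions (real(λ1) < 1 for stability, λ1 of the symmetric part of W greater than 1 for amplification), which the ϵ-ball around a suitable x0 is meant to guarantee; the tolerance mirrors the 0.5 threshold in the text.

```python
import numpy as np

def distance(x, x0):
    # l-2 distance between simulated and target eigenvalue statistics
    return np.linalg.norm(np.asarray(x) - np.asarray(x0))

def epi_snpe_converged(x_mean, x0, tol=0.5):
    # EPI/SNPE: converged after the most recent epoch/round in which the
    # distance to x0 falls below the threshold.
    return distance(x_mean, x0) < tol

def smc_abc_converged(population_eigs):
    # SMC-ABC: converged once every particle in the population satisfies
    # stable amplification. population_eigs is an (n, 2) array of
    # [real(lambda_1), lambda_1^s] per particle.
    real_lam1, lam1_s = population_eigs[:, 0], population_eigs[:, 1]
    return bool(np.all((real_lam1 < 1.0) & (lam1_s > 1.0)))
```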
When assessing the scalability of SNPE, it is important to check that alternative hyperparameterizations could not yield better performance. Key hyperparameters of the SNPE optimization are the number of simulations per round, the number of atoms used in the atomic proposals of the SNPE-C algorithm (Greenberg et al., 2019), and the batch size. To match EPI, we used the same batch size as EPI at low N; however, we found a larger batch size to be helpful for SNPE in higher dimensions. While the initial number of simulations per round yielded SNPE convergence at low N, we found that a substantial increase yielded more consistent convergence at higher N (Figure 2—figure supplement 2A). By increasing the number of simulations per round, we also necessarily increase the duration of each round. At the highest dimension tested, we tried two hyperparameter modifications. As suggested in Greenberg et al., 2019, we increased the number of atoms by an order of magnitude to improve gradient quality, but this had little effect on the optimization (much overlap between same random seeds) (Figure 2—figure supplement 2B). Finally, we increased the number of simulations per round by another order of magnitude, which yielded convergence in one case, but no others. We found no way to improve the convergence rate of SNPE without making more aggressive hyperparameter choices requiring high numbers of simulations. In Figure 2C–D, we show samples from the random seed resulting in emergent-property convergence at greatest entropy (EPI), the random seed resulting in greatest validation log probability (SNPE), and the result of all converged random seeds (SMC-ABC).
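In sbi's SNPE-C implementation, the atom count and batch size enter as arguments to the training call; a hedged sketch continuing the loop above, with placeholder values rather than the settings used in this comparison:

```python
# num_atoms and training_batch_size are the knobs discussed in the text.
# The values below are placeholders for illustration only.
density_estimator = inference.append_simulations(theta, x, proposal=proposal).train(
    num_atoms=100,            # e.g., an order of magnitude above sbi's default of 10
    training_batch_size=500,  # larger batches helped SNPE in higher dimensions
)
```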