The message passing neural network (MPNN) is a common GNN framework that was applied to chemical prediction by Gilmer et al. in 2017 [54], and it has since proven versatile in applications such as natural language processing, image segmentation, and chemical/molecular graphs. Many recently proposed GNN architectures for molecular property prediction can be formulated within this flexible framework [24, 26, 34, 37]. Conceptually, MPNN performs convolutions on undirected molecular graphs G = (V, E) with node features x_v and edge features e_vw. The forward propagation of MPNN has two phases: a message passing phase and a readout phase. The message passing phase transmits information across the molecular graph to learn a molecular embedding using message functions M_t and node update functions U_t, and the readout phase computes a feature vector for the whole molecular graph using a readout function R to model the properties of interest. Full mathematical details are given by Gilmer et al. [54]. In the training of MPNN, the following hyperparameters were optimized: L2 regularization (0, 10^-8, 10^-6, 10^-4), learning rate (10^-2.5, 10^-3.5, 10^-1.5), dimension of node features in hidden layers (64, 32, 16), dimension of edge features in hidden layers (64, 32, 16), and number of set2set layers (2, 3, 4). The numbers of message passing steps and set2set steps were both fixed at 6.
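To make the two phases concrete, below is a minimal sketch of an MPNN forward pass in plain PyTorch. It is not the authors' implementation: the set2set readout used in the protocol is replaced by a simple sum-and-MLP readout for brevity, and the class and parameter names (MPNNSketch, edge_net, the toy dimensions) are illustrative assumptions rather than anything specified in the protocol.

```python
import torch
import torch.nn as nn


class MPNNSketch(nn.Module):
    """Minimal MPNN: T message passing steps, then a graph-level readout.

    Hypothetical dimensions; the protocol tunes node/edge hidden sizes
    over {64, 32, 16} and fixes the number of message passing steps to 6.
    """

    def __init__(self, node_dim=16, edge_dim=16, hidden_dim=64, steps=6):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.steps = steps
        self.embed = nn.Linear(node_dim, hidden_dim)
        # Edge network: maps each edge feature e_vw to a message-mixing
        # matrix, playing the role of the message function M_t.
        self.edge_net = nn.Linear(edge_dim, hidden_dim * hidden_dim)
        # GRU cell plays the role of the node update function U_t.
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        # Readout R: sum over nodes + MLP (set2set in the actual protocol).
        self.readout = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, edge_index, edge_attr):
        # x: (num_nodes, node_dim); edge_index: (2, num_edges) src/dst,
        # with both directions stored for an undirected molecular graph;
        # edge_attr: (num_edges, edge_dim).
        h = torch.relu(self.embed(x))
        src, dst = edge_index
        A = self.edge_net(edge_attr).view(-1, self.hidden_dim, self.hidden_dim)
        for _ in range(self.steps):
            # Message m_v = sum over neighbours w of A(e_vw) h_w.
            msgs = torch.bmm(A, h[src].unsqueeze(-1)).squeeze(-1)
            agg = torch.zeros_like(h).index_add_(0, dst, msgs)
            # Node update h_v <- U_t(h_v, m_v) via the GRU cell.
            h = self.update(agg, h)
        # Readout over all nodes yields one graph-level prediction.
        return self.readout(h.sum(dim=0, keepdim=True))


# Toy usage: 3 atoms, 2 undirected bonds stored as 4 directed edges.
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
edge_attr = torch.randn(4, 16)
model = MPNNSketch()
print(model(x, edge_index, edge_attr))  # one scalar property per graph
```

The GRU-based update and edge-conditioned messages follow the variant highlighted by Gilmer et al. [54]; swapping the sum readout for set2set would recover the readout configuration tuned in this protocol.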