Title: Training Algorithm Matters for the Performance of Neural Network Potential
Abstract: One often-overlooked issue in developing neural network potentials (NNPs) is the choice of training algorithm. Here we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network (BPNN) and two publicly accessible datasets of liquid water. We find that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate than those trained with Adam. In both cases, test-set error metrics do not always serve as a good indicator of the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher-information-based similarity measure.
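To illustrate the contrast between the two training algorithms compared in the talk, the sketch below applies one Adam-style and one EKF-style weight update to a toy linear model. This is only a minimal, hedged illustration: the actual study trains a BPNN on liquid-water data, and the toy model, data, and hyperparameters here are all assumptions for demonstration purposes.

```python
# Illustrative only: Adam vs. an extended-Kalman-filter (EKF) update rule
# on a toy linear model y = w . x. The real work trains a Behler-Parrinello
# neural network; this sketch just shows the structural difference between
# the two optimizers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))            # toy inputs (assumed data)
w_true = np.array([1.0, -2.0, 0.5])      # toy ground-truth weights
y = x @ w_true                           # noiseless toy targets

def adam_train(steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam on the mean-squared-error loss."""
    w = np.zeros(3)
    m = np.zeros(3)                      # first-moment estimate
    v = np.zeros(3)                      # second-moment estimate
    for t in range(1, steps + 1):
        g = 2 * x.T @ (x @ w - y) / len(y)        # MSE gradient
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        mhat = m / (1 - b1**t)                    # bias correction
        vhat = v / (1 - b2**t)
        w -= lr * mhat / (np.sqrt(vhat) + eps)
    return w

def ekf_train(epochs=5, p0=100.0, r=1.0):
    """EKF-style training: weights are the filter state, each sample
    is one scalar measurement; for a linear model this reduces to
    recursive least squares."""
    w = np.zeros(3)
    P = p0 * np.eye(3)                   # state covariance
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            H = xi[None, :]              # Jacobian of the model output
            S = H @ P @ H.T + r          # innovation covariance (1x1)
            K = (P @ H.T) / S            # Kalman gain (3x1)
            w = w + (K * (yi - xi @ w)).ravel()   # state update
            P = P - K @ H @ P            # covariance update
    return w
```

Both routines recover weights close to `w_true` on this toy problem; the point is the structural difference: Adam scales a gradient by running moment estimates, whereas the EKF propagates a covariance matrix and weights each residual by a Kalman gain.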
The seminar will be held both in person and via Zoom. A link to the Zoom meeting will be posted on the morning of the event on the CIM mailing list and when registering for the event. The first 40 to register will also receive a free lunch.
Registration for the event.