Speaker: Professor Mark Tuckerman
Institute: NYU
Country: USA
Speaker Link: https://as.nyu.edu/content/nyu-as/as/faculty/mark-e-tuckerman.html

Professor Mark E. Tuckerman

Department of Chemistry, New York University, 100 Washington Square East, New York, NY 10003

Courant Institute of Mathematical Sciences, New York University, 251 Mercer St., New York, NY 10012

NYU-ECNU Center for Computational Chemistry at NYU Shanghai, 3663 Zhongshan Rd. N., Shanghai 200062, People’s Republic of China

Machine learning has become an integral tool in the theoretical and computational molecular sciences. Uses of machine learning in this area include prediction of molecular and materials properties from large databases of descriptors, design of new molecules with desired characteristics, design of chemical reactions and processes, representation of high-dimensional potential energy and free energy surfaces, creation of new enhanced sampling strategies, and bypassing of costly quantum chemical calculations, to highlight just a few. This talk will focus on the use of machine learning in the analysis and creation of new enhanced sampling approaches. I will first provide an overview of different classes of machine learning models, including kernel methods, neural networks, decision-tree approaches, and nearest-neighbor schemes. I will then compare the performance of these models in representing and deploying high-dimensional free-energy surfaces extracted from enhanced-sampling algorithms using only limited sampled data. Finally, I will show how machine learning can be used to create new enhanced sampling approaches capable of generating kinetic pathways between conformational basins and elucidating mechanisms. The performance of this new strategy will be illustrated on a solid-solid structural transition in crystalline molybdenum.
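As a rough, self-contained illustration of the regression task described above (not the speaker's actual models or data), the sketch below fits a toy two-dimensional free-energy surface from a limited set of scattered samples using two of the model classes mentioned in the abstract: kernel ridge regression and a small neural network. The double-well surface, sample count, and hyperparameters are all invented for illustration, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 2D "free-energy surface": a double well along x plus a harmonic term in y.
# This stands in for F(s1, s2) values recovered from an enhanced-sampling run.
def free_energy(x, y):
    return (x**2 - 1.0)**2 + 0.5 * y**2

# A limited, scattered sample of the surface, mimicking sparse sampled data.
X = rng.uniform(-1.8, 1.8, size=(400, 2))
F = free_energy(X[:, 0], X[:, 1])
X_train, X_test, F_train, F_test = train_test_split(X, F, random_state=0)

models = {
    "kernel ridge (RBF)": KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0),
    "neural network (2x32)": MLPRegressor(hidden_layer_sizes=(32, 32),
                                          max_iter=5000, random_state=0),
}

# Fit each model on the training points and report held-out error.
for name, model in models.items():
    model.fit(X_train, F_train)
    rmse = np.sqrt(np.mean((model.predict(X_test) - F_test) ** 2))
    print(f"{name}: test RMSE = {rmse:.3f}")
```

In practice one would compare such models on free-energy values recovered from an actual enhanced-sampling run rather than an analytic toy surface, but the fitting-and-validation loop has the same shape.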


References

1. Stochastic neural network approach for learning high-dimensional free energy surfaces. E. Schneider, L. Dai, R. Q. Topper, C. Drechsel-Grau, M. E. Tuckerman, Phys. Rev. Lett. 119, 150601 (2017).

2. Neural-network based path collective variables for enhanced sampling of phase transformations. J. Rogal, E. Schneider, M. E. Tuckerman, Phys. Rev. Lett. 123, 245701 (2019).

3. Comparison of the performance of machine learning models in representing high-dimensional free-energy surfaces and generating observables. J. R. Cendagorta, J. Tolpin, E. Schneider, R. Q. Topper, M. E. Tuckerman, J. Phys. Chem. B 124, 3647-3660 (2020).

4. Enhanced sampling path integral methods using neural network potential energy surfaces with application to diffusion in hydrogen hydrates. J. R. Cendagorta, H. Y. Shen, Z. Bačić, M. E. Tuckerman, Adv. Theory Simul. 2000258 (2020).

3 thoughts on “Machine learning as a tool for analyzing and creating novel enhanced conformational sampling strategies”

  1. Tuesday, 16 February 2021 11:13

    Thanks, Mark, for the very interesting talk.

    Regarding Rika's comment and the following discussion on the appropriate number of training samples: one general rule of thumb, which I mentioned and which applies to any parameterization-based model, is to have at least ten times as many training samples as model parameters, as suggested in https://doi.org/10.1207/S15328007SEM1001_6. I verified it specifically for neural networks in an earlier study, https://doi.org/10.1016/j.aca.2018.05.015, and have personally found it very effective in several other neural network models I developed recently (a small sketch of the arithmetic follows below).

    Thanks again for the nice talk.
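    To make the arithmetic of that rule of thumb concrete, here is a minimal sketch (an illustration only; the layer sizes are hypothetical) that counts the weight and bias parameters of a small fully connected network and reports the minimum training-set size the ten-samples-per-parameter guideline would suggest:

    ```python
    # Count the weights and biases of a fully connected network, then apply
    # the "at least ten training samples per parameter" rule of thumb.
    def param_count(layer_sizes):
        # Each layer contributes (n_in * n_out) weights plus n_out biases.
        return sum(n_in * n_out + n_out
                   for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

    layers = [2, 32, 32, 1]  # e.g., 2 collective variables in, 1 free energy out
    n_params = param_count(layers)   # (2*32+32) + (32*32+32) + (32*1+1) = 1185
    print(f"parameters: {n_params}, suggested minimum samples: {10 * n_params}")
    ```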

    1. Thursday, 18 February 2021 01:26

      Thanks, Amin.  Interesting.  Do you know if this rule of thumb is robust across a wide range of neural network types and architectures?

      1. Friday, 19 February 2021 09:09

        This rule of thumb was proposed by Jackson, as well as by others (e.g., Kline in his well-known book "Principles and Practice of Structural Equation Modeling"), as a general guideline for parameterizing any model. So I would say that for any neural network model that requires optimizing weights and biases, it should be a robust condition.