Representations in atomistic ML

I have spent some time thinking about fundamental questions in crafting inputs to atomistic ML models, questions that are closely tied to the models' architectures. More specifically, in terms of descriptors: what ingredients are central to their mathematical representations? How can we build in equivariance with respect to the physical symmetries of structures in 3D Euclidean space? How can we ensure that two structures unrelated by symmetry are mapped to different descriptors? And how can we incorporate long-range (Coulomb) interactions into machine learning models while keeping a local description of atomic environments? Another line of research has been identifying the similarities and differences between models that rely on these descriptors as inputs and models that work directly on input structures, such as graph neural networks.
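
As a toy illustration of the invariance and completeness questions above, here is a minimal sketch (an illustrative example for this page, not code from any of the papers listed below): a descriptor built from sorted pairwise distances, which is invariant under rotations, translations, and permutations of identical atoms, together with a numerical check of that invariance. Distance-based descriptors of this kind are also a classic example of the completeness problem, since distinct structures can share the same multiset of distances.

import numpy as np

def toy_descriptor(positions):
    """Sorted pairwise distances: invariant to rotations, translations,
    and permutations of identical atoms, but incomplete in general."""
    n = len(positions)
    dists = [np.linalg.norm(positions[i] - positions[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.sort(np.array(dists))

def random_orthogonal(rng):
    """A random 3x3 orthogonal matrix (a rotation, possibly composed with
    a reflection), drawn via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
pos = rng.standard_normal((5, 3))            # a random 5-atom "structure"
Q = random_orthogonal(rng)
d_ref = toy_descriptor(pos)
d_rot = toy_descriptor(pos @ Q.T + 1.0)      # rotate (or reflect) and translate
print(np.allclose(d_ref, d_rot))             # True: the descriptor is invariant

Equivariant features generalize this idea: rather than collapsing everything to scalar invariants, they transform in a prescribed way under rotation, which is the setting in which the questions above become subtle.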

2026

  1. Jigyasa Nigam, Tess Smidt, and Geneviève Dusson
    arXiv, 2026
  2. Jigyasa Nigam and Max Veit
    Journal of Physics: Condensed Matter (IOP Publishing), 2026

2025

  1. Arthur Yan Lin, Lucas Ortengren, Seonwoo Hwang, Yong-Cheol Cho, Jigyasa Nigam, and Rose K Cersonsky
    Journal of Open Source Software, 2025

2024

  1. Arthur Lin, Kevin K Huguenin-Dumittan, Yong-Cheol Cho, Jigyasa Nigam, and Rose K Cersonsky
    The Journal of Chemical Physics, 2024
  2. Jigyasa Nigam, Sergey N Pozdnyakov, Kevin K Huguenin-Dumittan, and Michele Ceriotti
    APL Machine Learning, 2024

2022

  1. Jigyasa Nigam, Sergey Pozdnyakov, Guillaume Fraux, and Michele Ceriotti
    The Journal of Chemical Physics, 2022

2021

  1. Andrea Grisafi, Jigyasa Nigam, and Michele Ceriotti
    Chemical Science, 2021

2020

  1. Jigyasa Nigam, Sergey Pozdnyakov, and Michele Ceriotti
    The Journal of Chemical Physics, 2020