TLDR: A new research paper proposes a physics-based framework for understanding large language models, demonstrating that a quantity analogous to energy, combining hidden-state velocity with next-token uncertainty, is approximately conserved during inference. The framework reveals distinct dynamical regimes for pre-trained versus random-weight models and leads to Jacobian steering, a control method that minimally perturbs hidden states to improve output quality and semantic coherence.
The research paper “Momentum–Point-Perplexity Mechanics in Large Language Models” examines the internal workings of large language models (LLMs) from a physics-based perspective. Authored by Lorenzo Tomaz, Judd Rosenblatt, Thomas B Jones, and Diogo Schwerz de Lucena of AE Studio, with contributions from PIBBSS, Timaeus, and Apart Research, the study offers a novel way to understand, and potentially control, these powerful AI systems.
The core of the research is an observation about how the hidden states within transformer models change as they process tokens during inference. Surprisingly, the researchers found that a specific quantity, combining the rate of change of these hidden states with the model’s uncertainty about the next token, remains almost constant. This quantity is analogous to energy in classical physics.
The paper introduces a “log-Lagrangian” framework, which views transformer behavior through a lens similar to classical mechanics. Within this framework, a form of “energy” is approximately conserved: the kinetic part relates to the speed at which hidden states change, while the potential part is linked to the model’s point perplexity, essentially how uncertain it is about the next token. High perplexity means high potential energy, indicating greater uncertainty.
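To make the two terms concrete, here is a minimal sketch of how such a per-token energy could be measured with a Hugging Face causal language model. The paper’s exact definition and normalization aren’t given in this summary, so the kinetic term (half the squared norm of the hidden-state step) and the potential term (the negative log-probability, i.e. the log point perplexity, of the observed next token) are assumptions made for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: the paper's exact energy definition may differ.
model_name = "gpt2"  # any small causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The conserved quantity stays roughly constant as tokens are processed."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

h = out.hidden_states[-1][0]                # final-layer hidden states, (seq_len, d_model)
log_probs = out.logits[0].log_softmax(-1)   # next-token log-probabilities per position
ids = inputs["input_ids"][0]

for t in range(len(ids) - 1):
    # Kinetic term: how fast the hidden state moves between adjacent positions.
    kinetic = 0.5 * (h[t + 1] - h[t]).pow(2).sum().item()
    # Potential term: log point perplexity, i.e. -log p of the actual next token.
    potential = -log_probs[t, ids[t + 1]].item()
    print(f"t={t:2d}  kinetic={kinetic:10.2f}  potential={potential:6.2f}  "
          f"energy={kinetic + potential:10.2f}")
```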
Empirical validation across 20 open-source transformer models, ranging from 135 million to 3 billion parameters, consistently showed this energy conservation. Interestingly, models with random, untrained weights conserved this “energy” much more tightly than their pre-trained counterparts. This suggests that the transformer architecture itself promotes the conservation, and that training shifts models into a different dynamical regime.
The study identified two distinct dynamical regimes. Pre-trained models were found to be “kinetic-dominated,” meaning they exhibit rapid, decisive transitions through their hidden space and have lower next-token uncertainty. In contrast, random-weight models showed a more balanced dynamic, with slower, more diffusive movements and higher uncertainty. Training, therefore, pushes models towards faster, more confident predictions, albeit with slightly more energy variability.
Beyond interpretation, this physics-inspired view also yields a practical control method called Jacobian steering. Derived from the principle of least action, it applies the smallest perturbation to hidden states needed to favor a specific target token. In tests, Jacobian steering maintained near-constant energy in the models while producing continuations rated higher in semantic quality than the models’ natural outputs, indicating that control aligned with a model’s natural dynamics yields better, more coherent results.
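The paper’s exact steering procedure isn’t reproduced in this summary, but the core idea can be illustrated with a first-order sketch: take the gradient (one row of the Jacobian) of the target token’s logit with respect to a hidden state, then apply the smallest-norm step that raises that logit by a chosen amount. The function name, the step rule, and the toy linear head below are illustrative assumptions, not the authors’ implementation.

```python
import torch

def minimal_logit_step(hidden, logit_fn, target_id, delta_logit=1.0):
    """Hypothetical first-order, minimal-norm nudge toward a target token.

    Linearizing logit(h + d) ~ logit(h) + g . d, the smallest-norm d that
    raises the target logit by delta_logit is d = delta_logit * g / ||g||^2.
    """
    h = hidden.detach().clone().requires_grad_(True)
    target_logit = logit_fn(h)[target_id]
    (g,) = torch.autograd.grad(target_logit, h)  # row of the logit Jacobian
    step = delta_logit * g / g.pow(2).sum().clamp_min(1e-12)
    return hidden + step

# Toy usage with a random linear "LM head"; in practice logit_fn would wrap
# the model's actual unembedding layer.
d_model, vocab = 64, 100
lm_head = torch.nn.Linear(d_model, vocab, bias=False)
h = torch.randn(d_model)
h_steered = minimal_logit_step(h, lm_head, target_id=42, delta_logit=2.0)
print(lm_head(h)[42].item(), lm_head(h_steered)[42].item())
```

Because the head is linear in this toy, the target logit rises by exactly `delta_logit`; with a real network the linearization only holds for small steps, which is consistent with the “minimal perturbation” framing described above.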
The implications of this research are significant for AI safety and interpretability. By providing a measurable invariant, the conserved “energy,” the framework offers a principled basis for monitoring LLMs, detecting anomalous behavior, and developing low-risk steering mechanisms. It suggests that by understanding transformers as dynamical systems governed by variational principles, we can make these powerful models more predictable and better aligned with human intent. For more details, see the full paper, “Momentum–Point-Perplexity Mechanics in Large Language Models.”


