
Refining embeddings with fill-tuning: data-efficient generalised performance improvements for materials foundation models

Pretrained foundation models learn embeddings that can be used for a wide range of downstream tasks. These embeddings are optimised for general performance, and if they prove insufficiently accurate at a specific task the model can be fine-tuned to improve it. With all current methodologies, this operation necessarily degrades performance on out-of-distribution tasks. In this work we present ‘fill-tuning’, a novel methodology for generating continued-pretraining datasets that are not targeted at a particular downstream task but instead aim to correct poor regions of the embedding. We apply roughness analysis to latent-space topologies and illustrate how it can be used to propose the data that will be most valuable for improving the embedding. Applying fill-tuning to a set of state-of-the-art materials foundation models trained on $O(10^9)$ data points, we show improvements of almost 1% across all downstream tasks with the addition of only 100 data points. This method provides a route to the general improvement of foundation models at the computational cost of fine-tuning.
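
The abstract does not spell out the roughness analysis, but the general idea of scoring latent-space regions and proposing data where the embedding is roughest can be illustrated with a toy sketch. Everything below (the stand-in embedding map, the target property, the variance-based roughness proxy, and the candidate pool) is an assumption made for illustration only, not the authors' implementation.

```python
# Toy sketch of roughness-guided data proposal in an embedding space.
# Assumptions: embed(), property_of(), and the k-NN variance roughness proxy
# are illustrative stand-ins, not the method described in the paper.
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    """Stand-in 'foundation model' embedding: a fixed nonlinear map to latent space."""
    W = np.array([[0.9, -0.4], [0.3, 1.1], [-0.7, 0.5]])
    return np.tanh(x @ W.T)

def property_of(x):
    """Stand-in downstream property we would like the embedding to resolve."""
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

# Pool of unlabelled candidate inputs and a small labelled reference set.
candidates = rng.uniform(-1, 1, size=(2000, 2))
reference = rng.uniform(-1, 1, size=(200, 2))
z_ref = embed(reference)
y_ref = property_of(reference)

def local_roughness(z_query, z_ref, y_ref, k=10):
    """Roughness proxy: spread of the property among the k nearest reference
    points in latent space. High values flag latent regions where the property
    varies rapidly, i.e. regions the embedding resolves poorly."""
    d = np.linalg.norm(z_ref[None, :, :] - z_query[:, None, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return y_ref[nn].std(axis=1)

# Score every candidate by the roughness of its latent neighbourhood and keep
# the ~100 roughest points as the proposed 'fill-tuning' dataset.
rough = local_roughness(embed(candidates), z_ref, y_ref)
fill_set = candidates[np.argsort(rough)[::-1][:100]]
print(f"Proposed {len(fill_set)} fill points; "
      f"max roughness {rough.max():.3f}, median {np.median(rough):.3f}")
```

The selected points would then be labelled and used for a short round of continued pretraining, rather than for task-specific fine-tuning.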

Geometry from quantum temporal correlations

In this work, we show how Euclidean 3-space emerges uniquely from the structure of quantum temporal correlations associated with sequential measurements of Pauli observables on a single qubit. Remarkably, the quantum temporal correlations that give rise to geometry are independent of the initial state of the qubit; we show that this allows an observer to extract geometric data from sequential measurements without any knowledge of the initial conditions. These results suggest that space itself may plausibly emerge from quantum temporal correlations, and we formulate a toy model of such a hypothetical phenomenon.
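
The state-independence claim can be checked directly in standard quantum mechanics: for a Lüders (projective) measurement of $A=\mathbf{a}\cdot\boldsymbol{\sigma}$ followed by $B=\mathbf{b}\cdot\boldsymbol{\sigma}$, the sequential correlator is $E[AB]=\tfrac{1}{2}\mathrm{Tr}(\rho\{A,B\})=\mathbf{a}\cdot\mathbf{b}$ for every state $\rho$, so the Euclidean inner product of the measurement directions is recoverable from temporal correlations alone. The following sketch is an independent numerical check of that textbook identity, not the paper's own formalism.

```python
# Numerical check: the two-time correlator of sequential Pauli measurements
# on a qubit equals a.b, independently of the (pure or mixed) initial state.
import numpy as np

I2 = np.eye(2)
SIGMA = np.array([[[0, 1], [1, 0]],        # sigma_x
                  [[0, -1j], [1j, 0]],     # sigma_y
                  [[1, 0], [0, -1]]])      # sigma_z

def pauli(n):
    """Observable n.sigma for a real 3-vector n."""
    return np.einsum('i,ijk->jk', n, SIGMA)

def two_time_correlator(rho, a, b):
    """E[A B] for a Lüders measurement of A = a.sigma followed by B = b.sigma."""
    A, B = pauli(a), pauli(b)
    corr = 0.0
    for s in (+1, -1):
        P = (I2 + s * A) / 2                  # projector onto first outcome s
        post = P @ rho @ P                    # unnormalised post-measurement state
        corr += s * np.trace(post @ B).real   # weight second expectation by s
    return corr

rng = np.random.default_rng(1)

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def random_state():
    """Random qubit density matrix rho = (I + r.sigma)/2 with |r| <= 1."""
    r = rng.uniform(0, 1) * random_unit()
    return (I2 + pauli(r)) / 2

a, b = random_unit(), random_unit()
for _ in range(3):
    rho = random_state()
    print(f"E[AB] = {two_time_correlator(rho, a, b):+.6f}   a.b = {np.dot(a, b):+.6f}")
```

Running this prints the same correlator for every randomly drawn initial state, matching $\mathbf{a}\cdot\mathbf{b}$, which is the sense in which an observer could read off geometric data without knowing the initial conditions.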