
Gravitational time dilation as a resource in quantum sensing

Atomic clock interferometers are a valuable tool to test the interface between quantum theory and gravity, in particular via the measurement of gravitational time dilation in the quantum regime. Here, we investigate whether gravitational time dilation may also be used as a resource in quantum information theory. In particular, we show that, for both a freely falling interferometer and a Mach-Zehnder interferometer, gravitational time dilation may enhance the precision in estimating the gravitational acceleration for long interferometric times. To exploit this enhancement, the interferometric measurements should be performed on both the path and the clock degrees of freedom.
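
As context for the mechanism, a minimal sketch in the standard clock-interferometry setting (illustrative only, not the paper's specific estimation protocol): for an internal clock prepared in an equal superposition of two energy eigenstates separated by $\Delta E$, and a proper-time difference $\Delta\tau \approx g\,\Delta h\,T/c^2$ accumulated between arms with height separation $\Delta h$ over an interferometric time $T$, the path interference has visibility

$$ V = \left|\cos\!\left(\frac{\Delta E\,\Delta\tau}{2\hbar}\right)\right|, $$

so the which-way information recorded by the clock degrades the path fringes, consistent with the requirement above that measurements be performed on both the path and the clock degrees of freedom.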

Metriplectic geometry for gravitational subsystems

In general relativity, it is difficult to localise observables such as energy, angular momentum, or centre of mass in a bounded region. The difficulty is that there is dissipation. A self-gravitating system, confined by its own gravity to a bounded region, radiates some of the charges away into the environment. At a formal level, dissipation implies that some diffeomorphisms are not Hamiltonian. In fact, there is no Hamiltonian on phase space that would move the region relative to the fields. Recently, an extension of the covariant phase space has been introduced to resolve the issue. On the extended phase space, the Komar charges are Hamiltonian. They are generators of dressed diffeomorphisms. While the construction is sound, the physical significance is unclear. We provide a critical review before developing a geometric approach that takes into account dissipation in a novel way. Our approach is based on metriplectic geometry, a framework used in the description of dissipative systems. Instead of the Poisson bracket, we introduce a Leibniz bracket – a sum of a skew-symmetric and a symmetric bracket. The symmetric term accounts for the loss of charge due to radiation. On the metriplectic space, the charges are Hamiltonian, yet they are not conserved under their own flow.
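
For orientation, a schematic form of the metriplectic structure referred to above (standard in the literature on dissipative systems; the concrete brackets of the paper may differ): evolution along a charge $Q$ is generated by a Leibniz bracket

$$ \dot F = \llbracket F, Q \rrbracket = \{F, Q\} + (F, Q), \qquad \{F,G\} = -\{G,F\}, \quad (F,G) = (G,F), $$

where the skew part generates the Hamiltonian flow and the symmetric part encodes the radiative loss. In particular, $\llbracket Q, Q \rrbracket = (Q,Q)$ need not vanish, which illustrates how a charge can generate a Hamiltonian flow and yet fail to be conserved under it.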

Radiative corrections to the Lorentzian EPRL spin foam propagator

We numerically estimate the divergence of several two-vertex diagrams that contribute to the radiative corrections of the Lorentzian EPRL spin foam propagator. We compute the amplitudes as functions of a homogeneous cutoff over the bulk quantum numbers, for fixed boundary data and different Immirzi parameters, and find that, within the class of two-vertex diagrams considered, those with fewer than six internal faces are convergent. The calculations are performed with the numerical framework sl2cfoam-next.
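
As a rough illustration of how such a divergence estimate can be organised (generic, not the actual sl2cfoam-next interface; the amplitude values below are placeholders): one computes the diagram amplitude at increasing homogeneous cutoffs and fits its scaling with the cutoff.

    import numpy as np

    # Placeholder amplitudes A(K) at increasing homogeneous cutoffs K over the
    # bulk quantum numbers; in practice these would come from a spin foam code
    # such as sl2cfoam-next, for fixed boundary data and Immirzi parameter.
    cutoffs = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
    amplitudes = np.array([2.1e-8, 3.0e-8, 3.6e-8, 4.0e-8, 4.3e-8, 4.5e-8])

    # Fit log|A| against log K: the slope estimates the degree of divergence.
    # A positive slope indicates a power-law divergence in the cutoff, while a
    # slope compatible with zero (a saturating amplitude) indicates convergence.
    slope, _ = np.polyfit(np.log(cutoffs), np.log(np.abs(amplitudes)), 1)
    print(f"estimated scaling: A(K) ~ K^{slope:.2f}")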

Tabletop Experiments for Quantum Gravity Are Also Tests of the Interpretation of Quantum Mechanics

Recently there has been a great deal of interest in tabletop experiments intended to exhibit the quantum nature of gravity by demonstrating that it can induce entanglement. We argue that these experiments also provide new information about the interpretation of quantum mechanics: under appropriate assumptions, $\psi$-complete interpretations will generally predict that these experiments will have a positive result, $\psi$-nonphysical interpretations predict that these experiments will not have a positive result, and for $\psi$-supplemented models there may be arguments for either outcome. We suggest that a positive outcome to these experiments would rule out a class of quantum gravity models that we refer to as $\psi$-incomplete quantum gravity (PIQG), i.e. models of the interaction between quantum mechanics and gravity in which gravity is coupled to non-quantum beables rather than quantum beables. We review some existing PIQG models and consider what more needs to be done to make these sorts of approaches more appealing, and finally we discuss a cosmological phenomenon which could be regarded as providing evidence for PIQG models.

Watching the Clocks: Interpreting the Page-Wootters Formalism and the Internal Quantum Reference Frame Programme

We discuss some difficulties that arise in attempting to interpret the Page-Wootters and Internal Quantum Reference Frames formalisms, then use a ‘final measurement’ approach to demonstrate that there is a workable single-world realist interpretation for these formalisms. We note that it is necessary to adopt some interpretation before we can determine if the ‘reference frames’ invoked in these approaches are operationally meaningful, and we argue that without a clear operational interpretation, such reference frames might not be suitable to define an equivalence principle. We argue that the notion of superposition should take into account the way in which an instantaneous state is embedded in ongoing dynamical evolution, and this leads to a more nuanced way of thinking about the relativity of superposition in these approaches. We conclude that typically the operational content of these approaches appears only in the limit as the size of at least one reference system becomes large, and therefore these formalisms have an important role to play in showing how our macroscopic reference frames can emerge out of wholly relational facts.
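
For readers unfamiliar with the formalism, the standard Page-Wootters setup (sketched here only for orientation): a clock $C$ and a system $S$ are subject to a global constraint, and the familiar dynamics is recovered by conditioning on the clock reading,

$$ \left(\hat H_C\otimes\mathbb{1}_S + \mathbb{1}_C\otimes\hat H_S\right)|\Psi\rangle\!\rangle = 0, \qquad |\psi_S(t)\rangle \propto \left(\langle t|_C\otimes\mathbb{1}_S\right)|\Psi\rangle\!\rangle, $$

so that $|\psi_S(t)\rangle$ satisfies the Schrödinger equation in the clock time $t$; the interpretational questions above concern what, if anything, such conditional states describe operationally.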

The nonequilibrium cost of accurate information processing

Accurate information processing is crucial both in technology and in nature. To achieve it, any information processing system needs an initial supply of resources away from thermal equilibrium. Here we establish a fundamental limit on the accuracy achievable with a given amount of nonequilibrium resources. The limit applies to arbitrary information processing tasks and arbitrary information processing systems subject to the laws of quantum mechanics. It is easily computable and is expressed in terms of an entropic quantity, which we name the reverse entropy, associated with a time reversal of the information processing task under consideration. The limit is achievable for all deterministic classical computations and for all their quantum extensions. As an application, we establish the optimal tradeoff between nonequilibrium and accuracy for the fundamental tasks of storing, transmitting, cloning, and erasing information. Our results set a target for the design of new devices approaching the ultimate efficiency limit, and provide a framework for demonstrating thermodynamic advantages of quantum devices over their classical counterparts.

How causation is rooted into thermodynamics

The notions of cause and effect are widely employed in science. I discuss why and how they are rooted in thermodynamics. The entropy gradient (i) explains in which sense interventions affect the future rather than the past, and (ii) underpins the time orientation of the subject of knowledge as a physical system. Via these two distinct paths, this gradient, and only this gradient, is the source of the time orientation of causation, namely the fact that the cause comes before its effects.

On the Role of Fiducial Structures in Minisuperspace Reduction and Quantum Fluctuations in LQC

We study the homogeneous minisuperspace reduction within the canonical framework for a scalar field theory and gravity. Symmetry reduction is implemented via second class constraints for the field modes over a partitioning of the non-compact spatial slice $\Sigma$ into disjoint cells. The canonical structure of the resulting homogeneous theories is obtained via the associated Dirac bracket, which can only be defined on a finite number of cells homogeneously patched together and agrees with the full theory Poisson bracket for the averaged fields. This identifies a finite region $V_o$, the fiducial cell, whose size $L$ sets the physical scale over which homogeneity is imposed, namely a wavelength cutoff. The reduced theory results from 1) selecting a subset of $V_o$-averaged observables of the full theory; 2) neglecting inhomogeneous $\vec{k}\neq\mathbf{0}$ modes with wavelengths $\lambda\geq L$ and $\lambda< L$; 3) neglecting boundary terms encoding interactions between neighbouring cells. The error made is of order $\mathcal{O}(1/kL)$. As a result, the off-shell structures of the reduced theory depend on the size of $V_o$, and different $V_o$ identify canonically inequivalent theories whose dynamics is nevertheless $V_o$-independent. Their quantisation then leads to a family of $V_o$-labelled quantum representations, and the quantum version of an active rescaling of $V_o$ is implemented via a suitable dynamics-preserving isomorphism between the different theories. We discuss the consequences for statistical moments, fluctuations, and semiclassical states in both a standard and a polymer quantisation. For a scalar field of mass $m$, we also sketch the quantum reduction and identify a subsector of the QFT in which the results of the "first reduced, then quantised" theories can be reproduced to good approximation as long as $m\gg 1/L$. Finally, a strategy to include inhomogeneities in cosmology is outlined.
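
As a hedged illustration of the quoted $\mathcal{O}(1/kL)$ error (the generic mechanism, not the paper's full derivation): averaging a single inhomogeneous mode over a cubic cell $V_o$ of side $L$ centred at the origin gives

$$ \frac{1}{L^3}\int_{V_o} e^{i\vec{k}\cdot\vec{x}}\,\mathrm{d}^3x \;=\; \prod_{j=1}^{3}\frac{\sin(k_j L/2)}{k_j L/2}, $$

which is suppressed as $\mathcal{O}(1/(kL))$ once $kL\gg 1$, so the $V_o$-averaged observables are sensitive to the neglected inhomogeneities only at this order.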

Experimental superposition of time directions

In the macroscopic world, time is intrinsically asymmetric, flowing in a specific direction, from past to future. However, the same is not necessarily true for quantum systems, as some quantum processes produce valid quantum evolutions under time reversal. Supposing that such processes can be probed in both time directions, we can also consider quantum processes probed in a coherent superposition of forwards and backwards time directions. This yields a broader class of quantum processes than the ones considered so far in the literature, including those with indefinite causal order. In this work, we demonstrate for the first time an operation belonging to this new class: the quantum time flip. Using a photonic realisation of this operation, we apply it to a game formulated as a discrimination task between two sets of operators. This game not only serves as a witness of an indefinite time direction, but also allows for a computational advantage over strategies using a fixed time direction, and even those with an indefinite causal order.
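
For context, the operation can be written schematically as follows (in the convention in which the input-output inverse of a unitary $U$ is its transpose $U^{T}$; the details of the photonic realisation are in the paper): the quantum time flip applies $U$ and $U^{T}$ coherently, conditioned on a control qubit,

$$ \mathcal{F}(U) \;=\; |0\rangle\langle 0|_c\otimes U \;+\; |1\rangle\langle 1|_c\otimes U^{T}, $$

so preparing the control in $(|0\rangle+|1\rangle)/\sqrt{2}$ probes the process in a coherent superposition of the forward and backward time directions.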