Off-topic: Israel Gelfand would celebrate his 100th birthday today.

Kip Thorne hasn't conceded yet; I think that his position has become indefensible over the years.
In July 2005, Hawking wrote a paper (TRF comments) in which he presented his own arguments why the qualitative outcome was different than he used to think. Even though Hawking would admit that the developments in string theory and especially AdS/CFT were the main advances that made him change his mind, his 2005 ideas were presented as great new insights by Hawking and some of the journalists. Your humble correspondent and most experts in the field were skeptical. I wasn't hiding my skepticism either but it seems clear that I was more sympathetic to those ideas than most others.
The recent discussions about the block non-diagonalizability of the black hole evolution operator in a classically accessible basis, as well as the impossibility of identifying local operators in a background-independent way, have strengthened my feeling that Hawking was ahead of his time when he pointed out an important feature of the evolution:
For a proper understanding of the black hole information puzzle, it's important to properly (quantum mechanically) treat superpositions of classically distinct black hole microstates; and, relatedly, to include the interference between histories with different intermediate states (the black hole is not there; a black hole in one location/shape/decoration is present).

This important observation wasn't emphasized just by your humble correspondent. I would say that Papadodimas and Raju; Nomura, Varela (and sometimes Weinberg); and Hsu consider the revelation above to be an important part of the knowledge that makes it clear that the arguments that black hole firewalls have to exist are flawed.
A key paragraph from Hawking's 2004 concession speech said:
Information is lost in topologically non-trivial metrics like black holes. This corresponds to dissipation in which one loses sight of the exact state. On the other hand, information about the exact state is preserved in topologically trivial metrics. The confusion and paradox arose because people thought classically in terms of a single topology for spacetime. It was either \(\RR^4\) or a black hole. But the Feynman sum over histories allows it to be both at once. One can not tell which topology contributed to the observation, any more than one can tell which slit the electron went through in the two slits experiment. All that observation at infinity can determine is that there is a unitary mapping from initial states to final and that information is not lost.

For Hawking, the interference between the black-hole-containing and black-hole-free intermediate states is what restores the purity of the final state. Most of us had the same feeling as the feelings recently shared by Scott Aaronson:
I should confess that I don’t understand this argument (and apparently I’m not alone — even Preskill, to whom Hawking conceded, said he didn’t understand it!). But Hawking does seem to be clearly asserting that the solution to information loss involves there being a nonzero amplitude for the black hole never forming in the first place. (Though an obvious issue is that he doesn’t say how large the amplitude is: if it were nonzero but exponentially small, that wouldn’t seem to help much.)

This complaint against Hawking's "key role" of the black-hole-free intermediate state sounds very natural. After all, it seems intuitively "obvious" that the contribution is either tiny, in which case it can't have the potency to convert the near-maximally mixed thermal final state into a pure one; or the black-hole-free intermediate states are dominant, but then we don't have any explanation why the evolution looks like events in a black-hole-containing spacetime at all.
Off-topic: A dancing 3D model of a Calabi-Yau manifold. A girl with a 3D printer may print them for you. Via tweeting Maria Spiropulu.
I have repeatedly written the same objection against Hawking's thoughts in the past although my formulations were never as clear as they are today. However, with some newer realizations, I believe that the complaint is at least morally wrong. The basic weapon that challenges the intuitive explanation from the previous paragraph was articulated in the following blog entry and the paper mentioned therein:
Hawking radiation: pure and thermal mixed states are a micron away

The text argues that in a "truly generic" basis of the \(\exp(S)\)-dimensional Hilbert space relevant for the CV of a black hole, the pure and (maximally or near maximally) mixed density matrices may only differ by exponentially tiny matrix elements of order \(\O(\exp(-S))\).
There is something that I find a bit demagogic about this December 2012 text of mine today: the mixed density matrix has (diagonal) entries of order \(\O(\exp(-S))\), too. So while the correction needed to perturb the approximate mixed final state to a pure state is "small" in an absolute sense, its matrix elements are of order \(\O(100\%)\) relative to the diagonal entries. There was no wrong claim in my blog entry but I was sort of hiding this fact.
But this update doesn't really invalidate the point of the "micron" essay qualitatively. The point is that the off-diagonal entries that are comparable to the diagonal ones may still be invisible to the semiclassical calculations – in fact, they may be invisible at all finite orders of perturbation theory.
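The perturbative invisibility is the standard non-perturbative story: if you schematically identify \(S\) with \(1/g^2\) for a weak coupling \(g\), a correction of size \(\exp(-1/g^2)\) is smaller than every finite power \(g^n\), so it cannot show up at any order of the perturbative expansion. Here is a toy numerical illustration (the value of the coupling is arbitrary and the identification is schematic, not a statement about any particular theory):

```python
import math

g = 0.05                         # an arbitrary weak coupling; schematically S ~ 1/g^2
nonpert = math.exp(-1.0 / g**2)  # e^{-1/g^2}: the typical non-perturbative size

# every finite perturbative order g^n dwarfs the non-perturbative term
pert_orders = [g**n for n in range(1, 100)]
print(nonpert, min(pert_orders))  # the non-perturbative term is smaller than all of them
```

The smallest listed perturbative term, \(g^{99}\), is still dozens of orders of magnitude larger than \(e^{-1/g^2}\), which is the sense in which such corrections are invisible "at all finite orders".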
Path integral for mechanics
Let me begin with a physical system that has been understood for quite some time: non-relativistic quantum mechanics. In Feynman's path integral approach, the evolution amplitudes are computed as the functional integral\[
{\mathcal A}_{f\leftarrow i} = \int {\mathcal D}x(t)\,\exp(iS/\hbar)
\] over all trajectories that begin and end at the right places. Note that all trajectories, however weird, contribute with the same absolute value. If you made an error and treated the observable \(x(t)\) classically, you would expect the integrand to be nonzero only for the correct classical trajectory and to vanish everywhere else.
Quantum mechanics says something different. The classical trajectory is "highlighted" in the classical limit because all the trajectories that differ sufficiently from the classical solution tend to give the integrand a "random", rapidly varying phase. These random phases tend to cancel, and only the phases \(\exp(iS/\hbar)\) near the extremum of \(S\), i.e. near the classical solution, contribute "coherently" because the phase (the exponent) isn't changing much near the extremum (or extrema).
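The cancellation is easy to see numerically in a toy model. The sketch below (assuming `numpy`; the action \(S(x)=x^2\) and the numbers are illustrative, not tied to any physical system) compares a window of fixed width around the stationary point \(x=0\) with an equally wide window far from it, where the phase oscillates rapidly:

```python
import numpy as np

hbar = 1e-3  # a "small" hbar to mimic the approach to the classical limit

def window_integral(a, b, n=400001):
    # Riemann sum of exp(i S(x)/hbar) over [a, b], with the toy action S(x) = x^2
    x = np.linspace(a, b, n)
    return np.sum(np.exp(1j * x**2 / hbar)) * (x[1] - x[0])

near = abs(window_integral(-0.2, 0.2))  # window around the stationary point x = 0
far = abs(window_integral(1.0, 1.4))    # equally wide window away from it
print(near, far)  # the "near" contribution dominates by orders of magnitude
```

Near the stationary point the phases add coherently and the window contributes roughly \(\sqrt{\pi\hbar}\); far from it the oscillations cancel down to boundary terms of order \(\hbar\).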
Back to black hole density matrices
Consider a black hole formed by the collapse of a star in a pure state \(\ket\psi\). The black hole gets formed and then it evaporates. Hawking's approximate 1974 calculation reveals that the final state is a thermal one (with a Hawking temperature that increases as the black hole shrinks), i.e. one given by a near maximally mixed density matrix. This result is likely to hold to all orders in perturbation theory.
We know from AdS/CFT, Matrix theory, and other explicit constructions that the final state is actually pure. So if you describe it by a density matrix, it must be a density matrix of the form\[
\rho_{\rm final} = \ket{\psi}_{\rm final} \bra{\psi}_{\rm final}
\] which must still be rather close to the approximate density matrix \(\rho_{\rm approx}\) that we claimed to be near maximally mixed. They look "qualitatively different" but this type of "qualitative difference" is one that may actually result from tiny or (in practice) hardly observable "quantitative differences".
Let's sensibly assume that in a "classically natural" basis for the Hawking radiation, the final pure state is "generic". This means that in the relevant \(\exp(S)\)-dimensional Hilbert space, all the amplitudes are of the same order, i.e. of order \(\O(\exp(-S/2))\) – which is needed for the normalization condition \[
\sum_{i=1}^{\exp(S)} |c_i|^2 = 1
\] to hold. The relative phases between \(c_i,c_j\) are important although the laymen are often led to believe (by sloppy presentations of the Schrödinger cat thought experiment and other things) that only the absolute values matter. The pure density matrix has matrix elements\[
\rho^{\rm final}_{ij} = c_i c^*_j
\] What would you think the density matrix is if you committed an error similar to the one discussed in the "non-relativistic quantum mechanics" section above? Well, you would do exactly what the laymen usually do when they think that only the absolute values of the amplitudes in a basis matter: you would keep the diagonal entries but incorrectly set the off-diagonal entries to zero:\[
\rho^{\rm approx}_{ij} = c_i c^*_j \cdot \delta_{ij}
\] Apologies, the Kronecker delta must be interpreted "literally" and the usual checks for indices (repeated indices only occur if they're summed via the Einstein sum rule) don't hold here.
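The contrast between the two density matrices is easy to play with numerically. In the toy sketch below (assuming `numpy`; the dimension \(N=512\) stands in for the astronomically large \(\exp(S)\)), a generic pure state gives \(\rho_{ij}=c_ic_j^*\) with purity \({\rm Tr}\,\rho^2=1\), while the Kronecker-delta-truncated matrix has purity of order \(1/N\), i.e. it is near maximally mixed – even though a typical off-diagonal entry of the pure matrix is just as large as a typical diagonal one:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512  # stands in for exp(S); the real dimension is vastly larger

# a "generic" pure state: random complex amplitudes with random phases, normalized
c = rng.normal(size=N) + 1j * rng.normal(size=N)
c /= np.linalg.norm(c)

rho_pure = np.outer(c, c.conj())        # rho_ij = c_i c_j^*
rho_diag = np.diag(np.diag(rho_pure))   # keep only the diagonal (the delta_ij version)

purity_pure = np.trace(rho_pure @ rho_pure).real  # exactly 1 for a pure state
purity_diag = np.trace(rho_diag @ rho_diag).real  # of order 1/N: near maximally mixed

# typical sizes: off-diagonal entries are comparable to diagonal ones
med_dia = np.median(np.abs(np.diag(rho_pure)))
med_off = np.median(np.abs(rho_pure[~np.eye(N, dtype=bool)]))
print(purity_pure, purity_diag, med_dia, med_off)
```

Dropping the off-diagonal entries changes the purity from \(1\) to roughly \(2/N\), even though each entry that was dropped is of the same order as the entries that were kept.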
Now, my point is that it is perfectly compatible with everything we know – and, ultimately, inevitable – that the semiclassical approximate calculation ends up with a final density matrix castrated in a similar way, i.e. one resembling the density matrix with the extra Kronecker delta factor. Why?
Think about two mutually orthogonal microstates of the black hole (or the black hole radiation that results from them), \(\ket i\) and \(\ket j\), which are very similar to one another in some operational, classical way of looking at things. For example, they may be two microstates describing a black hole located at positions that differ by a sub-Planckian distance (which still allows the states \(\ket i\) and \(\ket j\) to be orthogonal if the black hole mass is much greater than the Planck mass, as it should be for the black hole interpretation to be OK); or \(\ket j\) may be obtained by the creation of a soft photon or another quantum on top of the structure given by \(\ket i\).
What I want to emphasize is that the difference between \(\ket i\) and \(\ket j\) will be inevitably invisible in a semiclassical approximation to any calculation of the evolution of the black hole. If you think about the spinning Earth, you have no chance to distinguish the states of the Earth with the \(z\)-component of the spin equal to \(J_z\) and \(J_z+\hbar\) because \(J_z\gg \hbar\). So all such things are invisible in a calculation that treats \(J_z\) "classically".
In some cases, you may argue that the quantum evolution operator must be diagonal in a basis of such "classically indistinguishable" microstates, anyway. This diagonal form may follow from conservation laws (of the angular momentum, for example). The point is that this is not true for the differences between the black hole microstates \(\ket i\) and \(\ket j\) described two paragraphs above.
For example, when a black hole is emitting the Hawking quanta, there is no reason for its center-of-mass location to be exactly conserved. In fact, we know for sure that it is not conserved. The black hole recoils once it shoots a Hawking particle in a specific direction. Citing the black hole's large mass, people have largely neglected such recoils in all the (semiclassical – and sometimes "more ambitious") calculations of the Hawking radiation. But the black hole is actually moving because of these recoils and the motion resembles Brownian motion at (very long) timescales comparable to the Hawking evaporation lifetime. It can get very far.
While the changes of some internal properties of the black hole such as the precise sub-Planckian location of the center-of-mass or the \(\O(1)\) changes to the number of soft quanta around it (which may be large) may be neglected for some purposes, they surely cannot be neglected if you want to calculate the final matrix element \(\rho_{ij}\) where \(\ket i\) and \(\ket j\) are two classically "nearby" microstates.
The actual behavior of \(\rho_{ij}\) for a pure initial state should be clear to you: in a "classically natural" basis for the radiation, all matrix elements \(\rho_{ij}\) are of the same order, whether they are diagonal or off-diagonal. This is clearly implied by the purity and genericity of the microstate. All approximate calculations tend to assume that the final density matrix – and/or the evolution operator – is diagonal or block-diagonal in some basis of microstates that look natural or easily accessible for classical measurements in the final spacetime (e.g. Fock space occupation number eigenstates of the radiation). But this assumption is completely wrong and the full, exact calculation shows that the off-diagonal elements are actually of the same order.

One may only rightfully conclude that the off-diagonal elements (of the final density matrix or the evolution operator) are "almost zero" if we average them over many classically similar yet mutually orthogonal microstates. If we treat them accurately, the off-diagonal elements in a basis of our choice are never negligible relative to the diagonal ones. In fact, I would stress that the off-diagonal element between "pretty much any two" classically natural states is comparable to the geometric average of the two corresponding diagonal entries.
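The claim about averaging can be checked in a toy model, too. In the sketch below (assuming `numpy`; the dimension \(N\) and ensemble size \(K\) are illustrative), averaging the density matrices of \(K\) independent random microstates leaves the diagonal entries at their \(\sim 1/N\) size but suppresses the off-diagonal ones by a factor of \(\sim\sqrt{K}\) through phase cancellation – mimicking how a coarse-grained description sees a nearly diagonal matrix even though each individual microstate has off-diagonal entries of full size:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 256, 400  # toy Hilbert-space dimension; number of microstates averaged

avg = np.zeros((N, N), dtype=complex)
for _ in range(K):
    # each microstate: a generic pure state with random phases
    c = rng.normal(size=N) + 1j * rng.normal(size=N)
    c /= np.linalg.norm(c)
    avg += np.outer(c, c.conj()) / K  # ensemble average of rho_ij = c_i c_j^*

med_dia = np.median(np.abs(np.diag(avg)))                 # stays of order 1/N
med_off = np.median(np.abs(avg[~np.eye(N, dtype=bool)]))  # shrinks like 1/(N sqrt(K))
print(med_dia, med_off, med_dia / med_off)
```

The random relative phases of the different microstates cancel in the off-diagonal entries only; the diagonal entries, being sums of positive numbers, survive the averaging.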
I believe that the mistake described in the previous sentences and many paragraphs above them is one of the most widespread and crucial mistakes made by Joe Polchinski and many others who end up with incorrect and seemingly paradoxical conclusions such as the existence of a "black hole firewall". They just treat the black hole's own properties – including the metric tensor around it – classically and they believe that the unitarity should hold in each "superselection" sector (with some classical properties; effectively, in each "exact" background spacetime) separately.
But this can't be the case. To guarantee unitarity, it is essential for quantum gravity to have interference – and nonzero off-diagonal matrix elements – between microstates of a black hole that look "similar in the classical approximation" but whose details differ (the location of the black hole's center of mass measured with a sub-Planckian accuracy and/or infinitely many occupation numbers changing by much smaller additions than their rough classical value, to mention two major examples). Only with this full connectedness of the black hole microstates – nonzero off-diagonal entries through which you can connect (via many \(ij,jk,kl,lm\) jumps) a black hole state with any other state (including a black-hole-free state) – is quantum gravity capable of preserving all the principles simultaneously (unitarity, the equivalence principle wherever it should hold, and locality in the appropriate approximation).
Hawking repeated and rebranded (as an anti-firewall argument) his 2004-2005 thoughts at the recent Fuzz-Or-Fire workshop in Santa Barbara. Ironically enough (if you think about the bet), Hawking of 2013 is more in favor of "unitarity is true and consistent with other principles", i.e. "anti-firewall" (like LM), than even Preskill. Use the hyperlinks if you don't have a VLC player plugin.
What does it have to do with Hawking's 2004-2005 bet concession? Well, he was talking about the need to consider the interference between intermediate states with a black hole and those without a black hole. While the intermediate states with a "totally eradicated" black hole are probably not enough to completely purify his mixed approximate answer, a generalization of his 2004-2005 thesis is true and very important: if we want to understand how the unitarity of the Hawking radiation is compatible with other cherished principles, it is totally essential to acknowledge the interference between a black hole's intermediate states and an exponentially large number of intermediate states that aren't quite the same, even when it comes to a classical description of their appearance.
When studied with full precision, semiclassical gravity just isn't consistent. Treating black holes with slightly different classical properties as "superselection sectors" that can't interfere with each other amounts to assuming that the internal black hole observables behave classically, at least in some respects – which they don't.
And that's the memo.