When I was living my days in the "physics establishment", there really was a connected theoretical high-energy physics community – professors, postdocs, and students – that worked hard to learn everything it should learn, that cared about important new findings, and that cared whether the papers it wrote were correct. You could take the arXiv papers from that community pretty seriously, and when a paper was wrong, chances were that it would be corrected or withdrawn. A serious enough blunder would be found, especially if the paper were sold as an important one, and experts would quickly learn about it and appropriately reduce the attention given to the authors of the wrong paper.
You could have said that the people around loop quantum gravity and similar "approaches" didn't belong to this community because they have never respected any quality standards worth mentioning. Everything was clear, but the "pure status" of the community began to be blurred with the arrival of the anthropic papers after the year 2000, which suddenly made it legitimate to write down very lousy, unsubstantiated, non-quantitative claims, often contradicting some hard knowledge. I tended to think that this decrease of the quality expectations and the propagation of philosophically preconceived and otherwise irrational papers was a temporary fluke connected with the anthropic philosophy – because it's so "philosophically sensitive".
However, it ain't the case. When one looks at the literature about the black hole information issues – a big topic that saw tremendous progress in the 1990s – one finds that a very large portion of it that has developed recently is completely wrong. Raphael Bousso has just released his 4-page preprint
Frozen Vacuum

and it's just so incredibly bad – and far from the first preprint by a similarly well-known name that is just awful.
Bousso correctly lumps together two paradigms that claim that no black hole firewalls exist: the newer ER-EPR correspondence by Maldacena and Susskind; and the \(A=R_B\) interpretation of the original black hole complementarity principle (the interior's degrees of freedom aren't independent from the exterior ones), a more general approach taken in various earlier papers such as Papadodimas-Raju (who are not cited by Bousso, which is a pity because theirs is a better paper about these BH information topics than anything Bousso himself has ever written on them) and advocated on this blog for more than a year, from the beginning of the AMPS provocations.
Indeed, \(ER=EPR\) is just a more specific and more geometric way to think about the lack of independence between the internal and external degrees of freedom – and about the reasons why the independence disappeared (because wormholes connecting the interior with the distant regions are mass-produced and getting longer, starting from short wormholes representing the entangled Hawking pairs produced near the horizon).
I think it's important to point out that the ER-EPR correspondence wasn't "essential" to show that the AMPS firewall arguments were invalid. Many previous papers have offered valid arguments why AMPS didn't have a solid proof.
Unfortunately, it's the last positive thing I can say about the new paper by Bousso.
He believes that both \(ER=EPR\) and \(A=R_B\) – approaches that claim to preserve the equivalence principle, at least much more accurately than the firewall picture by AMPS does – ultimately have to violate the equivalence principle as well, because an infalling observer isn't allowed to see any particle excitations near the event horizon.
Needless to say, this claim is entirely preposterous and the arguments backing it are flawed. There can't possibly be any "logical contradiction" hiding in \(ER=EPR\) – it's just ordinary quantum mechanics of degrees of freedom that may be approximately visualized as field theory modes on the background of an Einstein-Rosen bridge. What could possibly go wrong? There can't be any contradiction in the very assumption \(A=R_B\) itself because it's even more general than \(ER=EPR\).
The main reason why Bousso's arguments don't hold water will appear later. But I just can't resist pointing out many – and there are indeed very many – other, perhaps more minor things in the paper that drive me up the wall and that would have been enough to throw the paper into the trash bin in the good old times when quality mattered.
The first column on the first page ends with this paragraph:
In order to avoid a firewall at the horizon, one could identify the interior partner \(\tilde b\) with some \(e_b\) or with the exact purification \(\hat b\). This reduces the inconsistent double entanglement to a consistent single entanglement. An out-state-dependent mapping is necessary to ensure that \(b \tilde b\) will not just be entangled, but in a particular entangled state, the vacuum state. This type of map is called \(A = R_B\) [3, 6-8] or \(ER=EPR\) [9], or "donkey map" [4]. It is nonstandard [6, 10], and so already faces a number of challenges; moreover, no donkey map exists for out-states where \(b\) is less than thermally entangled [3, 4, 11].

Some relationship is "non-standard" according to two papers, one of which was written by the present author. What a heresy. Clearly, the word "non-standard" means that Bousso wants to spit on these claims without having any specific counter-arguments.
Of course there can't be any contradiction if we're forced to look at the out state whenever we want to restrict our attention to microstates that see the vacuum near the horizon. The field theory modes in a black hole background just don't preserve the exact rules of a local quantum field theory, so the Hilbert space can't be written as an exact tensor product of Hilbert spaces from individual "subregions". All these subregions are correlated because they are required to combine into a black hole of a fixed size. That black hole has a finite entropy, and its limited number of microstates has to efficiently incorporate the various field-theoretical degrees of freedom from the regions, as well as other, non-field-theoretical degrees of freedom.
Imagine that you have 1 MB of disk space to compress a 10 MB text file – and this is what the black hole is doing with the "regional" information. The algorithm that achieves such a high degree of compression of the information simply has to depend on the whole 10 MB long text. In the analogy, it has to be out-state-dependent. Moreover, the requirement that there would be an empty space near the horizon at some point isn't a natural constraint on the initial state of the star that collapsed into a black hole. We're just not guaranteed to get an empty black hole if we start with any initial state. It's likely but it's not guaranteed.
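The out-state dependence in this compression analogy can be illustrated with a toy sketch (the dictionary-code scheme and all the names in it are mine, purely for illustration): an encoder that squeezes a text hard must build its codebook from the whole text, so the codeword assigned to the very same local word depends on the rest of the file – the analogue of the map being "out-state-dependent".

```python
from collections import Counter

def build_codebook(text):
    """Build a toy dictionary code from the *entire* input text:
    the most frequent words get the shortest binary codewords."""
    freq = Counter(text.split())
    ranked = [w for w, _ in freq.most_common()]
    return {w: format(i, "b") for i, w in enumerate(ranked)}

# Two texts that agree on a local patch ("star star") but differ globally
text_a = "hole hole hole star star vacuum"
text_b = "vacuum vacuum vacuum star star hole"

book_a = build_codebook(text_a)
book_b = build_codebook(text_b)

# The codeword for the *same local word* differs, because the
# encoding map was constructed from the full text (the "out state")
assert book_a["vacuum"] != book_b["vacuum"]
```

The point is only qualitative: no fixed, input-independent map achieves this kind of compression, just as no out-state-independent "donkey map" can do the black hole's bookkeeping.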
So one can't be surprised that the identification of the degrees of freedom with their purification has to be out-state-dependent. It has to be out-state-dependent already because the procedure is required to break down if the initial state actually doesn't produce the vacuum in the region where we want to see the vacuum.
The observation that one can't invent a canonical "donkey map" for a non-maximally entangled state is more or less fine, but it's harmless because there's no physical reason why such a "donkey map" should always exist.
The first equation of the paper that makes me slightly upset is equation (1):\[
{\ket\psi}_{b e_b p} \propto {\ket 0}_p \otimes \sum_{n=0}^\infty x^n {\ket n}_b {\ket n}_{e_b}
\] This is problematic at so many levels.
First, the pointer state \({\ket 0}_p\) is "added" to the formula as a simple tensor product, which means that Bousso implicitly assumes the exact locality or clustering property for all the degrees of freedom.
Second, the tensor factor in the state related to the \(b\) and \(e_b\) degrees of freedom has a very particular form – only the entangled degrees of freedom are present. Moreover, their coefficients are written as \(x^n\), a very particular function of \(n\), the occupation number.
Such a simple dependence on \(n\) may only be justified in a free quantum field theory on the curved background. In a general interacting setup – and a black hole is strongly interacting and "reshuffles" all the information extremely efficiently – there's no reason to expect that the complex amplitudes for the states \({\ket n}_b\otimes {\ket n}_{e_b}\) should scale like \(x^n\), although \(x^{2n}\) is the scaling of the density matrix eigenvalues in the mixed ensemble (but we're dealing with general microstates here, which are more variable). Moreover, when Bousso writes
For modes with Killing frequency of order the Hawking temperature, \(x=\exp(-\beta\omega/2)\) is of order one.

he doesn't seem to realize that \(x\) is never really close to one. This \(x\) may indeed be comparable to one for modes whose Killing frequency is as low as the Hawking temperature, but there are just a few such modes and they are heavily delocalized – by the uncertainty principle, their spatial size is comparable to the black hole radius – which means that these modes can't really be used to test the equivalence principle in a region near the horizon that is much smaller than the black hole itself. (That condition is needed for the non-uniformities of the gravitational field to be negligible, i.e. for the gravitational field to be genuinely indistinguishable from acceleration in a flat space – and this is the equivalence that the equivalence principle is all about.) These warnings against "wrong, long-distance tests of the equivalence principle" were raised especially by Mathur and Turton, whose papers aren't cited by Bousso, either. This fact alone would be enough to be sure that the rest of the new Bousso paper can't be a valid argument against the equivalence principle.
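The relation between the amplitudes \(x^n\) and the density-matrix eigenvalues \(x^{2n}\) mentioned above is easy to verify numerically in a truncated Fock space (a minimal sketch; the value of \(x\) and the cutoff are my arbitrary choices):

```python
import numpy as np

# Truncated two-mode state |psi> ∝ sum_n x^n |n>_b |n>_{e_b},
# with x = exp(-beta*omega/2); the cutoff N is a numerical regulator
x, N = 0.5, 40
amps = x ** np.arange(N)
c = np.zeros((N, N))
np.fill_diagonal(c, amps)            # coefficient matrix c_{nm} = x^n δ_{nm}
c /= np.linalg.norm(c)               # normalize the state

# Reduced density matrix of mode b: rho = c c†
rho = c @ c.conj().T
eigs = np.sort(np.linalg.eigvalsh(rho))[::-1]

# Eigenvalues scale like x^{2n}, normalized by (1 - x^2): the thermal
# occupation probabilities of the mixed ensemble
expected = (1 - x**2) * x ** (2 * np.arange(N))
assert np.allclose(eigs, expected)
```

So the \(x^{2n}\) scaling of the *ensemble* is correct for this particular state – the objection above is that nothing forces a generic microstate's amplitudes into this special \(x^n\) form in the first place.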
The very fact that Bousso assumes a particular state and calls it "the state" although he hasn't defined any special properties of the state indicates that he just doesn't understand the superposition principle of quantum mechanics. In quantum mechanics, any and every complex superposition of allowed ket vectors is equally allowed. A black hole doesn't have a single "the state". It has exponentially many microstates. You may only talk about "a" state. And there are many of them.
It's clear that Bousso is trying to build the microstate of the black hole from the low-energy field-theoretical occupation modes. But this can't be done. A black hole has an exponentially huge entropy and a vast majority of these microstates corresponds to black hole configurations that look pretty much empty inside the horizon. Similarly, a vast majority of the microstates of the Hawking radiation are microstates that closely resemble the thermal mixed state of the radiation. There are no "the" states of either the black hole or its radiation.
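The claim that a vast majority of microstates locally resemble the thermal state is just the standard Page-type counting; a minimal numerical sketch (the dimensions are my arbitrary choices) shows that a random pure state on a small subsystem tensored with a much larger one has a reduced density matrix extremely close to the maximally mixed, i.e. "thermal-looking", one:

```python
import numpy as np

rng = np.random.default_rng(0)
d_A, d_B = 4, 1024          # small subsystem, much larger "rest"

# A generic (Haar-random) pure state on H_A ⊗ H_B, written as a matrix
psi = rng.normal(size=(d_A, d_B)) + 1j * rng.normal(size=(d_A, d_B))
psi /= np.linalg.norm(psi)

# Reduced state of the small subsystem
rho_A = psi @ psi.conj().T

# It is very close to maximally mixed: a typical microstate is locally
# indistinguishable from the ensemble average
dist = np.linalg.norm(rho_A - np.eye(d_A) / d_A)
assert dist < 0.1
```

The deviation shrinks like \(\sim\sqrt{d_A/d_B}\), which is why "the" state of an exponentially large system is a meaningless notion while "a typical" state has sharp local properties.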
For a page or so, Bousso transforms the pointer state and employs some broken, not-so-quantum terminology (and perhaps not only terminology) such as a "collapse" of the wave function. There is no physical process that could be called the "collapse" of the wave function. Moreover, this whole discussion about the pointers and measurements is completely redundant and only adds confusion to the text.
Finally, in the second column of the second page, we read:
Nine years later, a clueless Alice happens to fall through the zone without encountering Bob or the pointers. She does encounter the mode \(b\), ten light-years from the horizon, as well as \(\tilde b\), inside the horizon. She makes no particular measurement but just enjoys the vacuum. After all, her theory of black holes says that \(\tilde b\) must be identified with whatever purifies \(b\), whether or not Alice controls the purifying system or has any idea where it is. By Eq. (3), the purification happens to be a subspace of \(ep\). The associated donkey map is Eq. (5), and the result is the infalling vacuum (6).

You read it once, twice, thrice. You try to understand what the argument for the contradiction could possibly be. However carefully you think, you will fail. It makes no sense whatsoever. The first reason why it makes no sense is that Alice doesn't measure anything; she just "enjoys" the flat space. But if she's just on a vacation and measures nothing, her experience can't be used to derive any paradox, either. The word "enjoy" sounds like a joke, except that it seems to be an important part of Bousso's thinking.
Bousso seems to claim that he has found two derivations of the value \(N\) of an occupation number. One of them gives you \(N=0\) and the other gives you \(N=4\). That would indeed be a paradox except that a necessary condition for him to derive that \(N=4\) is to assume that \(N=4\) in the experiment. And a necessary condition for \(N=0\) is to assume \(N=0\). These assumptions can't hold simultaneously because \(N\) is a well-defined operator on the Hilbert space, or at least on the subspace of the Hilbert space that respects a macroscopic appearance of the black hole from an incoming observer's viewpoint. So there can't be any paradox.
Just try to answer the question: Why does Bousso think that Alice enjoys the vacuum? It's likely that an old black hole has a lot of vacuum with \(N=0\) (the occupation number is measured in a freely falling frame; by the Bogoliubov transformation, an observer trying to sit at a constant \(R\) will see lots of quasi-thermal Unruh/Hawking radiation with nonzero values of \(N\)) near its event horizon, on both sides, because it has already devoured what it could have devoured, but it's just not guaranteed. There's a nonzero probability that \(N=4\) and if \(N=4\), then the measurements with one pointer or 50 pointers etc. will imply \(N=4\). What do the pointers have to do with this simple thing?
There can't be any ambiguity for the value of \(N\). One may imagine that \(N\) is an occupation number of a field-theoretical mode on a curved spacetime resembling the Einstein-Rosen bridge, if we pick the particular terminology of \(ER=EPR\). So they just evolve according to the usual field (Heisenberg) equations. Operators on one slice are functionals of operators on another slice. Those relationships are calculable from the field (Heisenberg) equations. If the observables are related in some way, they are related and the measurements will agree. If they are not related, they are not related and the measurements may disagree. The answer is always unambiguous.
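The statement that the Heisenberg evolution fixes the answer unambiguously is just the picture-independence of quantum mechanics; here is a minimal toy check (the random Hamiltonian and the toy "occupation number" operator are my own choices, not anything from Bousso's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

# A random Hermitian Hamiltonian and a toy "occupation number" observable
H = rng.normal(size=(d, d))
H = (H + H.T) / 2
N_op = np.diag(np.arange(d, dtype=float))

# Unitary evolution over one unit of time, U = exp(-iH)
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

psi = np.zeros(d, dtype=complex)
psi[2] = 1.0

# Schrödinger picture: evolve the state, then compute <N>
schrodinger = np.vdot(U @ psi, N_op @ (U @ psi)).real
# Heisenberg picture: evolve the operator, keep the state fixed
N_t = U.conj().T @ N_op @ U
heisenberg = np.vdot(psi, N_t @ psi).real

# The predicted value is the same either way; there is no ambiguity
assert np.isclose(schrodinger, heisenberg)
```

Two "derivations" of \(N\) can therefore only disagree if they secretly start from different states or different operator identifications – which is exactly the circularity described above.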
Also, the Hawking radiation modes are linked (with some extra scrambling transformation – geometrically interpreted as a complicated "twisting" of the Einstein-Rosen bridges) – to some of the modes in the black hole interior. In the ER-EPR correspondence, this simply results from their proximity. The regions may look distant in the ordinary black hole spacetimes but because there are wormholes, there is also a sense in which they are very close to each other. So the field operators in these two regions may be seen to be equal, up to differences proportional to the very high-energy modes (that are approximately set to their ground state).
There can't possibly be any paradox.
Moreover, the whole game with the ket vectors is proof that Bousso is just spreading confusing fog. The "accent" of this text reminds me of people who haven't learned quantum mechanics well and who believe that it can only be formulated in Schrödinger's picture (with "collapses" that they imagine "materialistically"). If he believes that there are arguments showing \(N=0\) and \(N=4\) at the same moment – two different values of an observable – it must be possible to formulate the proof without any ket vectors. He wants to prove a strange claim about the observables, so the proof must be based on observables and the relationships between them (especially the Heisenberg equations of motion, the spectra of operators, and so on). There's no point in writing out the explicit ket vectors – except if he wants to obscure the situation and introduce lots of wrong assumptions into the game, such as the precise tensor factorization of the Hilbert space, which doesn't hold – and fails especially when we consider the physics of black holes.
On the remaining pages, Bousso tries to make his arguments with the pointers etc. – which have been a redundant source of fog from the beginning – even more complicated and confusing. The first two pages may at least be classified as a spectacularly wrong segment of a paper. But the rest is a case of unspectacularly wrong, excessive babbling.
I can't understand why this whole culture of "the equivalence principle has to be totally wrong" has spread in the quantum gravity literature. It's so self-evidently wrong and it's been discussed for decades. For two decades, we have known why similar would-be arguments that lead to paradoxes don't really work. Every expert was saying these things. Why didn't they protest 15 or 20 years ago?
It seems to me that the community of the quantum gravity or high-energy theoretical physics experts is really decaying away and within a few years, you will face a violent backlash even if you write down that \(1+1=2\). Raphael, Joe, others, can't you just stop posting this increasingly awful rubbish to the arXiv?