The Reference Frame


Sunday, September 8, 2013

Likely: latest Atlantic hurricane-free date at least since 1941

Posted on 10:54 PM by Unknown
Originally posted on September 4th. Now, 5 days later, it seems that no currently active system will grow into a hurricane, so the record will be broken – the latest first-hurricane date at least since 1941, indeed.
Remotely related: Henrik Svensmark et al. have a new paper (press release) on cosmoclimatology in Physics Letters A, experimentally arguing that UV rays increase the aerosol production from ozone, sulfur dioxide, and water vapor by the same factor even for nuclei above 50 nm in diameter – which may already be called cloud condensation nuclei. This strengthens his claims that cosmic rays influence the climate and falsifies some theories about the chemistry of the atmosphere. Via WUWT. See the previous TRF text on cosmoclimatology.
I have manually checked the dates of formation of the first hurricanes on the Wikipedia pages from the 1851 Atlantic hurricane season (older, sparser data are available at most on a "one page per decade" basis) through the 2013 Atlantic hurricane season. You should be able to manually edit the year in the URL to get to all the other pages.

This is what I found.

The first 2013 Atlantic hurricane hasn't started to form yet; only two 20-30 percent "glimpses" of a possible depression can be seen and they're likely to be destroyed by their collision with the land (and if they aren't, they will still be too weak to become a hurricane). It's September 4th. The probability is therefore high that this situation will continue past September 9th, i.e. next Monday. If that's so, 2013 will beat 2002 (Sep 9th) and the "most recent" hurricane seasons that could still beat the present one are 1941 (Sep 17th), 1922 (Sep 13th), 1914 and 1907 (the only two recorded hurricane-free seasons, with 1 and 5 named storms, respectively), 1905 (Oct 1st), 1877 (Sep 14th), and 1876 (Sep 12th).




For the sake of convenience, let me mention the first "hurricane birth date" for less spectacular years which were nevertheless late in the season: 1937 (Sep 9th), 1931 (Sep 6th), 1920 (Sep 7th), 1912 (Sep 10th), 1865, 1857 (both Sep 6th). Some seasons were very weak – 1952, 1939, 1930 – but some hurricanes materialized by the end of August, anyway.
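These first-hurricane dates can be ranked with a few lines of Python; the dictionary below is transcribed from the two paragraphs above (the variable names are my own, purely illustrative):

```python
# First-hurricane dates quoted above (year -> (month, day)).
# The hurricane-free seasons 1907 and 1914 are omitted.
first_hurricane = {
    2002: (9, 9), 1941: (9, 17), 1922: (9, 13), 1905: (10, 1),
    1877: (9, 14), 1876: (9, 12), 1937: (9, 9), 1931: (9, 6),
    1920: (9, 7), 1912: (9, 10), 1865: (9, 6), 1857: (9, 6),
}

# Rank the seasons by how late in the calendar the first hurricane formed;
# (month, day) tuples compare in exactly the calendar order we need.
latest_first = sorted(first_hurricane, key=first_hurricane.get, reverse=True)
# 1905 (Oct 1st) tops the list, followed by 1941 (Sep 17th)
```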

Most years see the first hurricane forming sometime in July. August births of the first hurricanes are rarer, much like beginnings in September or, on the contrary, in June (or even May). Several out-of-the-official-season hurricanes in January etc. mess up the calendar and I have overlooked them (treating them as if they didn't occur).




See also the AccuWeather and WUWT articles about the possibly looming new record.

After the unusually vigorous 2005 Atlantic hurricane season, we were drowning in predictions of ever stronger and ever more frequent hurricanes. A statistical evaluation of the 2006-2013 seasons shows that all the hurricane-related data randomly fluctuate in the usual intervals we have been used to for quite some time. These numbers are very volatile but there's no indication that something is increasing and everything suggests that 2005 was an exceptional year that is unlikely to be repeated often.

So far, we don't know for sure whether the late 2002 first-hurricane date will be beaten – I would bet that it probably will – and whether 2013 will join the shortlist of hurricaneless seasons (1914 and 1907: how many TRF readers remember those years?) – chances for Yes and No seem comparable to me. And yes, I think it's more likely than not that at least one hurricane was overlooked either in 1907 or 1914, due to the absence of satellites and similar devices, so it's perfectly plausible that the ongoing season will actually be the weakest one since 1851 (unless a hurricane materializes).

But what is clear is that the absence of any unusually strong hurricane activity after 2005 is just another example of the spectacular failures of the climate alarmists. Like with other failures, they never learn any lesson. They only cherry-pick the data that agree with their scientifically pathological opinion that a dangerous climate change should be underway and simply switch to something else whenever the clash of their idea with the data becomes too self-evident.

Whenever it becomes indisputable that the data have falsified a particular prediction of their "dangerous climate change" framework – and you may be almost sure that the data ultimately rule out every single one of them – they just switch to something else in which the data aren't sufficiently detailed, which is why sufficiently spun, cherry-picked anecdotes may be temporarily used to replace the data. This approach is unscientific and despicable.

It's not too important scientifically or socially whether the first 2013 Atlantic hurricane starts to form before September 9th but I will be personally checking what's going on above the Atlantic Ocean, just for the fun of it. Will you?
Posted in climate, weather records

Democrats of Europe, wake up!

Posted on 3:59 AM by Unknown
Daniel Cohn-Bendit is a notorious Franco-German leftist.

In 1968, he was a fighter at the barricades of Paris; his nickname has been Danny the Red (Dany le Rouge) ever since. In the 1970s, he would "love" children in an "anti-authoritarian kindergarten", which is why he also fought for sex with children to be legalized in the 1980s. Germany's Green Party recently made a huge U-turn and it now seems to claim that it was "unacceptable" to demand the legalization of pedophilia.

In a normal society, such a man would probably oscillate between a mental asylum and a prison, but we live in countries that have incorporated themselves into the European Union, so this chap is much more than a rank-and-file member of the European Parliament. He co-leads the Greens-Marxists in the EU legislative body and is planning to create a new, modern incarnation of the Communist International that should take over Europe.




That's the main "planned action" according to his and Felix Marquard's five-day-old article (also published by The New York Times),
The Fix For Europe: People Power,
where they plan to screw Europe and take the power from the people and democrats on the continent. Marquard is a dropout who did fine thanks to his rich parents and runs a P.R. agency – a spoiled brat Engels for the Marx-Bendit, I would say. In Poland, the article was reprinted under the headline Get rid of the nation state. Wow.

Normally, I would consider Danny the Red to be a marginal figure who is not worth an answer from many of us. His job in the EU Parliament is good for him (he announced that he won't run again in May 2014 elections) but I would doubt that his power goes beyond his preposterous individual existence. That doesn't mean that his opinions and plans aren't shared by a scary percentage of the people. But I would just feel that this particular politician doesn't have the power to change things today.




But I may be wrong. Czech ex-president Václav Klaus, his aides, and a few pro-freedom European politicians and pundits such as Nigel Farage have initiated the following manifesto:
Democrats of Europe, Wake Up!
You may join the supporters by sending a kind e-mail to democracy@vkinstitute.cz.

Klaus explains that some modernization of the language notwithstanding, Cohn-Bendit's and Marquard's rant structurally mimics Marx's and Lenin's projects and/or a roadmap to rebuild the EU into a federal melting pot of nations analogous to the USSR.

See also Bruce Bawer's analysis of Cohn-Bendit's and Marquard's rant, "Europe's Would-Be Masters", written for Front Page Magazine.

I agree with Klaus that those people do think in a more or less isomorphic way to the Marxists, Leninists, and Stalinists. I do agree that they're not a negligible fringe group that may be ignored. I do agree that democrats must show their teeth before the left-wing radicals insert their teeth into the flesh of necks of our European nation states and their democratic regimes.

It's just not clear to me whether the planned creation of yet another radical party is something that crosses the red lines, something that should suddenly energize and unite the opposition to these dangerous plans across the old continent. But maybe I will change my opinion later today. ;-) So far, it just seems to me that he will at most regroup some extremist parties enjoying something like 5 percent support in the EU.

Am I wrong?
Posted in Czechoslovakia, Europe, politics

Saturday, September 7, 2013

Confusions about the relationship between special relativity and general relativity

Posted on 5:35 AM by Unknown
Sabine Hossenfelder wrote about the confusions surrounding the relationship of Einstein's 1905 special theory of relativity and Einstein's 1915 general theory of relativity. Edward Pig Measure is one of the laymen who are somewhat confused; many others are vastly more confused.



First of all, I find it very important that all the discussions on the two blogs above are about physics topics that have been settled for 100 years, about the high-school understanding of relativity. I think it is desirable to emphasize this point because much of the confusion arises when complete crackpots such as Lee Smolin say or write totally wrong things about relativity and sell these totally wrong things as cutting-edge research.




Special relativity is a 108-year-old theory of space and time that correctly accounts for the new phenomena that are known to occur when the observers' speeds approach the speed of light. It is a principled theory that constrains what particular constructive theories of individual phenomena and their classes may say and what they mustn't say.




All other theories must be made compatible with the two postulates of special relativity:
  • Relativity postulate: the laws of physics have the same form in the coordinate systems of all observers moving at a constant speed in a constant direction (inertial frames)
  • Constancy of the speed of light: the speed of light is constant, \(c\), regardless of the speed of the source and the speed of the observer
Maxwell's theory of electromagnetism was actually compatible with those principles before relativity was found; that's why Einstein's good understanding of electromagnetism helped him discover special relativity. However, ordinary mechanics was only compatible with the first postulate (which is referred to as Galilean invariance in non-relativistic mechanics); it didn't respect the constancy of the speed of light because the speed of light was supposed to become \(c\pm v\) if an observer was moving relative to the aether – a preferred medium in which the speed of light is \(c\) (independently of the speed of the source!) – at the speed \(v\). The 1887 Michelson-Morley experiment made it clear that the speed was always \(c\), regardless of the speed of the observer.

So Einstein's special relativity primarily modified mechanics – mechanics was forced to change. For example, the relative speed between two bodies on a collision course along a line, whose speeds are \(u,v\), is no longer \(u+v\) but \((u+v)/(1+uv/c^2)\). And any other class of phenomena aside from mechanics and electrodynamics – hydrodynamics, aerodynamics, thermodynamics, etc. (although these are really theories derived from mechanics and perhaps electrodynamics, not fundamentally new ones) – had to be adjusted to agree with the two postulates. Particle physics only accepts theories that agree with relativity, too. For particle physics, this is so automatic – quantum field theory and string theory are the frameworks of choice and all of them are relativistic – that we don't even realize how much the possible "theories of particles" have been constrained by relativity.
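The relativistic velocity-addition rule just quoted is easy to play with numerically; here is a minimal Python sketch (the function name `add_velocities` is my own, not anything from the post):

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v, c=C):
    """Relativistic addition of two collinear speeds u and v.

    Returns (u + v) / (1 + u*v/c**2); the result never exceeds c.
    """
    return (u + v) / (1.0 + u * v / c**2)

# For everyday speeds, the correction to the Galilean u + v is negligible:
# add_velocities(30, 30) is indistinguishable from 60 m/s in practice.
# At relativistic speeds the sum stays below c:
# add_velocities(0.9*C, 0.9*C) is about 0.9945*C, not 1.8*C.
```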

The postulates imply – and Einstein was able to prove from them – that the length of objects shrinks in the direction of motion; the rate at which (any) clock is "ticking" slows down as the clock speeds up; the total relativistic mass increases with the speed; its conservation law is merged with the momentum conservation law into the 4-momentum conservation law; this also implies that the mass and energy conservation laws become one identical part of the 4-momentum law and (what we used to call) energy may be converted to (what we used to call) mass and vice versa via \(E=mc^2\), the best-known equation of relativity among laymen.
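All the kinematic effects in the previous paragraph are governed by the Lorentz factor \(\gamma=1/\sqrt{1-v^2/c^2}\); a small illustrative Python sketch (the helper names are mine, not standard terminology):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v, c=C):
    """Lorentz factor for a speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def dilated_time(proper_time, v):
    """Lab-frame duration of a process lasting proper_time on a clock moving at speed v."""
    return gamma(v) * proper_time

def contracted_length(rest_length, v):
    """Length of a rod moving at speed v along its own axis, as measured in the lab."""
    return rest_length / gamma(v)

# At v = 0.6c, gamma = 1.25: moving clocks run 25% slower,
# moving rods are 20% shorter.
```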

While mechanics (I really mean kinematics, the description of motion influenced by forces that are given and whose origin isn't analyzed) was adjusted to relativity in 1905 – it was the main point of it – the physics of gravity (the description of a particular force that causes the motion – and such descriptions belong to "dynamics", not "kinematics") remained mysterious because (among related problems), Newton's gravity seems to operate instantaneously which violates the speed limit, \(c\), that relativity imposes on the speed of propagation of any usable information.

Einstein spent the decade after the discovery of special relativity, 1905-1915, on attempts to reconcile the laws of gravity with the principles of special relativity. The result of this long but successful work, the general theory of relativity, pretty much inevitably and uniquely follows from special relativity (which is required to hold whenever the gravitational fields are negligible) and the equivalence principle (the statement that all bodies accelerate in gravitational fields with the same acceleration, which means that freely falling frames are indistinguishable from life outside gravitational fields, and must therefore locally preserve special relativity).

I will discuss GR as an unavoidable extension of SR momentarily. But let me first address a more trivial question:
Is SR applicable to phenomena in which objects accelerate?
The answer is, of course, Yes. Special relativity would be useless if it required all objects to move without any acceleration; after all, almost everything in the real world accelerates, otherwise the world would be useless. The correct claim similar to the proposition above is that the laws of physics have the same, simpler form in coordinate systems associated with non-accelerating observers. But that doesn't mean that we can't translate the predictions of a special relativistic theory into an accelerating frame. Yes, we can. It's as straightforward as a coordinate transformation. Fictitious forces will appear in the description. All of them are fully calculable.

We should point out that if it were impossible to consider accelerating observers, special relativity couldn't tell us anything about the twin "paradox". At least one of the twins, the astronaut, has to intensely accelerate during his life. But the total time measured by his clock – and by the aging of his organs, which is just another type of clock (not a very accurate one) – is clearly composed of the proper times of the tiny line intervals into which his world line may be divided. The infinitesimal pieces of his world line are straight so special relativity simply has to hold. When we compute, i.e. integrate, the total proper time along the world line, we will of course find out that the twin-astronaut will be younger than his brother who spent decades on Earth.
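The "integrate the proper time along the world line" prescription can be sketched numerically. The toy speed profile below is my own assumption (a constant 0.8c cruise, ignoring the brief acceleration phases); it shows the traveller's clock accumulating \(\gamma^{-1}\) of the Earth time:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def proper_time(speed_profile, t_total, n=100_000):
    """Integrate d(tau) = dt * sqrt(1 - v(t)^2/c^2) along a world line.

    speed_profile(t) returns the traveller's speed at coordinate time t;
    the result is the elapsed time on the traveller's own clock.
    """
    dt = t_total / n
    tau = 0.0
    for i in range(n):
        v = speed_profile((i + 0.5) * dt)  # midpoint rule
        tau += dt * math.sqrt(1.0 - (v / C) ** 2)
    return tau

# Toy trip: cruise at 0.8c for 10 years of Earth time.
# The stay-at-home twin ages 10 years; the traveller only 0.6 * 10 = 6 years.
traveller = proper_time(lambda t: 0.8 * C, 10.0)
```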

We don't need general relativity because the presence of acceleration doesn't mean that there's a gravitational field. The curvature of the spacetime is still zero. Acceleration is locally equivalent to gravity by the equivalence principle but the clever way to use it isn't to envision unnecessary gravitational fields but, on the contrary, to undo the gravity whenever we can by replacing it with acceleration combined with no gravity – and for this combination, special relativity is sufficient.

Not being able to produce this right answer to the twin "paradox" means not understanding special relativity at the high-school level (at least we did learn the basics of special relativity at high school, a pretty ordinary high school). It's not wise, deep, clever, or sophisticated to be doubtful about the usual resolution of the twin "paradox". It is nothing more than a sign of brutal ignorance. (Christine Dantas is among those who believe that special relativity doesn't imply that the astronaut-twin will be younger because acceleration makes it impossible to use the theory. Holy cow. This lady has had a big mouth about quantum gravity while high-school physics is apparently way too hard for her.)

Now, let me switch to general relativity again. Sabine promotes a particular definition of special relativity:
Ask some theoretical physicist what special relativity is and they’ll say something like “It’s the dynamics in Minkowski space” or “It’s the special case of general relativity in flat space”. (Representative survey taken among our household members, p=0.0003). But open a pop science book and they’ll try to tell you special relativity applies only to inertial frames, only to observers moving with constant velocities.
I don't think that it is downright incorrect to describe special relativity in Sabine's way. But I don't think it's the deepest or most natural way, either. More importantly, I do agree with the criticized books that at least something in special relativity does apply to observers moving with constant velocities only – the Lorentz symmetry only mixes the viewpoints of these observers and, consequently, the laws of physics only have the usual simple form in the coordinate systems connected with these observers. The difference between inertial and non-inertial systems is essential in special relativity and if that's the claim that Sabine criticizes, she is completely wrong.

Moreover, her "definition" of special relativity is useless. A definition is meant to be helpful to someone whose knowledge is at a lower level than the level at which the defined object is "obvious". If someone doesn't know special relativity, you won't help him much if your explanation assumes knowledge of general relativity because, you know, general relativity is harder than special relativity.

But there's another, more conceptual reason why I consider Sabine's definition to be a sign of her (and her spouse's, as we were told) shallow knowledge of the subject. What is the reason? Her definition implicitly says that general relativity is the fundamental set of insights, rules, and principles and special relativity is just a minor corollary of it. While it's true that special relativity is a limit of general relativity obtained for gravitational fields going to zero, the actual "hierarchy of power" is the opposite: general relativity is just one application of special relativity – the incorporation of the gravitational field in a special-relativity-invariant way. While general relativity is arguably the prettiest (and geometrically most non-trivial) classical application of the rules of special relativity, in principle it is on par with Yang-Mills theory or any other (special) relativistic field theory.

This claim of mine may be interpreted as a modern interpretation of the philosophy underlying relativity – and widely appreciated by most of the competent modern theoretical/phenomenological particle physicists (people who were clearly not included in Sabine's low-brow survey). But there's a sense in which it's ancient, too. What's the sense? Well, the insight is ancient because Einstein simply didn't have a choice when he was searching for a relativistic theory of gravity between 1905 and 1915. General relativity is the unique theory obeying the postulates of special relativity that describes the gravitational force – by which I mean a force (and we can prove that it's the force because such a force must be unique for a physical system) that respects the equivalence principle.

The gravitational field must be given by some components of the mass/energy/momentum-encoding stress-energy tensor. Because the strength of the field around a physical system as measured at infinity cannot change (in analogy with the field around a charge in electrostatics), it must be conserved quantities that source the gravitational field/influence. Because our goal is a gravitational force that depends on the mass, it's clearly the whole stress-energy tensor \(T_{\mu\nu}\) that must be involved in sourcing the gravitational field (\(T_{00}\) which must surely influence the gravitational field isn't a Lorentz-invariant quantity and the Lorentz transformations of this quantity involve all other components of the tensor). The corresponding "potentials" of the gravitational field must be organized as a symmetric tensor with two indices, too. It's \(h_{\mu\nu}\).

However, the derivatives of the field \(h_{\mu\nu}\) contribute to the energy as well, like the derivatives of any matter field. We are led to the question how the field sources itself. We're brutally constrained by the equivalence principle because physics in the \(h_{\mu\nu}\) field that linearly depends on the coordinates must be indistinguishable from physics outside any nonzero fields: a freely falling observer (in the linear \(h\)-field) mustn't be able to figure out that he's in a gravitational field at all.

This is only possible if there is a rather large symmetry that is able to identify configurations with different profiles of \(h_{\mu\nu}\) – identify some configurations where this field is nonzero (and even non-constant) with the configuration where it's zero. So this symmetry must be mixing the gravitational field \(h_{\mu\nu}\) with something that was nonzero to start with. It must have the same tensor structure and we conclude that it must be the pre-existing metric tensor \(\eta_{\mu\nu}\). The only symmetry that is able to produce the right number of symmetries acting on these metric tensors is the diffeomorphism symmetry under which the "total metric" \[
g_{\mu\nu}=\eta_{\mu\nu} + h_{\mu\nu}
\] transforms as a tensor field. So we're led to general relativity as the only possible (special) relativistic description of gravity that uses fields.

This was a sequence of arguments that tried to be as classical as possible. Modern particle physicists would present a similar but quantum-field-theory-based version of the ideas. Because it's locally sourced by the stress-energy tensor, gravity must involve spin-two fields. In the covariant, manifestly Lorentz-invariant description, spin-two fields have some positively definite components \(h_{ij}\) and perhaps \(h_{00}\) (which will also be mostly killed, despite its good sign) and some negative-normed components \(h_{0i}\), ghosts. The latter is unacceptable because it leads to the prediction of negative probabilities for some processes. So there must exist a symmetry that decouples all the ghosts. The symmetry has to be local and have a whole "vector" of parameters at each spacetime point. Coordinate redefinitions \(\delta x^\mu\) are the only solution. For gravity in terms of quantum fields, you need spin-two fields and the diffeomorphism invariance is necessary to get rid of their pathological, negative-normed components. The rest of the GR follows; the Ricci scalar is the lowest-order (in the number of derivatives) coupling compatible with the required symmetry but there may also be higher-order corrections (whose effect becomes negligible at long distances).
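For completeness – and as a sketch using the standard textbook form rather than anything spelled out above – the linearized diffeomorphism symmetry that decouples these ghosts acts, to lowest order in \(h\), as \[
h_{\mu\nu} \to h_{\mu\nu} + \partial_\mu \xi_\nu + \partial_\nu \xi_\mu,
\] where \(\xi_\mu(x)\) is the "vector of parameters at each spacetime point" mentioned in the previous paragraph; it is exactly this gauge freedom that removes the negative-normed components \(h_{0i}\).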

Some people would declare all the derivations above to be heresies because they think it is a blasphemy to ever write the metric tensor as a sum of two or several pieces because such a blasphemy contradicts the holy beauty of general relativity as written in an unwritten commandment somewhere. ;-) The price they pay for this medieval, unjustifiable, irrational, stupid taboo (the commandment really says "you shall never make your hands dirty by any science that actually applies to a situation in the real world or answers some questions beyond the questions whose answers you have been given to start with, by science that requires you to write anything else than the most beautiful form of the basic equations") is very high: They can't understand some key facts about modern physics, e.g. that and why the general theory of relativity is unavoidable given the validity of special relativity and the existence of gravity sourced by the energy-and-momentum density and their fluxes/currents.

Many people in the Backreaction discussion are confused about many other things.

For example, is a charged object sitting somewhere on the Earth's surface emitting electromagnetic and/or Unruh and/or gravitational radiation?

The answer is, of course, No. If it were radiating in any of the three ways (to be precise, by radiation I mean sending physical photons, gravitons, or other particles to infinity), it would have to lose energy to avoid the violation of the energy conservation law. But the charged object is already sitting at a place where the energy is minimized so there's no way to extract more energy out of the particle.

Relatively to a freely falling frame, the charged object sitting on the Earth's surface is accelerating so it should emit all three kinds of radiation, some people could argue. If it emits no radiation, doesn't it violate the equivalence principle?

No, it doesn't. First of all, the equivalence principle is only guaranteed locally. But in the previous paragraphs, we were asking whether particles are emitted to infinity. This requires us to connect the vicinity of the Earth with infinity, to compare them. But such a global connection turns the existence of the Earth's gravitational field into an objective fact. There exists no flat-space-based equivalent description of a region that would include both Earth's vicinity as well as the asymptotic region at infinity. So the equivalence principle isn't really applicable. There's no justifiable way to argue that the charged sitting object should emit radiation.

There are other ways to argue and reach the same conclusion.

For example, the equivalence principle identifies the experience of a freely falling observer with those of an inertial observer in the flat spacetime. But the identification only holds if "all other factors are equal". The freely falling observer who is going to hit the Earth's surface soon doesn't have "all other factors equal". In particular, there may be some extra radiation coming from the rest of the Universe. It just happens that the radiation is such that it perfectly cancels the would-be electromagnetic/Unruh/gravitational radiation of the charged object sitting on the Earth.

To make this discussion really complete, I would have to describe a formalism that has something to cancel at all and distinguish the different amounts of radiation as seen by a nearby static, nearby accelerating, or infinitely distant detector. The discussion could get unnecessarily messy and repetitive. But my point that shouldn't get lost in this technical material is that only the black holes emit the Hawking radiation. One actually needs the horizon for that. If there's no horizon, there's no energy loss by the Hawking or another acceleration-based radiation. (And this 1999 paper is just wrong. It's not the only one.)

Why does the horizon matter? If there's the horizon, one simple fact holds: the black hole interior can't possibly send any radiation (positive-energy one or a "compensating one") in the outward direction; nothing gets out of the black hole. That's why the frame of an observer who is freely falling into a black hole (with a horizon) is as equivalent to an inertial observer in an empty space as you can get. He could have been freely falling throughout his life which explains that no radiation was going in his direction.

On the other hand, there's no radiation going from the black hole interior, against him, either. It's forbidden by the blackness of the black hole. It's this latter property that doesn't hold for the Earth. The Earth imposes different boundary conditions on the surface than the black hole enforces on the event horizon. If the Earth were a conductor, the electrostatic potential would vanish on the surface. The relevant modes of waves would be standing waves above the Earth's surface. While the condition "killing" one-half of the modes in the black hole case says that "nothing is coming in the outward direction", the conditions are different for the Earth: "no waves are inside the conducting Earth". The latter condition is past-future-symmetric, unlike the condition for the black hole.

The vacuum is Unruh and electromagnetic radiation-free in the "most natural frames". For black holes, it's the freely falling frame because you can just freely fall and you will never notice that something is unnatural about that frame (the singularity kills you before you realize that). That's why there's no radiation in this frame while the frame of an observer keeping himself above the horizon by jets experiences Unruh radiation that penetrates through the black hole's gravitational field and becomes real, physical Hawking radiation at infinity.

For the Earth, the most "vacuum-like" frame is one associated with the surface because the freely falling observer will hit the Earth's surface and the headache will convince him it's not the frame most similar to the empty space. ;-) So the Earth stabilizes all the surrounding fields relatively to its static surface and relatively to this frame, there's no radiation – and frames accelerating relatively to the surface's frame will see some radiation. Of course, a semiclassical analysis of GR coupled to electromagnetism offers you a more reliable but less funny derivation of the same conclusion.

One should emphasize that the Unruh/Hawking radiation for the Earth, even if there were any, would be ludicrously weak. The typical wavelength of the emitted photons would be comparable to \(c^2/g\) which is about \(10^{16}\,{\rm meters}\), not far from a light year. It's clearly just an academic debate for the Earth as the very weak radiation would be totally unobservable – dozens of orders of magnitude weaker than the observable one. But it would still be an inconsistency if stable objects and particles like that radiated because of some incorrectly applied equivalence principle.
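The order-of-magnitude estimate \(c^2/g\) is easy to verify in two lines (the constant values below are the standard ones, not taken from the post):

```python
C = 299_792_458.0        # speed of light, m/s
G_SURFACE = 9.81         # Earth's surface gravitational acceleration, m/s^2
LIGHT_YEAR = 9.4607e15   # metres in one light year

wavelength = C**2 / G_SURFACE  # characteristic Unruh wavelength, metres
# about 9.2e15 m, i.e. roughly one light year, as claimed above
```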



Off-topic: Mr Ilja Hurník (*1922) died. He was a serious Czech composer of highly non-classical music for classical instruments and a piano virtuoso, but people like your humble correspondent know him as the author of small pieces such as the "Merry Postman" (yes, he's ringing the bell and knocking on the door) and "Little Soldier", which I liked to play when I was 8 or so. ;-)
Posted in science and society, string vacua and phenomenology, stringy quantum gravity

Friday, September 6, 2013

Yo-yo banned in Syria

Posted on 9:35 PM by Unknown
Blamed for drought by Muslims

BEIRUT (Syria), [date]. Drought and severe cold is disastrously affecting the cattle in Syria, and the Muslim chiefs at Damascus have attributed the wrath of the heavens to the recent introduction of the yo-yo.

They say that while the people are praying for rain to come down from above the yo-yo goes down, and before reaching the ground springs up through the subtle pull of the string.




The chiefs interviewed the Prime Minister, and exposed the evil influence of yo-yos, so they were immediately banned.




Today the police paraded the streets and confiscated the yo-yos from everyone they saw playing with them.



Source: Barrier Miner (Broken Hill, New South Wales: 1888 - 1954), 23 January 1933

The censored date at the top was January 21st. I changed "Moslems" to "Muslims" so that it's not clear from the beginning that the text is ancient. Educated readers realized that anyway because Beirut hasn't belonged to "Syria" since 1943 (the correct name of the state should have been The French Mandate for Syria and the Lebanon, anyway) and because Muslim chiefs will only return to command police in Damascus after the possibly looming war.

At any rate, the similarity with the IPCC-backed religion cannot be overlooked. People see two things happening at about the same time (one of them has been happening for billions of years but they don't care), they decide that correlation and even coincidence is the same thing as causation, and the rest is just about making sure that the law enforcement forces enforce this deep life-saving insight. ;-)

The yo-yos probably resemble the rain droplets. Before they can reach the ground thanks to the gravity of the prayers, the force from an evil string returns them to the clouds. The springs, if any, stretched between the clouds and the rain droplets may look too weak but that's OK because they're surely strengthened by positive feedbacks. The debate is over.

Thanks to Steve Goddard for the URL
Posted in climate, Middle East | No comments

Snowden: Internet encryption useless against eyes in NSA, GCHQ

Posted on 3:01 AM by Unknown
HTTPS, SSL, and VoIP only good against little fish

Edward Snowden has provided The New York Times and The Guardian and others with some eye-catching revelations:
N.S.A. Able to Foil Basic Safeguards of Privacy on Web (NYT)

NSA and GCHQ unlock privacy and security on the internet (Guardian)
The two U.S. and U.K. intelligence agencies "are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic" according to the Director of National Intelligence, who was quoted in the latest Snowden document about the $52 billion black budget. See also the skeptical, technically sophisticated remarks by Wired. HTTPS, SSL, and VoIP are no longer safe; correctly implemented strong encryption still seems to be fine.



Of course, I got a bit excited: Have the agents finally built operational quantum computers? Have they made some progress that proves that \(P=NP\), after all?




Well, not really. Already the subtitle of the article in The Guardian makes it clear that the weapons that the agencies use aren't some groundbreaking advances in quantum computation or classical algorithms. Instead, they abuse the weaknesses of the human factor. Some big progress occurred in 2010, we're told.




So it seems that $250 million is spent every year to "encourage" the tech companies to insert weaknesses (backdoors and trapdoors) into their products. I suppose that to decrypt a message using state-of-the-art encryption programs, you either need to input a very long nonsensical sequence of characters that changes every day or you have to type "My name is Bond, James Bond". ;-) This sounds like a joke but it may be very close to the truth, too. NSA influences international agreements about encryption standards. Lots of supercomputers are running to break the codes by brute force but this hard work would be useless if the agencies didn't have secret agreements with folks in the tech companies.
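To see why the backdoors matter more than the supercomputers, here is a trivial sketch of the brute-force arithmetic (the guessing rate below is an assumed, generous figure, not a real capability of any agency):

```python
# Rough arithmetic showing why brute force alone is hopeless against
# a correctly implemented 128-bit key; the guessing rate is an
# assumed, generous figure.
keyspace = 2 ** 128         # number of possible 128-bit keys
guesses_per_second = 1e18   # assumed aggregate supercomputer rate
seconds_per_year = 3.156e7

years = keyspace / guesses_per_second / seconds_per_year
print(f"~{years:.1e} years to exhaust the whole keyspace")
# -> ~1.1e+13 years to exhaust the whole keyspace
```

That's about a thousand times the age of the Universe, which is why the secret agreements with the tech companies are the part of the program that actually does the work.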

Analysts aren't allowed to ask or speculate about the sources of the data or the methods used to make the data readable. Having watched many superagent movies, I won't ask or speculate, either. NSA claims that without this control, the U.S. couldn't allow access to cyberspace to remain unrestricted. This claim surely sounds tough but it may have a point, too. A GCHQ team works with the "big four": Google, Facebook, Hotmail, Yahoo.

Well, as long as I feel that those agencies don't use their behind-the-scenes powerful tactics to harm free individuals for something that should always remain legal, I find the reports above just a little bit chilling. On the other hand, every capability or influence may be abused and what we're hearing seems to describe extraordinary powers, indeed. It still sounds a bit more plausible when these powers belong to institutions whose composition may be refreshed according to the desires of the American (and British) voters.



Does this serious Gentleman have his own capabilities, too?

The British GCHQ seems to be among the "top two". That couldn't stop Dmitry Peskov, a Putin spokesman, from overlooking Northern Ireland and calling the United Kingdom "just a small island no one listens to" that plays no major role in world politics and whose Chelsea and other upmarket London districts are being bought up by Russian oligarchs. Cameron et al. claim that they believe that the U.K. continues to be a superpower. It's up to you to decide whose perspective is more ludicrous. ;-)
Posted in computers, mathematics, politics | No comments

Thursday, September 5, 2013

A universal derivation of Bekenstein-Hawking entropy from topology change, ER-EPR

Posted on 2:16 AM by Unknown
I have been intrigued by topology change in quantum gravity, especially its Euclidean version, for 15 years or so. Since the beginning, I liked a sketch of a derivation (that I invented) of the Bekenstein-Hawking entropy of a black hole that was based on a wormhole connecting two tips of the Euclidean black hole in the \(rt_E\) plane.



Ignore the wormhole-related captions.

Before the ER-EPR correspondence, I would interpret the two planes on the picture above (lower, upper) as spacetimes in the ket vector and the bra vector, respectively, and this need to double and complex conjugate the whole spacetime made the details of the argument confusing because the thermal calculation (which is inevitably connected with the cigar-like Euclidean black hole pictures) inevitably involves a trace over ket vectors (or bra vectors but not both).

Fortunately, one may now present the whole argument without any bra vectors. Thanks to Maldacena and Susskind, the doubling of the spacetime (note that there is the upper and lower plane on the picture above) may be interpreted as the presence of two distinct spacetimes – or two faraway regions of the same spacetime; it won't really make a difference. With this reinterpretation of the pictures, I am more satisfied with the argument.




Try to calculate a thermal correlation function in a spacetime (or a pair of spacetimes if you really view the two planes as disconnected) of temperature \(1/\beta\) which will be chosen to agree with the black hole temperatures below. The operators in the correlation functions don't matter; assume that they are low-energy operators far away from all the celestial bodies we will consider.

We want to know how much the states with two black holes at places \(A,B\) (in arbitrary microstates) contribute to the correlator; and how much the states with two neutron stars at the same places \(A,B\) contribute. The ratio of the two contributions should be \(\exp(S_A+S_B)\) where the terms in the exponent are black hole entropies times some subleading corrections (all neutron stars' entropies will be negligible). Just to be sure, the contribution from two black holes should be exponentially larger. I will take the two celestial objects to be macroscopically the same so the ratio should be \(\exp(2S)\) where \(S=S_A=S_B\).




To confirm the Bekenstein-Hawking formula means to prove that the contribution from the two black holes is \(\exp(2A/4G) = \exp(A/2G)\) times greater than the contribution from the two neutron stars.

By the neutron stars (a nickname chosen for the sake of simplicity), I really mean a celestial body that is on the verge of collapsing to a black hole. I want \(g_{00}\) to be very small right above the surface of this body. Because \(|g_{00}|\) wants to be even smaller in the stellar interior which makes it impossible for \(|g_{00}|\) to be near zero above the surface, I really need to consider a hollow star – a shell that is protected against the collapse by some skeleton or light gas inside or whatever. I hope that these awkward technicalities don't really matter and can be replaced by a less problematic treatment. Maybe it's enough to compare the two-black-hole contribution with the contribution having no objects at those places at all.

For the sake of clarity, let's assume that the black hole radii are equal to a few miles (a solar-mass black hole). The thermal correlators may be calculated from the path integrals\[

\langle \cdots \rangle = \int {\mathcal D}\,{\rm fields}(x,y,z,t_E)\,\exp(-S_E)\, (\cdots )

\] over the Euclidean geometries with Euclidean field configurations in a spacetime whose Euclidean time coordinate \(t_E\) has the periodicity \(\beta\).
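To attach numbers to the solar-mass example, here is a back-of-the-envelope sketch in SI units (rounded constants; an illustration, not part of the derivation itself):

```python
import math

# Numbers for the solar-mass example above: Schwarzschild radius
# r_s = 2GM/c^2 and Bekenstein-Hawking entropy S/k_B = A/(4 l_P^2),
# with rounded SI constants.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
M_sun = 1.989e30    # solar mass, kg

r_s = 2 * G * M_sun / c ** 2       # ~2.95 km, i.e. "a few miles" across
A = 4 * math.pi * r_s ** 2         # horizon area, m^2
l_p_squared = G * hbar / c ** 3    # Planck length squared, m^2
S_over_kB = A / (4 * l_p_squared)  # dimensionless entropy

print(f"r_s = {r_s:.3e} m, S/k_B = {S_over_kB:.2e}")
# -> r_s = 2.954e+03 m, S/k_B = 1.05e+77
```

The \(\sim 10^{77}\) entropy is the huge exponent that the two-black-hole contribution to the path integral has to reproduce.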

Now, we don't want to study the detailed microscopic physics of the neutron stars. Their entropy (and any non-black-hole celestial object's entropy) is negligible in comparison with the black hole entropy. We don't even want to specify what exact short-distance degrees of freedom are responsible for the black hole entropy. Indeed, the goal is to derive the Bekenstein-Hawking formula "universally", for every quantum theory that resembles quantized general relativity in a limit.

But yes, in this geometrized picture of the degrees of freedom, all the entropy is carried by some degrees of freedom – field modes and their generalizations – that may be attached to the stretched horizon, a Planckian vicinity of the region that will host a throat in a minute.

To neglect the short-distance physics, why don't we integrate out all the field modes with wavelengths shorter than 1 millimeter (to be specific again)? When you do so, the two-neutron-star contribution looks like two disconnected pieces, essentially two planes (the upper and lower plane) not connected by the throat shown on the picture at the top. Even if there has been some entanglement between the stars, it was way too weak to produce the smooth throat. Instead, the thin tunnels disappeared as we integrated the high-energy degrees of freedom out. The stellar interior isn't clearly shown on the picture – the picture only shows the stellar exterior – but it's somewhere and the Ricci scalar \(R\) is essentially zero everywhere. Again, maybe I should replace the neutron stars by empty regions of space throughout this argument; I wanted the two compared situations (with and without black holes) to be as similar as possible, however, so that the difference may be blamed on the throat, as we will see momentarily.

What about the two-black-hole contribution?

Maldacena and Susskind taught us that the Hilbert space of 2 similar black holes – essentially \(\mathcal{H}_{2BH}=\mathcal{H}_{1BH}\otimes \mathcal{H}_{1BH}\) – is isomorphic to (really the same as) the Hilbert space of an Einstein-Rosen bridge geometry that connects them. Despite the apparently different topologies of the two descriptions, they're the same Hilbert spaces. The bridge-based description is better for highly entangled states in the Hilbert space; the 2 isolated black hole description is better for the nearly unentangled states of the two black holes. (Note that "highly entangled states" and "almost unentangled states" don't form linear spaces because the properties "entangled" and "unentangled" aren't closed under addition.) The two-black-hole states that strongly entangle the two black holes look like smooth bridges; however, there are highly excited, unsmooth bridges that must describe all the other two-black-hole microstates as well.

In general, the two planes – see the picture at the top – are connected by "some" throat. When you integrate all the field modes shorter than one millimeter out, you also do it for the gravitational modes so the geometry can't be too thin or curved. In effect, the gradual integration out thickens the throat in the black-hole case while it cuts the throat(s) in the stellar case. When you're finished, the throat itself is about one millimeter thick. It was a randomly chosen distance scale that is much longer than the Planck scale but much shorter than the black hole radius.

Looking at the two-neutron-star and two-black-hole Euclidean geometries, they look very similar. The only difference is the throat near the event horizon (or near the would-be event horizon in the case of the stars). In that region, the \((d-2)\)-dimensional area of the angular variables is constant, \(A\), which simply enters as an overall factor to the difference of the actions, and the major components of the curvature tensor only exist in the two-black-hole case, namely the Riemann components \(R_{rtrt}\) and its three copies dictated by the Riemann tensor's symmetries (\(t\) really denotes \(t_E\) as an index).

(The throat in the black-hole case isn't Ricci-flat; the nonzero Ricci tensor must be blamed on the high-energy matter that resides in the stretched horizon(s).)

So the two contributions to the path integral – from the two neutron stars; and from the two black holes – only differ by the extra "wormhole" in the two-black-hole case. This wormhole is a "handle" of a Riemann surface and the exponent of the Euclidean path integral is more negative in the black-hole case (I hope) relative to the neutron-star case by the factor\[

\exp[-(S_E^{\rm BH}-S_E^{\rm neut})] =
\exp\left(\!-\frac{A\int d^2 x\sqrt{|g|}R_{(2)}}{16\pi G}\right)=\dots

\] over the handle (wormhole). But the two-dimensional integral – the Einstein-Hilbert action above – is proportional to the Euler characteristic\[

\chi = \frac{1}{4\pi}\int d^2 x\,\sqrt{|g|}R_{(2)}.

\] Note that a sphere of radius \(a\) has \(R_{(2)}=2/a^2\) and \(\chi=2\). Each added handle (which has a negative curvature \(R_{(2)}\) in average) reduces the Euler character by two and (therefore) the integral of \(\sqrt{|g|}R_{(2)}\) by \(8\pi\). When you substitute this \(8\pi\) decrease above, it becomes an increase of the exponent due to the extra minus sign in the exponent and you will see that the two-black-hole contribution is greater by the factor of \[

\exp\left( \frac{A\cdot 8\pi}{16\pi G} \right) = \exp\left( \frac{A}{2G} \right),

\] exactly as expected from the Bekenstein-Hawking entropy of two black holes. This multiplicative increase implies that there are \(\exp(A/4G)\) black hole microstates per black hole whose precise identity doesn't significantly affect the correlator we agreed to compute. So if we trace over them (and we do so in a thermal calculation), they just influence the result by the simple multiplicative factor (the number of these microstates).
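The normalization of the Euler characteristic used above can be checked numerically on the round sphere (a minimal sketch; the answer must be 2 for any radius):

```python
import math

# Numerical check of chi = (1/4pi) * integral of sqrt(g) R_(2) over
# a round 2-sphere of radius a, where R_(2) = 2/a^2; the result must
# come out 2 independently of the radius.
def euler_char_sphere(a, n_theta=2000):
    R2 = 2.0 / a ** 2                  # constant scalar curvature
    dtheta = math.pi / n_theta
    integral = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        # sqrt(g) = a^2 sin(theta); the phi integral contributes 2*pi
        integral += a ** 2 * math.sin(theta) * 2.0 * math.pi * dtheta
    return R2 * integral / (4.0 * math.pi)

print(round(euler_char_sphere(1.0), 6))   # -> 2.0
print(round(euler_char_sphere(3.5), 6))   # -> 2.0
```

The radius independence is of course the point: the \(8\pi\) per handle in the argument above is topological, not geometric.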

You may have some doubts about the sign of the Euclidean Einstein-Hilbert action used above. I have some doubts as well. I can enumerate about 6 things one must be careful about that may lead you to a wrong sign but I am not sure whether I am not missing some other sign flips. The probability that I keep on committing a sign error here is too close to 50 percent at the end ;-) which is why I must add that a more careful scrutiny is needed.

This argument may arguably be generalized to derive Wald's entropy formula for a more general action including higher-derivative terms. In these cases, one still has \(R_{rtrt}=2\pi \delta^{(2)}(r,t_E)\) per black hole located at the horizon and if we treat this modification of the Riemann tensor perturbatively, the change of the gravitational action produces Wald's entropy formula instead of the Bekenstein-Hawking formula above.

Incidentally, I think that quite generally, the black hole entropy must also be interpretable as the total order/volume of an approximate symmetry group of a given spacetime because a black hole may be interpreted as a codimension-2 "cosmic string" in the Euclidean spacetime (which is analogous to 7-branes in F-theory and requires us to study the monodromies). But why this gives the right results in weakly coupled string theory (where you have a \(U(1)\) for each free field-theory mode produced by the string theory); pure \(AdS_3\) with the monster symmetry group; and in BTZ-black-hole-based \(AdS/CFT\) calculations will be reserved for other blog entries, much like the connections of the ideas above with the representation of microstates as Mathur's fuzzballs.

String/M-theory gave us numerous pictures of the microscopic structure of the black holes. Those usually make it hard to see the locality in the bulk (and even hard to see into the black hole interior) and difficult to assign the degrees of freedom to the locations in the bulk. While unitarity etc. is manifest in these string/M-theoretical pictures, various geometric properties are less clear. Realizations such as the text above are meant to clarify all the remaining secrets of the black holes that are "universal" and independent of the microscopic description of the black holes.
Posted in stringy quantum gravity | No comments

Wednesday, September 4, 2013

Nathaniel Craig's State of the SUSY Union address

Posted on 10:50 AM by Unknown
I have known Nathaniel Craig since he was a brilliant Harvard undergraduate who was attending graduate courses – at least my string theory course (I believe he was the best student in the room). This young Gentleman has written 37 papers or preprints (if you subtract some namesakes) and the last one among them is sufficiently pedagogic for you to be interested in it:
The State of Supersymmetry after Run I of the LHC
These 71 pages are based on his talk at a June 2013 workshop.




The first section is an introduction. At the end of it, on page 5, Nathaniel summarizes 5 positive reasons why the LHC has strengthened our belief that SUSY is right and relevant and 1 way in which it has weakened the belief.

In the following section, he discusses the expectations – naturalness and parsimony (essentially minimality) of the right supersymmetric models. The section is summarized by an expected ordering of the superpartners' masses and reasons for this ordering.




The third section is about our knowledge, especially various limits. In this section, you start to encounter lots of handwritten yet colorful cartoons that reproduce various graphs that you might think only computers can draw well. ;-) Colorful, electroweak, third-generation, Higgs-related superpartners are given special attention.

The fourth section is about indirect limits, mainly ones from various rare decays.

Implications of the Higgs and its suddenly known (SUSY-compatible) mass as well as Standard-Model-like couplings are discussed in Section 5.

There have been no signals proving SUSY reported by the LHC yet. This disfavors the minimal naive models and Nature reconciles SUSY with the observations in at least one of two ways: by breaking the signal relative to the most visible naive models or by breaking the spectrum.

Section 6 is dedicated to breaking of the signal. It's harder to see SUSY if the spectrum is compressed or SUSY is stealth or SUSY is R-parity-violating. Compressed spectrum means that the LSP isn't much lighter than the colored superpartners. If that's so, not many particles may be produced when the colored superpartners decay to the LSP and something else. Moreover, the missing transverse energy tends to cancel as it's copied from the oppositely moving colored superpartners.

Stealth supersymmetry has a light LSP (usually outside the MSSM) which decays into something truly "almost invisible", like a light gravitino, and its R-even superpartner whose mass is just a bit lighter than the LSP mass. This R-even superpartner consequently decays into well-known SM particles so almost nothing new – and, more importantly, almost no missing energy – is produced in the reaction.

R-parity violation makes it harder to economically explain dark matter and may worsen problems with the proton decay. For the latter reason, RPV operators should still preserve either lepton or baryon number. SUSY becomes less visible because the (new) superpartners may decay entirely into SM particles again.



Section 7 is about breaking of the spectrum. Natural SUSY became a newly recycled term for SUSY models where only particles that are "really needed" for the lightness of Higgs' being are light – especially the third-generation quarks (primarily the stops). Light stops have been discussed on TRF many times, of course. This lightness of the third generation should ideally be connected with the heaviness of the third generation of SM fermions. Such models are OK with the LHC data because the data still allow light third-generation sleptons and squarks; and there's nothing unnatural about the heavy (and safely LHC-compatible) first two generations of sfermions. Nathaniel discusses various strategies to obtain natural SUSY models by choices in the mediation.

By supersoft SUSY, he means a different way of breaking the spectrum. The squarks of all generations are comparably light but the gluino is much heavier which is enough to suppress the production of superpartners at the LHC (which is mostly performing gluon-gluon collisions, using a microscopic perspective). This would be unnatural in the minimal models but it's OK if the gluino is a Dirac particle, something that I like so it's been repeatedly discussed on this blog.

Nathaniel discusses one more unusual way of breaking the spectrum, folded or colorless SUSY, in which the relevant superpartners don't carry any color, unlike their known SM partners. I don't understand how this could be possible and will study this tonight. (I see, they're just some non-SUSY models that also cancel quadratic divergences but in a more general way. This looks contrived to me and the only way string theory could endorse such things is via some non-supersymmetric orbifolds.)

Focus point SUSY – another way to break the spectrum, a way that is considered rubbish by Nima Arkani-Hamed, by the way – also gets a dedicated subsection.

The final subsection of Section 7 is about minisplit SUSY – going in the direction of split SUSY by Arkani-Hamed et al. but not that extreme. In this approach, one sacrifices naturalness but tries to respect all the other attractive conditions.

The final Section 8 is dedicated to thoughts about the future and Nathaniel's recommendations on how people should approach the 2015- LHC run at 13 or 14 TeV. Acknowledgements and 93 references are the only other things awaiting you after that section.
Posted in string vacua and phenomenology, stringy quantum gravity | No comments

Tuesday, September 3, 2013

Did soot melt glaciers in the 19th century?

Posted on 11:09 PM by Unknown
Media including NBC focused our attention on a paper in PNAS,
End of the Little Ice Age in the Alps forced by industrial black carbon
by Thomas Painter and 5 co-authors from Caltech, Michigan, Innsbruck, Davis, and Boulder. The claim is simple: the post-little-ice-age melting of glaciers in the 19th century started well before CO2 could possibly matter, which is inconvenient, so a different explanation has to be found. The explanation is dirty snow from the industrial revolution.



Well, I find it plausible. Throughout the late 18th and the whole 19th century, they didn't care about clean air at all. In some sense, they were happy people who hadn't devoured the apple of knowledge yet.




Just imagine: their lives could have been shortened by a year or a few years due to soot and other pollutants. But the advantage was that they didn't have to be constantly afraid of, or paranoid about, the dirty air. They could build without any environmental restrictions. They could freely live in these environments, too. I often tend to think that our modern obsession with environmental standards has brought us more unhappiness than benefits.




So there was enough soot around. Soot makes snow dirty. Dirty snow is darker and absorbs more light. That's why it heats up and melts. Moreover, the melting point of a mixed compound is generally lower – such mixtures tend to be liquid at temperatures at which pure compounds may still be solid crystals.
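As a toy illustration of the dirty-snow mechanism (the albedo and insolation values below are made-up round figures, not numbers from the paper):

```python
# Toy estimate of the extra sunlight absorbed when soot darkens snow.
# The albedo and insolation values are illustrative round numbers,
# not figures from Painter et al.
insolation = 200.0    # W/m^2, rough average sunlight on the glacier
albedo_clean = 0.85   # fresh snow reflects most incoming light
albedo_dirty = 0.65   # sooty snow reflects noticeably less

extra_absorbed = insolation * (albedo_clean - albedo_dirty)
print(f"extra absorbed power: {extra_absorbed:.0f} W/m^2")  # -> 40 W/m^2
```

Even a modest darkening of the surface yields a local forcing of tens of watts per square meter, which is the right ballpark for the mechanism to matter.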

So the only question was whether enough soot was produced and whether it could get sufficiently far to affect the Alps because, you know, soot isn't a gas so it can't just mix with the rest of the atmosphere. Soot is made of solid carbon particles, stupid.



Emil Škoda's factory in my hometown of Pilsen in the 1880s. Note that the number of chimneys had grown by 1938, as shown by a stamp from June 1938 released shortly before the Third Reich began to damage Czechoslovakia. The stamp clearly shows that before the war, they were still immensely proud of all the pollution.

The paper talks about forcings between 9 and 35 watts per square meter. This is vastly higher than the "dangerous" forcing often attributed to the doubling of CO2, 3.7 watts per square meter (this is an IPCC-approved number, I am not saying anything controversial at all: only the translation of this forcing to a temperature change is controversial because it also depends on the sensitivity). Nevertheless, this soot effect, which is between 3 and 10 times stronger than the effect of all the CO2 that we have added and will add between 1750 and 2100 (distant future!), even though it could have been created within one 19th century industrial season, is apparently quantitatively discussed in the climatological literature for the first time.

Just try to appreciate how immensely irrational this situation is. We were (and to a lesser extent, we still are) bombarded by kilotons of would-be scary junk about the threats posed by carbon dioxide, we are encouraged to pay trillions of dollars or even to qualitatively change our lifestyle. But an effect – an obvious effect in schoolkids' eyes – that seems to be 3-10 times stronger hasn't been really discussed or quantified for all the decades when the "climate change" hysteria was around and when researchers into this "problem" were getting tens of billions of dollars for investigating it. They just missed a 3-10 times stronger effect than their pet effect. They missed many other things, too.



While I find it plausible that soot mattered for the glaciers in the Alps, I think that more solid papers have to be written before we can be sufficiently certain about it to care about the research at all. But at the end, I am unlikely to believe that soot is the right explanation for most of these changes to the glaciers that occurred in recent millennia.

For example, the glaciers were probably nearly absent in the Alps – some mountains in the Roman Empire – 2,000 years ago. Was this absence due to man-made soot, too? Well, we might be underestimating the industrial activity of the Roman Empire – I actually do believe that in many surprising respects, they could compete with us. But just the overall human population was too small. The whole Roman Empire had 50+ million people during Augustus' reign. Well over 500 million live on the territory today. I have some trouble imagining that they produced much more than 10 times more soot etc. per capita than we do today but it's not quite impossible, I think. The empire had to build on lots of very primitive, environmentally unfriendly "industry".

If we penetrate deeper into the history, then at some point, the soot from fires ignited by men becomes indistinguishable from soot from natural fires and similar events; the human addition gets lost in the natural background.

OK, fine, we have largely eliminated industrial soot pollution from our lives. We can't really eliminate or substantially and discontinuously lower CO2 emissions because it's so essential for industrial as well as biological processes. But should we? The total cumulative effect of man-emitted CO2 through three centuries of the industrial revolution may be 10-30 percent of the soot effect claimed in this paper, an effect that our 19th century ancestors didn't even know (and care) about. Even though soot was harmful for health, unlike CO2, they survived just well and their work built solid foundations for the truly remarkable strengthening and modernization of the human civilization that we've been witnessing since the 20th century.

So no, except for a dozen of specialists in the world, people on the Earth just shouldn't care about CO2 emissions for a second. It's stupid. The climate is affected by lots of factors. Some of them are purely natural, some of them are man-made, and some of them are mixed because some fires and other processes result(ed) from the cooperation of humans and Nature. Some of the contributions are understood, others are less understood, a big fraction of the important effects isn't understood at all. It is completely silly to tear one of the minor effects (a half-understood man-made contribution) out of context and promote it to something that even non-climatologists should care about just because this minor contribution is closest to their normal lives.



Off-topic: Microsoft sensibly bought Nokia's cell phone branch for $7 billion, and before Steve Ballmer retires, he is offering us a new product, Windows 1.0. LOL.
Posted in climate, science and society | No comments

16 out of half a billion: elite Calabi-Yau manifolds with a fundamental group

Posted on 2:16 AM by Unknown
Heterotic phenomenology seems to converge to an excitingly sparse shortlist of candidates

Heterotic compactifications represent a convincing – if not the most convincing – class of superstring vacua that seem to pass the "first great exam" for producing a theory of everything. A week ago, I discussed \(\ZZ_8\) orbifolds but now we return to smooth Calabi-Yau manifolds, the nice creatures you know from the popular books and from the girl who has a 3D printer.



The reason is a new hep-th paper by He, Lee, Lukas, and Sun (of China, Korea, and Oxford):
Heterotic Model Building: 16 Special Manifolds

...Mathematica supplements... (will be posted later)
These 16 manifolds are really special; they seem to have something that the remaining more than half a billion manifolds in the list don't possess.




What is it? I will answer but let me begin with a more general discussion.

Calabi-Yau three-folds are used in the heterotic string model building and they're 6-real-dimensional shapes whose holonomy group is \(SU(3)\), a nice midway subgroup of the generic potato manifolds' \(O(6)\) holonomy. (Holonomy is the group of all rotations of the tangent space at any point that you may induce by the parallel transport around any closed curve through the manifold.) They come in families – topological classes. The most important topological invariant of such a manifold is the Euler characteristic \(\chi\).




For Calabi-Yaus, the Euler character (let me shorten the term in this way) may be written in terms of more detailed quantities, the Hodge numbers, as\[

\chi = 2(h^{1,1}-h^{2,1})

\] where, roughly speaking, the two terms refer to the number of topologically distinct and independent, non-contractible 2- and 3-dimensional submanifolds, respectively. In two different ways, they generalize the "number of handles" on a 2-dimensional Riemann surface. Let me assume that you know some complex cohomology calculus or that you're satisfied with this sloppy explanation of mine.

These Hodge numbers \(h^{1,1},h^{2,1}\) also dictate the number of moduli – continuous parameters that can be used to deform a given Calabi-Yau manifold without changing its topology. Each Calabi-Yau seems to have a "mirror partner" (the relation is "mirror symmetry") that acts (among other changes) as\[

(h^{1,1},h^{2,1})\leftrightarrow (h^{2,1},h^{1,1})\quad \Rightarrow\quad \chi\leftrightarrow -\chi

\] on the Hodge numbers and (as a consequence) on the Euler character. In total, \(30,108\) distinct Hodge number pairs \((h^{1,1},h^{2,1})\) are known to be realized. The number of known Calabi-Yau topologies is therefore at least equal to this number of order "tens of thousands". The plot is nice and reproduced many times on this blog and on Figure 1 in the paper.



The largest Hodge numbers appear in the "extreme" Calabi-Yaus with \((h^{1,1},h^{2,1})=(491,11)\) and its mirror with \((11,491)\). Those produce \(\chi=\pm 960\) which is the current record holder and the probability is high (but not 100 percent) that no greater Euler character is mathematically possible for the Calabi-Yau threefolds (the basic "pattern" of the picture would seem to be violated if there were larger Euler characters). In this sense, heterotic string theory predicts that there are at most \(480\) generations of quarks and leptons. The prediction seems to be confirmed experimentally. ;-)
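The formula \(\chi = 2(h^{1,1}-h^{2,1})\) can be checked on the "extreme" Hodge numbers and their mirror in one line:

```python
# Check of chi = 2(h^{1,1} - h^{2,1}) on the "extreme" Hodge numbers
# quoted above and on their mirror partner.
def euler_char(h11, h21):
    return 2 * (h11 - h21)

chi_extreme = euler_char(491, 11)
chi_mirror = euler_char(11, 491)

print(chi_extreme, chi_mirror)   # -> 960 -960
print(abs(chi_extreme) // 2)     # -> 480, the maximum number of generations ;-)
```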

There are of course much more detailed predictions you can make if you consider specific models.

But let me first say that there are three main methods to construct Calabi-Yau manifolds:
  • CICYs, complete intersection Calabi-Yaus,
  • elliptically fibered Calabi-Yaus,
  • Calabi-Yau three-folds obtained from ambient four-folds coming with a reflexive polytope.
One needs to know some fancy higher-dimensional geometry here. I am not a genuine expert myself but I could still give a short lecture, one that will have to be dramatically reduced here.

The first group, the complete intersections, is defined by sets of algebraic (polynomial) equations for the homogeneous coordinates of products of projective spaces. Candelas, Dale, Lütken, and Schimmrigk began to probe this class in 1988.

The elliptically fibered ones were intensely studied e.g. by Friedman, Morgan, and Witten in 1997.

The paper we discuss now is all about the third group, the Calabi-Yau three-folds extracted from Calabi-Yau four-folds. In 2000, Kreuzer and Skarke listed \(473,800,776\) ambient toric four-folds with a reflexive polytope. The list was "published" two years later. You know, it's not easy to print almost half a billion entries in a journal. Just kidding, the full list was of course not printed. ;-)

This class obtained from four-folds seems to be the most inclusive one and it is the class that suffices to produce the examples with all \(30,108\) pairs of the two Hodge numbers. While one can construct at least one Calabi-Yau three-fold from each four-fold, it's my understanding that the number of topologies of Calabi-Yau three-folds is vastly smaller than half a billion because there are very many repetitions once you "reduce" the four-folds to three-folds.

The new Asian/Oxford paper wants to focus on heterotic phenomenology. For such models to be viable, we need manifolds with a nontrivial fundamental group \(\Gamma\) (different from the one-element group \(\ZZ_1\): a clever notation, by the way, right?). This group counts the non-contractible curves on the manifold that are needed for the symmetry breaking by Wilson lines – something that is apparently necessary to break the GUT group down to the Standard Model group in similar models (the GUT Higgs fields are totally circumvented, they probably have to be circumvented, and the Wilson-line-based stringy breaking seems more viable than GUT-scale Higgses, anyway).

In other words, we need four-folds that come in pairs. The pair includes an "upstairs" manifold \(\tilde X\) that has an isometry \(\Gamma\) and the "downstairs" manifold, i.e. the quotient \(X=\tilde X/\Gamma\).
To make the story short, they only found 16 four-folds for which the order (number of elements) obeys \(|\Gamma|\gt 1\), i.e. for which the fundamental group is non-trivial.
This is quite a reduction, from half a billion to sixteen. The basic topological data about these manifolds are listed in Table 1 on page 9 of the paper. The Euler characters of \(\tilde X\) and \(X\) belong to the intervals \(96-288\) and \(40-144\), respectively. The fundamental group \(\pi_1(X)=\Gamma\) is \(\ZZ_2\) in 13 cases, \(\ZZ_3\) in 2 cases, and \(\ZZ_5\) for 1 manifold.
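The numbers in Table 1 can be sanity-checked with a standard fact: for a freely acting \(\Gamma\), the Euler characteristic is multiplicative, \(\chi(X)=\chi(\tilde X)/|\Gamma|\). A minimal sketch – the sample value below matches the interval endpoints quoted above, not any specific row of the table:

```python
# For a quotient X = X_tilde / Gamma with Gamma acting freely,
# the Euler characteristic is multiplicative: chi(X) = chi(X_tilde) / |Gamma|
def quotient_euler(chi_upstairs, order):
    # A free action requires chi to be divisible by the group order
    assert chi_upstairs % order == 0, "chi must be divisible by |Gamma|"
    return chi_upstairs // order

# A Z_2 example consistent with the endpoints 288 and 144 quoted above:
print(quotient_euler(288, 2))  # 144
```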

There has to be quite a competition to get on this small shortlist. One could argue that, except for the \(16\) entries, the half a billion candidates don't allow life in the sense of the usual anthropic discussions. A non-trivial fundamental group seems to be more important for life than oxygen.

Now, for some esoteric Picard geometric reasons, they eliminate two candidates (with \(\Gamma=\ZZ_2\)) and they try to put line bundles on the remaining \(14\) manifolds, picking the manifold/bundle combinations that lead to three chiral families embedded in consistent supersymmetric models. This produces about \(29,000\) models, still a pretty exclusive club. Most of them (\(28,870\)) have the \(SO(10)\) gauge group, my favorite one, when interpreted as grand unified theories broken by a Wilson line; there are \(122\) \(SU(5)\)-based models, too. If you believed that the right vacuum is a "generic" element of the list, it would be likely that the group is \(SO(10)\), which means that there exist right-handed neutrinos, among other things.

It's sort of impressive what progress has occurred in this heavily mathematical portion of string phenomenology in recent years. The advances were made possible by a combination of progress in algebraic geometry and in computer-aided algebra – and with the realization of the importance of holomorphic vector bundles that satisfy the Hermitian Yang-Mills equations as an efficient way to deal with the difficult problem of the gauge fields that have to have a profile on the compactification manifold.
Posted in mathematics, string vacua and phenomenology | No comments

Lev Pontryagin: 105th anniversary

Posted on 1:51 AM by Unknown
Lev Semenovič Pontryagin was born in Moscow, Russian Empire, on September 3rd, 1908, i.e. 105 years ago, and died in May 1988, 80 years later, i.e. about 25 years ago.

Interestingly and sadly enough, a primus stove explosion made him legally blind at the age of 14. That didn't prevent him from becoming a top mathematician.

On the other hand, it didn't stop him from being a jerk of a sort, either. In 1936, in the so-called Luzin affair, he warned the Soviet officials that the mathematics community was full of counter-revolutionaries. People were losing jobs. He was not only an aggressive commie, he was a sort of fascist, too. During mathematical conferences, he would scream that the pro-Israel Jewish scientist Nathan Jacobson was a mediocre mathematician and a racist because he was a Zionist. Another, even better Jewish mathematician, Grigory Margulis, won the Fields Medal but couldn't get permission to leave the USSR after Pontryagin painted him as a dirty Jew, too.

Much like other anti-Semites, he would claim that he wasn't one – he was just an anti-Zionist, everyone was told. Good try but my suspicion isn't quite gone (although I made a different conclusion 5 years ago).




Later in his career, he would work on optimization; Pontryagin's minimum principle is behind the bang-bang control. But string theorists primarily know him because of his earlier work on algebraic and differential topology.




Needless to say, the most famous concept named after him is a characteristic class now known as the Pontryagin class. (Although the Pontryagin duality for the Fourier transform on locally compact groups is also deep.) If you search through Google Scholar for papers mentioning both "string theory" and "Pontryagin class", you get 276 hits dominated by papers written by Witten, Vafa, Harvey, Moore, Sethi, Mukhi, and a few pals.

The Pontryagin class of a real bundle is, up to a sign, an even Chern class of the bundle's complexification, an element of a cohomology group whose degree is a multiple of four. Needless to say, I don't really understand these matters well. The people for whom it's their cup of tea must think about many things in terms of vector bundles. It's probably great for them and it allows them to see and calculate many interesting things but, despite a course by GM, I just couldn't learn to use those things. I need to translate bundles to some fields with some properties or physical conditions; otherwise I don't really understand them. In some sense, I feel that mathematics and not physics must be the "mother tongue" for those folks even though many of them are stellar theoretical physicists, too.
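For completeness, the standard textbook definition behind that sentence – with the usual sign convention – is\[

p_k(E) = (-1)^k\, c_{2k}(E\otimes\mathbb{C}) \in H^{4k}(M;\mathbb{Z})

\] for a real vector bundle \(E\) over a manifold \(M\): the \(k\)-th Pontryagin class is, up to the sign, the \(2k\)-th Chern class of the complexified bundle, which is why it lives in the cohomology of degree \(4k\).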

A longer CV was written 5 years ago.
Posted in mathematics | No comments

The 50 to 1 project

Posted on 12:09 AM by Unknown
Topher (*1982) is the commercial brand of an Australian host and filmmaker who has 6 siblings, was educated at home, and decided that movies – and later movies about political issues etc. – were his real passion. Using some "amateurish" money, including some funds from Lord Monckton – as I understand it – he shot a series of interviews with skeptics called
The 50 to 1 Project
Thanks to Anthony W. and Honza U. for leading me to watch some of the videos. The ideas, graphics, and especially the content look very good. The videos include a 10-minute introductory video and interviews with Nova, Evans, Watts, Essex, Laframboise, Morano, Singer, and Ergas. Evans' interview is the one that I have listened to most carefully (Honza recommended 9:30-12:00 of that video to me).

However, I was intrigued by the particular figure "50 to 1" that is used as the title of the project. It's supposed to say that "the price of mitigation exceeds the price of adaptation by a factor of 50". Where did the number 50 come from? I quickly learned that it was extracted from the 2006 Stern report. In some counting, mitigation would cost 80% of the GDP while adaptation would cost 1.6%.
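The arithmetic behind the slogan is nothing deeper than a single division; the 80% and 1.6% figures are the ones quoted above from the Stern report:

```python
# "50 to 1": Stern-report-style mitigation vs adaptation cost, as % of GDP
mitigation_pct = 80.0   # mitigation, % of GDP (figure quoted above)
adaptation_pct = 1.6    # adaptation, % of GDP (figure quoted above)
print(mitigation_pct / adaptation_pct)  # 50.0
```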

Well, nice, and given some interpretations, plausible. However, the Stern report was complete junk as a piece of economics (especially because of the totally unrealistic treatment of the discount rate) and one may get any number he wants. It seems bizarre to me when skeptics borrow such numbers from such badly constructed pillars of alarmism – even if the number from the alarmist report proves that the attempts to "wrestle" with the climate change are a preposterous waste of efforts and resources, anyway.




What exactly does it mean that the mitigation is 50 times more costly? To make the ratio well-defined, you have to specify what you mean by the "money spent on mitigation" and the "money spent on adaptation" (and I am neglecting the fact that we would have to compare two very different and non-interacting worlds, a rich one and a poor one, and one dollar in one of the worlds isn't easily converted to dollars in the other world). Both quantities are hard to define (what is included and what is not?) and both quantities, especially the first one, heavily depend on people's decisions, so they can't be "objectively" quantified in advance.




The real issue is that the climate is inseparable from the weather and, using existing technologies, one can't control the weather at every place on the globe for any realistic price. So the price of "full mitigation" is de facto infinite. Let me give you an example of what I mean. A Category 5 hurricane is a weather phenomenon but it does contribute to the "climate" – the weather averaged over a few decades etc. – so you may also count it as a climate phenomenon. A full-fledged program of "mitigation" could also require that there won't be any Category 5 hurricanes in the Atlantic Ocean anymore. I am not aware of a miraculous technology that would allow us to afford something like that today – or in the following decades. Even if you just decided to "ban" any hurricane stronger than all the hurricanes on the record – and that clearly could be interpreted as a sign of "climate change" if one decides to do so – you would have to pay an infinite price to be "sure".

But if you can't prevent some strong hurricanes or other potentially harmful weather phenomena, it can always be said that you haven't mitigated the climate phenomena that matter. You may always waste more money on "mitigation" but its effect on the things that matter will be negligible and perhaps negative. To summarize, the "price of mitigation" is a completely arbitrary number. It's really "the amount of money that people are willing to waste for no benefits that are described by meaningless phrases about global warming". It can be millions, it can be billions, it can be trillions, it can be quadrillions of dollars.

You can't really stop the weather phenomena now or in any foreseeable future. Even if you decided to regulate only the global mean temperature – which isn't really too important a quantity for anything that people are doing – you would fail. For dozens of trillions of dollars, you could suck CO2 from the atmosphere and reduce the concentration from 400 ppm to something like 150 ppm, at which point plants stop growing. For various estimates of the sensitivity, such a reduction of CO2 could lower the temperature by one or several degrees.

But at some moment, Nature will bring us larger variations. In the year 60,000 AD, a new ice age will likely peak and the global mean temperature may be up to 10 °C lower than today. You won't be able to compensate this cooling by an increase of CO2 because you could need up to 5-10 doublings – raising the CO2 concentration from 400 ppm to at least above 10,000 ppm – and the atmosphere would become uncomfortable for breathing. (There would be many advantages, too.) More importantly, you won't really find the appropriate amount of carbon to burn anywhere. Also, the Earth will fight back by absorbing the excess CO2 (it's doing so today, devouring 2 ppm from the atmosphere every year); the higher the concentration, the more quickly the excess is absorbed. We won't be able to reach 10,000 ppm of CO2 by man-made emissions. Chances are that even 1,000 ppm is too much for us to reach. In a few centuries, fossil fuels will "peak" in some definition and the CO2 will peak at a comparable moment. Maybe at 600 ppm, maybe at 1,500 ppm – no one knows and it's not really important.
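The doubling arithmetic in this paragraph is easy to check: with \(n\) doublings, the concentration grows as \(400\times 2^n\) ppm. A trivial sketch, using the 400 ppm starting point from the text:

```python
import math

# CO2 concentration after n doublings, starting from today's ~400 ppm
def ppm_after_doublings(n, start=400):
    return start * 2 ** n

# Smallest whole number of doublings that pushes CO2 above 10,000 ppm
n = math.ceil(math.log2(10_000 / 400))
print(n, ppm_after_doublings(n))  # 5 doublings -> 12800 ppm
```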

My broader point is that it is impossible, at least with the currently foreseeable technologies, to prevent changes of the climate even if you nonsensically reduce the concept of the climate to the global mean temperature only. So the "price of mitigation" is a quantity that only encodes how much money people are going to waste on some verbally "justified" nonsense; the goal, the "mitigation", will not be achieved. It can't be achieved locally. It can't even be achieved when you talk about a single global quantity, at least not for an extended period of time.

The "price of adaptation" is somewhat less arbitrary but it's still largely ill-defined. People are adapting to changes, anyway. As I wrote in the paragraphs above, people will have to adapt to and prepare for weather phenomena even if they gained control over the global mean temperature. So the "price of mitigation" still includes the "price of adaptation" because adaptation will always be needed, too.

Now, you may ask: Which expenses may be included in the "price of adaptation"? I don't know of an algorithm to separate the expenses into those that are about "adaptation" and those that are not. When you buy a heating system for your home, you are adapting and preparing your household for the likely drop of the temperature in the winter. The seasons are mostly natural and predictable – a winter is bound to happen sometime between November and March, so you have to be ready and it's not about "climate change" – except that you can say the same thing about slower oscillations of the coupled ocean-atmosphere system, too. El Niños and La Niñas are both almost guaranteed to occur sometime within every 5-year window. So it's also normal to get ready for these ENSO episodes. With various frequencies, hurricanes, droughts, floods, and many other phenomena are guaranteed to arrive, too. The expenses meant to protect us and our assets from these things are always "partly climatic".

The separation of expenses into those linked to "adaptation" and those independent of it involves hundreds of arbitrary administrative choices, too. In the end, societies will adapt whether they invest any special money or not. Assuming that mankind won't go extinct in the next 100 or 200 years – and it won't – it will be able to say that it will have adapted to the changes. If the mother of a family in the third world that is increasingly starving undergoes an abortion, it is a way of adaptation, too. Another question is whether mankind will be able to see that it could have been better off if it will have made some or different investments (whatever is the right tense here). But a shrunk population, or luxurious expenses, is a way of adaptation, too.

My broader point here is that it makes no sense to ask whether mankind has adapted or not. Whatever it has done, it has adapted, as long as it survives. And of course it will survive under business-as-usual, at least if we neglect some non-climatic, more dangerous threats. For this reason, it's not possible to quantify the "price of adaptation" in any way.

So I would never find it appropriate to politicize expenses as "payments for mitigation" or "payments for adaptation". It's a nonsensical way of looking at things. We will adapt whatever we do and we will never mitigate weather phenomena whatever we do. If we make some expenses, we must have more specific reasons to do so than just "adaptation". We may want to make a city resilient to 500-year floods, for example. But there will always be some phenomena that leave us unprepared. It's a law of Nature. You (and societies) can't protect yourselves against "every threat".

OK, I've spent way too much time with this ill-defined topic.

David Evans was asked a question that I am also asked quite often – how is it possible that so many scientists pay lip service to the global warming orthodoxy if the data seem to clearly show that there's no justification for fear? Do they abandon the scientific method? Sadly, yes, Evans answers. More importantly, he says that scientists are also humans: they need to have jobs, to feed themselves – and spouses or kids, in some cases – and they just know that being incompatible with the CO2 ideology could be a threat to this kind of personal safety.

I assure you that these considerations are damn real and important.

At Harvard, my specialization wasn't linked to the climate in any direct way and the climate wasn't even in the top 5 of the "sensitive questions" that turned me into a foe of the prevailing left-wing academic establishment (be sure that feminism, blackism, and other demagogic victimisms were more important – and at some moment, even my opposition to the anti-physics attacks by crackpots such as Woit and Smolin must have become politically incorrect by itself because similar left-wing jerks largely "own" the broader academic environment, too). However, even when it comes to this utterly silly and unimportant topic – the global warming propaganda – I was made very sure that my inconvenient knowledge would have been enough to make my life in the Academia insufferable.



The most intense realization of this fact came when a Marxist slut – or whatever precisely the politically correct term for her is – called Naomi Oreskes was visiting Harvard. She had some friendly encounters with an important theoretical physicist, a nice guy and top expert who was left-wing but whom I would never consider a true left-wing zealot (a highly pragmatic chap, in a sense). She learned that I had publicly declared her work on the "scientific consensus" to be rubbish. So she sent some e-blackmail to me, with copies sent to all the senior names at Harvard whom she knew and considered important (the recipient list included the heads of Harvard's climate and Earth-sciences-related institutes and some senior physicists in my department: all of those remained silent as far as I could see). The e-blackmail "argued" that I was spitting on the 50-year-long work by the best scientists in history (she meant crappy fraudsters like Michael Mann and herself) and that something had to be done about it.

Just imagine what would happen if a conservative senior visiting professor bullied a young progressive female junior faculty member by resending his hateful threats – demanding that she "comply" with his preferred ideology – to a collection of powerful old white men. Once the story leaked, The New York Times and 10 other major left-wing newspapers wouldn't write about anything other than this nationwide scandal for months. When exactly the same thing happened with the political sides switched, I simply had to suffer. I had no realistic defense against the slut's bullying. Try to write letters with the mouse pointer in the Flash above so that you avoid the shark. ;-)

Now imagine what changes if you take the story above and replace the string theorist with a junior faculty member doing research in climatology and adjacent disciplines. Obviously, the amount, frequency, and urgency of the threats will increase by an order (or orders) of magnitude. Elimination from the system is pretty much guaranteed if he or she is publicly against the orthodoxy. Moreover, consider that most such young folks can't really afford to sacrifice a year of income, unlike me.

The Academia is so contaminated by dishonest and aggressive ideologues who can't really be removed from the system – many of them have tenure – that you can't or you shouldn't realistically hope that in the coming years, the atmosphere in the Academia when it comes to climate change will substantially change. Instead, what we must hope for is that the broader society gets educated and realizes that the community of scholars as it exists in the real world today simply cannot and shouldn't be trusted when it comes to any politically sensitive questions. When the Academia notices that its status will have changed, it may change its weights, too. But the initial impulse simply has to operate outside the Academia.
Posted in climate, science and society | No comments