TheReference


Friday, May 31, 2013

Quintuplets in physics

Posted on 11:19 PM by Unknown
Cool anniversary: In late January, we celebrated the 30th anniversary of the announcement of the discovery of the W-boson. Today, we celebrate the 30th anniversary of the Z-boson's discovery. These discoveries were comparably important to the recent discovery of the God particle.

Sport: Viktoria Pilsen defeated Hradec, a much weaker team, 3–0 in the last round, so we won the top soccer league for the 2nd time (after 2011). Because the Pilsner ice-hockey team has won the top league as well, Pilsen became the 2nd town in Czechia after Prague to collect both titles in the same year (correction: wrong, the 3rd town – Ostrava did it in 1981).
Ms Alexandra Kiňová (23) is expecting Czechia's first naturally conceived quintuplets (a package of 5 babies) on Sunday morning (tomorrow; update: they're out, fine), which would mean that we match the achievement of the most fertile U.S. state – Utah – from last week.

The Daily Mail tells us that the pregnancy has been easy so far. Doctors were still talking about "twins" in January and "quadruplets" in April. The probability that a birth produces \(n\)-tuplets goes like \(1/90^{n-1}\) or so, but the decrease slows down relative to this formula for really high multiplicities.
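Just to illustrate the \(1/90^{n-1}\) formula (a rough empirical rule of thumb, sometimes called Hellin's law; the base 90 is an input of the rule, not a derived constant), here is a minimal sketch:

```python
# Rough 1/90^(n-1) rule ("Hellin's law") for the probability that a
# single birth yields n-tuplets. The base 90 is an empirical assumption.

def n_tuplet_probability(n, base=90.0):
    """Approximate probability that a birth produces n-tuplets."""
    return 1.0 / base ** (n - 1)

for n, name in [(1, "singleton"), (2, "twins"), (3, "triplets"),
                (4, "quadruplets"), (5, "quintuplets")]:
    p = n_tuplet_probability(n)
    print(f"{name:12s} ~ 1 in {1 / p:,.0f} births")
```

For quintuplets, the rule gives roughly 1 in \(90^4\approx 6.6\times 10^7\) births, which is why a naturally conceived case makes national news.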

In physics, quintuplets are rare, too. By quintuplets, we mean five-dimensional irreducible representations of groups.




Correct me if I am wrong but I think that among the simple Lie groups, only \(SU(2)=SO(3)\), \(USp(4)=SO(5)\), and \(SU(5)\) have irreducible five-dimensional representations. Let's look at them, because surveying all the quintuplets in group theory and physics is a rather unusual direction from which to approach the wisdom contained in the structure of maths and physics.




First, \(SU(2)\). That's a three-dimensional group of \(2\times 2\) complex matrices \(M\) obeying \(MM^\dagger={\bf 1}\) and \(\det M=1\). The basic isomorphisms behind spinors imply that this group is the same as the group \(SO(3)\) of rotations of the three-dimensional space except that the matrices \(+M\) and \(-M\) have to be identified.

The irreducible representations of \(SU(2)\) are labeled by the spin \(j\) which must be either non-negative integer or positive half-integer (only the former may also be interpreted as proper representations of \(SO(3)\); the latter change their sign after a 360-degree rotation). Because the \(z\)-projection goes from \(m=-j\) to \(m=+j\) with the spacing equal to one, the representation is \((2j+1)\)-dimensional.

The \(j=0\) representation is the trivial singlet that doesn't transform at all; the \(j=1/2\) is the two-dimensional pseudoreal spinor; the \(j=1\) representation is equivalent to the usual 3-dimensional vector; the \(j=3/2\) representation is a gravitino-like four-dimensional "spinvector". And finally, the \(j=2\) representation is the traceless symmetric tensor. What do I mean by that?

Imagine that you consider the tensor product \(V\otimes W\) of two copies of the three-dimensional vector space \(V=W=\RR^3\). The tensor product is composed of objects \(T_{ij}\) where \(i,j\) are vector indices: it's composed of tensors. Clearly, such a tensor has \(3\times 3 = 9\) independent components. They can be split into several pieces:\[

{\bf 3}\otimes {\bf 3} = {\bf 5} \oplus {\bf 1}\oplus {\bf 3}

\] The identity \(3\times 3 = 5+1+3\) is the consistency check that verifies that the representations above have the right dimensions, but the boldface identity above says more than this arithmetic claim about integers: the two sides are representations of the whole group and the identity says that they transform in equivalent ways under all elements of the group. Why is this decomposition right? Well, the tensor \(T_{ij}\) may be divided into the symmetric part, which is 6-dimensional, and the antisymmetric part, which is 3-dimensional (it is equal to \(\epsilon_{ijk}v_k\), i.e. equivalent to some vector \(v_k\)).

However, the 6-dimensional symmetric tensor isn't an irreducible representation of \(SO(3)\). The trace \[

\sum_{i=1}^3 T_{ii}

\] is independent of the coordinate system i.e. invariant under rotations and may be separated from the 6-dimensional representation. The trace may be set to zero by removing it i.e. considering\[

T^\text{traceless part}_{ij} = T_{ij} - \frac 13 \delta_{ij} T_{kk}

\] and such a traceless tensor has 5 independent components; it is a quintuplet. The quadrupole moment tensor is one of the most famous applications of this 5-dimensional object. You could think it's just an accident that this number 5 is equal to the number of integers between \(m=-2\) and \(m=+2\); you could claim that the agreement is pure numerology, an agreement between the dimensions of two representations. But it is more than numerology: the representations are completely equivalent. The translation between the components \(T_{ij}\) of the (complexified) traceless tensor and the five complex amplitudes \(c_m\) for \(-2\leq m\leq 2\) is nothing else than a linear change of basis. It has to be so because for every \(j\), the representation of \(SU(2)\) is unique.
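The decomposition \({\bf 3}\otimes{\bf 3}={\bf 5}\oplus{\bf 1}\oplus{\bf 3}\) is easy to check numerically. A minimal numpy sketch that splits a generic tensor into the three pieces and verifies that a rotation doesn't mix them:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))           # generic rank-2 tensor, 9 components

antisym = (T - T.T) / 2                   # 3 components (equivalent to a vector)
trace_part = np.trace(T) / 3 * np.eye(3)  # 1 component (the singlet)
quintuplet = (T + T.T) / 2 - trace_part   # traceless symmetric: 5 components

# The three pieces reassemble the original tensor exactly:
assert np.allclose(antisym + trace_part + quintuplet, T)
# ...and the quintuplet really is traceless and symmetric:
assert abs(np.trace(quintuplet)) < 1e-10
assert np.allclose(quintuplet, quintuplet.T)

# A rotation acts on each piece separately: R Q R^T stays traceless symmetric.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
rotated = R @ quintuplet @ R.T
assert abs(np.trace(rotated)) < 1e-10     # still the singlet-free quintuplet
assert np.allclose(rotated, rotated.T)
```

The five independent components of `quintuplet` are exactly the \(j=2\) multiplet; the \(c_m\) basis mentioned above is just a complex linear recombination of them.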

Now, let's talk about \(SO(5)\). Clearly, this group of rotations of the 5-dimensional space has a 5-dimensional vector representation consisting of \(v_i\). But what some readers aren't aware of is that the group \(SO(5)\) may also be identified with the isomorphic \(\ZZ_2\) quotient of a spinor-based group, namely \(USp(4)\). What is this group? It's a unitary (U) symplectic (Sp) group of complex \(4\times 4\) matrices \(M\) that obey\[

MM^\dagger = M^\dagger M = 1, \quad M A M^T = A.

\] Both conditions have to be satisfied. The first condition is the well-known unitarity condition, effectively meaning that \(s_i^* s_i\) is kept invariant (it's the squared Pythagorean length of the vector computed with the absolute values). The other condition is equivalent to keeping the antisymmetric cross-like product of two vector-like objects \(s_i A_{ij} t_j\) invariant where \(A_{ij}\) are elements of the (non-singular) antisymmetric matrix \(A\) above. Note that in this invariant, there is no complex conjugation.

Simple linear redefinitions of the 4 complex components \(s_i\) may always translate your convention for \(A\) to mine, which is \[

A = \text{block-diag}\left( \pmatrix{0&+1\\-1&0}, \pmatrix{0&+1\\-1&0} \right)

\] You just arrange the right number of the "simplest nonzero antisymmetric matrices" along the (block) diagonal. The two conditions (unitary and symplectic) may be then seen to imply that \(M\) is composed of \(2\times 2\) blocks of this form\[

\pmatrix{ \alpha&+\beta\\ -\beta^*&\alpha^*},\quad \alpha,\beta\in\CC

\] and the addition+matrix-multiplication rules for such matrices are the same rules as the addition+multiplication rules for the quaternions \(\HHH\). So the group \(USp(2N)\) may also be called \(U(N,\HHH)\), the unitary group over quaternions. In particular, \(USp(4)=U(2,\HHH)\). Such a quaternionization is possible with all pseudoreal representations.
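The closure of these blocks under addition and matrix multiplication – the statement that they behave exactly like quaternions – can be verified directly. A small numpy sketch (the helper name `quat_block` is just illustrative):

```python
import numpy as np

def quat_block(alpha, beta):
    """The 2x2 complex block [[alpha, beta], [-conj(beta), conj(alpha)]]."""
    return np.array([[alpha, beta],
                     [-np.conj(beta), np.conj(alpha)]])

a = quat_block(1 + 2j, 3 - 1j)
b = quat_block(0.5 - 1j, 2 + 0.5j)

# Closure: both the sum and the product are again of the same form,
# so these blocks add and multiply exactly like quaternions.
for m in (a + b, a @ b):
    alpha, beta = m[0, 0], m[0, 1]
    assert np.allclose(m, quat_block(alpha, beta))

# The squared quaternion norm is the determinant of the block:
assert np.isclose(np.linalg.det(a), abs(1 + 2j) ** 2 + abs(3 - 1j) ** 2)
```

The determinant identity \(\det = |\alpha|^2+|\beta|^2\) is the quaternionic norm, which is why unitarity over \(\HHH\) makes sense.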

So the 4-dimensional complex (actually pseudoreal!) fundamental representation of \(USp(4)\) is complex-4-dimensional (but it is equivalent to its complex conjugate because it's pseudoreal!) and it may be viewed as a spinor of \(SO(5)\). It is no coincidence that \(4\) in \(USp(4)\) is a power of two. How do you get the five-dimensional \(j=1\) vector out of these four-dimensional spinors?

Note that for \(SO(3)\sim SU(2)\), we had\[

{\bf 2}\otimes{\bf 2} = {\bf 3}\oplus {\bf 1}.

\] The tensor product of two spinors produced a vector (triplet; also the symmetric part of the tensor with two spinor indices) and a singlet (the antisymmetric part of the tensor with two 2-valued indices). Similarly, here we have\[

{\bf 4}\otimes{\bf 4} = {\bf 5}\oplus {\bf 1}\oplus {\bf 10}.

\] The decomposition of \(4\times 4 = 16\) into \(6+10\) is the usual decomposition of a "tensor with two spinor indices" into the antisymmetric part and the symmetric part, respectively. The symmetric part may be identified with the antisymmetric tensor with two vector indices; note that \(5\cdot 4/(2\cdot 1) = 10\). And the antisymmetric part is the piece that is reducible here. It's because the invariant of the symplectic groups is antisymmetric, \(a_{ij}\), rather than the symmetric \(\delta_{ij}\) we had for the orthogonal groups, so it's the antisymmetric part that decomposes into two irreducible pieces, \({\bf 5}\oplus{\bf 1}\).

By tensor multiplying \({\bf 4}\) with copies of itself, we may obtain all representations of \(USp(4)\) and \(SO(5)\) by picking pieces of the decomposed tensor products. That's what we mean by saying that the representation \({\bf 4}\) is "fundamental". Whenever an even number of these \({\bf 4}\) factors appears in the tensor product, we obtain honest representations of \(SO(5)\) that are invariant under 360-degree rotations and all these representations may also be given a natural description in terms of tensors with vector indices.
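For readers who like numerical sanity checks, here is a hedged numpy sketch that produces an element of \(USp(4)\) by exponentiating a Lie algebra element and then verifies both defining conditions; the projection formula assumes the block-diagonal convention for \(A\) above (for which \(A^{-1}=-A\)):

```python
import numpy as np

# The symplectic form: block-diagonal copies of [[0,1],[-1,0]].
J = np.array([[0, 1], [-1, 0]])
A = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])

def expm(X, terms=40):
    """Matrix exponential by Taylor series (adequate for small matrices)."""
    result = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
X = (Y - Y.conj().T) / 2      # anti-Hermitian part: exp(X) is unitary
X = (X + A @ X.T @ A) / 2     # project onto the symplectic Lie algebra

M = expm(X)

# Both defining conditions of USp(4) hold:
assert np.allclose(M @ M.conj().T, np.eye(4))   # unitarity
assert np.allclose(M @ A @ M.T, A)              # symplecticity
```

The projection works because \(X\mapsto A X^T A\) is an involution when \(A^2=-{\bf 1}\), and the symplectic condition on the algebra, \(X^T A + A X = 0\), exponentiates to \(MAM^T=A\).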

Finally, the special unitary group \(SU(5)\) has an obvious 5-dimensional complex representation. It is a genuinely complex one, i.e. a representation inequivalent to its complex conjugate:\[

{\bf 5}\neq \overline{\bf 5}

This representation (and its complex conjugate, of course) is important in the simplest grand unified models in particle physics. One may say that \(SU(5)\) is an obvious extension of the QCD color group \(SU(3)\). We keep the first three colors (red, green, blue, so to say) and add two more colors that are interpreted as two lepton species from the same generation. The full collection of fifteen 2-component left-handed spinors per generation (they describe the quarks and leptons; a Dirac spinor is composed of two 2-component spinors; the right-handed neutrino is not included among the fifteen) is interpreted as \[

{\bf 5}\oplus\overline{\bf 10},

\] the direct sum of the fundamental quintuplet of \(SU(5)\) we have already mentioned and the antisymmetric "tensor" with \(5\cdot 4/(2\cdot 1)=10\) components. Note that the counting of the components is the same as it was for the representation of \(SO(5)\) above. However, the 10-dimensional representation of \(SU(5)\) is a complex one, inequivalent to its complex conjugate (I won't explain why the bar appears in the decomposition above; it's a technicality). The list of 15 spinors may be extended to 16, \(10+5+1\), if we add one right-handed neutrino, and this \({\bf 16}\) is then the spinor representation of \(SO(10)\), a somewhat larger group that is capable of serving as the grand unified group (it is no accident that 16 is a power of two: that's what spinors always do).
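The counting in the last two paragraphs amounts to a little arithmetic that can be spelled out explicitly (a trivial sketch, nothing more):

```python
from itertools import combinations
from math import comb

# One Standard Model generation: 15 left-handed Weyl spinors of SU(5).
fundamental = 5                       # the quintuplet
antisym = comb(5, 2)                  # antisymmetric 2-index "tensor": 5*4/(2*1)
assert antisym == len(list(combinations(range(5), 2))) == 10
assert fundamental + antisym == 15    # 5 + 10 = one generation

# Adding one right-handed neutrino gives 16 = 2**4, the SO(10) spinor:
assert fundamental + antisym + 1 == 2 ** 4
```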

The number 5 may be thought of as the first "irregular" integer of a sort but it is still small and special enough and is therefore linked to many special things in maths and physics. In maths, five is special because the square root of five appears in the golden ratio; and a pentagram may be constructed with a compass and straightedge (these two facts are actually related). Quadrupole moments, moments of inertia, five-dimensional rotations, and grand unifications are among the physical topics in which 5-dimensional representations are used as "elementary building blocks".

I hope that Ms Kiňová's birth will be as smooth as her pregnancy.
Posted in Czechoslovakia, everyday life, mathematics, string vacua and phenomenology, stringy quantum gravity

AGW: due to cosmic rays and freons?

Posted on 6:52 AM by Unknown
Lots of skeptics and the überalarmist Alexander Ač sent me the information about a widely discussed paper
Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change (arXiv, PDF)

WUWT, Google News
written by Qing-Bin Lu, a physicist (mostly biophysicist) at the University of Waterloo, in October 2012. The first detail that seems bizarre to me is the amount of hype surrounding a preprint that's been out for more than half a year. If there were real, active experts who follow what's going on in climatology and if the paper were right and important, they would have known it for half a year and not just now when the paper happened to appear in a journal.

It doesn't seem to be the case so at least one of the assumptions has to be invalid.




At any rate, the author claims that carbon dioxide has been irrelevant for the global mean temperature between 1850 and 1970; in fact, in a statistical analysis, he finds a slightly negative correlation between CO2 and temperature, \(R=-0.05\).

In contrast, there's a positive claim – a nearly perfect correlation, with \(R\geq 0.96\), was observed in 1970–2012 between the global surface temperature and the total amount of freons (more precisely CFCs – "freon" is a brand name owned by DuPont – and even more precisely halogenated gases) in the stratosphere.




So these ozone-hole-related compounds and factors decide about the climate as well, he believes. The Montreal Protocol is being praised; it recovered about 20%-25% of the Antarctic ozone hole while no significant progress has been seen in the mid latitudes. Note that the amount of ozone is reduced by freons but also by the effects of cosmic rays; the latter are modulated by the solar activity, we're reminded.

The author argues that freons could have generated 0.6 °C of warming between 1970 and 2012 and if they're really this important, their expected disappearance in the next 50-70 years could lead to global cooling that would bring the global mean temperature back to the levels people experienced in the 1950s.

There are many graphs in the paper; I won't repost them here. Some of them look pretty convincing. Still, the correlations could be coincidental and this possibility is more likely because many of these correlations have only been verified in the latest 3 decades or so. It's easy to fool oneself. I've seen many rather impressive visual correlations – three or four bumps reproduced rather nicely by a "theory" – and I know that most of them turn out to be fake.
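To illustrate how easily short-window correlations can fool us, here is a toy experiment with entirely synthetic data (unrelated to any actual climate series): two completely independent random "trends" frequently show a large Pearson correlation.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(42)

def random_walk(n):
    """n steps of a Gaussian random walk -- a trend with no cause at all."""
    walk, x = [], 0.0
    for _ in range(n):
        x += random.gauss(0, 1)
        walk.append(x)
    return walk

# How often do two INDEPENDENT 40-step walks correlate with |r| > 0.5?
rs = [abs(pearson_r(random_walk(40), random_walk(40))) for _ in range(1000)]
high = sum(r > 0.5 for r in rs) / len(rs)
print(f"fraction of independent walk pairs with |r| > 0.5: {high:.0%}")
```

A sizeable fraction of the pairs come out strongly correlated despite having no causal link whatsoever, which is the generic danger with fits spanning only a few decades of trending data.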

Moreover, this scholar has much more experience with DNA molecules than with analyses of the climate. The latter is a rather complicated thing and beginners tend to be naive in many respects. For these reasons and others, I doubt that the global climate is so easily linked to the freons. But I am not quite certain. It's plausible. They're powerful greenhouse gases (with a global warming potential thousands of times greater than CO2's).



Around 0:19, this comedian seemed confused about the difference between "global warming" and "ozone depletion". Maybe he wasn't that confused, after all? ;-)

The apparent observation that no one can really safely show that the paper is wrong and why the paper is wrong highlights the immense degree of uncertainty about the truly dominant climate drivers on the decadal and centennial timescales.
Posted in climate, weather records

Thursday, May 30, 2013

An extremely cloudy Prague in 2013

Posted on 7:53 AM by Unknown
A Czech Canadian e-pal has complained to me that "the month of May was worth [an excrement]: cold, windy, rainy etc". But that's nothing compared to the first five months as we have experienced them in Czechia.



The Czech media such as The Week (EN) have told us about some cold hard figures describing the weather in Prague between January 1st and May 17th, 2013.




In this 137-day-long period, the number of hours when it was "clear skies" in Prague was just 80. To compare, it was 417 hours in the same period of the year 2011. I kid you not: the total length of the "clear skies" i.e. sunny weather in Prague dropped by a factor greater than five! (The figures refer to the Ruzyně station near the Havel airport.)

The cloudy, rainy weather may be blamed on the frequently low pressure, but the meteorologists cannot tell us anything more fundamental that would explain why the pressure is so low in 2013. This lousy weather shouldn't continue indefinitely but the predictions suggest that the cloudy, rainy mess will actually continue for a few more weeks, well beyond the May 17th deadline I have mentioned.




The differences are extreme when we demand the "clear skies" but there are significant gaps even if we compare more inclusive quantities such as "the average percentage of the skies covered by clouds". In 2011, it was 61%; in 2012, it was 65%. In 2013, it was a whopping 82%. And let me remind you: this is not just some day-to-day variation of the weather but a comparison of periods that are almost half a year long!



The meteorologists warn that there could be floods but I don't think that there is such a direct relationship between persistently lousy weather and floods. Impressive enough floods probably require some accumulation of water in the atmosphere while the persistently bad weather is delivering the precipitation in the form of rather uniform showers. These neverending showers and clouds are what the climate alarmists present as the "nice homogenized weather free of any extremes" but I have called it the "socialist weather" for 30 years (because this is how I remember the early 1980s) and even though farmers may like it, I am confident that most people agree that such weather simply sucks.

Needless to say, the articles also mention promises of a global warming trend that our weather should eventually return to. Nice.



I want to emphasize how much more important these annoying yet mundane changes of the weather (which self-evidently have natural causes) are relative to the "marvelous" effects that the promised global warming trends could ever bring us. If your good mood strongly depends on sunny weather – or even clear skies – your reasons for a good mood were 5 times weaker than 2 years ago. That's a huge difference that may encourage suicides, the folks suggest. And I am not even going to explain what the cloudy weather does to the photovoltaic industry in which Czechia is still a major "power".

On the other hand, even if you adopt the hugely overestimated median predictions by the IPCC, the overall change of the global temperature during those two years that they want to attribute to CO2 is something like 0.03 °C. Imagine that you are a sane person and I ask you: Which of the following two changes is more important: the increase of cloud cover from 61% to 82%, or an increase of the global mean temperature (which is only very weakly correlated with the local temperature anywhere) by 0.03 °C?



Clearly, no human in the world could possibly detect the latter without special gadgets – and even common thermometers aren't really enough. On the other hand, everyone notices the lousy weather we've had in the Czech Republic since early 2013 or so. After all, the clouds decide about temperature changes greater than 10 °C during the daytime, a temperature difference 300 times greater than the hypothetically CO2-induced temperature change. It's important to see various events and changes in the proper context and ignore events and changes that are manifestly 100+ times less important than certain other things that we pretty much ignore, too.
Posted in climate, Czechoslovakia, everyday life, weather records

SUSY GUT with \(A_4\): six predictions for fermion masses

Posted on 12:37 AM by Unknown
Stefan Antusch, Christian Gross, Vinzenz Maurer, and Constantin Sluka of Basel, Switzerland (Antusch is also affiliated with the Werner Heisenberg Institute, a part of the Max Planck Institute in Munich) released an extremely intriguing preprint:
A flavour GUT model with \[\Large\theta_{13}^{PMNS} = \frac{\theta_{\rm Cabibbo}}{\sqrt 2}\]
Their model – or class of models – combines the constraints of supersymmetry, grand unification, and the \(A_4\) family symmetry: it fits 14 parameters to describe 20 quantities related to the fermion masses. Among the 6 quantities they're able to predict without fitting them, 4 seem to match the experimental values very well and 2 predictions are completely new, waiting to be falsified or confirmed (a Majorana phase and the Dirac CP-phase).

That's quite something. Look at Table 3 on page 10 of the paper to see those amazingly accurate predictions for the masses, mixing angles, CP-violating angles, and neutrinos' squared-mass differences. I am impressed, especially because four of the confirmed predictions (a rather large number) seem to result as nontrivial predictions of their models.




I won't discuss supersymmetry or grand unification here because they're widely discussed topics on TRF and they're complicated, anyway. (See e.g. an article on neutrinos in grand unification.) Instead, let me focus on some "more special features" of their model, especially on the \(A_4\) flavor symmetry.




Recently, the last real angle in the neutrinos' mixing matrix (the PMNS matrix) was pinned down; the value below is the global fit by the NuFIT collaboration. A surprise for many people was the relatively large value of the angle:\[

\theta_{13}^{PMNS} = 8.75^\circ\pm 0.43^\circ

\] Physicists tended to assume that the angle was either zero or extremely tiny. Well, as some string-inspired models (F-theory phenomenology...) had been predicting for quite some time, it's not small at all. (TRF readers could have suspected the figure was large since June 2011.) If you multiply this angle by \(\sqrt{2}\), you pretty much get the Cabibbo angle. This seems like some crazy numerology except that this condition may be rather naturally explained by some group-theoretical assumptions. And they do it.
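A quick numerical check of the title relation, taking the standard value \(\theta_C\approx 13.02^\circ\) of the Cabibbo angle as an assumed input:

```python
import math

# Check theta_13 = theta_Cabibbo / sqrt(2) against the global-fit value
# quoted above; the Cabibbo angle value is an assumed standard input.
theta_cabibbo = 13.02                 # degrees
predicted_theta13 = theta_cabibbo / math.sqrt(2)

measured, sigma = 8.75, 0.43          # degrees
pull = (predicted_theta13 - measured) / sigma
print(f"theta_C / sqrt(2) = {predicted_theta13:.2f} deg, pull = {pull:+.1f} sigma")
```

The prediction comes out near \(9.21^\circ\), roughly one standard deviation above the quoted fit – close enough that the relation is far more than random numerology would suggest.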



The idea of the \(A_4\) flavor symmetry began around 2001 in papers such as Ma-Rajasekaran (around 500 citations). The hierarchy of the charged leptons' masses (electron, muon, tau) as well as the near-degenerate neutrino masses (with large mixing angles) naturally arises from such a model. The symmetry has to be "softly broken".

What is \(A_4\)? It is the group of even permutations of four elements. The group \(S_4\) of all permutations (the so-called symmetric group) has \(4!=24\) elements; the alternating group \(A_4\) has \(4!/2=12\) elements. These elements come in four distinct types, the so-called "conjugacy classes": the identity (1 element), the cyclic permutations of a triangle leaving the fourth element intact (8 elements, which split into two classes of 4 inside \(A_4\) because the conjugation relating the two halves would be an odd permutation), and the compositions of two transpositions (3 elements: 12-34, 13-24, 14-23).

There is a general result about finite groups that says \(K=R\): the number of conjugacy classes is equal to the number of inequivalent irreducible representations. Because we have four classes, we must have four representations. Moreover, the sum of their squared dimensions must be equal to the number of elements of the group: \[

\sum_{i=1}^R d_i^2 = 12

\] How can you divide twelve into a sum of four squares of positive integers? The unique decomposition is \(1+1+1+9\), i.e. the dimensions are \(1,1,1,3\). The first one-dimensional representation is the trivial one: every element is mapped to the identity operator on a one-dimensional space. The other two one-dimensional representations are complex conjugates of each other: they assign one to everything except for the rotations by \(\pm 120^\circ\) around any axis, which are represented by \(\exp(\pm 2\pi i/3)\); the phase gets inverted for the other representation.
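Both finite-group facts used above – the four conjugacy classes and \(\sum_i d_i^2 = 12\) – can be verified by brute force. A small self-contained sketch:

```python
from itertools import permutations

def parity(p):
    """Parity (+1/-1) of a permutation given as a tuple of images."""
    sign, seen = 1, set()
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        sign *= (-1) ** (length - 1)
    return sign

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

A4 = [p for p in permutations(range(4)) if parity(p) == 1]
assert len(A4) == 12

# Conjugacy classes: orbits of g -> h g h^{-1} over h in A4.
classes, remaining = [], set(A4)
while remaining:
    g = next(iter(remaining))
    orbit = {compose(compose(h, g), inverse(h)) for h in A4}
    classes.append(orbit)
    remaining -= orbit

# Four classes of sizes 1, 3, 4, 4 (the eight 3-cycles split into 4+4):
assert sorted(len(c) for c in classes) == [1, 3, 4, 4]
# Number of irreps = number of classes = 4; squared dims sum to |A4| = 12:
assert 1**2 + 1**2 + 1**2 + 3**2 == 12
```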

Finally, the three-dimensional irreducible representation of \(A_4\) is nothing else than the space into which the tetrahedron whose 4 vertices are being permuted is embedded; it realizes \(A_4\) as a subgroup of \(SO(3)\). This 3-dimensional space is identified with the space of the three generations of leptons and quarks.

Symmetries are always legitimate principles except that a broken \(A_4\) isn't the only symmetry one could consider. At this moment, I don't know of any top-down or stringy reason why a softly broken \(A_4\) symmetry should be a part of the description of the generations but even in the absence of such an explanation (F-theory on a tetrahedron? some \(A_4\) orbifolds? centralizers inside some group?), I find the tetrahedral symmetry to be a plausible part of Nature (vaguely justified by some bottom-up, phenomenological observations) and the accurate prediction of 4+2 parameters besides the 14 input parameters seems like a rather strong piece of evidence that there is something about this idea.



A Feynman diagram from 1968. Click for more.

Two other papers

An off-topic bonus comment. There are two other new papers I want to quickly mention. Leonardo Modesto argues he can define finite quantum gravity in any spacetime dimension. He uses some non-polynomial functions of the Riemann tensor in the action to conclude that the theory is free of both non-renormalizability and ghost problems. It seems self-evident to me that both of these lethal problems are actually there. Non-quadratic functions of the fields inevitably create negative-residue poles, the ghosts (leading to negative probabilities), and the non-polynomial "form factors" are completely undetermined, leaving infinitely many unknown coefficients, which is the real problem with non-renormalizable theories (a complete lack of predictive power).

J.L. Chkareuli of Georgia wants to describe gauge fields as Goldstone bosons triggered by a spontaneously broken SUSY. He talks about some Lorentz symmetry breaking which doesn't break the actual physical Lorentz symmetry and doesn't really write any supersymmetric Lagrangians of recognizable types.

I am sure that for all experts, it is very painful or impossible to read similar papers because these papers not only look self-evidently wrong but the authors seem to be unaware of the elementary reasons why these papers are wrong. They just don't seem to interact with credible physicists so it seems they haven't even been told why their papers are wrong. Or they have been told but they have misunderstood it. Or they understood the reason but pretend that the reason doesn't exist.

At any rate, Mr or Ms Chkareuli and Modesto, you can't really make credible physicists read your papers unless you address the obvious reasons why your papers are wrong at a very visible place – and in the abstract. When one reads the abstract and the first page and determines that you don't seem to be aware of the basic knowledge and arguments and principles, he just throws your paper into the trash bin before he gets to the second page, because acting otherwise would mean wasting time. I am not 100% certain and can't be 100% certain that every paper that seems wrong after this quick reading is indeed wrong and empty of any valuable stuff, but if there are too many papers like that and the estimated probability that they're wrong is too high, it's simply sensible to throw them away as soon as possible. Their authors don't seem to care.
Posted in string vacua and phenomenology

Wednesday, May 29, 2013

Encouraging high school students talented in physics

Posted on 10:30 AM by Unknown
...and astronomy

In the afternoon, I spent about 5 hours in Techmania, our local science center/museum built in some no longer operational construction halls of Škoda Works, a major factory in our city.



Your humble correspondent was partly invited as a (now aged) kid who could have benefited from similar events and who could have an idea of what kind of aid may be helpful to the kids. There were high school teachers, primarily from a gymnasium in Cheb (a town in the westernmost corner of the Czech Republic) and a sport gymnasium in Pilsen (which has educated many excellent and famous Pilsner soccer, ice-hockey, and tennis players, and more).




We were also shown a small model of the fulldome planetarium that will be opened in several months and that will probably be the most modern digital planetarium in Europe, or at least a part of Europe. (Some related YouTube videos.)

Everything is digital over there, one can project almost anything (not just the night sky), and visitors may also wear the 3D glasses. The facility will replace an old mechanical planetarium that old people like me knew at the "Hamburg" buildings – which nowadays host a court – when we were kids. Of course, nothing from the mechanical planetarium may be directly imported. Nevertheless, I guess that the new facility has a much greater potential because it can show everything. It's less obvious whether this potential will be correspondingly exploited. Less may sometimes be more. I am somewhat worried that the kids will be so overwhelmed by visually impressive animations that they won't learn much and they won't have enough time to focus on anything and/or fall in love with the subject.

I just sent a mass e-mail to some folks who have worked in the visualization of relativity with the proposal to update their videos for the high-resolution, 3D fulldomes like the Pilsner one – there may be just dozens of places in the world that use the same technology. It could be fun to see the relativistic rollercoaster or the infalling observer in the black hole from this totally realistic perspective.

(Andrew Hamilton of Colorado immediately replied that in 2006, NASA and NSF funded a fulldome show called Black Hole: The Other Side of Infinity.)




Concerning the discussion about the identification and support for the talented high school students, lots of ideas were presented, echoed, questioned, improved, abandoned.

I would spend hours if I merely enumerated all the aspects of this discussion and my opinions about them – lack of excitement, the role of applied vs theoretical training, agreement or disagreement between the school curriculum and the activities promoted by physics/math olympiads and science centers, experimental vs theoretical, inclusiveness of the search for talented students, other nations' having more money, other nations' being more hard-working, how to motivate teachers to do something beyond their elementary duties, how to motivate and reward schools, whether physics olympiads and/or other contests are still "in" or obsolete, how to make sure that such debates will influence more than the roughly 5 high school teachers who happened to gather (together with a similar number of Techmania employees) today ;-), and so on.

Please feel free to offer your ingenious opinions.

When I was returning home, I spent almost one hour with a de facto homeless guy, a construction worker from Carlsbad who is building our new theater (to be opened in 2015), if I believe him, and who was left in Pilsen by his drunk colleagues today (or they were high, I forget). I only gave him a dollar because paying for the full bus ticket to Carlsbad seemed too much for a person I didn't really know. He told me he was released during the 2013 Klaus amnesty. He had been arrested because he had attacked a cop who was spitting on the guy's friend. ;-) If that's true, it's another piece of anecdotal evidence that it's right to support an occasional amnesty. But yes, it's also a reason not to trust such people too much. I gave him a tour of Pilsen of a sort and dragged him through various places such as an information center at the city hall and some booths of charities. Of course, no one would help him, not even 10% of what I did for him, so I am not surprised that he identified me as a sort of ultimate saint.
Posted in Czechoslovakia, education, science and society

Tuesday, May 28, 2013

Heuristic ideas about bounded prime gaps

Posted on 4:03 AM by Unknown
Why Yitang Zhang's proof is probably far less fundamental than the claim

Yitang Zhang worked at Subway before he landed a mathematics job. And when he did, he published almost nothing for years before offering, weeks ago, a proof of something rather important. That proof turned the name of the popular math instructor in New Hampshire into one of the best-known names among number theorists in the world.



Some increasingly popular links are:
Bounded gaps between primes (Zhang's technical paper)
Philosophy behind the proof (Math Overflow)

First proof that... (Nature)

Prime number breakthrough by unknown professor (Telegraph)
If \(p_n\) denotes the \(n\)-th prime, so that \(p_1,p_2,p_3,\dots =2,3,5,\dots\), the statement proven by Zhang may be phrased in a very simple way:\[

\liminf_{n\to\infty} (p_{n+1}-p_n) \lt 70,000,000.

\] The operator above is called the limit inferior which is just\[

\liminf_{n\to\infty}x_n := \lim_{n\to\infty}\Big(\inf_{m\geq n}x_m\Big)

\] If you think about this limit of the infimum for a while, you will understand that the limit inferior in the claim proved by Zhang is just the smallest gap between the adjacent primes that is realized infinitely many times (for infinitely many pairs). In other words, there exists at least one number – a potential gap between adjacent primes – that is realized infinitely many times.
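As an empirical illustration (my own sketch, not anything from Zhang's paper), one may tabulate the gaps between adjacent primes below a cutoff and observe that small even gaps recur over and over:

```python
# Count how often each gap between adjacent primes occurs below a cutoff.
# This only illustrates the statement; it proves nothing about the liminf.
from collections import Counter

def primes_below(limit):
    """A simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return [i for i in range(limit) if sieve[i]]

ps = primes_below(100_000)
gaps = Counter(b - a for a, b in zip(ps, ps[1:]))
print(gaps[2], gaps[4], gaps[6])  # each small even gap occurs well over a thousand times
```

The counts, of course, say nothing about what happens as \(n\to\infty\); they just make the liminf statement plausible.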




Because of some technical properties that probably depended on many personal choices that Zhang has made while attacking the problem, the upper bound in the inequality turns out to be a high number, namely 70 million. It is such a high number that for all practical purposes, the proposition proven by Zhang is de facto equivalent to\[

\liminf_{n\to\infty} (p_{n+1}-p_n) \lt \infty

\] i.e. to the claim that there exists a finite number that is realized as the gap between adjacent primes in infinitely many pairs.




On the other hand, as I will argue, the actually correct (but rigorously unproven) claim stronger than Zhang's theorem says\[

\liminf_{n\to\infty} (p_{n+1}-p_n) = 2

\] which means that even twin primes – pairs of primes that differ by two – are realized infinitely many times: there are infinitely many pairs of twin primes. This claim is the famous twin prime conjecture. In some sense, the assertion proven by Zhang is 35 million times weaker than the twin prime conjecture. Note that the first twin primes are\[

(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), \\
(71, 73), (101, 103), (107, 109), (137, 139), \dots

\] and there doesn't seem to be the tiniest reason to think that the list should terminate at some point. The largest currently known twin prime pair is \(2,003,663,613\cdot 2^{195,000}\pm 1\), two similar numbers that have 58,711 digits (each).
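The digit count quoted above is easy to verify, e.g. in Python, whose integers have arbitrary precision (a quick check of mine):

```python
# The record pair quoted above: 2,003,663,613 * 2^195,000 +- 1.
import sys

# Python 3.11+ caps int -> str conversions; lift the cap for this huge number.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(70_000)

n = 2_003_663_613 * 2 ** 195_000
digits = (len(str(n - 1)), len(str(n + 1)))
print(digits)  # (58711, 58711): both members of the pair have 58,711 digits
```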

I don't plan to study the proof in detail because it looks very complicated and "non-unique" to me. The proven statement is slightly interesting but the proof is probably less interesting – there's just a small chance that I am wrong – and there's less of a "profound message" to learn from it. It's like being interested in the Moon and being asked to study the O-rings in the Apollo spacecraft.

Moreover, I am not too interested in the claim that has been proved. But there is one more key reason: I feel certain that the proposition is true. The reason behind this certainty is the validity of a much stronger claim – a not quite rigorously defined one – that implies the twin prime conjecture, Zhang's proof, and many and many other much weaker corollaries. The claim is that
except for patterns that may be easily proved, the prime integers are distributed randomly and independently with \(1/ \ln n\) being the probability that a random number close to \(n\) is a prime.
This general – somewhat vague but still very important – claim has many consequences, including the Riemann Hypothesis. In fact, the character of the proposition above is more or less a special case of Gell-Mann's totalitarian principle in physics:
Everything that isn't forbidden is mandatory.
By this quote that generalizes the experience of the people suffering under totalitarian regimes such as communism and Nazism (there's no freedom: they tell you what to do and what not to do) to all of physics, Gell-Mann meant that the coefficient of every interaction in a Lagrangian or the probability of any resulting complicated process is nonzero unless one may use a symmetry or another rock-solid principle to prove that the coefficient is zero (because it violates the symmetry or another sacred principle).

In this analogy, "patterns that are easy to prove" are analogous to the "symmetries or other principles forbidding certain things". May I explain what I mean in the case of primes?

A pattern that is easy to prove is, for example, that if \(n\gt 3\) is a prime, then \(n+1\) and \(n-1\) are not primes. It's because except for \(n=2\), only odd numbers may be prime, and both neighbors of an odd prime are even. Similarly, among six consecutive integers greater than \(12\), at most two numbers may be primes. It's because only three numbers among the six are odd; and one of those three is a multiple of three. One could continue with many examples of this kind.
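The second pattern is easy to confirm by brute force (an illustrative check of mine, not a substitute for the two-line proof):

```python
# Among any six consecutive integers greater than 12, at most two are prime:
# three of the six are odd, and one of those three is a multiple of 3.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

max_in_window = max(
    sum(is_prime(start + k) for k in range(6))
    for start in range(13, 10_000)
)
print(max_in_window)  # 2: the bound is saturated (e.g. by 13..18) but never exceeded
```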

Similarly, using the Gell-Mann totalitarian principle, one may argue that the twin prime conjecture and its generalizations hold. There doesn't seem to be any reason why the difference between adjacent primes shouldn't be equal to two (or another allowed even number) – there are many examples in which it is two, in fact – and because the probability that \(n\) and \(n+2\) are both primes remains nonzero, there must exist infinitely many examples. Of course, it's hard to prove that "there is no reason why twin primes should stop at some point", either, but at least, one may prove that there exist no "reasons of the well-known types".

A TRF-based heuristic proof of the prime number theorem

Now, the density of primes around \(n\) asymptotically goes like \(1/ \ln n\). This is the right estimate for \(n\to\infty\), including the right numerical prefactor (the relative error goes to zero in the limit). This statement is known as the prime number theorem and it is a severely weakened sibling of the Riemann Hypothesis; it may be equivalently restated as the (proven, but far from easy) assertion that the nontrivial roots of \(\zeta(s)\) lie strictly inside the critical strip, \(0\lt {\rm Re}(s)\lt 1\), i.e. that there are no roots on the boundary lines of the strip.

I can offer you a supersimple, Lumoesque argument why the density of primes goes like \(1/\ln(n)\). Call the functional dependence of the density \(\rho(n)\); it's really the probability that the number around \(n\) is prime. A number \(n\) is prime if it is not divisible by any prime smaller than or equal to \(\sqrt{n}\). These are statistically independent conditions. So\[

\rho(n) = P_{n\in{\rm primes}} = \prod_{p\leq \sqrt{n}}^{p\in{\rm primes}} (1-1/p)=\dots

\] because the probability that a random large \(n\) isn't a multiple of \(p\) equals \(1-1/p\). But the product may be written as the exponential of the sum of logarithms\[

\dots = \exp\sum_{p\leq \sqrt{n}}^{p\in{\rm primes}} \ln (1-1/p) = \dots

\] and the sum over primes \(p\) may be approximated by the sum over all integers \(i\) weighted by the probability \(\rho(i)\) that \(i\) is prime:\[

\rho(n) = \dots = \exp \sum_{i\leq\sqrt{n}} \rho(i) \ln (1-1/i).

\] Now, the sum over \(i\) may be approximated by an integral when \(\rho(i)\) is smoothened. Take the logarithm of the identity above (with the sum replaced by the integral)\[

\ln\rho(n) = \dots = \int_1^{\sqrt{n}} \dd i\, \rho(i) \ln (1-1/i)

\] and differentiate it with respect to \(n\) to get\[

\frac{\rho'(n)}{\rho(n)} = \frac{1}{2\sqrt{n}} \rho(\sqrt{n}) \ln(1-1/ \sqrt{n})\sim -\frac{\rho(\sqrt{n})}{2n}

\] where \(1/(2\sqrt{n})\) came from \(\dd(\sqrt{n})/\dd n\) and where \(\ln(1-x)\sim -x\) for \(x\to 0^+\). One may easily verify that \(\rho(n)\sim 1 / \ln(n)\) satisfies the identity above; both sides are equal to \(-1/ n\ln(n)\) in that case. Among uniformly, nicely decreasing functions \(\rho(n)\), this solution may be seen to be unique. Even the coefficient in front of the logarithm or, equivalently, the base of the logarithm (\(e\)) may be seen to be determined by the (nonlinear) condition above.
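One may also check the asymptotics numerically: count the primes in a window around a large \(n\) and compare with the density \(1/\ln n\) (a quick sanity check of mine, not a proof):

```python
# Sanity check of the local prime density rho(n) ~ 1/ln(n):
# count the primes in a window [n, n + w] and compare with w/ln(n).
import math

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

n, w = 10 ** 6, 10 ** 4
actual = sum(is_prime(k) for k in range(n, n + w))
predicted = w / math.log(n)
print(actual, round(predicted))  # the two counts agree to within a few percent
```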

You may check how Terence Tao imagines a heuristic proof of the prime number theorem. I leave you to decide who among the two of us is the cumbersome overworked craftsman and who is the seer. ;-)

At any rate, the heuristic proofs above aren't rigorous but one may rigorously prove the prime number theorem. One may also prove other things. As you can see by comparing various proofs sketched by various people – or by the same people at various moments – there are many strategies that may be used to attack similar problems. When we're rigorously proving something like that in mathematics, we often work with lots of inequalities – not only the final one that e.g. Zhang has proved but also many inequalities in the intermediate steps. And the inequalities are usually ad hoc. We want to find an object that is "good enough to achieve a certain next step" but how good this good-enough object has to be isn't quite determined. What the next step has to be isn't quite determined, either. There's simply a lot of freedom when one designs a proof.

It's very likely that some other mathematicians will improve Zhang's proof so that they will reduce the constant 70 million to something smaller. Such proofs may be perhaps obtained as "modest mutations" of Zhang's machinery. However, it's unlikely that someone will reduce the constant 70 million to a constant smaller than 6 while keeping the bulk of Zhang's proof intact because certain tools become inapplicable for such small gaps (see the Math Overflow summary of the proof).

The proof of the actual twin prime conjecture will probably have to be completely different from Zhang's proof. It's nice that he has achieved a rigorous proof of a theorem that is a weaker version of the twin prime conjecture but I doubt that one can learn a lot by studying the details of his proof. There had to be so much freedom when he designed it. So it's like a NASA rocket engineer's decision to study every detail of a Soyuz spacecraft. I don't think that this is the most important activity needed to conquer outer space. Much like the Soyuz spaceships, Zhang's proof probably has many idiosyncrasies reflecting the Russians' and the Chinese-American man's suboptimal approach to problems.

In mathematics and theoretical physics, when something is just being proved, we often encounter two different situations: in one subclass, the methods needed to prove something give us such new insights that these insights – methods, auxiliary structures that were used to complete the proof, and so on – are actually more valuable than the statement that has been proven. But I tend to think that Zhang's proof belongs to the opposite class of situations – in which the proof is less important than the assertion because it's composed of many idiosyncratic steps and tricks that are probably inapplicable elsewhere and that may be replaced by completely different "building blocks" to prove even the desired proposition.

Of course, I can't be quite sure about this pessimistic appraisal of the proof's methodology since I haven't actually mastered the proof. But for general reasons and from experience, I believe it's the case anyway. Moreover, I tend to believe that the theorem proved by Zhang – and even the twin prime conjecture that may be proved in the future – is extremely weak relatively to some rigorous formulation of Gell-Mann's totalitarian principle applied here, which says something like "the distribution of primes is random except for [simple divisibility-based] patterns that may be easily demonstrated". I tend to believe that such a principle will ultimately be formulated in a rigorous way and proved by a rather simple yet ingenious method, too.

You should understand that if I believe that this elegant goal is a legitimate, finite, \({\mathcal O}(1)\) task for some future mathematicians, it's also reasonable for me to believe that the assertion by Zhang and its seemingly cumbersome proof are a nearly infinitesimal fraction of what mathematicians will achieve sometime in the future. Zhang's proof represents the kind of cutting edge that mathematicians are able to reach for similar propositions today. But do I really care about this cutting edge? This cutting edge, much like most cutting edges in mathematics, is made terribly modest by the mathematicians' uncompromising insistence on complete rigor. If one is actually interested in the truth and is satisfied with arguments suggesting that something is true at the 5-sigma or 10-sigma confidence level, in some counting, the cutting edge is elsewhere – it's much further.

So of course, the hunt for strictly rigorous proofs that has defined mathematics after its divorce from physics is a legitimate goal – a constraint worshiped by a large group of professionals, the mathematicians in the modern sense. However, the strict rules of this hunt inevitably imply that in many cases, these professionals place themselves miles beneath the actual cutting edge of knowledge as I understand it.

And that's the memo.
Posted in mathematics, philosophy of science, science and society | No comments

Monday, May 27, 2013

Smoluchowski, Milanković: birthdays

Posted on 11:18 PM by Unknown
Two Slavic, Austrian-Hungarian physicists were introduced to the sunlight on May 28th.

Marian Smoluchowski was born as a Pole in Austria in 1872; Milutin Milanković was born as a Serb in (then) Croatia, Kingdom of Hungary, in 1879. Smoluchowski was a statistical physicist; Milanković was a climatologist, astronomer, geophysicist – and also a construction engineer.

Marian Smoluchowski was born to an upper-class family near Vienna. He studied physics in Vienna; Exner and Stefan were among his teachers. His research followed the tradition of Ludwig Boltzmann from the beginning. When he was 40, he moved to Krakow to teach experimental physics. He was a keen mountain-climber and skier in the Alps and the Tatra Mountains. His being a resilient athlete didn't prevent his death at the age of 45, in a dysentery epidemic.




His important work was all about statistical physics. In 1904, he noticed that there were density fluctuations in gases and he also managed to correctly explain critical opalescence as a consequence of these fluctuations. In 1906, independently of Einstein's 1905 work, he explained the Brownian motion using the collisions between molecules and the pollen particles.

Note that these insights clarified not only power laws but also some relationships between coefficients. Both Einstein and Smoluchowski discovered the Einstein relation \(D=\mu k_B T\) between the diffusion constant, mobility, and temperature. And both of them realized why the average traveled distance scales like \(\Delta x\sim \sqrt{t}\). To find this scaling, it's useful to study how \((\Delta x(t))^2\) is statistically distributed.
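The \(\Delta x\sim \sqrt{t}\) scaling is easy to reproduce in a toy simulation of an unbiased one-dimensional walk (my own illustrative sketch, not Smoluchowski's calculation):

```python
# A 1D unbiased random walk: after t unit steps, the mean squared
# displacement equals t, so the typical distance grows like sqrt(t).
import random

random.seed(0)  # fixed seed for reproducibility

def mean_square_displacement(t, trials=20_000):
    total = 0
    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(t))  # final displacement
        total += x * x
    return total / trials

t = 100
msd = mean_square_displacement(t)
print(msd)  # close to t = 100, confirming <(dx)^2> = t
```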

I still sort of remember the times when I was a kid and these widespread insights based on the second power (or the square root, when inverted) looked surprising and unexplained to me. After some time, one understands that the second power is the most natural function to measure the magnitude of things when the sign doesn't matter.

In 1916, Smoluchowski wrote his convection-diffusion equation sometimes bearing his name.




Now, Milutin Milanković was born to a "locally higher-class" family in a village in Eastern Croatia, near the Serbian border. I still don't understand how these Southern brothers of ours assign nationality. He was born in Croatia (according to the old maps as well as the current ones); he spoke the Serbo-Croatian language, which is really the same for both nations; and he is still considered Serbian, not Croatian. What does it even mean? ;-)

Aside from the Serbs' being orthodox Christians (a criterion that can't reliably work today when the religious diversity is higher and atheists exist everywhere, too), the only explanation I can think of is that the Serbians sometimes like to use Cyrillic (which is also a bad criterion; last time I visited Belgrade, it was full of the Latin alphabet, too). Milutin Milanković's name is spelled Милутин Миланковић in Serbian Cyrillic; I have never appreciated that the last letter was an hbar (Planck's constant). ;-)

We just said that Smoluchowski was a mountain-climber and skier and died when he was 45. On the other hand, Milanković had frail health from the beginning (which forced his brother to educate him at home) and three of his brothers died of tuberculosis early. However, Milanković himself died of a stroke when he was 79. A message of these two lives is that sport is a speedy way to the grave while non-lethal illnesses and disorders are what keeps you alive. ;-)

When he was 17, he went to Vienna to study civil engineering, and when he was 26, in 1905 (yes, he was just 2 months younger than Einstein), he already began to build dams, bridges, viaducts, aqueducts, and other structures in reinforced concrete across Austria-Hungary. This successful career makes it kind of incredible that he also became the guy who revealed the causes behind the most important among the large yet constantly repeating climate changes, the glaciation cycles: the Milanković cycles.

To make these important discoveries on the interface of astronomy, geophysics, and climatology, he first had to realize that atmospheric science was an inferior scientific discipline. In his words formulated in 1912:
Most of meteorology is nothing but a collection of innumerable empirical findings, mainly numerical data, with traces of physics used to explain some of them... Mathematics was even less applied, nothing more than elementary calculus... Advanced mathematics had no role in that science...
Despite this realistic appraisal, Milanković was actually a key person who contributed many of the insights that transformed this inferior scientific discipline into an exact science. He suddenly started to introduce integrals and many other tools – things most of the low-brow folks in the discipline had not known – into meteorology and climatology, in order to explain the terrestrial climate and the climate of other planets in the Solar System.

Most famously, he discovered the Milanković cycles in the 1920s. The alternation of ice ages and interglacials at the timescale of tens of thousands or hundreds of thousands of years is caused by variations of various orbital parameters – obliquity (axial tilt), eccentricity, and longitude of perihelion – which make the ice accumulate or melt at the most important "driving" place of the globe near the Arctic Circle.



The theory was believed and abandoned at various moments but the 2006 paper by Gerard Roe (building on realizations by Nigel Calder in the 1970s and probably others) eliminated the doubts: a vast majority of the variation in the glaciation cycles has been caused by the Milanković (astronomical) cycles. The detailed agreement between all the wiggles of the theoretically calculated (astronomical, orbital) cyan curve and the empirical, reconstructed white curve is stunning and shows that these astronomical cycles are what drives the variations by up to nearly 10 °C at the timescale of tens of thousands of years.

Recall that he was also a construction engineer. Aside from that job and the astronomical climatology, he also became an accomplished historian of science, a popularizer of science, a reformer of the Julian calendar (who affected how some orthodox churches measure time), a geologist who often debated Alfred Wegener, and a scientific bureaucrat in Yugoslavia after the war.



On May 28th, 1987, Matthias Rust, a German (then) teenager, landed on the Red Square in Moscow. He did it to build a bridge between the communist and capitalist worlds, he says above. He was released after 14 months instead of the planned 4 years. Now he works as an analyst for an investment bank and plans to teach yoga.
Posted in science and society | No comments

Sunday, May 26, 2013

Anticommunist uprising in Pilsen: 60 years ago

Posted on 11:30 PM by Unknown
One of the numerous historical events that strengthen my pride in my hometown of Pilsen was the 1953 Uprising which was the first credible post-war anticommunist uprising in the Soviet bloc and the only violent anticommunist rebellion in the history of the communist Czechoslovakia (1948-1989).



Pilsen's communist headquarters in the 1950s. We're voting for the candidates of the National Front and against the remilitarization of West Germany, blah blah blah.

It followed the currency reform at the end of May 1953, exactly sixty years ago. Note that in March 1953, i.e. two months earlier, both Stalin and his Czechoslovak counterpart Gottwald died. This year exemplifies the incredible distortion of the economy; some of the numbers sound crazy.




First of all, the GDP growth in the preceding year was a stunning 30 percent. You could say that it had to be a tremendous success for the communist leaders. Except that you can't directly translate this figure into living standards. The growth was due to a massive subsidization of the heavy industry preparing for another world war; the light industries that had been equally characteristic of Czechoslovakia weren't really flourishing.

People apparently had lots of money but because the prices were dictated from the top, they couldn't really buy what they wanted. Shortages of consumer goods were omnipresent, inflation was at 28%. The nation was still using food rationing stamps.




The communist leaders decided on a tough currency reform. The salaries were reduced by a factor of five; and the savings were reduced by a factor of fifty – de facto liquidated. With similarly brutal changes, it was possible to stop food rationing and increase work quotas.

The apparatchiks' idea was that the working class – e.g. the workers in the Škoda Works ("Factories of V. I. Lenin"), the largest Czechoslovak heavy-industry firm located in Pilsen – wouldn't care because they're just working-class losers who have nothing to lose, except for their chains, the typical kind of angry jealous scum that the left-wing ideologies are building upon.

However, it turned out that this expectation was wrong in Pilsen. 20,000 workers in Pilsen (including a disproportionately high number of members of the communist party itself) went on strike and began to revolt against the party, in order to defend their savings just like decent capitalists would. About 2,000 students joined the rebellion. The communist secret police members and their informers were treated exactly as they should have been – they were lynched. The rebels took over the municipal radio and some courthouses; they unsuccessfully tried to storm the local Bastille, the Bory Prison, to liberate the political prisoners. Busts of Gottwald and Stalin were trashed and replaced by busts of the last democratic president of Czechoslovakia, Edvard Beneš. Banners saying "With the USSR forever" were burned, and so on.

About 8,000 cops, 2,500 soldiers, and 80 tanks were called to Pilsen to suppress the uprising. So of course, it was suppressed sometime during the second day of the revolt, June 2nd, after about 40 rebels were killed. But the revolt did have an impact. At least, some price increases were softened (which, ironically, helped to stabilize the communist regime in the medium or long run) and the revolt may have helped to encourage other rebels in East Germany and especially in Hungary where the counterrevolution erupted in 1956.



Microsoft didn't like that Prague's 14th century Charles Bridge is boring, occupied by the walking people such as Mark Zuckerberg only, so they rebuilt it (and the Old Town Square and other places) as a circuit for McLaren F1 and P1. Forza Motorsport 5 for Xbox. ;-) Impressions.

It is my belief that the traditional conservative spirit of Pilsen did help. In the 15th century, for example, Pilsen was the most important pro-Catholic city in the Czech lands that was opposing – and successfully resisting – the early protestant Hussite warriors. And of course, the liberation of the city by the U.S. army in 1945 – the only major Czechoslovak city not taken by the Red Army – did help, too.



Bonus: some amazing motion sculptures...
Posted in Czechoslovakia, politics | No comments

Saturday, May 25, 2013

Global warming is here to stay

Posted on 10:46 AM by Unknown
Kevin Trenberth wrote the following text:
Global warming is here to stay, whichever way you look at it
So it must be a spherical global warming! Before I comment on Trenberth's musings, let me offer you a quiz.



Click the image above to zoom in.

You see a graph that seems to be a graph of some temperatures. You see that the maximum that the temperature has reached on this graph is slightly above 0.7 °C. On the horizontal axis, you see 8 cells. Your task is to guess the value of the temperature now, pretty much in the middle of the next, 9th cell (the first one on the right side that isn't shown on the graph anymore).

The function seems to be increasing, right?




It could be something like 0.8 °C or 0.9 °C, some readers will say. The tiny dip at the right end of the graph is an irrelevant fluctuation. We see that the temperature was recently increasing by 0.1-0.2 °C per cell.

Are they right? Well, they're not right.

The latest reading at the moment defined above is actually 0.445 °C. How do I know that?




Because the graph above isn't a temperature graph but a graph of something that behaves in a very similar way to the local or global temperatures (at least if you take the logarithm of it) – namely the price of the Apple stock.

You may play with the Apple graph at Google Finance and you will immediately see what I did. I charted the price of the stock between early 2005 and late 2012, divided it by $1,000, and added the Celsius degree as a unit to confuse you.

You could have bought the stock for $7 in 2003 and it peaked above $700 in September 2012. In that way, you would have earned 10,000% (ten thousand percent) of the original investment in less than a decade. Well, I guess that there are some TRF readers who did not do anything less than that...

For several years, I had been amazed by the rise of the Apple stock and thought that it couldn't possibly continue – although I seriously admitted that both scenarios were possible. I was proven wrong about the "primary guess" a few times. However, since September 2012, the stock went from $700 to $445. That's a more than 35% decrease. Those who bought the stock in Fall 2012 are probably not excessively happy now.

The rise of the stock had some causes – it wasn't a collection of coincidences that just happened to conspire to increase the price of the stock every year. But the idea that these causes are guaranteed to work forever is just childishly naive. There is no real evidence that the global mean temperature should behave differently than the Apple stock – and rise for the following decades and centuries, as long as the mankind burns the fossil fuels.

In order to show how naive the global warming alarmists' reasoning is, I will replace the belief that everything is about the enhanced greenhouse effect which is guaranteed to last forever by the belief that the Apple stock has to rise forever due to the ingenuity of the iPod. Here is Trenberth's article (I deliberately used a not-quite-primary product of Apple, the iPod, because this is what the global warming alarmists are doing with CO2, too: it is far from being the most important greenhouse gas and the greenhouse effect is far from being the most important contribution to the energy balance and temperature variability):

Apple's stock rise is here to stay, whichever way you look at it

Has the rise of the Apple stock stalled? This question is increasingly being asked because the MP3 player market seems cool and hot, or because the price on the Wall Street is not increasing at its earlier rate or the long-term rate expected from financial model projections.

The answer depends a lot on what one means by the “rising Apple stock”. For some it is equated to the “price paid on the Wall Street”. That keeps going up but also has ups and downs from year to year. More on that shortly.

Why should it go up? Well, because the demand for iPods is warming as a result of human leisure-time activities. With increasing collections of songs, videos, and other time-wasting media files in the society, there is an imbalance in data flows in and out through the cables connected to the MP3 players: the devices increasingly trap more media files and hence create a rising demand for the iPods. “Warming” of the market really means heating, and this can exhibit itself in many ways.

Rising demand for the stock among buyers on the Main Street are just one manifestation. Melting Research in Motion is another. So is collapse of Microsoft, Nokia, Samsung, and other competitors that contribute to the rising unemployment among non-Apple-trained IT specialists. Increasing the traffic of the Apple Store and invigorating viral video industry is yet another. But most (more than 90%) of the data imbalance goes into the internal flash disks of the iPods, and several analyses have now shown this. But even there, how much the upper layers of the RAM are being filled, as opposed to how much penetrates deeper into the internal flash memory of the iPod where it may not have much immediate influence, is a key issue.

The ups and downs of Apple's stock price

My colleagues and I have just published a new analysis showing that in the past decade about 30% of the media files have been dumped at levels below 700 megabytes beneath the most frequently accessed portions of the flash memory, where most previous analyses stop.

The first point is that this is fairly new; it is not there throughout the record. The cause of the shift is a particular change of the behavior of consumers who listen to the music in winds, especially in the Pacific Ocean where the subtropical trade winds have become noticeably stronger, changing the minimum volume that is needed to hear the songs and providing a mechanism for data to be carried down into the flash memory. This is associated with patterns of changes of the popular songs in the Pacific, which are in turn related to the female performances of the male interpreters.

The second point is that we have found distinctive variations in Apple's stock price with male musicians. A mini rise of the stock price, in the sense of an increase of the figure on the Wall Street, occurs in the latter stages of the publication of a new song by a male interpreter, as the music comes out of the flash drive and activates the speakers. The amount of data in the flash memory is also affected by volcanic eruptions, which also affect the perceptions of Apple's stock rise, especially if a volcano covers an Apple factory by lava.

Normal usage of the Internet also interferes by generating clouds that store the users' files for a while and then send them back, and there are fluctuations in the global data imbalance from month to month. But these average out over a year or so.

Another prominent source of expected variability in the industry’s data imbalance is changes in the electricity industry itself, seen most clearly as the cycle of variable spot prices. From 2005 to 2010 the power plants went into a quiet phase and the data energy imbalance is estimated to have dropped by about 10 to 15%.

Some of the penetration of the data into the depths of the flash memory is reversible, as it comes back when the user turns the iPod on again. But a lot is not; instead it contributes to the overall filling of the deep flash memory. This means less short-term addition of the data into the RAM, but at the expense of greater utilization of the flash memory, and faster deterioration of the competing companies. So this has consequences.

Apple's stock rise is here to stay

Coming back to the stock price record on the Wall Street, one thing is clear. The past decade is by far the most successful one for Apple's stock price on record. Human interest in Apple Inc really kicked in during the 1970s, shortly after the company was founded in 1976, and Apple's stock price has been rising pretty steadily since then.



[Graph: mean NOAA record]

While the overall increase of the price is about $500 per decade, there are four one-year periods where there was a hiatus in the rise, as the graph above shows: in 2000, partly in 2002, in 2008, and between 2012 and 2013. But at each end of these periods there were big jumps. We find exactly the same sort of flat periods at random places of financial model projections, lasting easily up to 15 months in length.

Focusing on the wiggles and ignoring the bigger picture of unabated rise of Apple's stock price is foolhardy, but an approach promoted by iPod success deniers. The elimination of the competition keeps marching up at a rate of more than 30 companies per decade since 1992 (when global markets using instantaneous transactions were made possible), and that is perhaps a better indicator that Apple's stock price continues unabated. The deterioration of all the competitors comes from both the melting of companies associated with the PC industry, thus adding more finances and flash memory chips to the market of intelligent phones, plus the warming and thus expanding market for the media players itself.

Apple's stock rise is manifested in a number of ways, and there is a continuing wireless data transfer imbalance in the vicinity of the iPods. The current hiatus in the rise of Apple's stock price is temporary, and this increase has not gone away.



LM: I hope that at least some readers were laughing. By this parody, I wanted to emphasize that while Trenberth writes and says lots of things about many random components of the atmospheric and weather systems, none of them really proves – or provides us with any significantly strong evidence – that CO2 is important enough so that the temperatures will be higher in 20, 30, 50, or 100 years. Much like the price of a stock, they may be higher but they may be lower, too. Rationalization of a predetermined conclusion is not genuine science.

In fact, it seems that he talks about so many things just in order to impress the readers who don't have a clue about climatology or, even worse, to distract the reader who would otherwise realize that a previous claim by Trenberth was proven wrong. Randomly jumping from one topic to another creates excuses that undemanding readers are satisfied with. He isn't carefully verifying the statements individually. Chances are that about 1/2 of his statements are simply incorrect, and even those that are correct have nothing to do with the main claim that Trenberth would like to justify – namely the claim that CO2 matters and is a significant (or even the main?) factor for the forecasts – but cannot, because there exists no scientific evidence for it.
Posted in climate, markets, science and society, weather records | No comments

Friday, May 24, 2013

Eric Weinstein's invisible theory of nothing

Posted on 9:55 PM by Unknown
On Friday, I received an irritated message from Mel B. who had read articles in the Guardian claiming that Eric Weinstein found a theory of everything or something close:
Roll over Einstein: meet Weinstein (by Alok Jha)

Eric Weinstein may have found the answer to physics' biggest problems (by Marcus du Sautoy)

Geometric Unity (a lecture at Oxford that no physicist attended)
First, the puns involving names emulating Einstein are extremely far from being new to me because as the most successful Czechoslovak debunker of these new Einsteins (I mean anti-relativity cranks in this particular case), I've spent quite some time with the Slovak crackpot originally named Arthur Bolčo who also wrote the book Arthur Bolstein: An Ordinary Collapse of an Extraordinary Theory (which had both Einstein's and Bolstein's photographs on the cover, cute).

Now, Weinstein is a smart guy, a likable figure, a hedge fund speculator, the father of the MathWorld encyclopedia later run on Wolfram's domain (mistake! A different man, see the comments), and a discrete physicist close to folks like Edward Frenkel, a mathematician at Berkeley.

But the stories in the Guardian are just completely insane because they have absolutely no basis.




This aspect of the story was nicely discussed by Jennifer Ouellette's blog entry "Dear Guardian: You’ve Been Played". Her blog, Cocktail Party Physics, has been incorporated into the website of a once nice American science magazine.




Eric Weinstein doesn't seem to have written a physics paper in his whole life (if we don't count his bizarre 0-citation PhD thesis) and this particular new theory of everything isn't described in any paper – not even an informal preprint – that anyone has seen. He was just invited by a buddy to give a seminar that, in fact, no one attended and to which no one who knows similar things was invited.

So it seems like another self-evident case of nepotism, self-promotion, unprofessionalism of the journalists in the Guardian, and "science" run by press conferences. Even Nude Socialist's Andrew Pontzen concluded that Weinstein's theory of everything is probably nothing; see also a similar criticism by Spinor Info. I am not gonna speculate on whether or not Eric Weinstein has paid for the self-promotion but I think that it would be legitimate to speculate because there's quite some case for this hypothesis. It's comparably conceivable that the whole effect may be explained by the journalists' gullibility and stupidity and nothing else.

I don't know the content of the Oxford seminar, I haven't seen any paper, and I don't even know whether a paper will ever exist at all but building on some rumors, it seems that his work is another episode in the widespread confusion about the "graviweak unification". Papers by authors like Garrett Lisi – and in the case of the "TOE" papers, even papers by folks like Fabrizio Nesti (who wrote the JHEP \(\rm\LaTeX\) macro and added even some papers on matrix string theory, among others; despite his "priority", Nesti is less famous than Lisi because he's a sailor, not a surfer) – are mixed sequences of mistakes, blunders, errors, misunderstandings, and – in the best passages – unjustified hopes.



I guess that Rammstein is a relative of Einstein, too. It's cool music I only began to like less than 10 years ago under the influence of friends JB and OK.

There are lots of lethal flaws in these papers each of which is sufficient to kill the idea. But the most general theme, the "graviweak unification" i.e. the unification of the electroweak and gravitational interactions at the level of spacetime fields, is just totally hopelessly wrong. There can't be any "graviweak unification".

It seems rather clear to me that all these authors are confused by the apparent similarity between the local Yang-Mills groups of gauge theories such as \(SU(2)_W\times U(1)_Y\) for the electroweak theory on one hand; and the local Lorentz group such as \(SO(3,1)\) acting on vielbeins \(e_\mu^a\) in the tetrad formalism describing the general theory of relativity i.e. gravity on the other hand.

These groups enter the dynamics differently but these differences could perhaps be due to some symmetry breaking (and even the difference between one group that is compact and another group that is not could turn out to be harmless, perhaps).

But the key fact that those people completely miss is that the \(SO(3,1)\) local Lorentz group is in no way the key local symmetry principle underlying general relativity in its "covariant" description. Instead, it's the diffeomorphisms that are the key symmetry. And diffeomorphisms are completely different transformations than the local Lorentz symmetries! Diffeomorphisms act like\[

\phi(x,y,z,t)\to\phi'(x,y,z,t) = \phi(x',y',z',t')

\] on the scalars and I want to avoid describing how tensors transform because you should know it. At any rate, the new value of a field at a point after the action of a diffeomorphism depends on the value(s) of the field at another point before the diffeomorphism acted. On the other hand, the local Lorentz symmetries – much like the Yang-Mills symmetries – only transform the fields at the same spacetime points into each other. The counterpart of the transformation above would need no primed coordinates \(x',y',z',t'\).

These are completely different transformations and the local Lorentz transformations are really unnecessary, optional. We may do general relativity without them; on the other hand, the tetrad formalism of general relativity requires both the local Lorentz symmetry and diffeomorphisms. What's essential for gravity are the diffeomorphisms, completely different beasts than the local Lorentz transformations. Their generators are given by the stress-energy tensor \(T_{\mu\nu}\) which has two vector indices (spin 2), unlike the generators of Yang-Mills-like symmetries \(J_\mu\) which are currents with one index (spin 1 of the gauge bosons).
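To make the contrast explicit, here is how the vielbein \(e_\mu^a\) responds to the two kinds of transformations in the standard tetrad-formalism conventions (a sketch; index placements follow the usual textbook choices):

```latex
% Local Lorentz transformation: rotates the flat index a,
% using only fields at the *same* point x
e'_\mu{}^a(x) = \Lambda^a{}_b(x)\, e_\mu{}^b(x)

% Diffeomorphism x \to x'(x): acts on the curved index \mu
% and refers to the field at the transformed point x'
e'_\mu{}^a(x) = \frac{\partial x'^\nu}{\partial x^\mu}\, e_\nu{}^a(x')
```

Only the first rule is structurally analogous to a Yang-Mills gauge transformation; the second one, the one that actually encodes gravity, is not.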

There can't exist any purely field-theoretical unification of spin-1 fields and spin-2 fields (and of the corresponding symmetries) in the same spacetime. Only string theory and its diverse descriptions may achieve such a unification. They do so by attributing an internal structure of a sort to the particles including the messengers – for example, it's the string that may have internal vibrations and those may change the spin of the resulting particle, too. In this unification, one immediately generates an infinite tower of new states with arbitrarily high spins aside from \(J=1\), \(J=2\), too. And if we're "lucky", the excitations of the extended objects interact consistently, avoid divergences and anomalies, and agree with the physics of gauge theories and general relativity. Only string theory in its different manifestations has been "lucky" so far and it seems likely that no other theory will ever join this "lucky" club.
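As a reminder of how string theory populates that infinite tower, the leading Regge trajectory of the open bosonic string (in the usual \(\alpha'\) conventions) ties the maximal spin at each mass level to the mass:

```latex
% Leading Regge trajectory of the open bosonic string:
J_{\max} = \alpha' M^2 + 1
% so the massless level carries the spin-1 gauge boson, while the
% closed-string analogue, J_{\max} = \tfrac{1}{2}\alpha' M^2 + 2,
% places the massless spin-2 graviton at its bottom.
```

The ever higher spins at ever higher masses are exactly the new states that any consistent unification of spin 1 and spin 2 forces upon us.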

The reason why gravity can't be unified with the Yang-Mills forces in the naive, field-theoretical way is no string theory. In fact, it's not even rocket science. I find it strange that these men can't understand the reasons. I find it shocking that they still can't penetrate these simple things and isolate the simple mistakes they've been making for years.

You may perhaps define some equations of motion that unify the (compact) Yang-Mills groups of the Standard Model (or similar ones) with the (noncompact) local Lorentz symmetry but the resulting theory will still have nothing whatsoever to do with gravity because the natural local transformations that allow us to write gravity covariantly are diffeomorphisms, not local Yang-Mills-like symmetries. Moreover, actual physicists have known for quite some time that in the real world, the unification of gravity with other forces critically depends on quantum mechanics which is why arbitrary games with the classical Lagrangians are no good.

Eric, Fabrizio, Garrett, please try to wake up and stop with this immensely stupid crackpottery and the embarrassing promotion of this crackpottery in the media!

14-dimensional fiber bundle

Incidentally, some sources suggest that Weinstein wants to construct his "theory of everything" out of a 14-dimensional bundle obtained by placing the 10-dimensional space of possible values of the 10-component metric tensor \(g_{\mu\nu}\) at each point of the 4-dimensional spacetime. That's a fun way to present the identity \(10+4=14\) but it's otherwise completely empty. Specifying how many dimensions a theory should have is extremely far from actually having a theory – knowing a consistent description of the interactions of some particles, fields, or other objects in a given spacetime. Moreover, in some reasonable clarifications of the 14-dimensional theory, the 10 dimensions are spurious and the theory is still the same 4-dimensional theory with 10 fields that we have known as general relativity.
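The arithmetic behind \(10+4=14\) is just the count of independent components of a symmetric \(4\times 4\) metric plus the four spacetime dimensions; a trivial sketch in Python (the function name is mine):

```python
def sym_tensor_components(n):
    """Independent components of a symmetric n x n tensor: n(n+1)/2."""
    return n * (n + 1) // 2

# The metric g_{mu nu} in d = 4 has 10 independent components...
print(sym_tensor_components(4))      # -> 10
# ...which, added to the 4 spacetime dimensions, gives the advertised 14.
print(sym_tensor_components(4) + 4)  # -> 14
```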

Independently of that, there can't be any natural theories with a stable enough spacetime in 14 dimensions.
Posted in alternative physics, science and society | No comments

Sheldon Glashow on future of HEP in the U.S.

Posted on 3:23 AM by Unknown
...plus a wonderful interview with Nima about HEP and failures of science popularization at the end...

Sheldon Glashow, a co-father of the electroweak theory, wrote a 6-page essay about
Particle Physics in The United States, A Personal View
It is as phenomenological or experiment-oriented as you can get. Glashow complains that the SSC was cancelled, America lost its leadership in high-energy physics, and the next collider after the LHC is unlikely to be built in the U.S., either.

He offers his personal views on different kinds of experiments and different things they may try to determine, especially those that have a big chance to be performed primarily in the U.S.




I agree with his comments on high-precision experiments. He says that while some progress in physics may result from measuring some parameters more accurately than before, it's often the case that the "next digit" is the only thing we learn. It's an important thing to keep in mind. Even if such a high-precision experiment finds some discrepancy, it won't really make us sure that the discrepancy is due to something other than our own error; and it won't tell us what new effect it is really due to if it is not just our error.




Quite generally, the most important transformative revolutions came from people and experiments that were looking into places that no one else had visited before. I couldn't agree more. There are exceptions, too.

Glashow mentions the experiments focusing on dark energy and dark matter. The direct detection of the latter would be a breakthrough.

The next topic is possible violations of the conservation laws for the baryon number \(B\) and the lepton number \(L\). This topic also covers the likely Majorana character of the neutrino masses. Related topics are flavor-changing processes and the electric dipole moments of elementary particles.

Glashow has dedicated much of his time in the recent decade or so to neutrino oscillations. He mentions this theme, too. The determination of whether sterile neutrinos exist; whether there is any CP-violation in the neutrino sector; and some information about the neutrino masses are the three key things he wants to learn.



The fingers aren't Glashow's; they belong to Kenneth Lane.

It's all serious physics but I don't believe that Glashow fails to feel that all these things are relatively unimportant technical details in the grand scheme of things. I would say that even after allowing for 1 or 2 exceptions, the combined importance of all the questions that Glashow wants answered would still be lower than the importance of (just) his own contributions to the electroweak theory.

In fact, I think that the combined importance is also lower than the research of grand unification that Glashow has co-fathered as well, despite the very limited tools to study grand unification experimentally.

In other words, I think that it must be clear to Glashow that the truly important things in high-energy physics happen in research that is not confined to questions that can be answered by existing experiments or the experiments in the near future. One may be used to a pretty good synchronization of advances in theory and experiments but let me tell you something: this synchronization has always been far from perfect; and it is inevitable that the deviation from the perfect synchronization is growing and has to be growing. This growth seems almost as inevitable as the second law of thermodynamics.

While the divorce may be frustrating, it's a part of progress and a sign of progress that we may successfully answer questions that are extremely far from our abilities to directly experimentally test them; and on the contrary, we may perform experiments whose results may be hard to calculate (which is why most of these experiments may be considered to be "irrelevant mess" by the theorists). The increasing separation is inevitably linked to the ability of theorists to think about the natural phenomena ever more cleverly and indirectly; and to the experimenters' ability to test things well beyond those that seem simple to the theorists.

So I believe that the topics Glashow wants to focus on already represent just a tiny percentage of the questions that are being legitimately asked and researched in high-energy physics and they're really narrow-minded technical details; and their relative importance is guaranteed to decrease in the future, too. In some sense, all the questions generalize the "another digit" type of discoveries and by its unambitious nature, Glashow's vision of the future therefore resembles the vision formulated by Lord Kelvin – we must just measure quantities in classical physics more accurately – before the relativistic and quantum revolutions took place.

The truly important things are different and to sort many of these things, we may use our brains and experiments in ratios that are extremely far from 50:50. There is really no good reason why the composition should always be close to 50:50. By thinking otherwise, Glashow is confining himself into a narrow-minded world governed by irrational quotas and by indefensible dogmas about "the only way to do physics" that result from these quotas.

In reality, people want to answer – and pretty much have the theoretical capacity to answer – many questions about the existence of grand unification, the details of cosmic inflation, the scale and other details surrounding baryogenesis, the black hole information puzzle, the diversity of the landscape and possible vacuum-selection mechanisms in the early cosmology, and others (I don't even want to irritate some readers with some much more abstract and fundamental questions about string theory that remain open). Glashow's attempts to remove these clearly physical questions from physics just because it's hard to measure them today (or in the near future) are preposterous and provincial.

And that's the memo.



Bonus: If you have 100 minutes, you should watch this 6-week-old conversation of Ideas Roadshow (Howard Burton, also a co-founder of the Perimeter Institute) with Nima Arkani-Hamed on the power of principles. Nima also talks about tasty sausages, about the reasons he hasn't written a popular book, about the Academia as a big industry and a bubble (that people including himself – he says – are riding), about his frustration what isn't communicated about science (the divide between the research and the popularization is wider than it has ever been; popularization counterproductively focuses on the latest results/fads instead of the accumulated wisdom – I totally agree; Nima is a big fan of Feynman's Messenger Lectures), about his own lectures at Cornell.

They also or mainly discuss how wrong pictures are often conveyed by popular books, about the wrong timing of Brian Greene's The Elegant Universe, how we learned that strings weren't that fundamental after 1985 (totally agreed), how dualities denying the previous philosophy became the #1 issue (totally agreed with the claim; but I mostly disagree that The Elegant Universe [1999] is picturing string theory of the 1980s if this is what Nima wants to say: it has lots and lots on dualities, M-theory, transitions, general facts on SUSY etc.), how fields and their excitations/particles aren't fundamental (although the popular literature repeats those things all the time), how the philosophy of the science process is still the same as it was during Galileo's and Newton's time (despite the claims in the popular literature: totally agreed again), how the uniqueness and rigidity of the theoretical structures isn't conveyed (totally agreed), how Nima learned a lot from the First Three Minutes, what his daily job looks like (the laymen seem to be confused about what's happening in physics departments), and so on.

Nima is bothered by claims like "science is culture" [Seed] because it's either vacuously true (everything humans do "is" culture) or profoundly false: science looks for eternal things and is independent of culture (fully agreed). Unlike human arguments, (well-defined enough) things in science are objectively right or wrong. Nima also correctly emphasizes that the principles inevitably lead to the important conclusions. There's almost nothing to adjust about physics; QM and relativity determine things almost uniquely. The theories are both rigid and fragile (break down when modified) and it's the first time we have a theory of nearly everything and we may divide things we understand from those we don't. Nima also refers to various Leesmolins and others who invent leprechauns (without giving names) and says that in genuine science, it's almost impossible to propose really new things that aren't immediately falsified (I agree with everything in this paragraph). The reason the public isn't getting these things is that they're just not being offered (except for Weinberg's books etc.).

Arkani-Hamed is annoyed by people's statements that fields are fundamental and particles are their excitations; only particles are being measured. I disagree with Nima here; fields and particles are exactly equally real and exactly equally fundamental. We may measure not just particles; we may also measure the fields (and various functionals of theirs). In contrast with Nima's proclamations, there is a photon field. It's called the electromagnetic field, stupid. We may measure the intensity of the magnetic field at a given point which is something other than measuring the number of photons in a state. The equal validity of these viewpoints is really what Bohr's complementarity (or wave-particle duality) is about. It sounds like Nima is trying to constrain what we can measure; but every Hermitian operator is a good enough observable and many of them can simply be formulated naturally in terms of field operators only. Of course, I agree with him that there may be many different field theory Lagrangians with different fields that yield the same physics. But that doesn't mean we can't talk about the fields or measure them.

Nima says that the nature of the incompatibility between gravity and quantum mechanics is being misrepresented. Despite people's claims, we can say the things in the same sentence. The problem is that we don't know what's happening at short distances. Totally agreed. He also says that one can experimentally test theories without doing any new experiment – relativity and quantum mechanics are so constraining, powerful proxies of our empirical knowledge that they kill almost all candidate theories. Totally agreed. For example, all decent physicists have known for decades that the Higgs boson had to exist. Agreed. He talks about the near-uniqueness of possible particles' spins up to 2. See this article of mine. For the Higgs mechanism, only spin-0 particles are OK.

He discusses the conservative radicalism vs radical conservatism (try to extrapolate what you know as far as you can; the latter is preferred). We often determine that the principles of Nature allow certain things we haven't seen yet; the Higgs prediction is an example. People are similarly excited by SUSY; Nima is as frustrated as myself by the public discussions presenting SUSY as another leprechaun. For Nima, SUSY is a part of Nature – the last thing in a list of things that Nature can do. Agreed. 3/2 is the last missing spin.

The interviewer is visibly skeptical about these claims. Nima is asked when he would admit that SUSY is wrong. For SUSY "anywhere, at any scale", Nima would be willing to bet many years of salary. For SUSY at the LHC, it's one of the most plausible things but he wouldn't sacrifice an annual salary for that. At any rate, SUSY exists somewhere. It's important that we know it. Totally agreed.

At 1:08:00, they start to talk about the (bogus) superluminal neutrino claims. For Nima, the episode shows how good science works, how crappy science works, and how muddy the picture of science in the general public is. So he is saying some important things that got less attention than the shoddy sensationalism. The orthodox physicists (like myself) were described as those who are afraid to challenge the authorities blah blah blah. Everyone really wants to discover such things; ideas about the will to suppress them are rubbish, he says. (Agreed with everything.) There had been a whole sub-industry analyzing how you could look for Lorentz violations – Coleman, Glashow, Kostelecký etc. Parameterization of the violations. And, on the other hand, the search for a possible theoretical justification. Nothing has worked here. Totally agreed. Black holes with Lorentz violation violate the laws of thermodynamics. So while the shoddy journalists were saying that Nima, me, and others haven't considered the possibility that Einstein was wrong, the truth was exactly the opposite: we had thought about this possible failure of Einstein so much and so carefully that we simply knew it wasn't possible for such a strong violation to exist.

Why so much misinformation? Partly because of the incompetent theorists like Leesmolins who haven't thought about these things carefully and were encouraging the bullshit ideas about the violations' being possible. These junky papers claimed that a supernova constraint on the Lorentz violations was the only one. But that was very far from being the case. The story about the reasons why we were certain didn't get out at all. Next time, the message should probably convey the idea that rebellious ideas (even moderately rebellious ones) are how physicists make their names; but credible physicists still have to work within a strongly constraining straitjacket. It seems incredibly unlikely that we will be proven as wrong as Ptolemy.

Quantum mechanics won't be proven wrong but in some questions (cosmology etc.), it seems impotent and a Viagra for that problem will have to be found. He would love to understand why QM and relativity fight against each other so much (they are barely compatible) and yet lovingly co-operate with one another, too.

Nima is asked what he is passionate about now. It's a decade of experimental buzz – the LHC, dark matter searches etc. – which is a new situation. By 2015-2017, he will be willing to bet a year's salary about an answer to the nature of the hierarchy problem. Spacetime is doomed; Nima talks about the twistor-related research. What to do without the spacetime? You need a handle, not just nihilism, and to isolate it is hard. A cool thing about the truth is that it is an attractor: just sit nearby and don't be afraid of it. Nature is a friend of people with vastly different talents.

He talks about a thought experiment: in the early 20th century, you're visited by a ghost who tells you that by 1930, determinism would be gone. What would you do with this divine information? A straightforward physicist would probably just add random non-deterministic terms into the equations and he would get nothing other than some generic leesmolinian stinky crap. That's because the conservative radicalism doesn't really work. It's hard to envision what the whole new framework should look like. You need to invent the Hilbert spaces and all the postulates "simultaneously". Unlikely. The right approach is to ask how one explains the physics that we think we understand in terms of the new non-deterministic framework. You could try to generalize the least-action principle and perhaps invent the path integral which actually makes the existence of the least-action formulation of classical physics more comprehensible.

Holography etc. are the culmination of the 20th century physics (advertisement for string theory colleagues at IAS). We have to find out how to replace the beef of physics after the spacetime is doomed – the spacetime has to emerge without being put in advance. This process could be analogous to the story of the least action and the path integral. The simplification of the complicated numerous Feynman diagrams in the twistor-related uprising is a big hint – new mathematical structures are being discovered. Totally new ways to talk about established physics – which doesn't put the usual starting points as input (locality and unitarity aren't manifest from the beginning) – is perhaps emerging. We need to understand the plethora of the theoretical data as deeply as we can; Einstein was finally doing the things in the same way.

Via Plato Hagel.



A typical American tourist who visited Prague today was shot by a tweeting paparazzo with a Polish name, Lukasz Porwoll. He went to the Charles Bridge (the picture is from the Lesser Town Bridge Tower, hi-res) and McDonald's. They witnessed a pro-gay-activist-Putna rally at the Prague Castle. It seems that no one cared about them or interacted with them. Note that similar American tourists aren't wealthy enough to afford a more respectable shirt for the historical city. If you want to contribute to him and his madam, use the PayPal pig button at the bottom. Unless I find out that the poor chap is a billionaire or something of the sort, I will send him the money.

The photograph above first appeared on Twitter, then in the Czech newspapers, then on The Reference Frame, and much later on various assorted blogs such as Facebook.
Posted in experiments, science and society, string vacua and phenomenology | No comments
Newer Posts Older Posts Home
Subscribe to: Posts (Atom)

Popular Posts

  • Ostragene: realtime evolution in a dirty city
    Ostrava , an industrial hub in the Northeast of the Czech Republic, is the country's third largest city (300,000). It's full of coal...
  • Origin of the name Motl
    When I was a baby, my father would often say that we come a French aristocratic dynasty de Motl – for some time, I tended to buy it ;-). Muc...
  • Likely: latest Atlantic hurricane-free date at least since 1941
    Originally posted on September 4th. Now, 5 days later, it seems that no currently active systems will grow to a hurricane so the records wi...
  • Papers on the ER-EPR correspondence
    This new, standardized, elegant enough name of the Maldacena-Susskind proposal that I used in the title already exceeds the price of this b...
  • Bernhard Riemann: an anniversary
    Georg Friedrich Bernhard Riemann was born in a village in the Kingdom of Hanover on September 17th, 1826 and died in Selasca (Verbania), No...
  • New iPhone likely to have a fingerprint scanner
    One year ago, Apple bought AuthenTec , a Prague-based security company ( 7 Husinecká Street ), for $356 million. One may now check the Czech...
  • Prediction isn't the right method to learn about the past
    Happy New Year 2013 = 33 * 61! The last day of the year is a natural moment for a blog entry about time. At various moments, I wanted to wri...
  • Lubošification of Scott Aaronson is underway
    In 2006, quantum computing guy Scott Aaronson declared that he was ready to write and defend any piece of nonsensical claim about quantum gr...
  • A slower speed of light: MIT relativistic action game
    In the past, this blog focused on relativistic optical effects and visualizations of Einstein's theory: special relativity (download Re...
  • Eric Weinstein's invisible theory of nothing
    On Friday, I received an irritated message from Mel B. who had read articles in the Guardian claiming that Eric Weinstein found a theory of ...


