Wednesday, August 30, 2017

The annotated math of (almost) everything

Have you heard of the principle of least action? It’s the most important idea in physics, and it underlies everything. According to this principle, our reality is optimal in a mathematically exact way: it minimizes a function called the “action.” The universe that we find ourselves in is the one for which the action takes on the smallest value.

In quantum mechanics, reality isn’t quite that optimal. Quantum fields don’t have to decide on one specific configuration; they can do everything they want, and the action then quantifies the weight of each contribution. The sum of all these contributions – known as the path integral – again describes what we observe.

This omniscient action has very little to do with “action” as in “action hero”. It’s simply an integral, usually denoted S, over another function, called the Lagrangian, usually denoted L. There’s a Lagrangian for the Standard Model and one for General Relativity. Taken together they encode the behavior of everything that we know of, except dark matter and quantum gravity.
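To give you an idea what this looks like in the simplest case – a single particle moving in a potential – here is the textbook version (a minimal sketch; for the Standard Model and General Relativity the structure is the same, just with fields in place of the particle’s position):

```latex
% Minimal sketch: action for one particle with position q(t) in a potential V(q)
S[q] = \int_{t_1}^{t_2} L\big(q,\dot q\big)\,\mathrm{d}t ,
\qquad L = \tfrac{1}{2}\, m\, \dot q^{\,2} - V(q).

% Demanding that the action is stationary, \delta S = 0, gives the Euler-Lagrange equation
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
m\,\ddot q = -\,\frac{\mathrm{d}V}{\mathrm{d}q}
```

which is nothing but Newton’s second law – the action principle and the familiar equations of motion are two ways of saying the same thing.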

With a little practice, there’s a lot you can read off directly from the Lagrangian, about the behavior of the theory at low or high energies, about the type of fields and mediator fields, and about the type of interaction.
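As a concrete illustration (my own example, not taken from the figure), take a single scalar field φ. Each term in its Lagrangian can be read off directly:

```latex
% Illustrative scalar-field Lagrangian: each term tells you something
\mathcal{L} =
\underbrace{\tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi}_{\text{kinetic term: the field propagates}}
\;-\;
\underbrace{\tfrac{1}{2}\,m^2\phi^2}_{\text{mass term: massive field, short-ranged}}
\;-\;
\underbrace{\tfrac{\lambda}{4!}\,\phi^4}_{\text{interaction: four field quanta meet at a point}}
```

A missing mass term would signal a long-range force, a coupling constant with negative mass dimension would signal trouble at high energies, and so on.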

The figure below gives you a rough idea of how that works.



I originally made this figure for the appendix of my book, but later removed it. Yes, my editor is still optimistic the book will be published in Spring 2018. The decision will be made in the next month or so, so stay tuned.

Wednesday, August 23, 2017

I was wrong. You were wrong too. Admit it.

I thought that anti-vaxxers were a US phenomenon, certainly not to be found among the dutiful Germans. Well, I was wrong. The WHO estimates that only 93% of children in Germany receive both measles shots.

I thought that genes determine sex. I was wrong. For certain species of fish and reptiles that’s not the case.

I thought that ultrasound may be a promising way to wirelessly transfer energy. That was wrong too.

Don’t worry, I haven’t suddenly developed a masochistic streak. I’ve had an argument. Not my everyday argument about dark matter versus modified gravity and similar academic problems. This one was about Donald Trump and how to be wrong the right way.
Percentage of infants receiving the 2nd dose of measles vaccine in Germany. [Source: WHO]

Trump changes his mind. A lot. Whether about NATO or about Afghanistan or, really, find me anything he has not changed his mind about.

Now, I suspect that’s because he doesn’t have an opinion, can’t recall what he said last time, and just hopes no one notices he wings that presidency thing. But whatever the reason, Trump’s mental flexibility is a virtue to strive for. You can see how that didn’t sit well with my liberal friends.

It’s usually hard to change someone’s mind, and a depressingly large number of studies have shown that evidence isn’t enough to do it. Presenting people with evidence that contradicts their convictions can even have the very opposite effect of reinforcing their opinions.

We hold on to our opinions, strongly. Constructing consistent explanations for the world is hard work, and we don’t like others picking apart the stories we settled on. The quirks of the human mind can be tricky – tricky to understand and tricky to overcome. Psychology is part of it. But my recent argument over Trump’s wrongness made me think about the part sociology plays in our willingness to change opinions. It’s bad enough to admit to yourself you were wrong. It’s far worse to admit to other people you were wrong.

You see this play out in almost every comment section on social media. People defend hopeless positions, go through rhetorical tricks and textbook fallacies, appeal to authority, build straw men, and slide red herrings down slippery slopes. At the end, there’s always good, old denial. Anything, really, to avoid saying “I was wrong.”

And the more publicly an opinion has been stated, the harder it becomes to backpedal. The more you have chosen friends by their like-mindedness, and the more they count on your like-mindedness, the higher the stakes for being unlike. The more widely known you are, the harder it is to tell your followers you won’t deliver arguments for them any longer. Turn your back on them. Disappoint them. Lose them.

Add to this that public conversations encourage us to make up opinions on the fly. The three examples I listed above had one thing in common: in none of these cases did I actually know much about what I was saying. It wasn’t that I had wrong information – I simply had no information, and it didn’t occur to me to check, or maybe I just wasn’t interested enough. I was just hoping nobody would notice. I was winging it. You wouldn’t want me as president either.

But enough of the public self-flagellation and back to my usual self. Science is about being wrong more than it is about being right. By the time you have a PhD you’ll have been wrong in countless ways, so many ways indeed that it’s not uncommon for students to despair over their seeming incapability until they’re reassured we’ve all been there.

Science taught me it’s possible to be wrong gracefully, and – as with everything in life – it becomes easier with practice. And it becomes easier if you see other people giving examples. So what have you recently changed your mind about?

Tuesday, August 15, 2017

You don’t expand just because the universe does. Here’s why.

Not how it works.
It’s tough to wrap your head around four dimensions.

We have known that the universe expands since the 1930s, but whether we expand with it is still one of the questions I am asked most frequently. The less self-conscious simply inform me that the universe doesn’t expand but everything in it shrinks – because how could we tell the difference?

The best answer to these questions is, as usual, a lot of math. But it’s hard to find a decent answer online that is not a pile of equations, so here’s a verbal take on it.

The first clue you need to understand the expansion of the universe is that general relativity is a theory for space-time, not for space. As Hermann Minkowski put it in 1908:
“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”
Speaking about the expansion of space, hence, requires us to undo this union.

The second clue is that in science a question must be answerable by measurement, at least in principle. We cannot observe space and neither can we observe space-time. We merely observe how space-time affects matter and radiation, which we can measure in our detectors.

The third clue is that the word “relativity” in “general relativity” means that every observer can choose to describe space-time whatever way he or she wishes. While different observers’ calculations will then differ, they will all come to the same conclusions.

Armed with these three knowledge bites, let us see what we can say about the universe’s expansion.

Cosmologists describe the universe with a model known as Friedmann-Robertson-Walker (named after its inventors). The underlying assumption is that space (yes, space) is filled with matter and radiation that has the same density everywhere and in every direction. It is, as the terminology has it, homogeneous and isotropic. This assumption is called the “Cosmological Principle.”
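Written out, homogeneity and isotropy leave almost no freedom for the geometry. In units with c = 1, the line element of the Friedmann-Robertson-Walker space-time is (a sketch for reference):

```latex
% Friedmann-Robertson-Walker line element (units with c = 1)
\mathrm{d}s^2 = -\,\mathrm{d}t^2
+ a^2(t)\left[\frac{\mathrm{d}r^2}{1-k\,r^2} + r^2\,\mathrm{d}\Omega^2\right]
```

All of the expansion is carried by the scale factor a(t); the coordinates of the averaged matter stay put, and the spatial curvature k is a constant.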

While the Cosmological Principle originally was merely a plausible ad-hoc assumption, it is by now supported by evidence. On large scales – much larger than typical intergalactic distances – matter is indeed distributed almost the same everywhere.

But clearly, that’s not the case on shorter distances, like inside our galaxy. The Milky Way is disk-shaped with most of the (visible) mass in the center bulge, and this matter isn’t distributed homogeneously at all. The cosmological Friedmann-Robertson-Walker model, therefore, just does not describe galaxies.

This is a key point, and missing it is the origin of much confusion about the expansion of the universe: The solution of general relativity that describes the expanding universe is a solution on average; it is good only on very large distances. But the solutions that describe galaxies are different – and they just don’t expand. It’s not that galaxies expand unnoticeably, they just don’t. The full solution, then, is the two stitched together: expanding space between non-expanding galaxies. (Though these stitched-together solutions are usually dealt with only in computer simulations, due to their mathematical complexity.)

You might then ask: at what distance does the expansion start to take over? That happens when you average over a volume so large that the matter inside has a gravitational self-attraction weaker than the expansion’s pull. From atomic nuclei on up, the larger the volume you average over, the smaller the average density. But it is only somewhere beyond the scales of galaxy clusters that the expansion takes over. On very short distances, where the nuclear and electromagnetic forces aren’t neutralized, these forces additionally resist the expansion’s pull. This safely prevents atoms and molecules from being torn apart by the universe’s expansion.
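To get a feeling for the numbers, here is a back-of-the-envelope comparison. The input masses and radii are rough, order-of-magnitude values I picked for illustration, and I use the cosmic critical density as a crude proxy for where expansion wins over self-gravity:

```python
# Rough comparison: average matter density inside a region versus the
# critical density of the universe. Inputs are order-of-magnitude
# guesses, for illustration only.
import math

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
MPC   = 3.086e22         # one megaparsec, m
H0    = 70e3 / MPC       # Hubble rate, ~70 km/s/Mpc, in 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9e-27 kg/m^3

def mean_density(mass_msun, radius_mpc):
    """Mass smeared over a sphere of the given radius."""
    volume = 4.0 / 3.0 * math.pi * (radius_mpc * MPC) ** 3
    return mass_msun * M_SUN / volume

regions = [
    ("Milky Way     (~1e12 M_sun within ~0.1 Mpc)", 1e12, 0.1),
    ("rich cluster  (~1e15 M_sun within ~3 Mpc)",   1e15, 3.0),
    ("100 Mpc sphere (~2e17 M_sun, near average)",  2e17, 100.0),
]

for name, m, r in regions:
    ratio = mean_density(m, r) / rho_crit
    print(f"{name}: ~{ratio:.2g} x critical density")
```

The galaxy comes out a couple of thousand times denser than the critical density and the cluster still a few dozen times denser; only well beyond cluster scales does the average drop to the cosmic value, which is where the expanding solution takes over.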

But here’s the thing. All I just told you relies on a certain, “natural” way to divide space-time up into space and time. It’s the cosmic microwave background (CMB) that helps us do it. There is only one way to split space and time so that the CMB looks, on average, the same in all directions. After that, you can still pick your time-labels, but the split is done.

Breaking up Minkowski’s union between space and time in this way is called a space-time “slicing.” Indeed, it’s much like slicing bread, where each slice is space at some moment of time. There are many ways to slice bread and there are also many ways to slice space-time. Which, as number 3 clued you, are all perfectly allowed.

The reason that physicists choose one slicing over another is usually that calculations can be greatly simplified with a smart choice of slicing. But if you really insist, there are ways to slice the universe so that space does not expand. However, these slicings are awkward: they are hard to interpret and make calculations very difficult. In such a slicing, for example, going forward in time necessarily pushes you around in space – it’s anything but intuitive.

Indeed, you can do this also with space-time around planet Earth. You could slice space-time so that space around us remains flat. Again though, this slicing is awkward and physically meaningless.

This brings us to the relevance of clue #2. We really shouldn’t be talking about space to begin with. Just as you could insist on defining space so that the universe doesn’t expand, by willpower you could also define space so that Brooklyn does expand. Let’s say a block down is a mile. You could simply insist on using units of length in which tomorrow a block down is two miles, and next week it’s ten miles, and so on. That’s pretty idiotic – and yet nobody could stop you from doing this.

But now consider you make a measurement. Say, you bounce a laser-beam back between the ends of the block, at fixed altitude, and use atomic clocks to measure the time that passes between two bounces. You would find that the time-intervals are always the same.

Atomic clocks rely on the constancy of atomic transition frequencies. The gravitational force inside an atom is entirely negligible relative to the electromagnetic force – it’s about 40 orders of magnitude smaller – and fixing the altitude prevents gravitational redshift caused by the Earth’s gravitational pull. It doesn’t matter which coordinates you use; you’d always find the same, unambiguous measurement result: The time elapsed between bounces of the laser remains the same.
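The “40 orders of magnitude” is easy to check with textbook constants – a quick sanity check, not something from the original post:

```python
# Gravity versus electromagnetism between the electron and the proton in
# a hydrogen atom. Both forces fall off as 1/r^2, so the ratio does not
# depend on the distance.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg
e   = 1.602e-19   # elementary charge, C

ratio = (G * m_e * m_p) / (k_e * e**2)
print(f"F_gravity / F_electric ~ {ratio:.1e}")   # ~4e-40
```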

It is similar in cosmology. We don’t measure the size of space between galaxies – how would we do that? We measure the light that comes from distant galaxies. And it turns out to be systematically red-shifted regardless of where we look. A simple way to describe this – a space-time slicing that makes calculations and interpretations easy – is that space between the galaxies expands.
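In the expanding-space slicing, this observation is summarized by one clean relation between the measured redshift z and the scale factor a(t) of the FRW model above:

```latex
% Cosmological redshift in the expanding-space slicing
1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}}
      = \frac{a(t_{\mathrm{observed}})}{a(t_{\mathrm{emitted}})}
```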

So, the brief answer is: No, Brooklyn doesn’t expand. But the more accurate answer is that you should ask only for the outcome of clearly stated measurement procedures. Light from distant galaxies is shifted to the red, meaning the galaxies are retreating from us. Light collected from the edges of Brooklyn isn’t redshifted. If we use a space-time slicing in which matter is at rest on the average, then the matter density of the universe is decreasing and was much higher in the past. To the extent that the density of Brooklyn has changed in the past, this can be explained without invoking general relativity.

It may be tough to wrap your head around four dimensions, but it’s always worth the effort.



[This post previously appeared on Starts With A Bang.]

Wednesday, August 09, 2017

Outraged about the Google diversity memo? I want you to think about it.

Chairs. [Image: Verco]
That leaked internal memo from James Damore at Google? The one that says one shouldn’t expect employees in all professions to reflect the demographics of the whole population? Well, that was a pretty dumb thing to write. But not because it’s wrong. What’s dumb is that Damore thought he could have a reasoned discussion about this. In the USA, of all places.

The version of Damore’s memo that first appeared on Gizmodo was missing the references and images. But by now the diversity memo has its own website, complete with links and graphics.

Damore’s memo strikes me as a pamphlet produced by a well-meaning, but also utterly clueless, young white man. He didn’t deserve to get fired for this. He deserved maybe a slap on the too-quickly typing fingers. But in his world, asking for discussion is apparently enough to get fired.

I don’t normally write about the underrepresentation of women in science. The reason is that I don’t feel fit to represent the underrepresented. I just can’t seem to appropriately suffer in my male-dominated environment. To the extent that one can trust online personality tests, I’m an awkwardly untypical female. It’s probably unsurprising I ended up in theoretical physics.

There is also a more sinister reason I keep my mouth shut. It’s that I’m afraid of losing what little support I have among the women in science when I stab them in the back.

I’ve lived in the USA for three years and for three more years in Canada. On several occasions during these years, I’ve been told that my views about women in science are “hardcore,” “controversial,” or “provocative.” Why? Because I stated the obvious: Women are different from men. On that account, I’m totally with Damore. A male-female ratio close to one is not what we should expect in all professions – and not what we should aim at either.

But the longer I keep my mouth shut, the more I think my silence is a mistake. Because it means leaving the discussion – and with it, power – to those who shout the loudest. Like CNBC. Which wants you to be “shocked” by Damore’s memo in a rather transparent attempt to produce outrage and draw clicks. Are you outraged yet?

Increasingly, media-storms like this make me worry about the impression scientists give to the coming generation. Give to kids like Damore. I’m afraid they think we’re all idiots because the saner of us don’t speak up. And when the kids think they’re oh-so-smart, they’ll produce pamphlets to reinvent the wheel.

Fact is, though, much of the data in Damore’s memo is well backed-up by research. Women indeed are, on the average, more neurotic than men. It’s not an insult, it’s a common term in psychology. Women are also, on the average, more interested in people than in things. They do, on the average, value work-life balance more, react differently to stress, compete by other rules. And so on.

I’m neither a sociologist nor a psychologist, but my understanding of the literature is that these are uncontroversial findings. And not new either. Women are different from men, both by nature and by nurture, though it remains controversial just what is nurture and what is nature. But the cause is beside the point for the question of occupation: Women are different in ways that plausibly affect their choice of profession.

No, the problem with Damore’s argument isn’t the starting point, the problem is the conclusions that he jumps to.

To begin with, even I know most of Google’s work is people-centric. It’s either serving people directly, or analyzing people-data, or imagining the people-future. If you want to spend your life with things and ideas rather than people, then go into engineering or physics, but not into software-development.

That coding actually requires “female” skills was spelled out clearly by Yonatan Zunger, a former Google employee. But since I care more about physics than software-development, let me leave this aside.

The bigger mistake in Damore’s memo is one I see frequently: Assuming that job skills and performance can be deduced from differences among demographic groups. This just isn’t so. I believe, for example, that if it weren’t for biases and unequal opportunities, the higher ranks in science and politics would be dominated by women. Hence, aiming at a 50-50 representation gives men an unfair advantage. I challenge you to provide any evidence to the contrary.

I’m not remotely surprised, however, that Damore naturally assumes the differences between typically female and male traits mean that men are more skilled. That’s the bias he thinks he doesn’t have. And, yeah, I’m likewise biased in favor of women. Guess that makes us even then.

The biggest problem with Damore’s memo however is that he doesn’t understand what makes a company successful. If a significant fraction of employees think that diversity is important, then it is important. No further justification is needed for this.

Yes, you can argue that increasing diversity may not improve productivity. The data situation on this is murky, to say the least. There’s some story about female CEOs in Sweden that supposedly shows something – but I want to see better statistics before I buy that. And in any case, the USA isn’t Sweden. More importantly, productivity hinges on employees’ well-being. If a diverse workplace is something they value, then that’s something to strive for, period.

What Damore seems to have aimed at, however, was merely to discuss the best way to deal with the current lack of diversity. Biases and unequal opportunities are real. (If you doubt that, you are a problem and should do some reading.) This means that the current representation of women, underprivileged and disabled people, and other minorities, is smaller than it would be in that ideal world which we don’t live in. So what to do about it?

One way to deal with the situation is to wait until the world catches up. Educate people about bias, work to remove obstacles to education, change societal gender images. This works – but it works very slowly.

Worse, one of the biggest obstacles that minorities face is a chicken-and-egg problem that time alone doesn’t cure. People avoid professions in which there are few people like them. This is a hurdle which affirmative action can remove, fast and efficiently.

But there’s a price to pay for preferentially recruiting the presently underrepresented. Which is that people supported by diversity efforts face a new prejudice: They weren’t hired because they’re skilled. They were hired because of some diversity policy!

I used to think this backlash has to be avoided at all costs, hence was firmly against affirmative action. But during my years in Sweden, I saw that it does work – at least for women – and also why: It makes their presence unremarkable.

In most of the European North, a woman in a leading position in politics or industry is now commonplace. It’s nothing to stare at and nothing to talk about. And once it’s commonplace, people stop paying attention to a candidate’s gender, which in return reduces bias.

I don’t know, though, whether this would also work in science, which requires an entirely different skill-set. And social science is messy – it’s hard to tell how much of the success in Northern Europe is due to national culture. Hence, my attitude towards affirmative action remains conflicted.

And let us be clear that, yes, such policies mean every once in a while you will not hire the most skilled person for a job. Therefore, a value judgement must be made here, not a logical deduction from data. Is diversity important enough for you to temporarily tolerate an increased risk of not hiring the most qualified person? That’s the trade-off nobody seems willing to spell out.

I also have to spell out that I am writing this as a European who now works in Europe again. For me, the most relevant contribution to equal opportunity is affordable higher education and health insurance, as well as governmentally paid maternity and parental leave. Without that, socially disadvantaged groups remain underrepresented, and companies continue to fear for revenue when hiring women of childbearing age. That, in all fairness, is an American problem not even Google can solve.

But one also doesn’t solve a problem by yelling “harassment” each time someone asks to discuss whether a diversity effort is indeed effective. I know from my own experience, and a poll conducted at Google confirms, that Damore’s skepticism about current practices is widespread.

It’s something we should discuss. It’s something Google should discuss. Because, for better or worse, this case has attracted much attention. Google’s handling of the situation will set an example for others.

Damore was fired, basically, for making a well-meant, if amateurish, attempt at institutional design, based on woefully incomplete information he picked from published research studies. But however imperfect his attempt, he was fired, in short, for thinking on his own. And what example does that set?

Thursday, August 03, 2017

Self-tuning brings wireless power closer to reality

Cables under my desk.
One of the unlikelier fights I picked while blogging was with an MIT group that aimed to wirelessly power devices – by tunneling:
“If you bring another resonant object with the same frequency close enough to these tails then it turns out that the energy can tunnel from one object to another,” said Professor Soljacic.
They had proposed a new method for wireless power transfer using two electric circuits in magnetic resonance. But there’s no tunneling in such a resonance. Tunneling is a quantum effect. Single particles tunnel. Sometimes. But kilowatts definitely don’t.

I reached out to the professor’s coauthor, Aristeidis Karalis, who told me, even more bizarrely: “The energy stays in the system and does not leak out. It just jumps from one to the other back and forth.”

I had to go and calculate the Poynting vector to make clear that the energy is – as always – transmitted from one point to another by going through all points in between. It doesn’t tunnel, and it doesn’t jump either. For the powering device the MIT guys envisioned, with its resonant coils, the energy flow is focused between the coils’ centers.

The difference between “jumping” and “flowing” energy is more than just words. Once you know that energy is flowing, you also know that if you’re in its way you might get some of it. And the more focused the energy, the higher the possible damage. This means large devices have to be close together and the energy must be spread out over large surfaces to comply with safety standards.

Back then, I did some estimates. If you want to transfer, say, 1 Watt, and you distribute it over a coil with a radius of 30 cm, you end up with a power density of roughly 1 mW/cm². That already exceeds the safety limit (in the frequency range 30-300 MHz). And that’s leaving aside that there usually must be much more energy in the resonant field than what’s actually transmitted. And 30 cm isn’t exactly handy. In summary, it’ll work – but it’s not practical, and it won’t charge your laptop without roasting whatever gets in the way.
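Here is that estimate spelled out. Depending on how crudely you count the coil’s area you land somewhere between a few tenths of a mW/cm² and about 1 mW/cm²; the limit values in the comments are the usual ones for that frequency band, quoted from memory, so treat them as approximate:

```python
# Back-of-the-envelope: 1 W of transferred power spread over a coil of
# 30 cm radius.
import math

power_W   = 1.0
radius_cm = 30.0

area_cm2 = math.pi * radius_cm**2            # ~2800 cm^2
density  = 1000 * power_W / area_cm2         # in mW/cm^2
print(f"~{density:.2f} mW/cm^2")             # ~0.35 mW/cm^2

# Typical exposure limits in the 30-300 MHz band are of order
# 0.2 mW/cm^2 (general public) to 1 mW/cm^2 (occupational), so even
# this optimistic estimate sits at or above the allowed level -- and
# the energy stored in the resonant field is larger still.
```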

The MIT guys have since founded a company, WiTricity, and dropped the tunneling tale.

Another problem with using resonance for wireless power is that the efficiency depends on the distance between the circuits. It doesn’t work well when they’re too far apart, and not when they’re too close together either. That’s not great for real-world applications.

But in a recent paper published in Nature, a group from Stanford put forward a solution to this problem. And even though I’m not too enchanted by transferring power via magnetic resonance, it is a really neat idea:
Usually the resonance between two circuits is designed, meaning the receiver’s and sender’s frequencies are tuned to work together. But in the new paper, the authors instead let the frequency of the sender range freely – they merely feed it energy. They then show that the coupled system automatically tunes to a resonance frequency at which the efficiency is maximal.

The maximal efficiency they reach is the same as with the fixed-frequency circuits. But it works better for shorter distances. While the usual setting is inefficient both at too short and too long distances, the self-tuned system has a stable efficiency up to some distance, and then decays. This makes the new arrangement much more useful in practice.
Efficiency of energy transfer as a function of distance between the coils (schematic). The blue curve is for the usual setting with pre-fixed frequency; the red curve is for the self-tuned circuits.
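To see why letting the frequency float helps, here is a toy coupled-mode model – my own sketch, not the circuit from the Nature paper, where a nonlinear gain element does the frequency-tracking automatically. Two resonators with loss rates g1 (sender) and g2 + gL (receiver plus load) are coupled with a strength kappa that falls off with distance roughly like 1/d³. I compare the power delivered to the load (at fixed drive amplitude, as a stand-in for the efficiency in the schematic) when the drive is pinned to the bare resonance frequency against the power when the drive is allowed to track whichever frequency works best:

```python
# Toy coupled-mode model of resonant power transfer (a sketch, not the
# actual circuit from the paper). Steady-state amplitudes at drive
# detuning delta = w - w0 follow from
#   (i*delta + g1) a1       = i*kappa * a2 + F
#   (i*delta + g2 + gL) a2  = i*kappa * a1
# and the power delivered to the load is 2*gL*|a2|^2.
import numpy as np

g1, g2, gL = 1.0, 0.2, 0.8   # loss rates (arbitrary units); gL is the useful load
F = 1.0                      # drive amplitude

def load_power(kappa, delta):
    denom = (1j * delta + g1) * (1j * delta + g2 + gL) + kappa**2
    a2 = 1j * kappa * F / denom
    return 2 * gL * abs(a2)**2

detunings = np.linspace(-50, 50, 5001)          # drive frequencies to scan
for d in np.linspace(0.5, 3.0, 11):             # distance in units of the coil size
    kappa = 5.0 / d**3                          # near-field coupling ~ 1/d^3
    fixed = load_power(kappa, 0.0)                           # drive pinned to w0
    tuned = max(load_power(kappa, de) for de in detunings)   # drive tracks the best frequency
    print(f"d = {d:.2f}   fixed frequency: {fixed:.3f}   self-tuned: {tuned:.3f}")
```

The fixed-frequency column reproduces the blue curve: it peaks at one particular distance and collapses when the coils get close, because the coupled system’s resonances split away from the drive frequency. The frequency-tracking column stays flat up to roughly that same distance and only then decays – which is the behavior the Stanford group gets automatically from their self-oscillating circuit.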

The group didn’t just calculate this, they also did an experiment to show that it works. One limitation of the present setup, though, is that it works only in one direction, so it’s still not too practical. But it’s a big step forward.

Personally, I’m more optimistic about using ultrasound for wireless power transfer than about magnetic resonance, because ultrasound presently reaches larger distances. Both technologies, however, are still very much in their infancy, so it’s hard to tell which one will win out.

(Note added: Ultrasound not looking too convincing either, ht Tim, see comments for more.)

Let me not forget to mention that in an ingenious paper, which was completely lost on the world, I showed that you don’t need to transfer the total energy to the receiver. You only need to send the information necessary to decrease the entropy in the receiver’s surroundings; the receiver can then draw energy from its environment.

Unfortunately, I could think of how to do this only for a few atoms at a time. And, needless to say, I didn’t do any experiment – I’m a theoretician after all. While I’m sure in a few thousand years everyone will use my groundbreaking insight, until then, it’s coils or ultrasound or good, old cables.