18 July 2014

French research grants - the 2014 campaign

The main research funding organization in France is the ANR: "Agence Nationale de la Recherche" (national research agency). Every year, it finances a variable number of projects, in all scientific fields.

The results for 2014 were announced today, and the presentation text (in French only) is very upbeat: the success rate of 28% is more than 11% higher than that of 2013! A historical increase, one might conclude.

Unlike last year, however, the 2014 selection was done in two steps, and the 28% figure only accounts for the second one. The first step had already selected only 33% of the initial submissions, reducing the overall success rate to 9.4%. The only historical event is that this value dipped below 10% for the first time in the ten years since the ANR was created.
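The arithmetic is easy to check: the two selection steps combine multiplicatively. A minimal sketch in Python, using the rounded published rates (the official 9.4% figure presumably comes from the exact submission counts, hence the small discrepancy):

```python
# Combine the two steps of the ANR 2014 selection.
# Rates are the rounded figures from the announcement; the exact
# submission counts are not given, so the result is approximate.
step1_rate = 0.33  # fraction of initial submissions passing step 1
step2_rate = 0.28  # fraction of step-1 survivors funded in step 2

overall = step1_rate * step2_rate
print(f"Overall success rate: {overall:.1%}")  # Overall success rate: 9.2%
```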

I make two predictions:
  • The 9.4% figure will never appear in official documents.
  • The 2015 campaign will consist of three steps, the last of which will select 100% of the projects that made it through the second round. 

[UPDATE 19/07/2014]: For comparison, I plotted below the yearly success rates and total amounts distributed by the ANR since 2005.


11 July 2014

Reproducible experiments

Yesterday evening, after having spent my day trying (and failing) to reproduce somebody's published research, I stumbled (via Soylent News) upon a psychologist's essay on "the emptiness of failed replications". Jason Mitchell, psychology professor at Harvard, states that failing to replicate somebody else's experiment does not represent a meaningful scientific contribution. Well, thank you, Prof. Mitchell!

All jokes aside, it took me quite some time to parse the text, and even more time to realize that this difficulty is likely due to the implicit assumptions that I brought from my own field of work (experimental physics), which are quite different from those of the author, an experimental psychologist. Ultimately, I learned more from trying to separate these two viewpoints than from the text itself, which makes a rather simplistic argument.

The argument

Mitchell's main point appears to be that one cannot learn from negative results, since not finding something cannot prove it doesn't exist. This sounds entirely reasonable, and is certainly true in the case of the "black swan" example the author uses, but is completely wrong for usual scientific experiments: learning that the correlation between two variables is zero (within the uncertainty) is as strong a result as finding that it is significant and positive. Of course, the first outcome is less likely to lead to a high-profile paper.

The assumptions

A basic assumption in the physical sciences is that of "homogeneity": the outcome of an experiment should not depend on its location, its time, or the personality of the scientist. Mitchell does not address this point directly, but seems to imply that getting all the details right for precisely replicating an experiment is next to impossible. He then blames this on the replicators' lack of some sort of "core competence". This is a valid point: if Nature is the same everywhere but the experimentalists are sloppy, their results will of course differ. From this I would, however, draw two uncomfortable conclusions:
1. This sloppiness may just as well affect the initial experiment as the attempt to reproduce it.
2. It also undermines an entire field of study if there is no way of distinguishing careful scientists from careless (or incompetent) ones.
In "tabletop" physics, replicating an experiment is relatively cheap [1]. It is also crucial: our research builds on someone else's results, and very often replication is a necessary step before being able to go further. Chemists sometimes spend weeks or months reproducing published protocols. Needless to say, this is not done to prove the original author wrong! Neither of these points seems to apply in psychology, as presented by Mitchell.

Finally, I find quite strange Mitchell's attitude that replicating experiments is almost morally wrong: "One senses either a profound naiveté or a chilling mean-spiritedness at work." This goes beyond mere scientific debate and sounds more like a response to a personal offense.



1. Even in large-scale experiments, reproducing the results may be necessary, albeit very expensive. A good example is the search for the Higgs boson, with the two experiments, ATLAS and CMS, working side by side but without communicating (see for instance Jon Butterworth's "Smashing Physics").

6 July 2014

ILCC2014: days four and five

The plenary lectures of these last two days were given by chemists (Carsten Tschierske and Tadashi Kato), and both were impressive not only for the results but also for the huge amount of work these clearly required.

Overall, I think the scientific level was higher than two years ago, maybe because of the smaller number of oral presentations: there were only three parallel sessions (four on Tuesday), compared to five in Mainz. Finally, the next (26th) edition of the ILCC will take place at Kent State!

5 July 2014

Lost (and found) in translation

On the plane back from Dublin I received all of 18 grams of mini-breadsticks. Fortunately, the packaging was more interesting than the contents:


In English, the production place was a plant (industrial), while in French it was a workshop (atelier), with a hint of craftsmanship. The Italian term stabilimento (factory, but also establishment) is a bit more general. Unless, of course, I'm giving too much importance to these nuances. See my post on untranslatable concepts.

3 July 2014

ILCC2014: day three

Not too much science today, so I finally got to see the Book of Kells:


Like any respectable paper, the exhibition even has a "Materials and Methods" section!

In the afternoon, tour of the Guinness storehouse, one of the few places in Dublin where gravity points upwards.


1 July 2014

ILCC 2014: day two

Not too many events today (at least in the sessions I attended). Two highlights:
• Very nice talk by Sin-Doo Lee about surface patterning using micron-sized nanoparticles
• I wasn't there, but it seems that Ivan Dozov's talk on twist-bend nematics was much appreciated (and led to a lively discussion afterwards).

Spin 1/2

I finally realized that the USB connector is a spin-1/2 object. For me, it works like this:
• Try to plug it in: it doesn't work.
• Turn it by 180°: it doesn't work.
• Turn it again by 180°: it finally works!
Clearly, it takes (at least) two full turns to bring it back to its initial orientation!

30 June 2014

ILCC 2014: day one

The 25th International Liquid Crystal Conference (ILCC 2014) opened today at Trinity College Dublin. There are over 600 participants (only 24 of them from France). The organization seems a bit less meticulous than two years ago in Mainz, but the food is definitely better!

On the scientific side, modulated nematics (such as the twist-bend phase) are clearly the hot topic. There are also many talks on nanocomposites. Geographically, the Ljubljana groups (both at the University and at the Jožef Stefan Institute) are very strong, and, judging from Slobodan Žumer's talk this afternoon, their collaborations with the Boulder teams of Noel Clark and Ivan Smalyukh look very fruitful.


29 June 2014

How scientific is forensic science?

A recent New Yorker article discusses the use of cell-phone call records in criminal trials, and in particular the precision with which a user can be located. Unsurprisingly, this precision is much lower than claimed by some prosecutors and, when overestimated, can lead to wrongful convictions.

Some days ago I read (via Soylent News and Slate) about a 2009 report of an NAS Committee: Strengthening Forensic Science in the United States: A Path Forward. The Slate article also mentions the numerous convictions overturned by DNA tests and draws bleak conclusions about the current state of forensics (in the US, at least). How did we get here?

One obvious answer is that courts of law are ill-equipped to deal with scientific subtleties (in the same way scientists are not prepared to interpret fine legal points). In particular, it is quite difficult for judges to identify sound scientific evidence (although some standards do exist) and to assess its reliability. A very useful introduction to this point is "How Science Works", by David Goodstein, in the Reference Manual on Scientific Evidence (2nd ed.).

Another possible reason is the lack of a "checks and balances" mechanism. Scientific results (important ones, at least) are scrutinized by an entire community, with expertise and resources similar to the authors'. During a trial, evidence introduced by the prosecution should be questioned by experts for the defense, but the latter may not have the necessary resources. This disparity is even stronger in the case of plea bargains (as in the New Yorker story), where the evidence is never actually introduced.