## 5 November 2017

### Reversible strain alignment and reshuffling of nanoplatelet stacks confined in a lamellar block copolymer matrix

Our paper has just been published in Nanoscale!

We show that the orientation and stacking state of nanoplatelets confined within a polymer matrix can be reversibly controlled simply by pulling on the material.

## 20 September 2017

### Normic support and the revision of prior knowledge

In three previous posts I discussed Martin Smith's paper "Why throwing 92 heads in a row is not surprising" [1]. I attempted a Bayesian interpretation of the concept of surprise, but I was sure that this had already been done before; a cursory literature search confirmed this impression (see below). Can one go further? Martin relates surprise to the more general concept of normic support, and the obvious question is whether the latter can also be interpreted in Bayesian terms.

I'll use the example of judicial evidence, which Martin treats in Ref. [2], where normic support is defined as follows:
a body of evidence E normically supports a proposition P just in case the circumstance in which E is true and P is false would be less normal, in the sense of requiring more explanation, than the circumstance in which E and P are both true.
The Blue Bus paradox can then be solved by arguing that testimonial evidence has normic support, while statistical evidence does not ([2], page 19). To put it in the terms above, finding out that the testimonial evidence is false would surprise us, while the failing of statistical evidence would not.

Can we restate this idea in terms of belief update, as I tried to do for surprise and, in particular, is the distinction between testimonial and statistical evidence similar to that between the coin throw and the lottery examples I drew here? The proposition P being in both cases "the bus involved was a Blue-Bus bus", we need to identify the evidence E of each type.
1. testimonial: the witness can identify the color of the bus with 90% accuracy.
2. statistical: 90% of the buses operating in the area on the day in question were Blue-Bus buses.
By analogy with the coin throw and lottery examples, respectively, we can then say:
1. If the witness is wrong, the result
• calls into question his/her previously presumed accuracy and prompts us to revise our estimate (Bayesian interpretation.)
• surprises us and requires more explanation (normic support perspective.)
2. If the bus is not blue then, although non-blue buses only account for 10% of the total,
• since the Blue-Bus deduction was merely based on the proportion of each type of bus there is no prior knowledge to revise (Bayesian interpretation.)
• the result is unlikely but not abnormal, and thus it does not call for further explanation (normic support perspective.)
I'll discuss in future posts how similar the two interpretations are and whether they solve the paradox (right now my feeling is that they don't, but I need to think about it some more.)

#### Bayesian surprise

A reference on the Bayesian treatment of surprise, defined as the Kullback-Leibler divergence of the posterior distribution with respect to the prior one.
Itti, L., & Baldi, P. F. (2006). Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems (pp. 547–554).
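Itti and Baldi's definition is easy to play with numerically. Here is a toy sketch (my own, not from their paper or from Martin's): assuming a Beta(50, 50) prior on the heads probability (my stand-in for a peaked "fair coin" conviction), the Kullback-Leibler surprise of the all-heads posterior dwarfs that of a balanced 46/46 sequence, even though both sequences are equally probable a priori.

```python
import math

def log_beta_pdf(p, a, b):
    """Log-density of a Beta(a, b) distribution at p."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(p) + (b - 1) * math.log(1 - p) - log_B

def kl_beta(a1, b1, a2, b2, n=4000):
    """Numerical KL(Beta(a1, b1) || Beta(a2, b2)) via a Riemann sum on (0, 1)."""
    total = 0.0
    for i in range(1, n):
        p = i / n
        lq1 = log_beta_pdf(p, a1, b1)
        lq2 = log_beta_pdf(p, a2, b2)
        total += math.exp(lq1) * (lq1 - lq2) / n
    return total

# Prior encoding confidence in fairness: Beta(50, 50), peaked at 1/2 (assumed).
# Posterior after 92 heads, 0 tails: Beta(50 + 92, 50).
surprise_all_heads = kl_beta(142, 50, 50, 50)
# Posterior after 46 heads, 46 tails: Beta(96, 96).
surprise_balanced = kl_beta(96, 96, 50, 50)
print(surprise_all_heads, surprise_balanced)
```

With these (assumed) numbers the all-heads sequence yields a surprise of roughly 13 nats, against less than 0.1 nat for the balanced one: same prior probability, very different divergence between posterior and prior.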

1. Martin Smith, Why throwing 92 heads in a row is not surprising, Philosophers' Imprint (forthcoming) 2017.
2. Martin Smith, When does evidence suffice for conviction?, Mind (forthcoming.)

## 17 September 2017

### Some choices are not surprising

In two previous posts I discussed Martin Smith's paper "Why throwing 92 heads in a row is not surprising" [1].

I argued that the all-heads sequence is more surprising than a more balanced one (composed of roughly equal numbers of heads and tails), although the two have the same probability of occurrence based on the prior information (fair and independent coins), because the first event challenges this information, while the second does not.

Prompted by an email exchange with Martin (whom I thank again for his patient and detailed replies!) I would like to discuss here cases where I believe no particular outcomes would be surprising. Let us take an example from the same paper, the lottery. I agree with the author that a draw consisting of consecutive numbers (e.g. 123456) is not surprising, nor is any other pattern or apparently random sequence, which all have the same probability of occurrence.

In my view, this is simply because, for the lottery case, the equiprobability assumption is just an uninformative prior. Finding a patterned sequence does not challenge our conviction, because there is nothing to challenge: any outcome will do equally well. For the coin toss, on the other hand, the equiprobability of all sequences stems from the very strong conviction that the coins are (1) unbiased and (2) independent, and the all-heads outcome does challenge it.

1. Martin Smith, Why throwing 92 heads in a row is not surprising, Philosophers' Imprint (forthcoming) 2017.

## 12 September 2017

### Impressions from ECIS 2017 - day 2

Highlights from the morning session of the second day. I managed to miss Jacob Klein's plenary talk (on interfacial water).

Parallel session on topics 5 and 6 (roughly, inorganic colloids)
• Andrés Guerrero-Martínez (Madrid University) on the reshaping, fragmentation and welding of gold nanoparticles using femtosecond lasers.
• Two more talks on responsive Au@polymer systems: Jonas Schubert (Dresden University) and Rafael Contreras-Cáceres (Málaga University).
• Pavel Yazhgur (postdoc at the ESPCI, Paris after a remarkable PhD at the LPS, Orsay!) on hyperuniform binary mixtures. I should write a post on hyperuniformity at some point...
Parallel session on topic 3 (polymers, liquid crystals and gels)
• Hans Juergen Butt on the crystallization of polymers or water in alumina pores.

## 4 September 2017

### Retiro

I went jogging in the Buen Retiro park this evening. It reminds me of the Parc de la Tête d'Or, where I used to run many years ago. The difference is that back then I would overtake pretty much everybody. Nowadays, it's the other way round.

### Impressions from ECIS 2017

I'm in Madrid for the 31st conference of the European Colloid and Interface Society. Here are some highlights from the morning session:

#### Michael Cates on active colloids (plenary session)

I arrived late and missed some of this talk, plus I'm not a specialist in the area of active colloids. What I found interesting is the search for the minimal modification of the various Hohenberg-Halperin models (B and H) that yields interesting behaviour; I still haven't understood how breaking time-reversal symmetry comes into play. Here is a reference I promised myself I would read on the flight back home.

#### Parallel session on topics 5 and 6 (roughly, inorganic colloids)

Two talks on secondary structures in gold nanoparticle systems with potential applications to SERS:
Two other talks focused on magnetic nanoparticles:
• Laura Rossi (Utrecht University) on the self-assembly of hematite cubes (paper not yet published).
• Golnaz Isapour (Fribourg University, in the group of Marco Lattuada) on color-changing materials based on responsive polymers (pNIPAM for temperature and PVP for pH).
Aside from the nice work, the last talk also references a paper on Color change in chameleons, from which I learned that structures that generate structural colors are called iridophores (great name!), in contrast with the pigment-bearing chromatophores. I have already written about structural colors on this blog.

## 18 August 2017

### Surprise and belief update

In a previous post, I started discussing a paper [1] on the (un)surprising nature of a long streak of heads in a coin toss. My conclusion was that the surprise is not intrinsic to the particular sequence of throws, but rather residing in its relation with our prior information. I will detail this reasoning here, before returning to the paper itself.

Let us accept as prior information the null hypothesis $$H_0$$ "the coin is unbiased". The conditional probabilities of throwing heads or tails are then equal: $$P(H|H_0) = P(T|H_0)=1/2$$. With the same prior, the probability of any sequence $$S_k$$ of 92 throws is the same: $$P(S_k|H_0) = 2^{-92}$$, where $$k$$ ranges from $$1$$ to $$2^{92}$$.

Assume now that the sequence we actually get consists of all heads: $$S_1 = \lbrace HH \ldots H\rbrace$$. What is the (posterior) probability of getting heads on the 93rd throw? Let us consider two options:
1. We can hold fast to our initial estimate of lack of bias: $$P(H|H_0) = 1/2$$.
2. We can update our "belief value" and say something like: "although my initial assessment was that the coin is unbiased [and the process of throwing is really random and I'm not hallucinating etc.], having thrown 92 heads in a row is good evidence to the contrary, and on the next throw I'll probably also get heads". Thus, $$P(H|H_0 S_1) > 1/2$$ and in fact much closer to 1. How close exactly depends on the strength of our initial confidence in $$H_0$$, but I will not do the calculation here (I sketched it in the previous post).
I would say that most rational persons would choose option 2 and abandon $$H_0$$; holding on to it (choice 1) would require an extremely strong confidence in our initial assessment.

Note that for a sequence $$S_2$$ consisting of 46 heads and 46 tails (in any order) the distinction above is moot, since $$P(H|H_0 S_2) =P(H|H_0) = 1/2$$. The distinction between $$S_1$$ and $$S_2$$ is not their prior probability [2] but the way they challenge (and update) our belief.
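The role played by the strength of our initial confidence can be made explicit. As a minimal sketch (my own choice of prior family; the post does not fix one), suppose the belief in fairness is encoded as a symmetric Beta(a, a) prior on the heads probability; the rule of succession then gives the posterior predictive probability of heads on the 93rd throw in closed form.

```python
def predictive_heads(a, heads=92, tails=0):
    """P(next throw is heads) after updating a symmetric Beta(a, a) prior
    on the heads probability with the observed counts (rule of succession)."""
    return (a + heads) / (2 * a + heads + tails)

weak = predictive_heads(1)        # vague initial confidence: 93/94, close to 1
strong = predictive_heads(10**6)  # near-dogmatic confidence: stays near 1/2
balanced = predictive_heads(1, heads=46, tails=46)  # sequence S_2: exactly 1/2
print(weak, strong, balanced)
```

Choice 1 above corresponds to the limit of an infinitely strong prior (a → ∞); for any finite confidence, the 92 heads pull the predictive probability toward 1, while the balanced sequence $$S_2$$ leaves it at 1/2 whatever the value of a.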

Back to Martin Smith's paper now: what makes him adopt the first choice? I think the most revealing phrase is the following:

When faced with this result, of course it is sensible to check [...] whether the coins are double-headed or weighted or anything of that kind. Having observed a run of 92 heads in a row, one should regard it as very likely that the coins are double-headed or weighted. But, once these realistic possibilities have been ruled out, and we know they don’t obtain, any remaining urge to find some explanation (no matter how farfetched) becomes self-defeating. [italics in the text]

As I understand it, he implicitly distinguishes between two kinds of propositions: observations (such as $$S_1$$) and checks (which are "of the nature of" $$H_0$$, although they can occur after the fact) and bestows upon the second category a protected status: these types of conclusions, e.g. "the coin is unbiased" survive even in the face of overwhelming evidence to the contrary (at least when it results from observation.)

There is, however, no basis for this distinction: checks are also empirical findings. By visual inspection, I conclude that the coin does indeed exhibit two different faces; by more elaborate experiments, I deduce that the center of mass is indeed at the geometrical center of the coin, within experimental precision; by some unspecified method I conclude that the "throwing process" is indeed random; by pinching myself I decide that I am not dreaming, etc. At this point, the common-sense remark is: "if you want to check the coin against bias, the easiest way would be to throw it about 92 times and count the heads".

If we estimate the probability of the observations (given our prior belief) we should also update our belief in light of the observations. Recognizing this symmetry gives quantitative meaning to the "surprise" element, which is higher for some sequences than for others.

1. Martin Smith, Why throwing 92 heads in a row is not surprising, Philosophers' Imprint (forthcoming) 2017.
2. We only considered here the probabilities before and after the 92 throws. One might also update one's belief after each individual throw, so that $$P(H)$$ would increase gradually.

## 17 August 2017

### How surprising is it to throw 92 heads in a row?

Martin Smith (from the University of Edinburgh) discusses the relation between surprise and belief [1].

As a striking introduction, he claims that, in a coin toss, throwing a large number of heads in a row is not surprising. He deploys a version of the sorites argument insofar as "surprise" is concerned: if the individual events $$e_k$$ of getting heads on the $$k$$-th throw are unsurprising, then so is their conjunction.

This particular example is easily dealt with by noting the importance of prior information: the sequence of heads is surprising because we know that "The coins don’t appear to be double-headed or weighted or anything like that – just ordinary coins", as Smith insists in his first paragraph [2]. I know nothing about the theories of Shackle and Spohn, but I doubt his analysis would survive adding event $$e_0$$: "We checked that all coins were unbiased". On the other hand, I believe a Bayesian treatment similar to that given by Jaynes in §5.2 of [3] would be quite satisfactory (see also Chapter 4 for a general presentation and §9.4 for an example dealing specifically with coin tossing and bias.) Once again, the information brought by $$e_1$$ through $$e_{92}$$ contradicts $$e_0$$; this is why it is surprising (or informative), not because the events would have an intrinsic "surprising" character.

The author's insistence on the equivalence of the various results ("[E]ach one of these sequences is just as unlikely as 92 heads in a row.") glosses over the fact that each sequence is more or less compatible with the fairness assumption $$e_0$$. Let us introduce the probability $$p$$ of throwing heads. Then, $$e_0$$ amounts to saying that the (prior) probability distribution $$f(p)$$ of parameter $$p$$ is peaked at $$0.5$$ and has a certain width $$w$$. The higher our confidence in coin fairness, the lower $$w$$.

It is only in the case of absolute certainty $$w \to 0$$ ($$f(p)$$ is a Dirac delta) that the results are equivalent. As soon as $$w$$ exceeds a ridiculously small value, the evidence brought by the 92 heads dramatically shifts the peak of the (posterior) probability distribution $$f'(p)$$ close to 1. A sequence with 46 heads, although exactly as improbable, has no such effect (at most, it may lead to a modest decrease in $$w$$.)
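To put (assumed) numbers on this, take a Beta(50, 50) stand-in for $$f(p)$$ (my choice of family; the argument only needs a peak at $$0.5$$ of some width $$w$$). Conjugate updating then gives $$f'(p)$$ directly, and we can compare peak position and width for the two sequences.

```python
import math

def beta_mode_and_width(a, b):
    """Mode (peak position) and standard deviation ('width') of Beta(a, b)."""
    mode = (a - 1) / (a + b - 2)
    width = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mode, width

prior = beta_mode_and_width(50, 50)                    # f(p): peak 0.5, w ~ 0.05
after_all_heads = beta_mode_and_width(50 + 92, 50)     # f'(p): peak jumps toward 1
after_balanced = beta_mode_and_width(50 + 46, 50 + 46) # peak stays at 0.5
print(prior, after_all_heads, after_balanced)
```

With these numbers the 92-heads sequence moves the peak by about five prior widths (and the weaker the prior, the closer the new peak gets to 1), while the equally improbable balanced sequence leaves the peak untouched and only narrows $$w$$ modestly.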

The surprise is not related to the probability of a particular sequence, but to the extent it challenges our belief; I believe this statement to be rather trivial (or at least uncontroversial) and indeed Smith reaches pretty much the same conclusion in the last —and most interesting— section of the paper (to be discussed in a future post) although he cannot see the element of surprise in the coin toss experiment.

1. Martin Smith, Why throwing 92 heads in a row is not surprising, Philosophers' Imprint (forthcoming) 2017.
2. Had we known that all coins were double-headed, throwing only heads would not only be unsurprising, it would be certain.
3. E. T. Jaynes and G. L. Bretthorst, Probability theory the logic of science, Cambridge University Press 2003.

## 23 May 2017

### Ionic Liquids: evidence of the viscosity scale-dependence

Our paper just appeared in Scientific Reports!

## 7 January 2017

### Liberalism vs. Conservatism

I have always found the liberal/conservative distinction difficult to draw, largely due to the several meanings of each term (e.g. the concept of "liberal" in political science and its casual use in the United States, on the one hand, and in Europe on the other.) Motivated by my recent reading of David Gress' From Plato to NATO, I tried to define each side by a set of principles, as small and as general as possible. This is my first attempt (work very much in progress):

#### Liberal

(L1) Individuals are equal.
(L2) The individual precedes the community (ontologically).

#### Conservative

(C1) The community takes precedence over the individual.
(C2) The "essence" of the community defines a set of values (religious, national etc.) that limits individual freedom.

Below the fold I discuss some consequences of these definitions.