Saturday, December 29, 2012

Science and metaphysics


by Massimo Pigliucci

Afternoon time at the annual meeting of the American Philosophical Association. I'm following the session on science and metaphysics, chaired by Shamik Dasgupta (Princeton). The featured speakers are Steven French (Leeds-UK), James Ladyman (Bristol-UK), and Jonathan Schaffer (Rutgers-New Brunswick). I have developed a keen interest in this topic of late, though as an observer and commentator, not a direct participant in the discussion. Let's see what transpires today. A note of warning: what follows isn't for the (metaphysically) faint of heart, and it does require at least some familiarity with fundamental physics.

We started with French on enabling eliminativism, or what he called taking a Viking approach to metaphysics. (The reference to Vikings is meant to evoke an attitude of plundering what one needs and leaving the rest; less violently, this is a view of metaphysics as helping itself to a varied toolbox.) French wishes to reject the claim made by others (for instance, Ladyman) that aprioristic metaphysics should be discontinued. However, he does agree with critics that metaphysics should take science seriously.

The problem French is concerned with, then, is how to relate the scientific to the ontological understanding of the world. Two examples he cited were realism about wave functions and the kind of ontic structural realism favored by Ladyman and his colleague Ross.

Ontic structural realism comes in at least two varieties: eliminativist (we should eliminate objects entirely from our metaphysics, particles are actually "nodes" in the structure of the world) and non-eliminativist (which retains a "thin" version of objects, via the relations of the underlying structure).

French went on to talk about three tools for the metaphysician: dependence, monism, and an account of truth making.

Dependence. The idea is that, for instance, particles are "dependent" for their existence on the underlying structure of the world. A dependent object is one whose features are derivative on something else. In this sense, eliminativism looks viable: one could in principle "eliminate" (ontologically) elementary particles by cashing out their features in terms of the features of the underlying structure, effectively doing away with the objects themselves.

The basic idea, to put it as French did, is that "if it is of the essence, or nature or constitution of X that it exists only if Y exists, so that X is dependent on Y in the right sort of way, then X can be eliminated in favor of Y + structure."

As French acknowledged, however (though he didn't seem sufficiently worried about it, in my opinion), the eliminativist still needs to provide an account of how we recover the observable properties of objects above the level of fundamental structure.

Monism. This is the (old) idea that the world is made of one kind of fundamental stuff, a view recently termed "blobjectivism" (everything reduces to a fundamental blob). As French put it, this is saying that yes, electrons, for instance, have charges, but there really are no electrons, there is just the blob (that is, the structure).

A number of concerns have been raised against monism, and French commented on a few. For instance, monism can't capture permutations in state space. To which the monist responds that monistic structure includes permutation invariance. This, however, strikes me as borderline begging the question, since the monist can always reach for a catch-all "it's already in the structure" response to any criticism. But how do we know that the blob really does embody this much explanatory power?

Truthmakers. French endorses something called Cameronian truthmaker theory, according to which < X exists > might be made true by something other than X. Therefore, the explanation goes, < X exists > might be true according to theory T without X being an ontological commitment of T.

Perhaps this will be made clearer by looking at one of the objections to this account of truth making: the critic can reasonably ask how it is possible that there appear to be things like tables, chairs, particles, etc., if these things don't actually exist. French's response is that one just needs to piggyback on the relevant physics, though it isn't at all settled that "the relevant physics" actually says that tables, chairs and particles don't exist in the strong eliminativist sense of the term (as opposed to, say, existing as spatio-temporal patterns of a certain kind, accessible at the relevant level of analysis).

Next we moved to Ladyman, on "between eliminativism and monism: the radical middle ground." He acknowledged that structural realism is accused by some of indulging in mystery mongering, but Ladyman responded (correctly, I think) that it is physics that threw up stuff —  like fundamental relations and structure — that doesn't fit with classical metaphysical concepts, and the metaphysician now has to make some sense of the new situation.

Ladyman disagrees with French's eliminativism about objects, suggesting that taking structure seriously doesn't require doing away with objects. The idea is that there actually are different versions of structuralism, which depend on how fundamental relations are taken to be. Ladyman also disagrees with the following speaker, Schaffer, who is an eliminativist about relations, giving ontological priority to one object and its intrinsic properties (monism). Ladyman's (and his colleague Ross') position is summarized as one of being non-eliminativist about metaphysically "thin" individuals, giving ontological priority to relational structures.

One of the crucial questions here is whether there is a fundamental level to reality, and whether consequently there is a unidirectional ontological dependence between levels of reality. Ladyman denies a unidirectional dependence. For instance, particles and their state depend on each other (that is, one cannot exist without the other), the interdependence being symmetrical. The same goes for mathematical objects and their relations, for instance the natural numbers and their relations.

As for the existence of a fundamental level, we have an intuition that there must be one, partly because the reductionist program has been successful in science. However, Ladyman thinks that the latest physics has rendered that expectation problematic. Things have gotten messier in fundamental physics of late, not less messy. Consequently, for Ladyman the issue of a fundamental level is an open question, which therefore should not be built into one's metaphysical system — at least not until physicists settle the matter.

Are elementary quantum particles individuals? Well, one needs to be clear on what one means by individual, and also on the relation between the concept of individuality and that of object. This is a question that is related to that old chestnut of metaphysics, the principle of identity of indiscernibles (which establishes a difference between individuals — which are not identical, and therefore discernible — and mere objects). However, Ladyman collapses individuals into objects, which is why he is happy to say that — compatibly with quantum mechanics — quantum particles are indeed objects. The idea is that particles are intrinsically indiscernible, but they are (weakly) discernible in virtue of their spatio-temporal locality. 
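To put the distinction a bit more formally (my gloss, not Ladyman's slide): the principle of the identity of indiscernibles says that

\[
\forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y,
\]

i.e., things that share all their properties are one and the same. Two quantum particles can fail to be discernible by any monadic property and yet be weakly discernible, in the sense that some irreflexive relation holds between them (the stock example being "has opposite spin component to" for two fermions in a singlet state), and weak discernibility of this relational sort is all that the thin notion of object requires.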

Ladyman, incidentally, is of course aware of the quantum principle of non-locality, which makes the idea of precisely individuated particles problematic. But he doesn't think that non-locality licenses a generic holism where there is only one big blob in the world, and he maintains that individuality can be recovered by thinking in terms of a locally confined holism. Again, that strikes me as sensible in terms of the physics (as I understand it), and it helps recover a (thin, as he puts it) sense in which there are objects in the world.

Finally, we got to Schaffer, who argued against ontic structural realism of the type proposed by either French or Ladyman. He wants to defend the more classical view of monism instead. He claimed that that is the actual metaphysical picture that emerges from current interpretations of quantum mechanics and general relativity.

His view is that different mathematical models — both in q.m. and in g.r. — are best thought of as just being different notations related by permutations, corresponding to a metaphysical unity. In a sense, these different mathematical notations "collapse" into a unified picture of the world.

Schaffer's way of cashing out his project is to use the (in)famous Ramsey sentences, which are sentences that do away with labels, not being concerned with specific individuals. Now, one can write the Ramsey sentences corresponding to the equations of general relativity, which according to Schaffer yields a picture of the type that has been around since at least Aristotle: things come first, relations are derivative (i.e., one cannot have structures or relations without things that are structured or related). If this is right, of course, the ideas that there are only structures (eliminativism à la French) or that structures are ontologically prior to objects (Ladyman) are incorrect.
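As a quick reminder of how the device works (my gloss, not Schaffer's): given a theory T(t_1, ..., t_n; O) formulated with theoretical terms t_i plus observational vocabulary O, its Ramsey sentence simply existentially generalizes away the theoretical labels,

\[
\exists x_1 \ldots \exists x_n \; T(x_1, \ldots, x_n; O),
\]

so the theory now says only that there are some things standing in the right relations, without naming them.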

So, Schaffer thinks of Ramsey sentences as describing structural properties, which he takes to be the first step toward monism. Second, says Schaffer, what distinguishes the structure describing the universe from merely abstract structures is that something bears it. That something is suggested to be the largest thing that fits the job, namely the universe as a whole. He calls this picture monistic structural realism: there is a cosmos (the whole), characterized by parts that bear out the structures qualitatively described by the Ramsey translation of standard physical theories like relativity and quantum mechanics. Note that this is monism because — thanks to the Ramsey translation — the parts are interchangeable, related by the mathematical permutations mentioned above.

Okay, is your head spinning by now? This is admittedly complicated stuff, which is why I added explanatory links to a number of the concepts deployed by the three speakers. I found the session fascinating, as it gave me a feeling for the current status of discussions in metaphysics, particularly of course as far as it concerns the increasingly dominant idea of structural realism, in its various flavors. Notice too that none of the participants engaged in what Ladyman and Ross (in their Every Thing Must Go, about which I have already commented) somewhat derisively labeled "neo-Scholasticism": the entire discussion took seriously what comes out of physics, with all participants conceptualizing metaphysics as the task of making sense of the broad picture of the world that science keeps uncovering. That seems to me to be the right way of doing metaphysics, and one that may (indeed should!) appeal even to scientists.

The philosophy of genetic drift


by Massimo Pigliucci

This morning I am following a session on genetic drift at the American Philosophical Association meetings in Atlanta. It is chaired by Tyler Curtain (University of North Carolina-Chapel Hill), the speaker is Charles Pence (Notre Dame), and the commenters are Lindley Darden (Maryland-College Park) and Lindsay Craig (Idaho). [Note: I've written about this concept myself, for instance in chapter 1 of Making Sense of Evolution. Check also these papers in the journal Philosophy & Theory in Biology: Matthen and Millstein et al.]

The title of Charles' talk was "It's ok to call genetic drift a force," a position — I should state right at the beginning — with which I actually disagree. Let the fun begin! Drift has always been an interesting and conceptually confusing issue in evolutionary biology, and of course it plays a crucial role in mathematical population genetic theory. Drift has to do with stochastic events in the generation-to-generation sampling of gametes in a population. The strength of drift is inversely proportional to population size, which also means that its effect is antagonistic to that of natural selection (whose strength is directly proportional to population size).
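To make that population-size dependence concrete, here is a minimal Wright-Fisher-style sketch of the sampling process population geneticists have in mind (my illustration, not anything presented at the session; the function name and parameter values are arbitrary). The per-generation variance of the change in allele frequency is p(1−p)/(2N), which is why the wandering is dramatic in small populations and negligible in large ones.

import numpy as np

rng = np.random.default_rng(0)

def wright_fisher(p0=0.5, N=50, generations=200, replicates=5):
    # Each generation, the 2N gametes that form the next generation are a
    # binomial sample taken at the current allele frequency; with no
    # selection, mutation or migration, all of the change is drift.
    freqs = np.full(replicates, p0)
    for _ in range(generations):
        freqs = rng.binomial(2 * N, freqs) / (2 * N)
    return freqs

print("N = 20:  ", wright_fisher(N=20))     # frequencies wander widely, often fixing at 0 or 1
print("N = 2000:", wright_fisher(N=2000))   # frequencies stay close to the initial 0.5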

Charles pointed out that one popular interpretation of drift among philosophers is "whatever causes fail to differentiate based on fitness." The standard example is someone being struck by lightning, the resulting death clearly having nothing to do with that individual's fitness. I'm pretty sure this is not what population geneticists mean by drift. If that were the case, a mass extinction caused by an asteroid (that is, a cause that has nothing to do with individual fitness) would also count as drift. Indeed, discussions of drift — even among biologists — often seem to confuse a number of phenomena that have little to do with each other, other than the very generic property of being "random."

What about the force interpretation then? This is originally due to Elliott Sober (1984), who developed a conceptual model of the Hardy-Weinberg equilibrium in population genetics based on an analogy with Newtonian forces. H-W is a simple equation that describes the genotypic frequencies in a population where no evolutionary processes are at work: no selection, no mutation, no migration, no assortative (i.e., non-random) mating, and infinite population size (which implies no drift).
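For reference (my addition, not part of the talk): for a single locus with two alleles at frequencies p and q = 1 − p, the Hardy-Weinberg genotype frequencies are

\[
p^2 + 2pq + q^2 = 1,
\]

with AA at p^2, Aa at 2pq and aa at q^2, and they remain constant from one generation to the next as long as the assumptions above hold.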

The force interpretation is connected to the (also problematic, see Making Sense of Evolution, chapter 8) concept of adaptive landscape in evolutionary theory. This is a way to visualize the relationship between allelic frequencies and selection: the latter will move populations "upwards" (i.e., toward higher fitness) on any slope in the landscape, while drift will tend to shift populations randomly around the landscape.

The controversy about thinking of drift as a force began in 2002 with a paper by Matthen and Ariew, followed by another one by Brandon in 2006. The basic point was that drift inherently does not have a direction, and therefore cannot be analogized to a force in the physical (Newtonian) sense. As a result, the force metaphor fails.

Stephens (2004) claimed that drift does have direction, since it drives populations toward less and less heterozygosity (or more and more homozygosity). Charles didn't buy this, and he is right. Stephens is redefining "direction" for his own purposes, as heterozygosity does not appear on the adaptive landscape, making Stephens' response entirely artificial and not consonant with accepted population genetic theory.
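For what it's worth, the standard result Stephens seems to be leaning on is that expected heterozygosity decays under drift in an idealized population of constant size N as

\[
E[H_t] = H_0 \left( 1 - \frac{1}{2N} \right)^{t},
\]

which does single out a "direction" of sorts, but, as noted, not one that lives on the adaptive landscape.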

Filler (2009) thinks that drift is a force because it has a mathematically specific magnitude and can unify a wide array of seemingly disparate phenomena. Another bad answer, I think (and, again, Charles also had problems with this). First off, forces don't just have magnitude, they also have direction, which, again, is not the case for drift. Sober was very clear on this, since he wanted to think of evolutionary "forces" as vectors that can be combined or subtracted. Second, it seems that if one follows Filler far too many things will begin to count as "forces" that neither physicists nor biologists would recognize as such.

Charles' idea is to turn to the physicists and see whether there are interesting analogs of drift in the physical world. His chosen example was Brownian motion, the random movement of small objects like dust particles. Brownian motion is well understood and mathematically rigorously described. Charles claimed that the equation for Brownian motion "looks" like the equation for a stochastic force, which makes it legitimate to translate the approach to drift.
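Presumably the equation Charles had in mind is something like the Langevin equation for a Brownian particle (my guess at the formalism, not his slide),

\[
m \frac{dv}{dt} = -\gamma v + \eta(t), \qquad \langle \eta(t)\,\eta(t') \rangle = 2 \gamma k_B T \, \delta(t - t'),
\]

where the stochastic term η(t) is what gets informally called a "random force," alongside the deterministic friction term −γv.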

But I'm pretty sure that physicists themselves don't think of Brownian motion as a force. Having a mathematical description of stochastic effects (which we do have, both for Brownian motion and for drift — and by the way, the two look very different!) is not the same as having established that the thing one is modeling is a force. Indeed, Charles granted that one could push back on his suggestion, and reject that either drift or Brownian motion are forces. I'm inclined to take that route.

A second set of objections to the idea of drift as a force (other than it doesn't have direction) is concerned with the use of null models, or inertial states, in scientific theorizing. H-W is supposed to describe what happens when nothing happens, so to speak, in populations of organisms. According to Brandon, however, drift is inherent in biological populations, so that drift is the inertial state itself, not one of the "forces" that move populations away from such state.

Charles countered that for a Newtonian system gravity too could be considered "constitutive," the way Brandon thinks of drift, but that would be weird. Charles also objected that it is no good to argue that one could consider Newtonian bodies in isolation from the rest of the universe, because similar idealizations can be invoked for drift, most famously the above-mentioned assumption of infinite population size. This is an interesting point, but I think the broader issue here is the very usefulness of null models in science in general, and in biology in particular (I am skeptical of their use, at least as far as inherently statistical problems of the kind dealt with by organismal biology are concerned, see chapter 10 of Making Sense).

Broadly speaking, one of the commentators (Darden) questioned the very benefit of treating drift as a force, considering that obviously biologists have been able to model drift using rigorous mathematical models that simply do not require a force interpretation. Indeed, not even selection can always be modeled as a vector with intensity and direction: neither the case of stabilizing selection nor that of disruptive selection fits easily in that mold, because in both instances selection acts to (respectively) decrease or increase a trait's variance, not its mean. Moreover, as I pointed out in the discussion, assortative mating is also very difficult to conceptualize as a vector with directionality, which makes the whole attempt at thinking of evolutionary "forces" ever more muddled and not particularly useful. Darden's more specific point was that while it is easy to think of natural selection as a mechanism, it is hard to think of drift as one (indeed, she outright denied that it is a mechanism), which again casts doubt on what there is to gain from thinking of drift as a force. The second commentator (Craig) also questioned the usefulness of the force metaphor for drift, even if it is defensible along the lines outlined by Pence and others.

Even more broadly, haven't physicists themselves moved away from talk of forces? I mean, let's not forget that Newtonian mechanics is only an approximation of relativity theory, and that "forces" in physics are actually interpreted in terms of fields and associated particles (as in the recently much discussed Higgs field and particle). Are we going to reinterpret this whole debate in terms of biological fields of some sort? Isn't it time that biologists (and philosophers of biology) let go of their physics envy (or their envy of philosophy of physics)?

Friday, December 28, 2012

From the APA: Metaethical antirealism, evolution and genetic determinism


by Massimo Pigliucci

The Friday afternoon session of the American Philosophical Association meeting from which I am blogging actually had at least three events of interest to philosophers of science: one on race in population genetics, one on laws in the life sciences, and one on the strange combination of (metaethical) antirealism, evolution and genetic determinism. As is clear from the title of this post, I opted for the latter... It featured three speakers: Michael Deem (University of Notre Dame), Melinda Hall (Vanderbilt), and Daniel Demetriou (Minnesota-Morris).

Deem went first, on "de-horning the Darwinian dilemma for realist theories of value" (no slides, darn it!). The point of the talk was to challenge two claims put forth by Sharon Street: a) that the normative realist cannot provide a scientifically acceptable account of the relation between evolutionary forces acting on our evaluative judgments and the normative facts realists think exist; b) that the "adaptive link account" provides a better explanation of this relation than any realist tracking account. (Note: much of this text is from the handout distributed by Deem.)

The alleged dilemma consists in this: by hypothesis, evolutionary forces have played a significant role in shaping our moral evaluative attitudes. If so, how is the moral realist to make sense of the hypothesis while holding on to moral realism? Taking the first horn, the realist could deny any relation between evolution and evaluative judgments. But this would mean either skepticism about evaluative judgments or a view on which evolved normative judgments only coincidentally align with moral facts, and neither option is palatable to the moral realist.

The second horn leads the realist to accept the link with evolution. But this means that s/he would have to claim that tracking normative truths is somehow biologically adaptive, a position that is hard to defend on scientific grounds.

According to Street there are two positions available here: the tracking account (TA) says that  we grasp normative facts because doing so in the past has augmented our ancestors' fitness. The adaptive link account (ALA) says that we make certain evaluative judgments because these judgments forged adaptive links between the responses of our ancestors and the environments in which they lived. Note that the difference between TA and ALA is that the first talks of normative facts, the latter of evaluative judgments.

Street prefers ALA on the grounds that it is more parsimonious and clear, and that it sheds more light on the phenomenon to be explained (i.e., the existence of evaluative judgments). Deem doesn't think this is a good idea, because within the ALA evaluative judgments play a role analogous to hard-wired adaptations in other animals, which seems implausible; and because it is mysterious why selection would favor evaluative judgments.

Deem then went on to propose a modified ALA: humans possess certain evaluative tendencies because these tendencies forged adaptive links between the responses of our ancestors and their environments. Note that the difference between standard ALA and realist ALA is that the first one talks of evaluative judgments, the latter of evaluative tendencies. (This distinction makes perfect sense to me: judgments are the result, at least in part, of reflection; tendencies can be thought of as instinctual reactions or propensities. So, for instance, humans have both, while other primates only — as far as we know — possess propensities, but are incapable of judgments.)

To put it in his own words, Deem claims that "the realist can show that his/her position is compatible with evolutionary biology and can provide an account of the relation between the evolutionary forces that shaped human evaluative attitudes and independent normative facts. ... [However] it seems evolutionary theory underdetermines the choice between realism and antirealism in metaethics."

Okay, I take it that Deem's idea is to reject the suggestion that evolution makes it unnecessary to resort to the realist idea that there are normative facts. Perhaps so, in a way similar to which an evolutionary account of our abilities at mathematical reasoning wouldn't exclude the possibility of mathematical realism ("Platonism"). But one needs a positive reason to contemplate an objective ontological status of moral truths, and I think the case for that is far less compelling than the analogous case for mathematical objects (one of the reasons being that while mathematical abstractions truly seem to be universal, moral truths would still apply only to certain kind of social organisms capable of self-reflection).

Melinda Hall talked about "untangling genetic determinism: the case of genetic abortion" (another talk without slides, or even a handout!). She is interested in abortion in cases where medical evidence predicts that the infant will be severely disabled. Given such information, is it moral to terminate the pregnancy ("genetic" abortion, a type of negative genetic selection) or, on the contrary, is it moral to continue it?

The basic idea seems to be that genetic abortion is conceptually linked to genetic determinism, i.e., an overemphasis on the importance of genetic factors in development. In turn, Hall argued, the decision to terminate pregnancies in such cases contributes to stigmatizing the disabled community, as well as to reducing the social resources devoted to it.

Disability has both a social and a biological component, and if a lot of the negative effects of disabilities on life quality are the result of social construction, then the main issue is social and not biological. Disability advocates claim that it is problematic to make a single trait (the disability, whatever it is) the overriding criterion on the basis of which to make the decision to abort.

There is thus apparently a tension — which Hall sought to defuse — between the usually pro-choice attitude of disability advocates and the restriction on the mother's reproductive rights that objecting to "genetic abortion" would imply.

A reasonable (I think) worry is that "gene mania," i.e., the quest for purely or largely biological explanations for human behavior, may encourage the search for simplistic solutions to problems that are in reality complex and in good part social-environmental. My own worry about Hall and some of her colleagues' approach, however, is the opposite danger that disability advocates may seriously underestimate the biological basis of disabilities, which may in turn lead to an equally problematic tendency to reject medical preventive solutions. (Indeed, Hall at one point made the parenthetical comment that disabilities may not be a "problem" at all. I think that's willful rejection of the painful reality in which many human beings live.)

Hall went on to invoke the nightmarish social scenario depicted in the scifi movie Gattaca. I don't object to using scifi scenarios as evocative thought experiments, but of course there is a huge disanalogy between the situation in Gattaca and the issue of disabilities. Gattaca's "inferiors" were actually normal human beings, pitted against genetically enhanced ones. Disabled people are, in a very important sense, the mirror image of the movie's enhanced humans, since they lack one or another species-normal functionality.

Though Hall qualified this, disability advocates apparently worry that "negative genetic selection" may nurture a societal attitude that it may one day be possible to eliminate disability, which somehow could turn into decreased social support for disabled people. Frankly, I think that's an egregious example of non-sequitur, and moreover it flies in the face of the empirical evidence that Western societies at least have significantly increased allocation of resources to the disabled (see, for instance, the Americans with Disabilities Act).

This whole discussion seems to be predicated on an (unstated and, I think, indefensible) equivalency or near-equivalency between the moral status of a fetus that is likely to develop into a disabled person and that of the person him/herself. As the commentator for the paper (Daniel Moseley, UNC-Chapel Hill) pointed out, it is hard to see what is morally wrong in parents' decision to abort a fetus that has a high likelihood — based on the best medical evidence available — of developing a disability that would be hard to live with, regardless of whatever support society will provide (as it ought to) to the disabled person resulting from that pregnancy, should the parents decide not to abort.

Finally, Daniel Demetriou spoke about "fundamental moral disagreement, antirealism, and honor." (Yay! Slides!!) He took on Doris and Plakias' argument that moral realism predicts fundamental moral agreement (analogously, say, to agreement about mathematical or scientific facts). However, empirically there is plenty of evidence for moral disagreements, for instance in the case of the "culture of honor" among whites in the American South. Doris and Plakias turn this into an argument against moral realism (i.e., there are fundamental disagreements about moral norms because there is no objective fact of the matter about moral norms).

There are indeed interesting data showing that white Southerners respond more violently to insult and aggression. The alleged explanation is that these people inherited (culturally, not genetically) a culture of honor, which comes from their pastoral ancestors. More broadly, an honor culture according to some authors is likely adaptive in pastoralist social environments, where goods are easily stolen and a reputation for prompt and violent reaction may function as an effective deterrent (as opposed to, say, the situation in agricultural societies, where goods like crops are not easily stolen).

Interestingly, African pastoralists, as well as pastoralists in Sardinia and in Crete, consider raiding from other livestock owners a way to prove their honor as young men. The same goes for the Scottish highlands, again highlighting the connection between honor and violence.

Demetriou, however, is not convinced by this account, raising a number of objections, including the fact that pastoralist societies are still concerned with fairness, as in the concept of fair fighting. Fairness in fighting would not be a good deterrent against aggression, contra the above thesis. Moreover, there are several honor cultures that are not in fact violent. Instead, Demetriou put forth a "competition ethic account" of honor, where honor has to do with social reputation.

Metaethically, Demetriou agreed that honor really is different from the liberal ethics of welfare, favoring prestige instead. Similarly, liberalism favors cooperative principles, while honor ethics favors competition. So for Demetriou the honor outlook is much more fundamentally different from the liberal ethos than even the story based on the effectiveness of violence would suggest.

However, the author concluded, moral realism has no problem with the divergence between liberalism and honor, since it is possible to accommodate the difference invoking pluralism of a realist sort. Well, yes, though it seems to me that this strategy is capable of accommodating pretty much any set of data demonstrating empirical divergence of ethical systems... Moreover, one of Demetriou's comments toward the end was a bit confusing. He wondered why a white Southerner who has grown up in an honor culture couldn't "wake up" to a liberal approach, perhaps (his examples) after watching the right movie or reading the right book. But wait, that seems to imply no pluralism at all, but rather a situation in which the person steeped in the honor culture was simply wrong and realized, under proper conditions, that he was so. That, of course, may be, but it is a very different defense of realism against the empirically driven antirealist argument. Which one is it? Actual pluralism, or the idea that there is one correct moral system and some people are simply in error about it?

Overall this felt like a somewhat disjointed session, particularly because the second talk had hardly anything at all to do with antirealism, while neither the first nor especially the last talk had much to do with genetic determinism. But such is the way of many APA sessions, and each of the three talks did raise interesting questions about the relationship between ethics and science. It has been pretty uncontroversial for a while among moral philosophers that their discipline (just like every other branch of philosophy, I would argue) had better take seriously the best scientific evidence relevant to whatever philosophical issues are under discussion. The much more interesting and thorny question is what exactly the implications of the science are for ethical and even metaethical positions, as well as — conversely — what the implications of our ethical theories are for the way science itself is conducted and scientific advice is implemented in our society.

From the APA: Philosophers and climate change


by Massimo Pigliucci

It's that time of year: the period between Christmas and New Year's Eve, when for some bizarre reason the American Philosophical Association has its annual meeting. This year it's in Atlanta, and I made it down here to see what may be on offer for a philosopher of science. This first post is about how philosophers see climate change, at least as reflected in an APA session chaired by Eric Winsberg, of the University of South Florida. The two speakers were Elisabeth Lloyd (Indiana) and Wendy Parker (Ohio), with critical commentary by Kevin Elliott (South Carolina).

The first talk was by Lloyd, who began by addressing the claim — by climate scientists — that the robustness of their models is a good reason to be confident in the results of said models. Broadly, however, philosophers of science do not consider robustness per se to be confirmatory. To put it simply, models could be robust and wrong.

Still, Lloyd argued that robustness is an indicator that a model is, in fact, more likely to be true. She began by referring to a point made by some theoretical ecologists: good models will predict the same outcome in spite of being built on different specific assumptions.

Lloyd stressed that there are different concepts of robustness. One is that of measurement robustness, the most famous example of which is the estimation, based on as many as 13 different methods, of Avogadro's number. The concept used by Lloyd, however, is one of model robustness, which deals with the causal structure of the various climate models. The focus, then, shifts from the outcome (measurement robustness) to the internal structure of the models themselves.

Climate models are a way of articulating theory, because the equations that describe atmospheric dynamics are not analytically solvable. Lloyd went into some detail concerning how these models are actually built, pointing out how often predictions of crucial variables (like global mean surface temperature) are the result of six to a dozen different models, incorporating a range of parameter values. A set of models, or model "type," is characterized by a common core of causal processes underlying the phenomena to be modeled. An interesting example is that when climate models do not include greenhouse gases (but are limited to, say, solar and volcanic effects), they are indeed robust, as a set, but their output does not match with the available empirical data.

The point is that if a model set includes greenhouse gases as a core causal component, all models in the set produce empirically accurate estimates of the output variables — that is, the set is robust — for a range of parameter values of the other components of the model. Moreover, it is the case that individual parameter values in a given model within the set are themselves supported by empirical evidence. The result is a strong inference to the best explanation supporting particular models within a given causal core set.

While I find Lloyd's analysis convincing, it seems to me that, in a roundabout way, it reaches the conclusion that it isn't robustness per se that should generate confidence in a model (or set of models), but rather robustness together with multiple lines of evidence pointing toward the empirical adequacy both of the model's output and of its specific parameter settings.

Wendy Parker gave the second talk, tackling the role of computer simulations as a form of theoretical practice in science. She referred to Fox Keller, according to whom describing simulations as "computer experiments" qualitatively changes the landscape of what counts as theorizing in science. Parker is interested in the sense in which simulation outcomes can be thought of as "observations," and the models themselves as observing instruments.

She began with a description of numerical weather predictions, which started in the 1950s, before modern digital computers. The data anchoring the analysis of weather models today are produced by satellites and local stations. While the models are set up as regular grids on the territory, the data are of course not comparably evenly spread. Forecasters then use various methods of "data assimilation," which take into account not only the available empirical data, but also the previous forecasts for a given area. The goal is to find a better fit than the one characteristic of the previous forecast alone to achieve a best estimate of the state of the system.

The resulting sequence of constantly updated weather snapshots was soon seized upon by climate scientists to help bridge the gap between weather and climate. This process of re-analysis of weather data, integrated by additional empirical data not originally taken into account by the forecasters, is now a common practice of data assimilation in climate science (the example discussed by Parker in detail is that of a procedure known as 4DVAR).
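In rough outline (my summary, not Parker's slide), 4DVAR seeks the initial state x_0 that minimizes a cost function balancing distance from the previous forecast (the "background" x_b) against distance from the observations y_i collected over the assimilation window:

\[
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{T} B^{-1} (x_0 - x_b) + \tfrac{1}{2} \sum_{i} \big( y_i - H_i(x_i) \big)^{T} R_i^{-1} \big( y_i - H_i(x_i) \big),
\]

where B and R_i are the background and observation error covariances, H_i maps model states to observed quantities, and x_i is the model state propagated forward from x_0.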

The point is that re-analysis data sets are simulation outputs, which however are treated as observational data — though some researchers keep them distinct from actual observational data by referring to them in published papers as "reference data" or some such locution. The problem begins when some climate scientists think of assimilation models themselves as "observing instruments" in the absence of actual measurement instruments on the ground. (Interestingly, there are documented cases of assimilation models "seeing" atmospheric phenomena that were not registered by sparsely functioning ground instruments and that were later confirmed by satellite imagery.)

Parker wants to reject the claim that models should be thought of as observing instruments, while she is sympathetic to the conceptualization of simulation outcomes as "observations" of atmospheric processes.

Her objection to thinking of assimilation models as observing instruments is that, although they are informative and, indirectly, empirical (because empirical data entered into them at some previous iteration), they are not "backward-looking" the way true observations are (i.e., you don't "observe" something that hasn't happened yet), and so are best thought of as predictions.

Parker's argument for nonetheless considering simulation outcomes as observations is that they are empirical (indirectly), backward-looking (partially, because model assimilation also uses observations made at times subsequent to the initial model projections), and informative. That is, they fulfill all three criteria that she laid out for something to count as an observing or measuring procedure. Here Parker is building on van Fraassen's view of measuring as "locating in logical space."

While I enjoyed Parker's talk, in the end I was not convinced. To begin with, we are left with the seemingly contradictory conclusion that assimilation models are not observation instruments, and yet produce observations. Second, van Fraassen's idea was meant to apply to measurement, not observation. Parker acknowledged that van Fraassen distinguishes the two, but she treated them as effectively the same. Lastly, it is not clear what hinges on making the distinction that Parker is pushing, and indeed quite a bit of confusion may arise from blurring too much the distinction between actual (that is, empirical) observations and simulation outcomes. Still, the underlying issue of the status of simulations (and their quantitative outputs) as theoretical tools in science remains interesting.

The session was engaging — regardless of one's agreement or not with specific claims made by the participants — because it showcased some of the philosophical dimensions of ongoing and controversial scientific research. It is epistemologically interesting to reflect, as Lloyd did, on the role of different conceptualizations of robustness in modeling; and it is thought provoking to explore, as Parker did, the roles of computer simulations at the interface between theory and observation in science. Who knows, even climate scientists themselves may find something to bring home (both in their practice and in their public advocacy, which was commented upon by Lloyd) from this sort of philosophical analysis!

Friday, December 21, 2012

Lakatos Award for Wolfgang Spohn

The London School of Economics and Political Science announces that the Lakatos Award, of £10,000 for an outstanding contribution to the philosophy of science, has been won by Wolfgang Spohn of the University of Konstanz for his book The Laws of Belief: Ranking Theory and its Philosophical Implications (Oxford University Press, 2012).

The Lakatos Award is given for an outstanding contribution to the philosophy of science, widely interpreted, in the form of a book published in English during the previous five years.  It was made possible by a generous endowment from the Latsis Foundation.  The Award is in memory of the former LSE professor, Imre Lakatos, and is administered by an international Management Committee organised from the LSE.

The Committee, chaired by John Worrall, decides the outcome of the Award competition on the advice of an international, independent and anonymous panel of Selectors who produce detailed reports on the nominated books.
________________________________________________________________________

Nominations can now be made for the 2013 Lakatos Award, and must be received by Friday 19th April 2013. The 2013 Award will be for a book published in English with an imprint from 2008-2013 inclusive. A book may, with the permission of the author, be nominated by any person of recognised standing within the profession.  (The Management Committee is not empowered to nominate books itself but only to respond to outside nominations.)

For further details of the nomination procedure or more information on the Lakatos Award 2013, contact the Administrator, Tom Hinrichsen, at t.a.hinrichsen@lse.ac.uk

Monday, December 17, 2012

CfP for the 11th Graduate Conference at the University of Western Ontario

The University of Western Ontario is organizing its 11th Graduate Conference in Philosophy of Mind, Language, and Cognitive Science (May 23-25, 2013). The Call for Papers can be found there. The deadline is March 1, 2013.

Friday, December 14, 2012

Conf: Evolution, Intentionality and Information (Bristol, May 2013)



Conference: Evolution, Intentionality and Information.
University of Bristol, May 29th-31st 2013.

A three-day inter-disciplinary conference at the University of Bristol. This is the inaugural event in the ERC-funded project 'Darwinism and the Theory of Rational Choice', directed by Professor Samir Okasha. The aim of the conference is to discuss the use of 'intentional', 'strategic' and 'informational' concepts in evolutionary biology.

Plenary speakers: Evelyn Fox-Keller, Daniel Dennett, Joan Roughgarden, Eva Jablonka, David Haig, Denis Noble, Ken Binmore, Samir Okasha

Contributed papers are welcome.

For further information and details of how to register, please see the conference website

Monday, December 10, 2012

Fellowships: Philosophy of Science (Pittsburgh)

The application deadline of December 15 is approaching for senior, postdoctoral, and visiting fellowships at the Center for Philosophy of Science, University of Pittsburgh, for the academic year 2013-2014.

For more details on joining the Center, see the Center Web site (http://www.pitt.edu/~pittcntr).