## Friday, January 15, 2010

### When Mechanistic Models Explain (Part 1): Models and Explanations

This post is part of my series on the work of the philosopher Carl Craver. The series deals with the nature of neuroscientific explanations. For an index, see here.

Over the next few posts, I am going to take a look at the following article:
Craver, C. "When Mechanistic Models Explain" (2006) 153 Synthese 355-376
The article looks at how models are used in science, and at how phenomenal models are distinguished from explanatory models. Craver uses an example to guide his discussion: the Hodgkin-Huxley model of the action potential, derived from experimental work done with the giant axon of the squid (not the axon of the giant squid). The action potential is key to understanding how nerve cells communicate.

When Hodgkin and Huxley first proposed their model, they thought it was non-explanatory (merely phenomenal). It has since become an exemplar of an explanatory model. Craver examines how this happened.

In this first part, we will look at three important distinctions that need to be understood before moving on to consider the Hodgkin-Huxley case: (1) the difference between a phenomenal and an explanatory model; (2) the difference between sketch-models and complete explanations; and (3) the difference between how-possibly models and how-actually models.

(1) Phenomenal Models versus Explanatory Models
Models are used all the time in science. So what is a model? Craver offers a skeletal account: a model is an abstract description of a real system. The scientist will start constructing a model by first identifying a target phenomenon (T). This could be anything. For example, it could be the motion of the planets across the night sky or even how humans learn language.

Having identified T, the scientist will construct an algorithm or function (S) that can reproduce something similar to T. S can be implemented in a physical system, written in a computer program, captured in mathematical equations, or sketched in block-and-arrow diagrams. As an example, an orrery is a model of the solar system.

Models can be explanatory or non-explanatory. A non-explanatory model can be phenomenally accurate without being true (it can save the target phenomenon). For instance, the Ptolemaic model of the solar system is a good phenomenal model, but it is not descriptively accurate. To underline this point, you might enjoy watching Carl Sagan's discussion of Ptolemy.
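The Ptolemaic case can be made vivid with a small sketch. Assuming idealised circular orbits and invented parameters (none of this is from Craver's article), the snippet below computes the apparent direction of an outer planet in two ways: from the (roughly) true heliocentric structure, and from a structurally false deferent-plus-epicycle construction centred on a static Earth. Both "save the phenomenon" in that their predictions coincide exactly:

```python
import math

# Invented parameters for illustration: circular orbits, arbitrary units.
R_EARTH, W_EARTH = 1.0, 1.0     # Earth's orbital radius and angular speed
R_MARS,  W_MARS  = 1.5, 0.53    # outer planet's radius and angular speed

def copernican_direction(t):
    """Apparent direction of the planet as seen from Earth, computed from
    the (roughly) true structure: both bodies orbit the Sun."""
    ex, ey = R_EARTH * math.cos(W_EARTH * t), R_EARTH * math.sin(W_EARTH * t)
    px, py = R_MARS * math.cos(W_MARS * t), R_MARS * math.sin(W_MARS * t)
    return math.atan2(py - ey, px - ex)

def ptolemaic_direction(t):
    """The same apparent direction from a deferent-plus-epicycle model
    centred on a static Earth: structurally false, phenomenally adequate."""
    # Deferent: the planet's circle carried around a stationary Earth.
    dx, dy = R_MARS * math.cos(W_MARS * t), R_MARS * math.sin(W_MARS * t)
    # Epicycle: a small circle that (unknowingly) mirrors Earth's own orbit.
    ex, ey = -R_EARTH * math.cos(W_EARTH * t), -R_EARTH * math.sin(W_EARTH * t)
    return math.atan2(dy + ey, dx + ex)

# Both models reproduce the same observable data, so observation alone
# cannot tell them apart -- which is the point of "saving the phenomenon".
agreement = all(
    abs(copernican_direction(t) - ptolemaic_direction(t)) < 1e-9
    for t in [0.0, 1.3, 2.7, 5.0]
)
```

The two functions produce identical outputs despite describing different structures, which is exactly why phenomenal accuracy alone cannot settle which model is explanatory.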

Now we must ask the question: what differentiates a merely phenomenal model from an explanatory model? Craver puts forward an instrumentalist answer: an explanatory model is more useful for the purposes of control and manipulation. So, if we want to send spacecraft to the outer solar system, we need to work with Newtonian models, not Ptolemaic models.

(2) Sketches versus Complete Descriptions
The next distinction of which we need to be aware is that between mechanism sketches and ideally complete mechanistic descriptions.

Craver wants us to imagine that all mechanistic models are aligned on a spectrum. At one extreme of the spectrum we have vague mechanism sketches. A classic example would be the box models that are used to explain cognitive phenomena. For instance, I might say agency (the ability to act) is the product of a three-part mechanism involving a sensor (something that captures information from the environment), a processor (something that transforms that information) and an actuator (something that performs the action). Such a mechanism sketch is just a representation of ignorance: I have no idea what really takes place during the processing stage.
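The box model just described can be rendered as a minimal sketch in code. All the names here are illustrative inventions; the point is that the "processor" is a black box, which is precisely what makes this a sketch rather than an explanation:

```python
def sensor(environment):
    """Capture some information from the environment."""
    return environment.get("stimulus")

def processor(information):
    """The box we don't actually understand. A placeholder stands in
    for the unknown internal mechanism."""
    return {"decision": f"respond-to-{information}"}

def actuator(decision):
    """Perform an action based on the processor's output."""
    return f"action: {decision['decision']}"

# Wiring the boxes together reproduces the input-output profile of agency,
# but says nothing about how the middle stage really works.
output = actuator(processor(sensor({"stimulus": "light"})))
```

Filling in the boxes with real components and activities is what would move this sketch rightward along Craver's spectrum.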

At the other end of the spectrum lie ideally complete mechanistic descriptions. These would capture all the details of the mechanism responsible for the target phenomenon. No mechanistic model actually reaches this ideal, since every model abstracts away from some of the details. A more approachable ideal might be the mechanism of chemical neurotransmission, discussed previously.

(3) How-Possibly models versus How-Actually Models
The final distinction is that between models that describe possible mechanisms for producing the target phenomenon and models that describe the actual mechanisms through which the target phenomenon is produced.

The majority of models in artificial intelligence (even connectionist models) are of the how-possibly variety. For example, you might write an algorithm that can do facial or voice recognition. This algorithm is a model, but it may not bear any resemblance to how the human brain manages to do voice and facial recognition.
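A toy example in the how-possibly spirit (the data and names here are invented): the recognizer below performs a matching task via exact template lookup, a mechanism no one thinks the brain uses. It shows *a* possible way of producing the behaviour, not the actual one:

```python
# Invented "feature vectors" standing in for processed face data.
KNOWN_FACES = {
    (0, 1, 1, 0): "alice",
    (1, 1, 0, 0): "bob",
}

def recognize(features):
    """Return the identity whose stored template exactly matches the
    input, or None. This succeeds at the task while bearing no
    resemblance to human face recognition."""
    return KNOWN_FACES.get(tuple(features))
```

A model like this can reproduce the target behaviour perfectly within its toy domain, yet it places no constraints on what the brain's actual mechanism looks like.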

Okay, I'll leave it there for now. In the next post I'll cover the Hodgkin-Huxley model and show how it was transformed from a non-explanatory how-possibly model into an explanatory how-actually model.