Paul Churchland’s “Plato’s Camera”

Paul Churchland, Plato’s Camera, MIT Press, 2012

Churchland says that cognition works this way: Each neuron has a state, “active” or not (or degrees of activation—at one point (p. 17) Churchland suggests ten or more). The values of a collection of neurons are an “activation pattern.” The brain is organized into such collections. Their algebra is not specified, i.e., whether the collections are disjoint, overlap, or have inclusion relations. In addition, for any two neurons there is a relation, “synaptically connected” or not. Synaptic connections change over time. Activation patterns are created by sensory inputs via synaptic connections.
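The picture can be put in standard connectionist terms. A schematic sketch (mine, not anything from the book; the sizes, the random weights, and the tanh squashing are arbitrary illustrative choices): an activation pattern is a vector of per-neuron levels, and the synaptic connections are a weight matrix carrying one pattern into the next.

```python
import math
import random

# A schematic sketch of the connectionist picture (mine, not Churchland's):
# an "activation pattern" is a vector of per-neuron activation levels, and
# the "synaptic connections" are a weight matrix that carries one pattern
# into the next.
random.seed(0)

n_in, n_out = 5, 3
# Synaptic connections: weights[i][j] links input neuron j to output neuron i.
weights = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
# An activation pattern with ~10 discrete levels per neuron (cf. p. 17).
pattern = [random.randrange(10) for _ in range(n_in)]

def propagate(weights, pattern):
    """Map one activation pattern to the next via the synaptic weights."""
    return [math.tanh(sum(w * a for w, a in zip(row, pattern)))
            for row in weights]

next_pattern = propagate(weights, pattern)
```

Learning, on this picture, is adjustment of the entries of `weights`; the “space” of the next paragraph is the set of all such vectors and matrices.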

“These synaptic connections are, simultaneously, the brain’s elemental information processors, as well as its principal repository of general information about the world’s abstract structure.”

The “space” referred to is the mathematical space of possible activation sets and synaptic connections. So far, nothing new. You can read much the same, in somewhat different terminology, in 19th century neurology, in Freud or Ramón y Cajal.

About the dynamics of activation states of the entire neural system, Churchland is a bit confusing. On the one hand, he writes “…the relevant representations (of something about the external world)…occupy positions in a space that has a robust and built-in probability metric against which to measure the likelihood, or unlikelihood, of the objective feature represented by that position’s ever being instantiated.” (18). (One of the few clumsy sentences in the book.) So there is a “built-in” joint probability distribution over the activation levels of neurons, and therefore presumably a conditional probability distribution for the activation levels of some neurons given the activations of others. The probability metric is “robust,” by which I assume he means difficult to alter. Activation states change; in the mathematical space they have trajectories—that is the dynamics—and “Strictly…that trajectory is always exploring novel territory, since it never intersects itself exactly. (For an isolated dynamical system, such a perfect return would doom the system to an endless and unchanging periodicity.)” (21). Which implies that Churchland thinks the transitions between activation states are deterministic. Put his claims together: a fixed probability distribution on a set of activation states, and deterministic transitions that preserve that distribution. If any occurring state had positive probability, Poincaré’s Recurrence Theorem would force the trajectory to return, contradicting the claim that it never intersects itself. We can conclude that the only states that occur are those with probability zero. We might ask how Churchland knows that, but the book has no answer.
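The recurrence point is easy to check numerically. A minimal illustration (my example, not the book’s): the irrational rotation of the unit interval is deterministic and preserves Lebesgue measure, and Poincaré’s theorem then guarantees that a trajectory starting in a set of positive measure returns to it.

```python
import math

# A minimal illustration of Poincare recurrence (my example, not the book's):
# the rotation x -> (x + alpha) mod 1 with irrational alpha is a deterministic,
# measure-preserving map, so almost every point of a positive-measure set
# A = [a, b) must eventually revisit A.
def first_return_time(x0, alpha, a=0.0, b=0.1, max_steps=10_000):
    """Iterate the rotation from x0 in A until the orbit re-enters A."""
    x = x0
    for n in range(1, max_steps + 1):
        x = (x + alpha) % 1.0
        if a <= x < b:
            return n, x
    return None, x

alpha = math.sqrt(2) - 1            # an irrational rotation number
n, x = first_return_time(0.05, alpha)
# The orbit does return to A = [0, 0.1), exactly as the theorem requires
# (here after 12 steps), even though no state is ever visited twice.
```

No exact self-intersection occurs, but states of positive probability recur to any positive-measure neighborhood—which is all the reviewer’s argument needs.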

At least qualitatively, much of the dynamics is in 19th century neuropsychology as well. Churchland thinks it shames philosophy:

“Notably, and despite its primacy, that synapse-adjusting space-shaping process is almost wholly ignored by the traditions of academic epistemology, even into these early years of our third millennium.” (13)
We get “a competing conception of cognitive activity, an alternative to the “sentential” or “propositional attitude” model that has dominated philosophy for the past 2,500 years.” (14)

He is right about that, but perhaps wrong to lodge it as a complaint against “academic” philosophy or as a virtue of his alternative. Epistemology addresses lots of normative questions about knowledge, belief, and inference, from attempts at “analysis” of those notions, to questions of justification or warrant, to questions of reliable methods of deliberate inference. Most of these questions are not about how the brain works, not about psychology or neuropsychology, and while knowing how the brain works might inform us in some ways about the limits of individual cognition, it is not going to answer, or even address, most of those questions. Maybe they are the wrong questions—I think those about “analysis” are—but it is hard to dismiss questions such as “Does the Standard Model predict the existence of the Higgs boson?” and “How is that prediction obtained?” and “Do the CERN observations confirm the existence of the Higgs boson?” and “Is the Continuum Hypothesis independent of the other axioms of set theory?” as somehow wrong-headed. And if they are not wrong-headed, then questions about the logical or semantical or statistical characteristics of prediction, derivation, entailment, and confirmation cannot be wrong-headed. For millennia philosophers—Aristotle, Descartes, Hobbes, Leibniz, Hume, Kant—tied those kinds of questions (not about the Higgs or the Continuum Hypothesis, of course) to speculations about how the mind works. George Boole seems the first to have realized that the laws of thought, whatever they may be, are not the laws of logic, probability, or causality. Despite substituting “patterns of neural activation” for “ideas,” Churchland’s philosophical project is fundamentally revanchist.

Theories, he says, are in the head. They are not abstract entities, or thoughts, expressed in language; they are collections of activation states. Activation states, so far as I know, can have effects, but they do not have logical consequences. So we can expect no account of logical entailment, and no account of what it is a mathematician understands when she recognizes a proof. One might expect, at this point, some reflection on Frege’s anti-psychologistic arguments in the preface to his Grundlagen; instead Churchland settles for snuggling up with a coterie of the confused and vacuous, and I will bite my tongue here. He does write that “the lexicon of public language gets its meanings…from the framework of broadly accepted or culturally entrenched sentences in which they figure, and by the patterns of inferential behavior made normative thereby.” Public languages have “semantic content” (28, 29). Indeed. So we have two things: theories, which are in the head and have no consequences, and public language, which has semantic content and consequences. Why is the theory that which is in someone’s head rather than the semantic content of thoughts or their public expression? It would seem that what is in the head enables thinking, enables language learning, enables theory expression, enables reasoning to consequences, enables following normative rules, but the theory that one reasons about, with its semantic relations and logical entailments, is not any complex of activation states. There are resources Churchland has available but does not use, chiefly Humean. He might hold that activation states that realize thinking one thought compel succeeding states that realize thinking another thought, and that is what it is for one thought to be a necessary consequence of another. He might hold that there are culturally learned patterns of association that compel one thought-representing state to follow another.
But either recourse runs afoul of the lesson Boole revealed toward the end of The Laws of Thought. The laws of thought are not the laws of logic. People make logical (and semantical) errors, and some people do so systematically. Perhaps Churchland was wise to eschew these recourses, but he provides no others.

We have, says Churchland, a “conceptual” network, by which I think he means collections of synaptic strengths among related neurons that can be active together and whose activity influences the activity of other collections of neurons. He argues that that organization cannot be innate; it must be learned. His argument is from combinatorics: there are not enough genes to specify the relations. That is a sad argument, however correct the conclusion. Churchland knows, or ought to, that it is not genes alone that determine cellular structure, metabolism, location, and connections, but gene expression, and the combinatorics of possible gene expressions are enormous.
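The arithmetic behind both the combinatorial argument and the rejoinder is easy to display. A back-of-the-envelope sketch (my figures, which are only order-of-magnitude folklore, not the book’s):

```python
# Back-of-the-envelope arithmetic (my illustrative figures, not Churchland's):
# genes counted one by one cannot index the brain's synapses, but even crude
# on/off patterns of gene expression over those same genes vastly outnumber them.
GENES = 20_000          # rough count of human protein-coding genes
SYNAPSES = 10 ** 14     # rough count of synapses in an adult human brain

# The combinatorial argument: ~2e4 "instructions" cannot specify ~1e14 relations.
genes_fall_short = GENES < SYNAPSES                   # True
# The rejoinder: binary expression patterns alone number 2**20000 per cell,
# a quantity with roughly 6,000 decimal digits.
expression_patterns = 2 ** GENES
expression_suffices = expression_patterns > SYNAPSES  # True
```

So the conclusion that conceptual organization is learned may well be right, but the counting argument for it does not go through once gene expression is in the picture.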

Churchland engages academic philosophy in challenging C.I. Lewis’ much-debated analysis of knowledge: true, justified belief. His argument is that beliefs must be propositions, and justification and truth must be of propositions, and we and animals and babies know lots of things that are not propositions and lots of facts the knowledge of which we cannot possibly justify. I think it is correct that we and non-human creatures know lots of facts that the knower cannot justify, although, properly situated, a third party might justify the knower’s knowledge. But the stuff about language and belief is a confusion, albeit one prompted by a lot of philosophical literature. We can only state what is believed (or the object of fear or desire or any other “propositional attitude”) by ourselves or others, including animals, by inserting a sentence—Fido believes he might find food under the table. That does not imply that Fido has language, only that Fido has a thought, and that thought has for Fido a special status, belief. That having a thought is having a particular activation state, or one of a set of such states, may be true, but it is not to the point. The attribution of thought could be false for several reasons; in particular, Fido could be the kind of creature that acts without thought but entirely with conditioned responses, or the kind of creature that has thoughts but all of which have the status of imaginings, not beliefs. I tend to think pigs and dogs and cattle and parrots have thoughts and beliefs. Not so sure about chickens. Pretty sure fish do not, don’t ask me why.

There are, says Churchland, three levels of learning: the formation of synaptic connections forming a “conceptual framework” of prototype representations and categories; the dynamical change in “habitual modes of neuronal activation” resulting in the “systematic redeployment into novel domains of experience of concepts originally learned…in a quite different domain of experience”; and cultural change. (33)

The rest of the book is chiefly interesting details, some from science, some from conjecture, a lot from connectionist computer simulation, about how this is supposed to work. The details are modern, but they are an elaboration of a set of ideas that can be found pretty explicitly in late 19th century connectionist neuropsychology. Churchland’s framework is not original. That does not make it wrong. I tend to think the general framework is right, or close enough, and I have no complaint with detailing and speculating on relevant developments since the 19th century. The most original part, however, is Churchland’s claim that neural biology provides a general philosophy of science, and that is a non-starter. If you want to know how the Standard Model predicts the Higgs boson, or why Kepler’s planetary theory is better than Ptolemy’s, or why a mathematical proof is a proof, don’t look to activation states and synaptic connections. But if you want to know how someone thinks any of those things, you might—someday.


No punches pulled

This blog will post reviews and comments by C. Glymour, sometimes too blunt for archive journals, on current books and articles in philosophy of science.
