Stanley Cavell and Timothy Williamson: Must We Mean What We Say? And How?

This is an extension of earlier thoughts on Wittgenstein, particularly on how philosophers think of meaning and to what extent culture enters into it. I want to contrast Stanley Cavell, for whom culture is very nearly the starting point of philosophical investigation, and Timothy Williamson, for whom it seems to be a recurrent nuisance. Both claim very different aspects of Wittgenstein for their own projects. I side with Cavell.

“Must We Mean What We Say?” is very early Cavell, dating from 1957, before he had gotten his PhD. I am not sure how widely read it is today, because it is written in the argot of the Ordinary Language Philosophy of the time (Cavell was a student of J.L. Austin’s). Although the essay goes far beyond Austin in its underlying concerns, Cavell is still working within an orthodoxy that he would soon transcend.

The signs are clearly already there, as Cavell concertedly links the technical aspects of Wittgenstein and Ordinary Language Philosophy to looser concerns of art, literature, and taste. He yokes the ideas of language games and social practice to somewhat Kantian ideas about the experience of art, beauty, and meaning. His skill in doing so is already manifest. His employment of technical discourse (here Wittgenstein, elsewhere psychoanalysis) never overshadows the literary humanist sense that comes to the forefront in his later work; Cavell sits in my mind next to William Empson, Erich Auerbach, and Northrop Frye rather than Austin or Ryle.

Notably, he draws out those aspects of Wittgenstein closest to this sensibility, which Austin and Ryle clearly did not possess: the amazement and bafflement at culture, the ability to be temporarily transported by a “game,” be it a work of art or a conversation, the sense of awe. Wittgenstein’s deployment of these moments was very sparing and always cautiously conditioned by his radical uncertainty. Cavell seems to possess more holistic certainty, and as Nightspore suggested in a comment, this allows parts of Wittgenstein’s work to come forward more fully in a way that Wittgenstein would never have allowed.

Cavell does defend Ordinary Language Philosophy from an attack by the logician and skeptic Benson Mates. I have not read the attack, but from Cavell’s quotes it seems somewhat more temperate than Ernest Gellner’s, though not much more sympathetic: it is akin to Timothy Williamson’s recent urgings that we forget about all those ordinary language anecdotes and platitudes and once more get down to solving logical and metaphysical issues for all time. Reading Williamson’s “Must Do Better” suggests that we haven’t come very far in the last 50 years:

What about progress on realism and truth? Far more is known in 2004 about truth than was known in 1964, as a result of technical work by philosophical and mathematical logicians such as Saul Kripke, Solomon Feferman, Anil Gupta, Vann McGee, Volker Halbach and many others on how close a predicate in a language can come to satisfying a full disquotational schema for that very language without incurring semantic paradoxes. Their results have significant and complex implications, not yet fully absorbed, for current debates concerning deflationism and minimalism. One clear lesson is that claims about truth need to be formulated with extreme precision, not out of kneejerk pedantry but because in practice correct general claims about truth often turn out to differ so subtly from provably incorrect claims that arguing in impressionistic terms is a hopelessly unreliable method. Unfortunately, much philosophical discussion of truth is still conducted in a programmatic, vague and technically uninformed spirit whose products inspire little confidence.

Precision is often regarded as a hyper-cautious characteristic. It is importantly the opposite. Vague statements are the hardest to convict of error. Obscurity is the oracle’s self-defense. To be precise is to make it as easy as possible for others to prove one wrong. That is what requires courage. But the community can lower the cost of precision by keeping in mind that precise errors often do more than vague truths for scientific progress.

In addition to the humdrum methodological virtues, we need far more reflectiveness about how philosophical debates are to be subjected to enough constraints to be worth conducting. For example, Dummett’s anti-realism about the past involved, remarkably, the abandonment of two of the main constraints on much philosophical activity. In rejecting instances of the law of excluded middle concerning past times, such as ‘Either a mammoth stood on this spot a hundred thousand years ago or no mammoth stood on this spot a hundred thousand years ago’, the anti-realist rejected both common sense and classical logic. Neither constraint is methodologically sacrosanct; both can intelligibly be challenged, even together. But when participants in a debate are allowed to throw out both simultaneously, methodological alarm bells should ring: it is at least not obvious that enough constraints are left to frame a fruitful debate.

When law and order break down, the result is not freedom or anarchy but the capricious tyranny of petty feuding warlords. Similarly, the unclarity of constraints in philosophy leads to authoritarianism. Whether an argument is widely accepted depends not on publicly accessible criteria that we can all apply for ourselves but on the say-so of charismatic authority figures. Pupils cannot become autonomous from their teachers because they cannot securely learn the standards by which their teachers judge. A modicum of willful unpredictability in the application of standards is a good policy for a professor who does not want his students to gain too much independence.

Timothy Williamson, “Must Do Better” (2004) [I wish he had called it "Must Fail Better"]

The details are different, but the resemblance to Mates’, Ayer’s, and yes, even Gellner’s criticism of the post-Wittgensteinian movements in analytic philosophy is uncanny, right down to the excoriation of mystic philosophical oracles. And Cavell’s defense could just as well apply to the unnamed folks whom Williamson is bashing:

But the philosopher who proceeds from ordinary language is concerned less to avenge sensational crimes against the intellect than to redress its civil wrongs; to steady any imbalance, the tiniest usurpation, in the mind. This inevitably requires reintroducing ideas which have become tyrannical (e.g., existence, obligation, certainty, identity, reality, truth . . . ) into the specific contexts in which they function naturally.

This is not a question of cutting big ideas down to size, but of giving them the exact space in which they can move without corrupting. Nor does our wish to rehabilitate rather than to deny or expel such ideas (by such sentences as, “We can never know for certain . . . “; “The table is not real (really solid)”; “To tell me what I ought to do is always to tell me what you want me to do . . . “) come from a sentimental altruism. It is a question of self-preservation: for who is it that the philosopher punishes when it is the mind itself which assaults the mind?

Stanley Cavell, “Must We Mean What We Say?” (1957)

This reintroduction that Cavell recommends inevitably carries with it all the ambiguity and unprovability that Williamson (and Gellner) detest. It comes as little surprise that Williamson’s take on Wittgenstein and Austin is rather off-the-mark:

A standard framework for description is an incipient theory; it embodies a view of the important dimensions of the phenomena to be described. Since Wittgenstein and Austin were notoriously suspicious of philosophical theory, they inhibited theory-making even of this mild kind. Of course, many philosophers of the period escaped their influence. Austin himself permitted philosophical theories, if they were not premature; it was just that he put the age of maturity so late.

Wittgenstein held that philosophical theories were symptoms of philosophical puzzlement, not answers to it, but that was itself one of his philosophical theories. His work was always driven by theoretical concerns. This applies in particular to his account of family resemblance terms, his specific contribution to the study of vagueness, as it does to Friedrich Waismann’s similar notion of open texture, developed under Wittgenstein’s influence. However, theory does not flourish when it must be done on the quiet. It needs to be kept in the open, where it can be properly criticized.

Timothy Williamson, Vagueness (1994)

Williamson’s demands impose positivistic, scientific criteria for theories that much of Wittgenstein’s work cannot meet, and I gather Williamson is happy to throw that work out and keep only what he deems satisfactory. But regardless of accuracy or inclusiveness, if the question comes down to whether I prefer Cavell’s Wittgenstein or Williamson’s Wittgenstein, the choice for me is obviously Cavell, as much as it must seem obviously Williamson to others. But I also don’t think that we know more about truth today than we did 50 years ago, at least not in any ordinary language sense of that claim.

And yet there is a worthy theory behind Cavell and Cavell’s Wittgenstein, but not one having to do with vagueness or predication. It is closer to the early Quine, and it certainly is miles from Williamson’s emphasis on referential semantics. It comes out toward the end of “Must We Mean What We Say?” and it speaks of a cultural, functionalist holism:

Few speakers of a language utilize the full range of perception which the language provides, just as they do without so much of the rest of their cultural heritage. Not even the philosopher will come to possess all of his past, but to neglect it deliberately is foolhardy. The consequence of such neglect is that our philosophical memory and perception become fixated upon a few accidents of intellectual history.

The mistake, however, is to suppose that the ordinary use of a word is a function of the internal state of the speaker.

I should urge that we do justice to the fact that an individual’s intentions or wishes can no more produce the general meaning for a word than they can produce horses for beggars, or home runs from pop flies, or successful poems out of unsuccessful poems.

Stanley Cavell, “Must We Mean What We Say?” (1957)

I take this to first propose an externalist, functionalist idea of meaning: what we “mean” when we say something has nothing to do with some private intention we may possess, and everything to do with the rules and standards of language use in our linguistic community. Cavell’s specific contribution is to say that if this is so, philosophy must take on the full burden of the linguistic and cultural history of our community, which includes (and even privileges) the difficult and arcane effects produced by literature. This is a huge responsibility, and no doubt a huge burden to those like Williamson who would rather examine meaning on a semantic or locally pragmatic level. Unfortunately, I think the burden of a more holistic pragmatism, one that inevitably requires heuristic inexactitude, is unavoidable.

A more formal attempt to describe this sort of functionalist pragmatism had already been given in 1948 by Wilfrid Sellars. Sellars later refined this vision to be considerably more complex, but already Sellars’ grasp of the problem in a non-skeptical way is inspiring. Rejecting empiricism, he describes a meeting of idealist and analytic traditions in a hybrid of metaphysical realism and linguistic idealism:

I like to think we have reformulated in our own way a familiar type of Idealistic argument. It has been said that human experience can only be understood as a fragment of an ideally coherent experience. Our claim is that our empirical language can only be (epistemologically) understood as an incoherent and fragmentary schema of an ideally coherent language. The Idealism, but not the wisdom, disappears with the dropping of the term ‘experience.’ Formally, all languages and worlds are on an equal footing. This is indeed a principle of indifference. On the other hand, a reconstruction of the pragmatics of common sense and the scientific outlook points to conformation rules requiring a [world-]story to contain sentences which are confirmed but not verified. In this sense the ideal of our language is a realistic language; and this is the place of Realism in the New Way of Words.

Wilfrid Sellars, “Realism and the New Way of Words”, in Pure Pragmatics and Possible Worlds (1948)

It is not that language defies all attempts to place it under precise understanding. It’s just that we are only local participants in a huge linguistic world to which we have only limited access, which makes the problem very, very hard, but also much richer than the problems posed by Williamson. Determinations of meaning are theoretically possible, but in practice inexact, though not indeterminate. We can still proceed with provisional, pragmatic investigations, much in the way that Peirce did, within Sellars’ overarching structure, which I think is a great achievement.

For contrast, see Williamson here, trying to localize problems of vagueness in meaning. Williamson’s view of this community of meaning is limited and emaciated by his demand for atomistic quantification. The bottom line is that I wouldn’t want to live in a world and a community in which language could be sufficiently quantified in the way that Williamson thinks it can.