David Auerbach on literature, tech, film, etc.

Stanley Cavell and Timothy Williamson: Must We Mean What We Say? And How?

This is an extension of earlier thoughts on Wittgenstein, and particularly about how philosophers think of meaning and to what extent culture gets involved in it. I want to contrast Stanley Cavell, for whom culture is very nearly the starting point of philosophical investigation, and Timothy Williamson, for whom it seems to be a recurrent nuisance. Both claim very different aspects of Wittgenstein for their own projects. I side with Cavell.

“Must We Mean What We Say?” is very early Cavell, dating from 1957, before he had gotten his PhD. I am not sure how widely read it is today, because it is written in the argot of the Ordinary Language Philosophy of the time (Cavell was a student of J.L. Austin’s). Although the essay goes far beyond Austin in its underlying concerns, Cavell is still working within an orthodoxy that he would soon transcend.

The signs are clearly already there, as Cavell concertedly links the technical aspects of Wittgenstein and Ordinary Language Philosophy to looser concerns of art, literature, and taste. He yokes the ideas of language games and social practice to somewhat Kantian ideas about the experience of art, beauty, and meaning. His skill in doing so is already manifest. His employment of technical discourse (here Wittgenstein, elsewhere psychoanalysis) never overshadows the literary humanist sense that comes to the forefront in his later work; Cavell fits in my mind next to William Empson, Erich Auerbach and Northrop Frye rather than to Austin or Ryle.

Notably, he draws out those aspects of Wittgenstein closest to this sensibility, which Austin and Ryle clearly did not possess: the amazement and bafflement at culture, the ability to be temporarily transported by a “game,” be it a work of art or a conversation, the sense of awe. Wittgenstein’s deployment of these moments was very sparing and always cautiously conditioned by his radical uncertainty. Cavell seems to possess more holistic certainty, and as Nightspore suggested in a comment, this allows parts of Wittgenstein’s work to come forward more fully in a way that Wittgenstein would never have allowed.

Cavell does defend Ordinary Language Philosophy from an attack by the logician and skeptic Benson Mates. I have not read the attack, but from Cavell’s quotes, it seems a bit more temperate than Ernest Gellner’s attack, but not all that much more sympathetic, akin to Timothy Williamson’s recent urgings that we forget about all those ordinary language anecdotes and platitudes and once more get down to solving logical and metaphysical issues for all time. Reading Williamson’s “Must Do Better” seems to indicate that we haven’t come very far in the last 50 years:

What about progress on realism and truth? Far more is known in 2004 about truth than was known in 1964, as a result of technical work by philosophical and mathematical logicians such as Saul Kripke, Solomon Feferman, Anil Gupta, Vann McGee, Volker Halbach and many others on how close a predicate in a language can come to satisfying a full disquotational schema for that very language without incurring semantic paradoxes. Their results have significant and complex implications, not yet fully absorbed, for current debates concerning deflationism and minimalism. One clear lesson is that claims about truth need to be formulated with extreme precision, not out of kneejerk pedantry but because in practice correct general claims about truth often turn out to differ so subtly from provably incorrect claims that arguing in impressionistic terms is a hopelessly unreliable method. Unfortunately, much philosophical discussion of truth is still conducted in a programmatic, vague and technically uninformed spirit whose products inspire little confidence.

Precision is often regarded as a hyper-cautious characteristic. It is importantly the opposite. Vague statements are the hardest to convict of error. Obscurity is the oracle’s self-defense. To be precise is to make it as easy as possible for others to prove one wrong. That is what requires courage. But the community can lower the cost of precision by keeping in mind that precise errors often do more than vague truths for scientific progress.

In addition to the humdrum methodological virtues, we need far more reflectiveness about how philosophical debates are to be subjected to enough constraints to be worth conducting. For example, Dummett’s anti-realism about the past involved, remarkably, the abandonment of two of the main constraints on much philosophical activity. In rejecting instances of the law of excluded middle concerning past times, such as ‘Either a mammoth stood on this spot a hundred thousand years ago or no mammoth stood on this spot a hundred thousand years ago’, the anti-realist rejected both common sense and classical logic. Neither constraint is methodologically sacrosanct; both can intelligibly be challenged, even together. But when participants in a debate are allowed to throw out both simultaneously, methodological alarm bells should ring: it is at least not obvious that enough constraints are left to frame a fruitful debate.

When law and order break down, the result is not freedom or anarchy but the capricious tyranny of petty feuding warlords. Similarly, the unclarity of constraints in philosophy leads to authoritarianism. Whether an argument is widely accepted depends not on publicly accessible criteria that we can all apply for ourselves but on the say-so of charismatic authority figures. Pupils cannot become autonomous from their teachers because they cannot securely learn the standards by which their teachers judge. A modicum of willful unpredictability in the application of standards is a good policy for a professor who does not want his students to gain too much independence.

Timothy Williamson, “Must Do Better” (2004) [I wish he had called it “Must Fail Better”]

The details are different, but the resemblance to Mates’, Ayer’s, and yes, even Gellner’s criticism of the post-Wittgensteinian movements in analytic philosophy is uncanny, right down to the excoriation of mystic philosophical oracles. And Cavell’s defense could just as well apply to the unnamed folks whom Williamson is bashing:

 But the philosopher who proceeds from ordinary language is concerned less to avenge sensational crimes against the intellect than to redress its civil wrongs; to steady any imbalance, the tiniest usurpation, in the mind. This inevitably requires reintroducing ideas which have become tyrannical (e.g., existence, obligation, certainty, identity, reality, truth . . . ) into the specific contexts in which they function naturally.

This is not a question of cutting big ideas down to size, but of giving them the exact space in which they can move without corrupting. Nor does our wish to rehabilitate rather than to deny or expel such ideas (by such sentences as, “We can never know for certain . . . “; “The table is not real (really solid)”; “To tell me what I ought to do is always to tell me what you want me to do . . . “) come from a sentimental altruism. It is a question of self-preservation: for who is it that the philosopher punishes when it is the mind itself which assaults the mind?

Stanley Cavell, “Must We Mean What We Say?” (1957)

This reintroduction that Cavell recommends inevitably carries with it all the ambiguity and unprovability that Williamson (and Gellner) detest. It comes as little surprise that Williamson’s take on Wittgenstein and Austin is rather off-the-mark:

A standard framework for description is an incipient theory; it embodies a view of the important dimensions of the phenomena to be described. Since Wittgenstein and Austin were notoriously suspicious of philosophical theory, they inhibited theory-making even of this mild kind. Of course, many philosophers of the period escaped their influence. Austin himself permitted philosophical theories, if they were not premature; it was just that he put the age of maturity so late.

Wittgenstein held that philosophical theories were symptoms of philosophical puzzlement, not answers to it, but that was itself one of his philosophical theories. His work was always driven by theoretical concerns. This applies in particular to his account of family resemblance terms, his specific contribution to the study of vagueness, as it does to Friedrich Waismann’s similar notion of open texture, developed under Wittgenstein’s influence. However, theory does not flourish when it must be done on the quiet. It needs to be kept in the open, where it can be properly criticized.

Timothy Williamson, Vagueness (1994)

Williamson’s demands pose positivistic, scientific criteria for theories that much of Wittgenstein’s work cannot meet, and I gather Williamson is happy to throw that out and keep only what he deems satisfactory. But regardless of accuracy or inclusiveness, if the question comes down to whether I prefer Cavell’s Wittgenstein or Williamson’s Wittgenstein, the choice for me is obviously Cavell, as much as it must seem obviously Williamson to others. But I also don’t think that we know more about truth today than we did 50 years ago, at least not in any ordinary language sense of that claim.

And yet there is a worthy theory behind Cavell and Cavell’s Wittgenstein, but not one having to do with vagueness or predication. It is closer to the early Quine, and it certainly is miles from Williamson’s emphasis on referential semantics. It comes out toward the end of “Must We Mean What We Say?” and it speaks of a cultural, functionalist holism:

Few speakers of a language utilize the full range of perception which the language provides, just as they do without so much of the rest of their cultural heritage. Not even the philosopher will come to possess all of his past, but to neglect it deliberately is foolhardy. The consequence of such neglect is that our philosophical memory and perception become fixated upon a few accidents of intellectual history.

The mistake, however, is to suppose that the ordinary use of a word is a function of the internal state of the speaker.

I should urge that we do justice to the fact that an individual’s intentions or wishes can no more produce the general meaning for a word than they can produce horses for beggars, or home runs from pop flies, or successful poems out of unsuccessful poems.

Stanley Cavell, “Must We Mean What We Say?” (1957)

I take this to first propose an externalist, functionalist idea of meaning: what we “mean” when we say something has nothing to do with some private intention we may possess, and everything to do with the rules and standards of language use in our linguistic community. Cavell’s specific contribution is to say that if this is so, philosophy must take on the full burden of the linguistic and cultural history of our community, which includes (and even privileges) the difficult and arcane effects produced by literature. This is a huge responsibility, and no doubt a huge burden to those like Williamson who would rather examine meaning on a semantic or locally pragmatic level. Unfortunately, I think the burden of a more holistic pragmatism, one that inevitably requires heuristic inexactitude, is unavoidable.

A more formal attempt to describe this sort of functionalist pragmatism had already been given in 1948 by Wilfrid Sellars. Sellars later refined this vision to be considerably more complex, but already Sellars’ grasp of the problem in a non-skeptical way is inspiring. Rejecting empiricism, he describes a meeting of idealist and analytic traditions in a hybrid of metaphysical realism and linguistic idealism:

I like to think we have reformulated in our own way a familiar type of Idealistic argument. It has been said that human experience can only be understood as a fragment of an ideally coherent experience. Our claim is that our empirical language can only be (epistemologically) understood as an incoherent and fragmentary schema of an ideally coherent language. The Idealism, but not the wisdom, disappears with the dropping of the term ‘experience.’ Formally, all languages and worlds are on an equal footing. This is indeed a principle of indifference. On the other hand, a reconstruction of the pragmatics of common sense and the scientific outlook points to conformation rules requiring a [world-]story to contain sentences which are confirmed but not verified. In this sense the ideal of our language is a realistic language; and this is the place of Realism in the New Way of Words.

Wilfrid Sellars, “Realism and the New Way of Words,” in Pure Pragmatics and Possible Worlds (1948)

It is not that language defies all attempts to place it under precise understanding. It’s just that we are only local participants in a huge linguistic world to which we have only limited access, which makes the problem very, very hard, but also much richer than the problems posed by Williamson. Determinations of meaning are theoretically possible, but in practice inexact, though not indeterminate. We can still proceed with provisional, pragmatic investigations, much in the way that Peirce did, within Sellars’ overarching structure, which I think is a great achievement.

For contrast, see Williamson here, trying to localize problems of vagueness in meaning. Williamson’s view of this community of meaning is limited and emaciated because of the limits imposed on it by his demands for atomistic quantification. The bottom line is that I wouldn’t want to live in a world and a community in which language could be sufficiently quantified in the way that Williamson thinks it can.


  1. Williamson apologist here (though an appreciative one). You attribute to him ‘urgings that we forget about all those ordinary language anecdotes and platitudes and once more get down to solving logical and metaphysical issues for all time’. I think it’s important to note that individual attempts at solutions to metaphysical issues aren’t expected to last long at all – there is, for all time, a truth of the matter that we can get closer to, but that’s all. For instance, he can accept your ‘ huge linguistic world to which we have only limited access’, and that his own answers to e.g. questions about vagueness are far from correct. (He doesn’t insist that language must be simple or easily made precise – he just requires that philosophical claims be clear.) His stance is that he does his best for now, and when he is shown that his answer is wrong in a way that he and any other competent person can understand, we are closer to the truth and everyone is better off.

    I think there only really needs to be trouble between Cavell’s approach and Williamson’s if Cavell tries to claim understanding of ‘the exact space in which [big ideas] can move without corrupting’. This sort of thing tends not to be universally and precisely teachable, and for Williamson that’s no good because it leads to ‘capricious tyranny’ (well, hopefully not! – but he thinks it’s unhealthy). Williamson can accept that there is such an exact space, and that he currently fails to work within it, but I don’t think he can accept jumping straight to an awareness of its boundaries as a possibility for philosophy. (Some individuals may manage it, but if they can’t say how, why should he or I care?)

    All that said, I think that Cavell’s point about ‘tyrannical’ ideas is also a good one that really does weaken Williamson’s position. A culture of producing, refuting and gradually improving very precise claims can easily end up being tyrannised by a few tyrant ideas that lie behind all or many of those precise claims. I’d guess that to begin with keeping your ideas few and broad aids communication, and then, as claims are only gradually attacked and altered, various common pivot points develop, towards which the strongest ideas migrate and become unassailable. But of course, Williamson accepts that we are very fallible – that’s why he thinks we have to adopt his careful method of ‘law and order’ in the first place. And attacks on the tyrants can be conducted through the Williamsonian methodology itself, once they are recognised for what they are.

  2. David Auerbach

    17 October 2011 at 00:45

    Thanks C; I appreciate the reply.

    I was caricaturing Williamson a bit with that phrase, admittedly. (Obviously I take Williamson more seriously than I take Derrida….) I agree with much of what you say, yet I think Williamson falls short of the very metrics he sets up, methodological and otherwise. Williamson, like many other metaphysical analytics, appeals to intuition quite a bit. I think it is his core mechanism to circumvent “the linguistic turn,” in fact: he considers the metaphysical questions reasonable and conceptually viable because most people think of them as being so. Yet epistemicism is not at all an intuitive position, and I don’t see that Williamson has ever identified why it is that intuitions against epistemicism are less worthy than intuitions against dialetheism. Perhaps he has explained this somewhere.

    Likewise, I don’t see that Vann McGee’s famed “counterexample” to modus ponens is substantive, given that it relies on appeals to intuition (one of which I reject) in order to make the non-intuitive claim that modus ponens has counterexamples. It’s not even that I think that MP is guaranteed 100% correct in natural language scenarios, but that McGee’s armchair methodology seems incredibly sloppy. I certainly don’t think it contributes to our understanding of “truth,” logical or otherwise.

    To put it another way, intuition seems just as much a hegemonic tyranny as many other devices. This is a general position and it doesn’t just apply to Williamson: I am baffled that Naming and Necessity can be thought of as a technical work of logic when it claims to “prove” dualism in its slim number of pages. Whatever my disagreements with them (and there are many!), Dummett, Quine, Sellars, Davidson, Wright, Priest, Chisholm, Horwich, Grice, Hintikka, Sosa, Lewis, and the Pittsburgh School all seem to me to have contributed much more to the Peircean pragmatic philosophical project than anyone coming out of the school with which Williamson identifies himself.

    I don’t expect to convince people of this view, certainly not here, but I hope it explains why I feel that Williamson’s own lack of charity toward Dummett and others is not grounded in any sort of rightful authority.

  3. I, in turn, mostly agree with your reply. I meant only to defend Williamson’s standards, not to argue that he lives up to them (I don’t know if I can really judge that).

    As for intuition, it looks like a lot of references to it are actually obscuring a strange procedure that relies on a mix of Quine, Darwin (or Spencer) and maybe Lakatos. It goes like this: there are philosophical theories, which are analogous to and sometimes mixed up with scientific ones; they compete for philosophical, and wider, respect; some die through internal incoherence, incompatibility with existing successful theories, or starvation of respect; if a theory, or part of it, survives, eliminates its competitors and becomes sufficiently central to (naturalistic) philosophy, then it is true and all its originators win prizes. Making claims about intuitions is often just a way of attempting to farm respect for your theory by insisting that it is most compatible with what is already widely held, and by suggesting that it already commands a great deal of respect anyway, perhaps even is already sure to win. (You don’t want to fail to back the winner, because that means you’re playing the same role in the history of science as Lamarck and Ptolemy, who we all know were both chumps.)

    Of course, it doesn’t really work like that – I’m not sure if anyone expects a real winner or merely an ideal one – but I think that claims of the type ‘these intuitions are stronger’ are meant to gain support rather than rationally demand it. The idea is that this is all OK in the end because what is rationally demanded is what is ultimately right by the best theory, which is the one that ultimately gains the most support. (I’m not sure McGee is arguing like this, though – probably there are more straightforward uses of intuition too.)

    I think that it’s possible to rely on intuition and still meet Williamson’s standards for communicability and non-slipperiness. Intuitions probably can’t be taught (as opposed to being passed on via exposure and repeated reinforcement, which sounds a bit tyrannical), but as long as the claims they’re meant to support are clearly and rigidly expressed, so their details can be taught to and evaluated by others, I don’t think it matters. It’s probably OK, having clearly expressed an idea, to then go on to try to drum up support for it by emphasis on your and others’ intuitions, though this could also result in tyranny if done wrong.

  4. This is very helpful, and of course I agree with you. (“Emaciated” is a good word, too.) I think “exact” in Cavell’s formulation “exact space” is tricky and possibly misleading. It’s a question of always trying to get the balance better, not to legislate with precision. If the space is too big, they corrupt. If too small they’re not given their due. So you should always try to do better, which is what makes both Cavell and Wittgenstein so liable to be confusing, much more so than Austin, who really does seem to think he’s gotten the space right once and for all.

    Helpful also for my understanding of Sellars, whom I like in a still-to-come way since I find him so howlingly difficult. (Which I regard as my problem and hope one day to overcome.)
