David Auerbach on literature, tech, film, etc.


Phenomenology of Punctuation

Raoul Hausmann, Phonetic Poem (1918)

Hello everyone and happy new year. While travelling in the last few weeks I had a conversation with the ever-acute Juliet Clark, who told me about the latest trends in editing. One piece of news is that semicolons between independent clauses are very out of fashion, even more than colons.

I was surprised, because I’ve always thought the colon was a more finicky piece of punctuation, but its more particular use case probably has kept a place for it while the semicolon has come to seem more superfluous and easily replaced with a period. That I feel that my semicolons should not be replaced with periods (not usually, anyway) doesn’t have much bearing on the semicolon’s popularity.

I use semicolons frequently enough that they’ve taken on a particular feel for me that is at most vaguely approximated inside anyone else’s head. Even when I’m not using them, uniformity of sentence rhythm can bother me; I’ll change up sentence structure and adopt more ornate phrasing to get the feel of a semicolon’s half-pause without actually using the character itself. So for me the semicolon also has the regulative function of releasing the accumulated pressure of the monotony of seemingly repetitious sentence patterns.

(Starting sentences with conjunctions also shifts the pacing, though I never do so for the clause right after a semicolon; wouldn’t this clause seem strange starting with an “and,” stranger than if I’d used a period instead of a semicolon just now? Conjunctions need to be capitalized to look right to me when they start independent clauses.)

Reading too much of the flat, staccato American fiction of the 1980s and 1990s caused me to cling desperately to a more flowing and/or baroque style when I was growing up. It wasn’t just limited to Raymond Carver and his kin, though. I had similar negative reactions to Iris Murdoch’s prose, which seems to stick far too often to a thudding subject-verb-object windshield wiper rhythm that sets my teeth on edge. For contrast, German strictly mandates the placement of parts of speech in such a way as to frequently yield free-form chaos on a word-by-word level, making such monotony rarer.

The vagaries of these perceptions of the flow and rhythm of punctuation are more particular than I could fully document. At least in the case of the Oxford comma, everyone knows that there’s no agreement as to how sentences with or without it should feel. But there seems to be the tacit agreement that usage of most punctuation has the same effect on speakers of the same language (well, within the same socio-economic class and dialect and geographical background and so on, but you see my point).

Which leads to the next, greater problem. Even the colon is out of fashion these days, frequently replaced by the em dash—at least when the colon’s not being given its strictest usage preceding a list or similar. And the allure of the em dash is a headache for me, because somewhere along the line I was taught that it was wrong to use an em dash just to provide a break in a sentence—like this, just now. I learned that the only proper use—and this is not a rule in Strunk and White, so it must have been a particularly insistent teacher somewhere along the line—was as a substitute for parentheses. Whichever Ancient Mariner taught me that rule was really irresponsible. Em dashes were suitable—no, required!—when using parentheses to offset an embedded clause would wrongly subordinate the clause. Parentheses were suitable for sotto voce asides or digressions, but not for crucial interjections. But I ceased to use the single, lonely em dash.

Unfortunately, the persistent sense that a single em dash was wrong-headed blinded me to the sense of it in prose over the years. Instead of having some sort of mental sense of a pause or a break, I’d just think “Whoop, casually incorrect usage” and proceed on. By never using it in my own writing, I didn’t gain any sense of how it shaped prose from the inside, and so it remained a mystery marker in others’ prose, never gaining a rightful sense of place in the lexicon of punctuation.

So now, much later on, I’m left having very little feel for how an unpaired em dash affects the flow of a sentence, or at least a feel that is vastly different from most people’s. When the punctuation is aberrant anyway—as in Tristram Shandy, say, or in Celine, which is probably where I first was preoccupied by the visual and phenomenological effects of punctuation on verbal pacing—it’s not such a problem, but in everyday writing, I’m left missing part of the sensus communis.

Of course this argument could be extended to all sorts of words and phrases as well….

Stanley Cavell and Timothy Williamson: Must We Mean What We Say? And How?

This is an extension of earlier thoughts on Wittgenstein, and particularly about how philosophers think of meaning and to what extent culture gets involved in it. I want to contrast Stanley Cavell, for whom culture is very nearly the starting point of philosophical investigation, and Timothy Williamson, for whom it seems to be a recurrent nuisance. Both claim very different aspects of Wittgenstein for their own projects. I side with Cavell.

“Must We Mean What We Say?” is very early Cavell, dating from 1957, before he had gotten his PhD. I am not sure how widely read it is today, because it is written in the argot of the Ordinary Language Philosophy of the time (Cavell was a student of J.L. Austin’s). Although the essay goes far beyond Austin in its underlying concerns, Cavell is still working within an orthodoxy that he would soon transcend.

The signs are clearly already there, as Cavell concertedly links the technical aspects of Wittgenstein and Ordinary Language Philosophy to looser concerns of art, literature, and taste. He yokes the ideas of language games and social practice to somewhat Kantian ideas about the experience of art, beauty, and meaning. His skill in doing so is already manifest. His employment of technical discourse (here Wittgenstein, elsewhere psychoanalysis) never overshadows the literary humanist sense that comes to the forefront in his later work; Cavell fits in my mind next to William Empson, Erich Auerbach and Northrop Frye rather than to Austin or Ryle.

Notably, he draws out those aspects of Wittgenstein closest to this sensibility, which Austin and Ryle clearly did not possess: the amazement and bafflement at culture, the ability to be temporarily transported by a “game,” be it a work of art or a conversation, the sense of awe. Wittgenstein’s deployment of these moments was very sparing and always cautiously conditioned by his radical uncertainty. Cavell seems to possess more holistic certainty, and as Nightspore suggested in a comment, this allows parts of Wittgenstein’s work to come forward more fully in a way that Wittgenstein would never have allowed.

Cavell does defend Ordinary Language Philosophy from an attack by the logician and skeptic Benson Mates. I have not read the attack, but from Cavell’s quotes, it seems a bit more temperate than Ernest Gellner’s attack, though not all that much more sympathetic, akin to Timothy Williamson’s recent urgings that we forget about all those ordinary language anecdotes and platitudes and once more get down to solving logical and metaphysical issues for all time. Reading Williamson’s “Must Do Better” seems to indicate that we haven’t come very far in the last 50 years:

What about progress on realism and truth? Far more is known in 2004 about truth than was known in 1964, as a result of technical work by philosophical and mathematical logicians such as Saul Kripke, Solomon Feferman, Anil Gupta, Vann McGee, Volker Halbach and many others on how close a predicate in a language can come to satisfying a full disquotational schema for that very language without incurring semantic paradoxes. Their results have significant and complex implications, not yet fully absorbed, for current debates concerning deflationism and minimalism. One clear lesson is that claims about truth need to be formulated with extreme precision, not out of kneejerk pedantry but because in practice correct general claims about truth often turn out to differ so subtly from provably incorrect claims that arguing in impressionistic terms is a hopelessly unreliable method. Unfortunately, much philosophical discussion of truth is still conducted in a programmatic, vague and technically uninformed spirit whose products inspire little confidence.

Precision is often regarded as a hyper-cautious characteristic. It is importantly the opposite. Vague statements are the hardest to convict of error. Obscurity is the oracle’s self-defense. To be precise is to make it as easy as possible for others to prove one wrong. That is what requires courage. But the community can lower the cost of precision by keeping in mind that precise errors often do more than vague truths for scientific progress.

In addition to the humdrum methodological virtues, we need far more reflectiveness about how philosophical debates are to be subjected to enough constraints to be worth conducting. For example, Dummett’s anti-realism about the past involved, remarkably, the abandonment of two of the main constraints on much philosophical activity. In rejecting instances of the law of excluded middle concerning past times, such as ‘Either a mammoth stood on this spot a hundred thousand years ago or no mammoth stood on this spot a hundred thousand years ago’, the anti-realist rejected both common sense and classical logic. Neither constraint is methodologically sacrosanct; both can intelligibly be challenged, even together. But when participants in a debate are allowed to throw out both simultaneously, methodological alarm bells should ring: it is at least not obvious that enough constraints are left to frame a fruitful debate.

When law and order break down, the result is not freedom or anarchy but the capricious tyranny of petty feuding warlords. Similarly, the unclarity of constraints in philosophy leads to authoritarianism. Whether an argument is widely accepted depends not on publicly accessible criteria that we can all apply for ourselves but on the say-so of charismatic authority figures. Pupils cannot become autonomous from their teachers because they cannot securely learn the standards by which their teachers judge. A modicum of willful unpredictability in the application of standards is a good policy for a professor who does not want his students to gain too much independence.

Timothy Williamson, “Must Do Better” (2004) [I wish he had called it “Must Fail Better”]

The details are different, but the resemblance to Mates’, Ayer’s, and yes, even Gellner’s criticism of the post-Wittgensteinian movements in analytic philosophy is uncanny, right down to the excoriation of mystic philosophical oracles. And Cavell’s defense could just as well apply to the unnamed folks whom Williamson is bashing:

But the philosopher who proceeds from ordinary language is concerned less to avenge sensational crimes against the intellect than to redress its civil wrongs; to steady any imbalance, the tiniest usurpation, in the mind. This inevitably requires reintroducing ideas which have become tyrannical (e.g., existence, obligation, certainty, identity, reality, truth . . . ) into the specific contexts in which they function naturally.

This is not a question of cutting big ideas down to size, but of giving them the exact space in which they can move without corrupting. Nor does our wish to rehabilitate rather than to deny or expel such ideas (by such sentences as, “We can never know for certain . . . “; “The table is not real (really solid)”; “To tell me what I ought to do is always to tell me what you want me to do . . . “) come from a sentimental altruism. It is a question of self-preservation: for who is it that the philosopher punishes when it is the mind itself which assaults the mind?

Stanley Cavell, “Must We Mean What We Say?” (1957)

This reintroduction that Cavell recommends inevitably carries with it all the ambiguity and unprovability that Williamson (and Gellner) detest. It comes as little surprise that Williamson’s take on Wittgenstein and Austin is rather off-the-mark:

A standard framework for description is an incipient theory; it embodies a view of the important dimensions of the phenomena to be described. Since Wittgenstein and Austin were notoriously suspicious of philosophical theory, they inhibited theory-making even of this mild kind. Of course, many philosophers of the period escaped their influence. Austin himself permitted philosophical theories, if they were not premature; it was just that he put the age of maturity so late.

Wittgenstein held that philosophical theories were symptoms of philosophical puzzlement, not answers to it, but that was itself one of his philosophical theories. His work was always driven by theoretical concerns. This applies in particular to his account of family resemblance terms, his specific contribution to the study of vagueness, as it does to Friedrich Waismann’s similar notion of open texture, developed under Wittgenstein’s influence. However, theory does not flourish when it must be done on the quiet. It needs to be kept in the open, where it can be properly criticized.

Timothy Williamson, Vagueness (1994)

Williamson’s demands pose positivistic, scientific criteria for theories that much of Wittgenstein’s work cannot meet, and I gather Williamson is happy to throw that out and keep only what he deems satisfactory. But regardless of accuracy or inclusiveness, if the question comes down to whether I prefer Cavell’s Wittgenstein or Williamson’s Wittgenstein, the choice for me is obviously Cavell, as much as it must seem obviously Williamson to others. But I also don’t think that we know more about truth today than we did 50 years ago, at least not in any ordinary language sense of that claim.

And yet there is a worthy theory behind Cavell and Cavell’s Wittgenstein, but not one having to do with vagueness or predication. It is closer to the early Quine, and it certainly is miles from Williamson’s emphasis on referential semantics. It comes out toward the end of “Must We Mean What We Say?” and it speaks of a cultural, functionalist holism:

Few speakers of a language utilize the full range of perception which the language provides, just as they do without so much of the rest of their cultural heritage. Not even the philosopher will come to possess all of his past, but to neglect it deliberately is foolhardy. The consequence of such neglect is that our philosophical memory and perception become fixated upon a few accidents of intellectual history.

The mistake, however, is to suppose that the ordinary use of a word is a function of the internal state of the speaker.

I should urge that we do justice to the fact that an individual’s intentions or wishes can no more produce the general meaning for a word than they can produce horses for beggars, or home runs from pop flies, or successful poems out of unsuccessful poems.

Stanley Cavell, “Must We Mean What We Say?” (1957)

I take this to first propose an externalist, functionalist idea of meaning: what we “mean” when we say something has nothing to do with some private intention we may possess, and everything to do with the rules and standards of language use in our linguistic community. Cavell’s specific contribution is to say that if this is so, philosophy must take on the full burden of the linguistic and cultural history of our community, which includes (and even privileges) the difficult and arcane effects produced by literature. This is a huge responsibility, and no doubt a huge burden to those like Williamson who would rather examine meaning on a semantic or locally pragmatic level. Unfortunately, I think the burden of a more holistic pragmatism, one that inevitably requires heuristic inexactitude, is unavoidable.

A more formal attempt to describe this sort of functionalist pragmatism had already been given in 1948 by Wilfrid Sellars. Sellars later refined this vision to be considerably more complex, but already Sellars’ grasp of the problem in a non-skeptical way is inspiring. Rejecting empiricism, he describes a meeting of idealist and analytic traditions in a hybrid of metaphysical realism and linguistic idealism:

I like to think we have reformulated in our own way a familiar type of Idealistic argument. It has been said that human experience can only be understood as a fragment of an ideally coherent experience. Our claim is that our empirical language can only be (epistemologically) understood as an incoherent and fragmentary schema of an ideally coherent language. The Idealism, but not the wisdom, disappears with the dropping of the term ‘experience.’ Formally, all languages and worlds are on an equal footing. This is indeed a principle of indifference. On the other hand, a reconstruction of the pragmatics of common sense and the scientific outlook points to conformation rules requiring a [world-]story to contain sentences which are confirmed but not verified. In this sense the ideal of our language is a realistic language; and this is the place of Realism in the New Way of Words.

Wilfrid Sellars, “Realism and the New Way of Words“, in Pure Pragmatics and Possible Worlds (1948)

It is not that language defies all attempts to place it under precise understanding. It’s just that we are only local participants in a huge linguistic world to which we have only limited access, which makes the problem very, very hard, but also much richer than the problems posed by Williamson. Determinations of meaning are theoretically possible, but in practice inexact, though not indeterminate. We can still proceed with provisional, pragmatic investigations, much in the way that Peirce did, within Sellars’ overarching structure, which I think is a great achievement.

For contrast, see Williamson here, trying to localize problems of vagueness in meaning. Williamson’s view of this community of meaning is limited and emaciated because of the limits imposed on it by his demands for atomistic quantification. The bottom line is that I wouldn’t want to live in a world and a community in which language could be sufficiently quantified in the way that Williamson thinks it can.

9 Tired and Wrong Received Ideas

Flaubert's Bouvard and Pécuchet, by Guy Davenport


These nine ideas are all wrong. (I believed many of these, whether explicitly or as an unstated assumption, at some point or another, so this post is directed at my past self as much as anyone.)

  1. The Greeks (Athens specifically) had a free direct democracy with open discussion, free of tyranny.
  2. Descartes formulated the fundamental concepts of rational subjectivity and selfhood under which we all still operate today, thus originating modernity.
  3. Enlightenment thinkers shared a rationalist, Panglossian optimism about controlling humanity and the state.
  4. The French Revolution was a seminal, epochal event that drastically and uniquely changed attitudes toward humanity, history, and politics.
  5. American religious fanaticism originates with the Puritans and associated peoples in the 17th and 18th centuries.
  6. Hegel’s dialectic is of the form “thesis-antithesis-synthesis.”
  7. Prior to the 20th century (or prior to Schleiermacher, Saussure, Wittgenstein, Derrida, etc.), language was taken to have determinate, definite meaning that directly referred to reality.
  8. Universal laws of Chomsky’s Universal Grammar, hard-wired into the brain, have been discovered, which apply to all known languages.
  9. A two part slippage of political terms (note how one term appears in both lists):
    1. Capitalism = libertarianism = free markets = laissez-faire = trickle-down = globalization = free trade = neoliberalism = liberalism = supply-side = mercantilism = etc.
    2. Communism = Marxism = Leninism = socialism = regulated market = welfare state = liberalism = Keynesianism = Great Society = etc.

These are some of the ones that I think about most often, ones that are taken seriously by some people I respect. (I’m not going to list “Barack Obama wasn’t born in the United States” or “Edward Said was a Muslim fundamentalist,” because I’m lucky enough not to deal with people who believe these things, and I’m trying to list these in order to change people’s minds, which would be impossible with anyone who believes those two.)

These ideas are frequently debunked or contested, but still I frequently hear them stated with blithe certainty. Even when the case is debatable, as with the French Revolution, there is such exaggeration of its singular importance that no event short of the Second Coming could fulfill the importance assigned to it.

Oversimplification is the main sin here. Two forms of it present here are origination and conflation. Origination states that a certain idea, concept, or practice began with a certain person or people at a certain time and place, and simply did not exist before that. Conflation simply packages together terms like “subjectivity” and “selfhood” and “rationalism,” so that an attack on one serves as an attack on all of them. And with both of these goes inflation, where the key idea/event/person is elevated to such singular importance that it becomes an excuse not to search for any lesser-known ideas/events/people that might serve to complicate matters.

While discussing Derrida’s critique of Husserl, I criticized Derrida for invoking a simplistic, received view of language, and then tarring huge swaths of the linguistic and philosophical tradition with it. I’m far from the first to make that critique, and he’s far from the first to make that move. It’s a variation on the straw man argument. Via conflation, the straw man is used against many opponents, not just one. (It’s far more efficient.) By finding the same straw man in thinker after thinker, entire traditions can be invalidated and subverted, so much the better to make the critique appear more sweeping, profound, and revolutionary. Derrida was taking after Heidegger here, who was the absolute master of this technique. (Presence is always present.)

But such straw man arguments aren’t necessarily used for critiques. The ideas above are used both positively and negatively. They are pieces of conceptual history that seem so widely accepted that in a hundred years, people may have trouble figuring out that these assumptions underlay so much contemporary writing. People often no longer bother to explain them or even to state them. As an analogy, Frederick Beiser has spent the last 20 years attempting to explain the impact of Jacobi and Lessing’s “Pantheism controversy” on the philosophy of Kant and most everyone else in that period. It was a huge imbroglio at the time, but people like me read Kant with no knowledge of it.

I’ll close with some wise words about conceptual generalization and simplification from Albert O. Hirschman, who inspired this post. Here he is remarking on Marx’s famous “history repeats, first as tragedy, then as farce” remark:

This is the second time I find a well-known generalization or aphorism about the history of events to be more nearly correct when applied to the history of ideas. The first time was with regard to Santayana’s famous dictum that those who do not learn from history are condemned to repeat it. Generalizing on the firm basis of this sample of two, I am tempted to formulate a “metalaw”: historical “laws” that are supposed to provide insights into the history of events come truly into their own in the history of ideas.

Does anyone else have particular favorite received ideas they’d like to give?

Hugh Kenner on Louis Zukofsky, Canadian Proofreading, the MLA, the OED, and Everything Else

Hugh Kenner was a very sharp, eccentric critic best known for his work on James Joyce and Ezra Pound (two writers whose critical apparati seem to welcome eccentrics more than most), but he also was responsible for a textbook on geodesic domes that demonstrated his reverence for R. Buckminster Fuller, and a tutorial book on the Heathkit computer. (Thanks to Dan Visel for educating me on those last points.) The Heathkit was a bit before my time, but let’s hear it for computer-polymath types: Kenner, J.M. Coetzee, Elvis Costello, Scott Miller, Ray Davis.

Kenner was also almost completely deaf, which had to have had a huge impact on how he perceived language, and a peculiar counterpart to Joyce’s near-blindness in the last decades of his life. It probably helped to account for his enthusiasm for Buster Keaton and Chuck Jones. (He wrote books on both.)

I was reading through his essay collection Mazes, which collects febrile bits and pieces from mostly popular magazines like Harper’s and Life and National Review (ugh), and while his opinions range from enlightening to crackpot, he does frequently pull out amazing anecdotes. A few that jumped out at me:

Critical Texts

[Edmund Wilson complains about his American classics project being suppressed by some MLA conspiracy.]

Edmund Wilson was especially funny about eighteen Twain editors reading Tom Sawyer, word for word, backward, “in order to ascertain, without being diverted from this drudgery by attention to the story or the style, how many times ‘Aunt Polly’ is printed as ‘aunt Polly,’ and how many times ‘ssst!’ is printed as ‘sssst!'” Since the MLA had ordained that “plain texts”–books you just read–were to await the establishment of “critical texts”–books that with full display of evidence sift out printer’s errors and restore lost auctorial revisions–we’d be waiting, he estimated, “a century or longer.”

The OED

Set promised trouble as early as 1881, when James Murray, the chief editor, came to doubt if the language contained a more perplexing word. An assistant had already spent forty hours on it, and Murray anticipated forty hours more. Set (the verb) was completed more than three decades later, and the time its final arrangement took Murray’s chief associate, Henry Bradley, was something like forty days, in the course of which he improvised twelve main classes with no fewer than 154 subdivisions, the last of which (set up) required forty-four further subsections.

The result, a treatise two-thirds as long as Paradise Lost, is from most points of view a triumph of ingenious uselessness, reminiscent of Yeats’s A Vision in being nearly impenetrable through sheer complexity of classification. Someone who had heard of hunters “setting” to fowl would toil long and hard through those columns en route to his quarry, low down in the final clause of #110: “set: to get within shooting distance by water.”


Canadian Proofreading

A newspaper editor once told me why proofreading standards in Canada declined in the 1940s. Reading proof–a dull underpaid job–had once kept retired clergymen from starving. It was when the aged clergy commenced to draw pensions that papers had no recourse save to hire less literate drifters.


William Empson and George Orwell

[Is this really true?]

Orwell’s wartime BBC acquaintance, William Empson, warned him in 1945 that Animal Farm was liable to misinterpretation, and years later provided an object lesson himself when he denied that 1984 was “about,” some future communism. It was “about,” Empson insisted, as though the fact should have been obvious, that pit of infamy, the Roman Catholic Church.


Mortimer Adler

In thirty years Adler’s Institute for Philosophical Research have only made a start on repackaging “the whole realm of the great ideas”–so far “two volumes on the idea of freedom; one volume each on the ideas of justice, happiness, love, progress, and religion; and a monograph on the idea of beauty”? That such books will help save mankind is a notion so high-minded it verges on self-parody.

[I remember reading something or other by Adler for a class in high school and writing a sneering dismissal of it, referring to him as “Morty” all the way through. I don’t remember anything about the content, but I suspect the sort of tone Kenner describes is what set me off.]


Louis Zukofsky

No one that I’ve known knew English half as minutely as the late Louis Zukofsky, who began its acquisition at twelve and kept the habit of looking up everything including “a” and “the.”

[Is it common knowledge that Zukofsky’s first language was Yiddish? I feel like I should have known this a long time ago.]

Roland Barthes

Barthes has little to say about real literature. He flutters brightly around its edges: “Proust and Names,” “Flaubert and the Sentence.” Its coercive powers exceed what the codes account for. And decade by decade we keep remaking it in replenishing its power to remake us.


Kafka: Diogenes


In my case one can imagine three circles, an innermost one, A, then B, then C. The core A explains to B why this man must torment and mistrust himself, why he must renounce, why he must not live. (Was not Diogenes, for instance, gravely ill in this sense? Which of us would not have been happy under Alexander’s radiant gaze? But Diogenes frantically begged him to move out of the way of the sun. That tub was full of ghosts.) To C, the active man, no explanations are given, he is merely terribly ordered about by B; C acts under the most severe pressure, but more in fear than in understanding, he trusts, he believes, that A explains everything to B and that B has understood everything rightly.

Kafka (tr. Kaiser/Wilkins)

I don’t see this parable mentioned too often, but it portrays the most severe internalization of some of Kafka’s obsessions. Everything is internalized. But while we have A, the controlling deity who answers to no one (shades of Jaynes!) and C, the unknowing worker, who is this B? Is it Klamm or the Mayor from The Castle? The father in “The Judgment”? The academy of “Report to an Academy”? Karl, Huld, or Titorelli in The Trial?

Yet it is B that chooses not to share the knowledge B gains from A with C, the very knowledge that would assuage C’s fear, or at least temper it with some sense of duty, responsibility, necessity, anything. Or does B? C does not get to ask B that question. Maybe B cannot explain to C what C does not understand. Maybe B is mediating between two entities that speak incompatible languages: one of command, one of action. Maybe C does not have an option other than to act, and C’s dreams of explanation are meaningless and cannot be satisfied. C only waits for the next order. B’s barked commands may be the only thing that C can understand. B, the messenger and interpreter, can never be sure of being properly understood. And what then of A?

And why is it that we are inside of C’s mind, while B and A are opaque? Are we reading only in C’s language?


© 2020 Waggish
