David Auerbach on literature, tech, film, etc.


Is Social Science a Joke?

Richard Biernacki’s book, cursed with the unwieldy title Reinventing Evidence in Social Inquiry: Decoding Facts and Variables, is frequently incisive, sometimes inspirational, and sometimes frustrating. Biernacki vigorously attacks the use of quantitative methods in social science, particularly as applied to texts. He finds their usage to be slapdash, prejudiced, and dependent on lumping disparate phenomena under a single label, often in whatever way happens to serve the researcher’s pre-ordained goal.

I have to cheer when he cites Erving Goffman and Clifford Geertz as spiritual guardians:

“Whatever it is that generates sureness,” Goffman intimated darkly, “is precisely what will be employed by those who want to mislead us.” Goffman left it to us to discern how the riddle of cognitive framing applies to sociological practice and to one’s framing of one’s own results. Geertz expressed a similar kind of caution more cheerfully: “Keeping the reasoning wary, thus useful, thus true, is, as we say, the name of the game.” The only intellectual building material is self-vigilance, not the reified ingredients “theory” or “method.”

Damn straight.

Biernacki’s points are very well-taken, and his individual critiques are devastating. He has little trouble justifying his main charge:

If you reconstruct how sociologists mix quantitative and text-interpretive methods, combining what is intrinsically uncombinable, you discover leg-pulling of several kinds: from the quantitative perspective, massaging of the raw data to identify more clearly the meanings one “knows” are important or, again, standardized causal interpretations of unique semiotic processes; to zigzagging between quantitative and interpretive logic to generate whatever meanings the investigator supposes should be there.

Each study was narrated as a tale of discovery, yet each primary finding was guaranteed a priori.

Where I have a problem is with his suggested retreat to a “humanist” mode of inquiry, which, while extremely attractive to people like myself, does not necessarily solve the underlying problem. I will explain this later.

The Indictment

Biernacki has a huge range of reading behind him and he quotes a number of people of whom I’m very fond: Robert Musil (who gets the last word in the book), Erving Goffman, Flaubert (Dictionary of Received Ideas), Michael Frede, Ronald Giere, Barrington Moore, William Empson, Jeanne Fahnestock, Wilfrid Sellars, Kenneth Burke, Samuel Beckett, Mary Douglas, Novalis, Cosma Shalizi, Eleanor Rosch, Valerio Valeri, Ludwig Wittgenstein, Andrea Wilson Nightingale, Erwin Panofsky, and Erich Auerbach. (Bibliography available online here.) Now that I’ve written it out, let me go further: that’s an amazing list.

I’m not particularly keen on most of his targets either, so we overlap sufficiently that I’m baffled at his elevation of Giorgio Agamben, whose attack on quantitative sampling is needlessly overwrought and jargony. Biernacki’s prose, unfortunately, tends toward the same. His thinking is in fact quite clear and rigorous, but the overlay of sociological jargon gets quite dense at times and needlessly prolongs things. (I’ll offer paraphrases of less transparent passages below.)

This applies to the general terms as well. Biernacki defines the social science term “coding” as follows:

Coding, a word that may introduce an aura of scientism, is just the sorting of texts, or of subunits such as paragraphs, according to a classificatory framework.

What the social sciences deem “coding”–the application of a common typological label to variable individual cases–would better be simply called “labeling” or perhaps “classification.” I prefer “labeling” because it is the simplest and the most informal. As Biernacki demonstrates, the research being carried out is anything but formal, and so building a fence around a particular textual method is misleading. While it may make it easier to delegitimize that particular method, it also limits the scope of his critique. It also makes it seem as though this process is distinct from the labeling we do every day of objects and actions, when I think any difference is one of degree and not of kind.

To make the broadness of the critique clear, my article The Stupidity of Computers describes very similar methods, except applied to people and objects as well as texts. I used “ontology” instead of “classificatory framework” and “labeling” instead of “coding,” but they’re fundamentally analogous. Or as I put it:

Who decided on these categories? Humans. And who assigned individual blogs to each category? Again humans. So the humans decided on the categories and assigned the data to the individual categories—then told the computers to confirm their judgments. Naturally the computers obliged.

The Stupidity of Computers
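The circularity is mechanical enough to sketch in a few lines of code. Everything below (the categories, the blogs, the “validation” function) is invented for illustration; the point is only that when the check is defined in terms of the hand-coded labels themselves, agreement is guaranteed:

```python
# Hypothetical illustration: humans define the categories, humans assign
# the items, and the "objective" computation can only echo those judgments.

categories = {"political", "personal", "technical"}  # chosen by the researcher

# Hand-coded "ground truth": the researcher labels each blog.
hand_coded = {
    "blog_a": "political",
    "blog_b": "personal",
    "blog_c": "technical",
}

def machine_confirmation(labels):
    """'Validate' the coding by checking it against... itself."""
    agreement = sum(1 for item, cat in labels.items() if cat == labels[item])
    return agreement / len(labels)

print(machine_confirmation(hand_coded))  # 1.0: the computer obliges
```

The validation step looks quantitative, but no outcome other than perfect confirmation was ever possible.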

If anything, things seem worse in academic sociology, which is the field Biernacki treats. I am not familiar with the subfields Biernacki investigates and after his dip into those waters, I don’t have much desire to become familiar with them. Here is Biernacki’s brief:

Ironically, researchers who visualize a pattern in the “facts” often assert it symbolizes an incorrigible theory for which no data were required anyway.

They would turn meaningful texts into unit facts for the sake of converting these units back into meanings. What are the epistemological functions of the curious process of decontextualizing for the sake of recontextualizing? Cumulating the coding outputs purchases generality only if we know the codes rest on justifiable equivalencies of meaning, which is to return us to the original verbal settings that may vary incommensurably.

Paraphrase: sociologists are engaging in circular reading of texts. They squeeze a corpus into their frameworks and then reapply the frameworks to specific examples to produce pre-ordained results.

My thesis is that coding procedures in contemporary sociology, the beachhead for coding texts that is spreading into history and literature, follow the rites by which religious believers relabel portions of the universe in a sacred arena for deep play. As in fundamentalist religious regimes, rejecting the enchantment of coding “facts” is nothing less than blasphemy.

Paraphrase: precisely because of their lack of any more fundamental support, the frameworks are sufficiently shaky that they are protected by hierarchical social structures that emerge around vulnerable belief systems, shutting down critics and elevating allies/toadies/grad students. For less opaque examples, see the conservative movement’s classification of “liberal” bias, or much of the talk that constitutes privilege-checking. Both utilize postulated frameworks supported by mantric rhetoric and repetition to obscure the lack of conceptual support. (And yes, I know the former is far more harmful, but today’s Right doesn’t have a monopoly on all forms of stupidity, since a large number of people have not realized that this chart is a joke.)

The ultimate point of this book is to stand social “science” on its head as less rigorous than humanist approaches. The social “scientists” of culture, those claiming a kind of epistemological advantage via their coding apparatuses, are instead intuitive cultists without openly sharable procedure. Opposite much orthodoxy, humanist craft workers who footnote and who convey symptomatically the wondrous in their readings are truer to the ideals of so-called hard science conventionally understood. As I endeavor to show, the nonsystematizing humanists still appreciate the obstacles to induction, the gift of an acute trial, the insurance of shared documentation, and the transformative power of anomalies. My brief is not the cliché that humanist interpretation aims at insight different in kind. More subversively, I insist such interpretation better fulfills the consecrated standards to which social “scientists” ostensibly subscribe.

Paraphrase: the use of quantitative metrics in social science is usually decorative frosting utilized in order to make preconceived notions seem more objective. In actuality they’re rigged games. A thoughtful, passionate, genuinely humanist approach is more scientific than vacuous tables.

It is more transparent, therefore more faithful to inquiry, to assume radical difference in a population than to rush toward aggregating modern “facts” out of corpuses whose members are artificially assumed to have homologous structures.

He’s talking about texts here, but this would apply to any grouping of anything. How to put this into practice is a much thornier question.

The Evidence

Biernacki then presents three case studies of prominent papers in recent sociology. He has done the legwork of looking through the original sources to see how “objective” the classification process was. The results are disastrous. All three are littered not just with slanted interpretations, selective omissions, and poor fits, but with outright errors and holes in logic. The demolition is extremely thorough, and the time required to do the research might have boosted Biernacki’s ire further. Here are representative examples from the three cases.

Bearman and Stovel, Becoming a Nazi: A Model for Narrative Networks (2000)

All the network data were extracted from a single Nazi story, but it was not an actual autobiography from Abel’s collection. Help from Peter Bearman together with detective hunting established that the researchers coded instead from “The Story of a Middle-Class Youth,” a condensation published in an appendix to Abel’s book in 1938. Although the intact story was at hand for Bearman and Stovel, and although they had secured English translations of complete stories from the Abel collection, they coded instead from an adaptation that indicated with ellipses where connecting segments had been deleted.

Bearman and Stovel adopt the same vocabulary to describe their own scientific outlook as they apply to a Nazi. They feature “abstraction” for converging on the essential: “Comparison within and across narratives necessitates abstraction . . . This is accomplished by grouping elements into equivalency classes” [83; see also 20]. When the researchers present the Nazi cognitive style, “abstraction” is again the key feature, but now using it to “order experience” is a character defect [85]. It is not we as network reductionists who have a rigid response in analyzing qualitatively incomparable situations, it is the Nazis with a “master identity” who do. [NB: They also complain about another researcher’s “abstraction”: “Real lives are lost in the process, and real process is lost in the movement away from narrative by this abstraction.”]

Wendy Griswold, The Fabrication of Meaning: Literary Interpretation in the United States, Great Britain, and the West Indies (1987)


This presentation, which appeared in 1987 in sociology’s most exacting journal, was greeted far and wide as offering confirmable and generalizable results. It remains probably the most broadly circulated classic whose findings rest on systematic coding of text contents.

Griswold combined the reviews from each of her three regions—the United States, Great Britain, and the West Indies—to see if she could explain why some of George Lamming’s novels resonated more powerfully than others in her sample of reviews of his six novels in all. She guessed that “ambiguity” would not only engross readers in disambiguating the novels, but doing so would stimulate appreciative reviews. This just-so account presumes we can know what ambiguity is according to its function rather than by its verbal expression in a review. How exactly does creative engagement by the critics appear when articulated on the page of a book review? What is ambiguity on site? The blurring of appealing scientific hypothesis-testing with exegesis of highly compacted reviews produced a baffling gap: Griswold did not offer an example from her evidence to concretize this entity called “ambiguity,” yet social scientists propagated news about the abstraction in every direction.

When I took reviews in hand, it astonished me to find that at the individual level ambiguity is “specifically mentioned” (to my mind) primarily when the reviewer expresses frustration and disappointment. This dislike of ambiguity more often pushed a review over to a mixed or negative appraisal of a novel, reverse from Griswold’s report of correlations at the aggregate level…. Consider how baffling it is to identify “ambiguity” and “positive appraisal” on the ground.

If a resonant review, like a seminal novel, is multidimensional, and if the reviewer therefore does not try to locate the book on a metric of approval, the overall categories “positive,” and “mixed/negative” are not there in the text ready for translation. The summary is only a fabrication of the social “scientist.”

More subtly, by introducing the binary of colonialism as present or absent, the ritual cordons off the reality that it was daunting for British critics to avoid incorporating the relations of a concept as permeating as colonialism. Griswold never illustrates what counts as mention of colonialism or of any other theme.
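The reversal Biernacki found in Griswold’s data, where ambiguity tracks with disappointment in individual reviews yet correlates with positive appraisal in the aggregate, is a familiar statistical trap: pooling heterogeneous groups can flip the sign of a correlation. A minimal sketch, with wholly invented numbers, of how that happens:

```python
from statistics import mean

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented scores: within each group of reviews, more "ambiguity" (x)
# goes with a *lower* appraisal (y)...
group1 = ([1, 2, 3], [10, 9, 8])
group2 = ([11, 12, 13], [20, 19, 18])

# ...but the second group is higher on both axes, so pooling the groups
# produces a *positive* correlation overall.
pooled_x = group1[0] + group2[0]
pooled_y = group1[1] + group2[1]

print(corr(*group1))             # -1.0 within group
print(corr(*group2))             # -1.0 within group
print(corr(pooled_x, pooled_y))  # positive when pooled
```

An aggregate-level correlation, in other words, licenses no conclusion about what any individual reviewer was doing, which is exactly why Biernacki’s return to the individual reviews was decisive.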

John Evans, Playing God?: Human Genetic Engineering and the Rationalization of Public Bioethical Debate (2002)

To launch the sampling and coding ritual, we have to take up a schizophrenic consciousness between the quantitative-scientific and the humanistic-interpretive perspectives. We cannot acknowledge in one frame what we do in the other. Evans wrote that “the two foremost proponents of the form of argumentation in the bioethics profession as I have defined it,” Beauchamp and Childress, are not among authors charted as statistically influential. Indubitable knowledge from the humanist frame does not impinge on the “scientific” procedure for equating influence with citations.

Evans in the 2002 book Playing God produced importantly different diagrams out of the same data inputs as in the 1998 dissertation “Playing God.” How did this change transpire? For the 1992–1995 interval of debate, Evans raised the threshold for inclusion as an influential author in the cluster diagram from nine citations in the dissertation to ten in the book. This chart trimming changed the storyline significantly. For instance, the sociologist Troy Duster, whose work seems to run contrary to Evans’s thesis for the final period, 1992–1995, is among several other authors who dropped out of the diagram.
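The chart trimming is easy to picture: with a hard citation cutoff, an author sitting exactly at the old threshold silently disappears when the threshold rises by one. The counts below are invented for illustration (only the Duster example comes from Biernacki’s account):

```python
# Hypothetical citation counts; "duster" sits right at the old cutoff of 9.
citations = {"beauchamp": 40, "ramsey": 25, "duster": 9, "fletcher": 12}

def influential(counts, threshold):
    """Authors admitted to the cluster diagram at a given cutoff."""
    return {author for author, n in counts.items() if n >= threshold}

print(influential(citations, 9))   # includes 'duster'
print(influential(citations, 10))  # 'duster' vanishes from the diagram
```

Nothing in the data changed; a one-unit shift in an arbitrary parameter rewrote who counts as “influential,” and with it the storyline.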

For a self-fulfilling prophecy Playing God filters out the epistles most pertinently aimed at the public. “If an item did not contain four or more citations, it was not included in the sample, because the primary technology of a citation study is measures of association between citations. I examined 765 randomly selected items from the universe. Of these, 345 fit the parameters for inclusion” [G 208].

“In my research,” Evans wrote, “the question was which top-cited authors were most similar to each other based on the texts that cited them” [G 209]. Similar how? Decades ago the analytic philosopher Nelson Goodman convincingly showed “similarity” lacks sense beyond particular and incommensurable practices of contrast and comparison. Whatever might we be talking about when we demonstrate what relative “influence” means by frequency citations and when we have no operative concept of influence outside this arbitrary measurement? As with ritual process, the models of citation counts merely bring to life a visual experience of a symbol’s use and substitute for the symbol’s conceptual definition.

Evans quotes Jonathan Glover as follows: “What he [Glover] envisions is a ‘genetic supermarket,’ which would meet ‘the individual specifications (within certain moral limits) of prospective parents’” [G 161]. Here again, findings appear to emerge by mischance. The words Evans attributed to Glover occur in a passage of Robert Nozick’s libertarian Anarchy, State, and Utopia, which Glover happened to quote before advancing toward a different position.

The kicker comes with a particularly noxious passage from Evans’ book, revealing the deep-seated self-justifying elitism at work in Evans’ a priori theorizing. Biernacki writes:

If my framing of Playing God as a ritual affirmation were plausible, we would predict that the policy recommendations with which the book concludes, while impracticably “utopian” [G 198], would impart an essential verity. That happens when Evans dismisses the need for real-world brakes on how elites would match particular means to an array of ends, once those ends were chosen by the public:

“If an ends commission decided that its ends to forward in genetic research were beneficence, nonmaleficence, and maintenance of the current specificity of genetic change as possible in the reproductive act, I have no doubt that bioethicists could determine which, if any, forms of HGE [human genetic engineering] advanced these ends. [G 203]”

As you might suspect given the abstractness of “ends in themselves,” it seems unlikely their implementation is a neutral technical job entrustable to specialist intellectuals. The experts in deciding how to pursue a mandated goal would, by concretizing it, subject it to reinterpretation. Would not the means that elites chose to institutionalize populist HGE policy have ramifying implications for practice, and thus values, in other spheres of life, short-circuiting public deliberation? Dealing with these practical issues in ritual is beside the point of affirming the transhistorical message that deliberation over ends should be protected from instrumental degradation.

The quote Biernacki cites here is incredibly damning, evoking images of a bioethical Comintern insisting that its ends are right and proper. Evans is the sort of powerless person you do not want in power.1

More generally, all three come off as tendentious, obfuscatory, and disingenuous, using numbers as a smokescreen for their unjustified suppositions. Biernacki is dead-on in stating that with more classical humanist criticism, you get to see upfront what sort of conceptual abstractions are taking place, subjective and case-based as they may be. Here, they hide behind the guise of objective abstractions plugged into a computational framework. (Shades of Ann Coulter’s Lexis-Nexis searches.)

The Dangers

I don’t doubt that these three works are representative. And Biernacki’s most fascinating point is that this misuse of science plays directly into theories of cultural determinism that have become very common across the humanities and social sciences:

The same problem of mixing scientific controls with texts occurs in demonstrating the theory of cultural power. That proposed theory starts firmly within the interpretive perspective, because it makes categories of understanding the “variable” that interacts with the novel to produce an engrossing experience. As Kenneth Burke emphasized, in an ideology-saturated society, readers deal with a plethora of contradictory schemas from which they choose how to interpret a text. Alternatively, much important literature, such as Beckett’s plays in the 1950s, from inside its own lines blatantly models unprecedented schemas from which a reader may learn to decipher the work as a whole—“the absurd.” To probe the fabrication of meaning, the reading process might be analyzed more fruitfully as a rhetorical operation rather than as a social one. Kenneth Burke intimated that inquiry into the schemas for reading might include syllogistic progression (step-by-step appreciation of a kind of argument pressing forward via the narrative), qualitative progression (the appreciation of feelings post-hoc from narrative action), antecedent categorical forms (such as “the sonnet”), or technical schemas (such as chiasmus and reversal). In any event, by underspecifying the cultural workings of the literary experience, we arrive at “society” as the default explanation of differences in the received meanings of the novels. The more you attend to the critics’ professional know-how and to the generative schemas with which they read, the weaker the rationale for leaping to a generally shared “percipience” to explain coding outputs. Sociologists since the nineteenth century have invested so much energy in solidifying “society” as a “cause,” they can invoke it without asking whether more tangible but less spirit-like forces may be operating.

Paraphrase: by reducing texts to a handful of ostensibly constituent effects and declaring them to constitute the text, researchers rob the texts of any power they might really have, using them as interchangeable totems for empty confirmation of unsubstantiated theories of cultural domination. Everything feeds back into a giant phantom of “culture” (or “capitalism” or “modernity” or “secularism” or take-your-pick) that ensures the identical outcome. Hence Biernacki’s point:

Ironically, researchers who visualize a pattern in the “facts” often assert it symbolizes an incorrigible theory for which no data were required anyway.

This is not only true, but even if they do not assert such, this is what’s going on anyway. There has to be some underlying theory conditioning the coding/labeling in the first place.

This complements Hans Blumenberg’s observations about the nature of generalized maladies. While Blumenberg emphasized the vagueness and generality of such overarching theories of discontent, Biernacki completes the thought by demonstrating that when the incorrigible theory is reapplied to specific cases, the specific cases become interchangeable.

In considering the prevalent openness to theories of ‘capitalism,’ one cannot fail to notice not only that there always seems to be a need for a causal formula of maximum generality to account for people’s discontent with the state of the world but that there also seems to be a constant need on the part of the ‘bourgeois’ theorist to participate in the historical guilt of not having been one of the victims. Whether people’s readiness to entertain assertions of objective guilt derives from an existential guiltiness of Dasein vis-a-vis its possibilities, as Heidegger suggested in Being and Time, or from the “societal delusion system” of Adorno’s Negative Dialectics, in any case it is the high degree of indefiniteness of the complexes that are described in these ways that equips them to accept a variety of specific forms. Discontent is given retrospective self-evidence. This is not what gives rise to or stabilizes a theorem like that of secularization, but it certainly does serve to explain its success.

Hans Blumenberg, The Legitimacy of the Modern Age

Biernacki’s point is that these theories not only accept a wide variety of specific forms, but that they also homogenize these forms. Cultural theory commodifies its subject matter.

Yet at this point the particular issue of quantification has fallen by the wayside in favor of the problem of incorrigible theories. For quantification per se, Biernacki’s evidence is less than ideal, because all three case studies contain such elementary errors in reportage and logic that they would be poor even if the quantitative aspects of the papers were removed. That is, I have no doubt that were Griswold or Evans to write a qualitative assessment of the texts they treated, they would not produce very good work either.

Biernacki is right to say that the scientific frosting obscures the poor quality of their work and exacerbates reductionistic tendencies toward cultural determinism, but the question of “coding” gets into problems that come up even in the absence of quantitative metrics, because coding is labeling, and labeling is what we do all the time.

The Solutions?

Though Biernacki limits the scope of his critique to labeling applied to texts, the arguments go through for ontologies applied to any phenomena. I think Biernacki gets into a muddle in trying to carve out texts as uniquely exempt from classification, contrasting words like “novel” with words like “dog”:

The intensional definition of “dog” is historically closed, whereas newly discovered literary works and financial instruments stretch and revise the anterior category of “novel” or of “a hedge-fund practice.” A previously unconsidered novel that stretches the distinctions between biography and fiction, for example, can remake the denotation of the label “novel.”

Intensions are dangerous things, and I think you could find that even seemingly clear concepts like “dog” can prove slippery in themselves. You would find more agreement among people, certainly, but who’s to say it’s enough? Labels are inherently unstable things. I think the very point of Biernacki’s book makes it impossible for him to draw such a clear-cut line. Biernacki sometimes seems to assume that a stable “code” label is being assigned to unstable and ambiguous “data,” but there’s no reason to suppose the label is in general that much more stable than the specific text. This is to enter philosophy of language issues that would derail this post entirely, so I will just leave matters at that unless someone wants to debate the point.

Consequently, the ultimate effect of Biernacki’s critique is to make the remaining space for quantitative science very small indeed. In this he is similar to Rudolf Carnap, whose requirements for science were so rigorous and unattainable that many philosophers of science (Popper among them) complained that he would put scientists out of a job. Certainly Griswold, Evans, and Bearman/Stovel come off much closer to Carnap’s idea of bad poetry (e.g., Heidegger) than science.

Contrariwise, I don’t see why the inclusion of quantitative measures in and of itself is a bad thing as long as the labeling is done in a sufficiently responsible way. Are interpretive reading and quantitative analysis “intrinsically uncombinable,” as Biernacki says? I admit that “sufficiently responsible” is a very high bar to clear. But while I agree that so-called “raw data” is a misnomer, there is a difference between medium-rare and well-done. I would like to see Biernacki apply his methods to far more intelligent usages of corpus linguistics, such as those performed by Martin Mueller, Eleanor Dickey, Ian Lancashire, or Brian Vickers. All work at a far lower lexical level than Biernacki’s subjects, and all are better scholars. (And none is a sociologist. Biernacki does take a few swipes at Franco Moretti for following Griswold’s bad tendencies, but mostly leaves literature alone.)

But I want to push in the opposite direction as well against Biernacki’s elevation of what he loosely terms humanist interpretation (much as I love it). It is interesting that Biernacki makes a claim of rigor for his humanistic methodology. This is very tricky. When I read Auerbach and Spitzer and Fahnestock, I certainly get the impression of intense intellectual rigor, but rigor applied both to the careful reading of texts and to the holistic grasp of the whole. That is, because of the great difficulties in labeling, rigor must be accomplished by having both

  1. a heuristic, intuitive feel for the whole of one’s field and beyond, stemming from vast reading and reflection, and
  2. a complementary sense of where one’s knowledge is incomplete, where variations might occur, and what should be left open and tentative.

The blunt use of statistics can cover up the need for either of these time-consuming and tenure-threatening processes. Punch a corpus into a computer and analyze it and your work “seems” complete without your brain needing to process all the ambiguities and elisions. Clearly that is unacceptable. But ruling out quantitative measures is not necessarily more rigorous. Biernacki thinks very highly of Weber, and I do as well, to a point. But Weber’s theory of secularization and disenchantment has ultimately been overadopted by less imaginative minds than his. I think and hope that Weber intended his theses to be provisional, to be reassessed and revised (just as scientific theories should be) with the passage of time and research, not mindlessly parroted by crypto-conservative postmodernists looking to smuggle religion back into intellectual discourse under the guise of “reenchantment.”

To put it another way: is the generalized, reductive application of Freud’s theories any better than the generalized, reductive application of the DSM-IV?

This is not a complaint against Weber as much as it is frustration with general intellectual incompetence. What I mean to stress here is that I’m not so sure that this intellectual incompetence is so different in kind from the sort of intellectual incompetence Biernacki exposes in his subjects. Both stem from sloppiness, laziness, and a sheer lack of creativity. So while Biernacki rightly praises Panofsky:

The historian Robert Marichal followed Panofsky’s thesis to explain why the style of breaks in Gothic letters on parchment appeared simultaneously with the same breaks in stone, intersecting ribs in Gothic vaults. Both shifts expressed an analysis of whole lines to cut them down and regroup them into clearer, hierarchically ordered parts of parts. Compare this depth of analysis to a quantitative argument about net trends in abstract codes. Such blurred social “science” is less stringent about the patterning required for confirmation and too indefinite to isolate productive anomalies. Again the humanist focus on precise designs draws it closer to the rigor of the “hard” sciences.

I still think he overstates his case somewhat, because the “codes” at work here are just as subject to dispute. They are, however, more explicit, and this is a good thing, as Biernacki says. The issue, however, is that such great humanist works as he identifies are by their very nature exceptions, works of prodigious and unique minds that cannot be replicated en masse. The weaker philological work of years past is, alas, very nearly as formulaic as some of the scholarship Biernacki condemns (though far less sloppy).

As a prescription for better work, the humanist traditions provide little help in the mass production of research other than to set the bar so high that most people should immediately drop out of the field. (Not that this would be a bad thing, necessarily.) But it makes his prescriptions very difficult to imagine practically, unless academia is to return to being an elite, cordoned-off field as it was prior to the postwar higher education boom. (Though that may well happen.)

I am being speculative here, and none of this dampens the force of Biernacki’s critique. It just steers his critique more in the direction of “Don’t use numbers to cover up your incompetence” rather than “Don’t ever use quantitative measures on texts.”2

Science, ideally speaking, provides a workable means for adjudication of disputes, and even occasionally consensus, that is less dependent on the most powerful person around dictating what’s right. To a point, Biernacki employed science, in tandem with humanistic close reading, in his book to undermine the very bad “science” of the works he examined. That, I think, is the best model going forward that we have.


  1. Perhaps not so powerless. Only after writing this entry did I discover that John Evans was involved in a UCSD scandal to attempt to prevent Biernacki from investigating Evans’ work. In 2009, UCSD’s Social Sciences Dean Jeffrey Elman threatened to censure and dismiss Biernacki on the grounds that Biernacki’s research “may damage the reputation of a colleague and therefore may be considered harassment.” Full story here. IHE article here. It is appalling that Jeffrey Elman has retained his position as Dean after sending such a letter. Needless to say, my support for Biernacki’s pursuit of this research is total.
  2. The sociological establishment is having an easier time attacking the second thesis, however, judging by Andrew Perrin’s nasty review. Perrin adopts a ridiculous “They aren’t trying to be scientific” defense, which leaves you wondering what all those charts are doing in the papers, as well as wondering what the point of such sociology is. Perrin also didn’t disclose that he is friends with John Evans until pressed in the comments.

Keir Elam: Shakespearean Without Taste

J.W. Lever’s Arden edition of Measure for Measure was a great help to me when I was writing that last post, and his, ahem, measured prose made for pleasant reading.

Now I’m going to pick on Keir Elam a bit, because his recent Arden edition of Twelfth Night bugs me. He writes in a verbose, inexact style that seems to strive for cleverness while making little real effort toward it.

Consider the lead sentences from various sections of Elam’s Twelfth Night:

Preface: “Editing a play, like staging a play, is a collaborative enterprise.”

Introduction: “A play, like a cat, has several lives.”

Gloss on First Line of Play: “1 music It is not by chance that this is the comedy’s opening noun.”

Appendix 1: “The script of a play is never definitive. It is a metamorphic creature, destined to undergo continuous change.”

Appendix 3: “Music is not a decorative addition to Twelfth Night but an essential part of the play’s dramatic economy.”

Or Elam’s fondness for ill-thought-out analogies and cliches:

Everything in a play text is filtered through language, the most important weapon in the dramatist’s armoury.

Twelfth Night is a play unusually aware of its destiny as a script for performance.

In the case of Twelfth Night, the spectator plays the part of co-protagonist.

Antonio’s homoeroticism is an open secret in contemporary performances.

The comedy offers a veritable anatomy of the most fashionable of humours, melancholy.

The liver, organ of passion, works overtime in Twelfth Night.

The play’s economy of space feeds into its poetics of place.

Or how about this one?

The body in Twelfth Night is not always an edifying text; it sometimes resembles an epidemiological treatise.

An unedifying treatise, I guess?

And sometimes Elam’s writing is just leaden:

The success of Twelfth Night onstage is in part demonstrated by the sheer number and frequency of productions.

A particularly important discursive role in the comedy is played by doors, virtual and (in performance) actual.

I love that parenthetical.

What these quotes raise is not a question of content or interpretation, or even of terminology, but a question of taste. Are we to entrust ourselves to a guide who writes like this? Can we expect him to be sensitive to the nuances of Shakespeare’s language and meaning when his ear for rhetoric appears hopelessly deficient?

Cultural Illogic: David Golumbia and The Cultural Logic of Computation

David Golumbia does not like computers. Toward the end of The Cultural Logic of Computation, after lumping computers and the atom bomb into a single “Pandora’s Box” of doom, he observes:

The Germans relied on early computers and computational methods provided by IBM and some of its predecessor companies to expedite their extermination program; while there is no doubt that genocide, racial and otherwise, can be carried out in the absence of computers, it is nevertheless provocative that one of our history’s most potent programs for genocide was also a locus for an intensification of computing power.

This sort of guilt by association is typical of The Cultural Logic of Computation. The book is so problematic and so wrong-headed as to be shocking, and as philosophical and cultural excursions into technological analysis are still comparatively rare, the book merits what programmers would term a postmortem.

Throughout the book, Golumbia, an English and Media Studies professor who worked for ten years as a product manager in software at Dow Jones, insists that computers are creating and enforcing a socio-political hegemony that reduces human beings to servile automatons. They aren’t just the tools of oppression; they oppress by their very nature. Golumbia attacks the encroachment by “computation” on human life. He defines “computation” as the rationalist, symbolic approach of computers and logic.

Or at least he seems to sometimes. Other times “computation” stands in for an amorphous mass of cultural issues that just happen to involve computers. Much of the book focuses on political issues that don’t bear on “computation” in the least, such as a tired attack on Thomas Friedman and globalization that adds nothing new to Friedman’s already-long rap sheet. Golumbia spends ten pages criticizing real-time strategy games like Age of Empires, complaining:

There is no question of representing the Mongolian minority that exists in the non-Mongolian part of China, or of politically problematic minorities such as Tibetans and Uyghurs, or of the other non-Han Chinese minorities (e.g., Li, Yi, Miao).

A true Hobbesian Prince, the user of Age of Empires allows his subjects no interiority whatsoever, and has no sympathy for their blood sacrifices or their endless toil; the only sympathy is for the affairs of state, the accumulation of wealth and of property, and the growth of his or her power.

The critique could apply just as easily to Monopoly, Diplomacy, Stratego, or chess.

Golumbia gives away the game, so to speak, when he implies that connectionism (a non-symbolic artificial intelligence approach used in neural networks) is somehow less politically suspect than the symbolic AI approaches he attacks. In fact, non-symbolic approaches like Bayes networks and neural networks are themselves used ubiquitously in the data mining he (rightly) worries about. Golumbia has confused science with scientism, and computers’ uses with their structure.

Without a critique of the technical side of computers, Golumbia’s book would be just another tired retread of Chomsky, Hardt/Negri, Spivak, Thomas Frank, and the like. Unfortunately, his actual excursions into technical issues are woefully uninformed. A surreal attack on XML as a “top-down” standard ends with him praising Microsoft Word as an alternative, confusing platform and application. He hates object-oriented programming because…well, I’m honestly not quite sure.

Because the computer is so focused on “objective” reality—meaning the world of objects that can be precisely defined—it seemed a natural development for programmers to orient their tools exactly toward the manipulation of objects. Today, OOP is the dominant mode in programming, for reasons that have much more to do with engineering presumptions and ideologies than with computational efficiency (some OOP languages like Java have historically performed less well than other languages, but are preferred by engineers because of how closely they mirror the engineering idealization about how the world is put together).

The lack of citation, pervasive throughout the book, makes it impossible even to pinpoint what this objection means. I’d be curious as to how he feels about functional languages like Lisp, ML, and Haskell, but Golumbia shows no signs of even having heard of them. Unfortunately, XML and object-oriented programming are pretty much his two main points of technical attack, which indicates a lack of technical depth.

Yet Golumbia’s greatest anger is reserved for Noam Chomsky. Golumbia devotes a quarter of the book to him, with Jerry Fodor serving as assistant villain. Somehow, Chomsky’s computational linguistics become far more than just a synecdoche for modern corporatism and materialism; Chomsky is actually one of the main culprits.

To Golumbia, Chomsky is “fundamentally libertarian”; he is an Ayn Randian “primal conservative” who accepted military funding. He has “authoritarian” institutional politics which require strict adherence to his “religious” doctrine:

Chomsky’s institutional politics are often described exactly as authoritarian.

[His work] tends to attract white men (and also men from notably imperial cultures, such as those of Korea or Japan).

The scholars who pursue Chomskyanism and Chomsky himself with near-religious fervor are, almost without exception, straight white men who might be taken by nonlinguists to be ‘computer geeks.’

Golumbia is evidently fond of the ad hominem. He also associates “geeks” with “straight, white men,” insulting 19th century programmer Ada Lovelace, gay theoretician Alan Turing, and the vast population of queer and non-white programmers, linguists, and geeks that exists today (many not even Korean or Japanese).

Yet Golumbia finds time to praise Wikipedia, founded and run by fundamentally libertarian Ayn Rand acolyte Jimmy Wales. It’s strange for Golumbia to call Wikipedia a salutary effort to demote expert opinion when Wales himself says it should not be cited in academic papers. And strange for Golumbia to see Wikipedia as progressive when many of its entries still come from that well-known bastion of hegemonic opinion, the 1911 Encyclopedia Britannica. (The explicitly racist ones have been scrubbed.)

Beyond the technological confusions, Golumbia’s philosophical background is notably defective. The book is plagued by factual errors; Voltaire is bizarrely labeled a “counter-Enlightenment” thinker, while logicians Bertrand Russell and Gottlob Frege somehow end up on opposite sides: Russell is a good anti-rationalist (despite having written “Why I Am a Rationalist”), Frege is a bad rationalist. (He also enlists Quine and Wittgenstein to his leftist cause, which I suspect neither would have appreciated.) He thinks Leibniz preceded Descartes. He misappropriates Kant’s ideas of the noumenal and mere reason.

Here is a typically confused passage, revealing Golumbia’s fondness for incoherent Manichean dichotomies:

In Western intellectual history at its most overt, mechanist views typically cluster on one side of political history to which we have usually attached the term conservative. In some historical epochs it is clear who tends to endorse such views and who tends to emphasize other aspects of human existence in whatever the theoretical realm. There are strong intellectual and social associations between Hobbes’s theories and those of Machiavelli and Descartes, especially when seen from the state perspective. These philosophers and their views have often been invoked by conservative leaders at times of consolidation of power in iconic or imperial leaders, who will use such doctrines overtly as a policy base.

This contrasts with ascendant liberal power and its philosophy, whose conceptual and political tendencies follow different lines altogether: Hume, Kant, Nietzsche, Heidegger, Dewey, James, etc. These are two profoundly different views of what the State itself means, what the citizen’s engagement with the State is, and where State power itself arises. Resistance to the view that the mind is mechanical is often found in philosophers we associate with liberal or radical views—Locke, Hume, Nietzsche, Marx.

So it is not simply the technological material that is the problem. The quality of even the academic, philosophical portions of the book is dismaying, and the general lack of evidence and citation is egregious. Harvard University Press, who published the book, have a fine track record in the general areas that Golumbia inhabits. I am not certain how The Cultural Logic of Computation slipped through, nor why so many of its blatant errors went uncaught. It is an embarrassment and will only confirm the prejudices of those who feel that the humanities have nothing to offer the sciences but spite and ignorance.

For contrast, Samir Chopra’s Decoding Liberation: The Promise of Free and Open Source Software (Routledge) is an excellent and rigorous examination of some of the political and social issues around software and software development, strong on both the technical and philosophical fronts. I would urge anyone looking at Golumbia’s book to read it instead.

Notes on The Future of Academia

This started as a comment on a post over on New Savannah, where Bill Benzon was discussing cognitive science researcher Mark Changizi’s decision to leave academia. But I think Changizi’s case is a red herring as far as the structural problems of academia go.

Changizi left because, despite having tenure, the whole nature of grants is such that they do not allow for work on potentially paradigm-shifting ideas, because such work has too great a chance of failure. He cites Vinay Deolalikar’s valiant but seemingly flawed proof that P≠NP as an example of the sort of work that can only be done outside academia.

But I don’t think the Changizi incident reflects anything new about academia. When people compare the problems in academia today with those of forty or fifty years ago, Changizi isn’t running up against anything novel: paradigm-shifting work has never gotten funding except when there was a clear military interest, in which case the floodgates (or cashgates) opened.

So when assessing academia, there are three interlinked but distinct factors here that vary independently by field:

  1. The Finance Factor: The ability to get funding for research in that field from anywhere other than a university.
  2. The Infrastructure Factor: The non-overhead resources (time, money, people, equipment) required for research in the field.
  3. The Prestige Factor: The field’s self-determined metric of success for research (influence, “impact,” prestige).

Literature, psychology, and computer science are affected in different ways by these factors. Even within a field, there are variances, which is why Deolalikar isn’t such a great example.

People like Deolalikar wander between academia and corporate research labs quite a bit, as there’s much closer coordination between them in the computer science world, the profit motive being far more obvious. Even beyond that, Deolalikar’s capital needs are very cheap: a living wage for himself, an office, etc. He didn’t need a “lab.”

Theoretical computer science issues like P=NP are akin to theoretical math, requiring little beyond pen and paper and a brain with very particular capacities.

On the other hand, applied computer science research can be tremendously expensive. So expensive that academia can’t provide the infrastructure even with funding. If you want to analyze the entirety of the internet or examine database issues with petabytes of data, acquiring and processing that amount of meaningful data is just not within the reach of a university. This may change in the future with joint efforts, but I suspect that corporations will always have some edge because the financial motive is so present (unlike, say, with supercolliders).

The financial motive is not always so immediately present, even within computer science. For things like neuroscience and psychology, where the profits are clearly possible but harder to predict, grants come into play. If you need a lab and funding for it, there will be politics to getting it, period. Research labs spend thousands of person-hours filling out grant applications in order to convince the pursestring-holders (the government, frequently) that they’re doing the “right” thing.

Where the finance factor is high, things haven’t changed that much, even with increases in bureaucracy. High-cost research will continue to be done within institutions as long as there’s profit in it. It will always be somewhat conservative because people with money want results for their research.

Where the finance factor is low, the infrastructure factor is also frequently low, because there’s nowhere to get money for infrastructure other than the university, and the university is unlikely to fund much that can’t be funded by other sources.

The exception is if the prestige factor is high. If the top people in a field have a huge impact on the world around them, then the university will invest money simply because it will draw attention and (indirectly) more money to the university. Economists, political scientists, and even (in Europe) anthropologists and philosophers: they frequently possess enough prestige outside of academia that they will continue to draw people and money because they are part of the larger society. Jurgen Habermas and Michael Ignatieff, for example. And success in these fields is partly measured by that sort of outside prestige. How could it not be?

So where things have changed is in fields which lack both external sources of funding and external prestige. Fields meeting these criteria:

  1. Funding Factor: Low
  2. Infrastructure Factor: Low
  3. Prestige Factor: Low

These are fields in which the measurement of a researcher’s success is determined near-exclusively by people within the field, and the researchers, even the top ones, have little pull outside of academia. Many of the traditional humanities meet these criteria today.

And these fields are in trouble in a way they were not fifty years ago, when they seemed to sustain themselves comfortably. But today, we see the demand for “impact” in the British university system:

Henceforth a significant part of the assessment of a researcher’s worth – and funding – will be decided according to the impact on society that his or her work is seen to have. The problem is that impact remains poorly defined; it isn’t clear how it will be measured, and the weighting given to it in the overall assessment has been plucked out of the air. It is a bad policy: it will damage research in the sciences and corrupt it in the humanities, as academics will have a strong financial incentive to become liars.

If no one really knows what impact is, it is at least clear what it isn’t: scholarship is seen as of no significance. What the government and Hefce are interested in is work that is useful, in a crudely defined way, for business or policy-making. The effect of impact will be to force researchers to focus even more than they do already on research that pays off – or can be made to appear as if it does – within the assessment cycle, rather than on fundamental work whose significance might take years, even decades, to be appreciated.

Iain Pears, LRB

This is a problem for the sciences as well, as it corporatizes the grant process and makes immediate results far more necessary. But it is a far, far greater problem for some of the humanities, which don’t really traffic in “results” of this sort. But when put this way, it doesn’t exactly seem surprising. Isn’t the better question why this sort of reckoning hasn’t happened until now?

The changing economic situation is obviously a factor, but there’s a social one as well. The prestige factor used to be higher. The connections between the academic humanities and the rest of the world used to be stronger. But through some process, and I think that it is not a trivial or obvious one, some of the humanities turned hermetically inward and/or the world started ignoring them, and so their prestige diminished.

Fifty years ago, there were scholarly books put out by major presses (Harper, Penguin) that no non-academic publisher would touch today. Was there an audience for them outside of academia? I don’t have a strong sense. There certainly isn’t now. Pears is a bit too specific: money and politics are certainly high-prestige forms of impact, but what impact really seems to mean is any perceived societal value outside of academia.

Low-cost research will always continue to be done by enthusiasts. Michael Ventris made huge steps in deciphering Linear B, despite being a low-level architect with no credentials. But the “impact” business seems to be a trailing indicator rather than a leading one, signifying that the more disconnected humanities have been living on borrowed time for quite a while. And I don’t see how that will reverse without a larger shift in the relation of those fields to society at large.

Note on Corporate and Academic Institutions

Ray wrote:

Cholly on Software : The Signifying Code Monkey

There are benefits — non-financial, obviously — to working for an institution of higher education. But in a 27-year career I remember only four people with whom I couldn’t establish some sort of working relationship, and I met three of them after leaving what’s oddly called “private industry.” Similarly, two of the three pieces I’ve regretted publishing were written within the context of a (mostly) academic website. Maybe it’s true that there’s something peculiarly toxic about this environment? Or maybe this particular pachyderm happens to find my own blend of tones and pheromones peculiarly noxious? For whatever reason, I’ve spent a painful number of turns playing the wrong side of Whac-a-Mole.

I responded:

Academia tolerates and even fosters antisocial behavior in various forms, while the private sector is much more strict in its codes of behavior hewing to some practical norm. Coders who work in academic nonprofits tend to be those who were “too weird” for industry, by their own account. Much of this may have to do with the ultimate bottom line of the holy dollar asserting itself far more incessantly in the private sector. (The exceptions were research-focused places like Bell Labs, which also attracted the types of pig-headed people you simply could not deal with. They have gone under precisely because their employees would rather spend their time perfecting an IETF RFC than writing server monitoring scripts in Python or, god forbid, Perl.) So given an insufferable, ambitious, and/or dogmatic person, that person will either have the rare good fortune to rise to a management position in the private sector that he (occasionally she, mostly he) will then use to attempt to realize his treasured, pure vision of paradise, and fail repeatedly while inflicting untold suffering on his peons; OR, that person will be thrown aside by the capitalist machinery and will seek refuge in locations where the almighty dollar holds less immediate sway. There, in academia or a like-minded non-profit, their high rhetoric and uncompromising passion will convince prestige-hungry administrators that here is a person with the vision to save the university from the capitalist chopping block, money and reasonableness be damned! Lather, rinse, repeat. See Albert O. Hirschmann’s The Passions and the Interests for what I genuinely believe is the dynamic at work.



© 2019 Waggish
