Waggish

David Auerbach on literature, tech, film, etc.

Cultural Illogic: David Golumbia and The Cultural Logic of Computation

David Golumbia does not like computers. Toward the end of The Cultural Logic of Computation, after lumping computers and the atom bomb into a single “Pandora’s Box” of doom, he observes:

The Germans relied on early computers and computational methods provided by IBM and some of its predecessor companies to expedite their extermination program; while there is no doubt that genocide, racial and otherwise, can be carried out in the absence of computers, it is nevertheless provocative that one of our history’s most potent programs for genocide was also a locus for an intensification of computing power.

This sort of guilt by association is typical of The Cultural Logic of Computation. The book is so problematic and wrong-headed as to be shocking, and since philosophical and cultural excursions into technological analysis are still comparatively rare, it merits what programmers would term a postmortem.

Throughout the book, Golumbia, an English and Media Studies professor who worked for ten years as a product manager in software at Dow Jones, insists that computers are creating and enforcing a socio-political hegemony that reduces human beings to servile automatons. They aren’t just the tools of oppression; they oppress by their very nature. Golumbia attacks the encroachment of “computation” on human life. He defines “computation” as the rationalist, symbolic approach of computers and logic.

Or at least he sometimes seems to. At other times “computation” stands in for an amorphous mass of cultural issues that just happen to involve computers. Much of the book focuses on political issues that don’t bear on “computation” in the least, such as a tired attack on Thomas Friedman and globalization that adds nothing new to Friedman’s already-long rap sheet. Golumbia spends ten pages criticizing real-time strategy games like Age of Empires, complaining:

There is no question of representing the Mongolian minority that exists in the non-Mongolian part of China, or of politically problematic minorities such as Tibetans and Uyghurs, or of the other non-Han Chinese minorities (e.g., Li, Yi, Miao).

A true Hobbesian Prince, the user of Age of Empires allows his subjects no interiority whatsoever, and has no sympathy for their blood sacrifices or their endless toil; the only sympathy is for the affairs of state, the accumulation of wealth and of property, and the growth of his or her power.

The critique could apply just as easily to Monopoly, Diplomacy, Stratego, or chess.

Golumbia gives away the game, so to speak, when he implies that connectionism (a non-symbolic artificial intelligence approach used in neural networks) is somehow less politically suspect than the symbolic AI approaches he attacks. In fact, non-symbolic approaches like Bayes networks and neural networks are themselves used ubiquitously in the data mining he (rightly) worries about. Golumbia has confused science with scientism, and computers’ uses with their structure.

Without a critique of the technical side of computers, Golumbia’s book would be just another tired retread of Chomsky, Hardt/Negri, Spivak, Thomas Frank, and the like. Unfortunately, his actual excursions into technical issues are woefully uninformed. A surreal attack on XML as a “top-down” standard ends with him praising Microsoft Word as an alternative, confusing platform and application. He hates object-oriented programming because…well, I’m honestly not quite sure.

Because the computer is so focused on “objective” reality—meaning the world of objects that can be precisely defined—it seemed a natural development for programmers to orient their tools exactly toward the manipulation of objects. Today, OOP is the dominant mode in programming, for reasons that have much more to do with engineering presumptions and ideologies than with computational efficiency (some OOP languages like Java have historically performed less well than other languages, but are preferred by engineers because of how closely they mirror the engineering idealization about how the world is put together).

The lack of citation, pervasive throughout the book, makes it impossible even to pinpoint what this objection means. I’d be curious as to how he feels about functional languages like Lisp, ML, and Haskell, but Golumbia shows no signs of even having heard of them. Unfortunately, XML and object-oriented programming are pretty much his two main points of technical attack, which indicates a lack of technical depth.
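
(To make the contrast concrete, here is a minimal sketch of my own, not anything drawn from Golumbia’s book, of the functional style he never acknowledges: a computation expressed as composed transformations of values, with no classes, objects, or mutable state in sight.)

    -- A toy illustration of functional style (Haskell): sum the squares of the
    -- even numbers in a list, built by composing small functions rather than
    -- by modeling "objects."
    sumEvenSquares :: [Int] -> Int
    sumEvenSquares = sum . map (^ 2) . filter even

    main :: IO ()
    main = print (sumEvenSquares [1 .. 10])  -- prints 220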

Yet Golumbia’s greatest anger is reserved for Noam Chomsky. He devotes a quarter of the book to him, with Jerry Fodor serving as assistant villain. Somehow, Chomsky’s computational linguistics becomes far more than just a synecdoche for modern corporatism and materialism; Chomsky is actually one of the main culprits.

To Golumbia, Chomsky is “fundamentally libertarian”; he is an Ayn Randian “primal conservative” who accepted military funding. He has “authoritarian” institutional politics which require strict adherence to his “religious” doctrine:

Chomsky’s institutional politics are often described exactly as authoritarian.

[His work] tends to attract white men (and also men from notably imperial cultures, such as those of Korea or Japan).

The scholars who pursue Chomskyanism and Chomsky himself with near-religious fervor are, almost without exception, straight white men who might be taken by nonlinguists to be ‘computer geeks.’

Golumbia is evidently fond of the ad hominem. He also associates “geeks” with “straight, white men,” insulting 19th-century programmer Ada Lovelace, gay theoretician Alan Turing, and the vast population of queer and non-white programmers, linguists, and geeks that exists today (many not even Korean or Japanese).

Yet Golumbia finds time to praise Wikipedia, founded and run by fundamentally libertarian Ayn Rand acolyte Jimmy Wales. It’s strange for Golumbia to call Wikipedia a salutary effort to demote expert opinion when Wales himself says it should not be cited in academic papers. And strange for Golumbia to see Wikipedia as progressive when many of its entries still come from that well-known bastion of hegemonic opinion, the 1911 Encyclopedia Britannica. (The explicitly racist ones have been scrubbed.)

Beyond the technological confusions, Golumbia’s philosophical background is notably defective. The book is plagued by factual errors; Voltaire is bizarrely labeled a “counter-Enlightenment” thinker, while logicians Bertrand Russell and Gottlob Frege somehow end up on opposite sides: Russell is a good anti-rationalist (despite having written “Why I Am a Rationalist”), while Frege is a bad rationalist. (He also enlists Quine and Wittgenstein to his leftist cause, which I suspect neither would have appreciated.) He thinks Leibniz preceded Descartes. He misappropriates Kant’s ideas of the noumenal and mere reason.

Here is a typically confused passage, revealing Golumbia’s fondness for incoherent Manichaean dichotomies:

In Western intellectual history at its most overt, mechanist views typically cluster on one side of political history to which we have usually attached the term conservative. In some historical epochs it is clear who tends to endorse such views and who tends to emphasize other aspects of human existence in whatever the theoretical realm. There are strong intellectual and social associations between Hobbes’s theories and those of Machiavelli and Descartes, especially when seen from the state perspective. These philosophers and their views have often been invoked by conservative leaders at times of consolidation of power in iconic or imperial leaders, who will use such doctrines overtly as a policy base.

This contrasts with ascendant liberal power and its philosophy, whose conceptual and political tendencies follow different lines altogether: Hume, Kant, Nietzsche, Heidegger, Dewey, James, etc. These are two profoundly different views of what the State itself means, what the citizen’s engagement with the State is, and where State power itself arises. Resistance to the view that the mind is mechanical is often found in philosophers we associate with liberal or radical views—Locke, Hume, Nietzsche, Marx.

So it is not simply the technological material that is the problem. The quality of even the academic, philosophical portions of the book is dismaying, and the general lack of evidence and citation is egregious. Harvard University Press, which published the book, has a fine track record in the general areas that Golumbia inhabits. I am not certain how The Cultural Logic of Computation slipped through, nor why so many of its blatant errors went uncaught. It is an embarrassment and will only confirm the prejudices of those who feel that the humanities have nothing to offer the sciences but spite and ignorance.

For contrast, Samir Chopra’s Decoding Liberation: The Promise of Free and Open Source Software (Routledge) is an excellent and rigorous examination of some of the political and social issues around software and software development, strong on both the technical and philosophical fronts. I would urge anyone looking at Golumbia’s book to read it instead.

Robert Wiebe’s Self-Rule and American Democracy

I criticized Christopher Lasch’s The True and Only Heaven for reductionism, and in turn commenter Tocqueville criticizes me for reductionism. I think in the reductionism sweepstakes, it’s hard to beat a line like this:

The growing tolerance of profanity, sexual display, pornography, drugs, and homosexuality seemed to indicate a general collapse of common decency.

Christopher Lasch, The True and Only Heaven

By the seventh or eighth time Lasch lists his grab-bag of decadent bugaboos, he really strains credibility. For comparison, Lasch only mentions Vietnam about three times in the entirety of Heaven, which is fairly ridiculous for a book claiming to explain American culture during the 60s and 70s.

For a better, less-blinkered look at the 60s, the times which caused Lasch so much heartbreak, consider Morris Dickstein’s Gates of Eden, which is ambivalent toward the movements of those times, but acknowledges their partial strengths and, more importantly, the logic of their evolution and collapse.

And for a better history of democracy and class in America, consider Robert Wiebe’s Self-Rule: A Cultural History of American Democracy, which addresses many of Lasch’s points in far more nuanced fashion.

Comparing the annotated bibliographies of Wiebe’s and Lasch’s books is instructive. While Wiebe lists dozens if not hundreds of works of history and documentary, Lasch tends to focus on theoretical and ideological work. There are a couple of exceptions, such as Lasch’s detailed list of works on 19th-century populism and syndicalism. These were his areas of expertise in his younger years, and indeed he displays a far richer understanding of them than of the FDR and LBJ eras.

As one barometer, while Wiebe has read Lasch, Lasch has not read Wiebe, whose The Search for Order was already considered a classic at the time Lasch wrote Heaven. Lasch preferred to stick to Carlyle and Emerson, despite their being absurd elitists themselves.

Wiebe, in contrast, has comprehensively studied the entirety of American history in reasonably rigorous fashion, and it shows. Even when his conclusions are debatable, they do not seem to have arisen out of abiding prejudice.

Wiebe’s central thesis, like Lasch’s, revolves around the loss of popular involvement in American democracy, but Wiebe doesn’t blame it on the hippies and feminists. For Wiebe, Lasch’s beloved populists and syndicalists represented a dying breath rather than an invigoration. Wiebe points to the period of 1890 to 1920 as one in which the “people” started to drop out of democracy. During this time, he claims, the two-class system of enfranchised white men and the disenfranchised everyone else gave way to a three-class system of national elites, the local middle class, and the lower class.

The history of the 20th century becomes the history of the first two of those new classes coming into increasing conflict while both ignored the lower class.

What emerged with industrialization in the United States was a three-class system, conflictual but not revolutionary: one class geared to national institutions and policies, one dominating local affairs, and one sunk beneath both of those in the least rewarding jobs and least stable environments–in the terminology of my account, a national class, a local middle class, and a lower class. New hierarchies ordered relations inside both the national and lower middle class; both of those classes, in turn, counted on hierarchies to control the lower class.

Like Lasch, Wiebe bemoans the national elites trying to assist the lower class without bothering to raise their civic awareness or solicit their votes, but Wiebe’s point is that this technocratic policy machine was in place long before the dirty hippies and the Warren Court showed up.

Turn-of-the-century labor and suffrage movements fought for increasing political influence while tacitly accepting the emerging class divisions. The sheer homogeneity of white men and their nepotistic political clubs had helped form an egalitarian, populist sensibility amongst them that necessarily could not survive the enfranchisement of minorities and women. Wiebe of course has no nostalgia for those days, but he identifies in them a sense of white men’s investment in civics that has never been restored to the American people since.

Further fissures emerged around the time of the Great War. Popular support for the war convinced many intellectuals and policymakers that the “people” could not be trusted to act on their own behalf, and so they embraced a more centralized technocratic regime. By the time of FDR, Wiebe can point to a figure like government antitrust lawyer Thurman Arnold, whose The Folklore of Capitalism (1937) “derisively dismissed the very thought of popular rule.”

This system held stable to a point, but with the increasingly liberalized, top-down stance of the national class (at least domestically) and the increasingly visible consequences of those policies, the conservative local middle class grew antsy and alienated, leading to “Reagan Democrats” and the reactionary shifts that followed.

I think Wiebe underestimates the mostly unfortunate role that the media played in controlling the discourse from the late 70s onward, but his point that neither the local middle class nor the national class could claim popular legitimacy is well-taken, and it continues to be a genuinely serious problem for any national progressivism. This of course is Lasch’s point too, but Wiebe shows that Lasch has completely mistaken its origins.

The book turns more speculative at its end, where Wiebe prescribes a loose, multi-level communitarianism as a panacea for Americans’ alienation from their government. Far less draconian than the visions of Alasdair MacIntyre or even Michael Sandel, his prescription is a bit too diffuse to be convincing, as though the depths of the difficulties and conflicts he has just chronicled have overwhelmed him; he seems to realize that things have become too complicated and huge for the reinclusion of the lower classes to be an easy thing. But I take this ultimately to be a sign of the strength of his historical account.

Christopher Lasch on Raising Children

Christopher Lasch was a confused and confusing cultural and political thinker, and The True and Only Heaven (1991) is not a good book, full of misappropriated intellectual ideas used in service of a fairly reductive and conservative attack on progressive thinking and cultural politics. (Albert Hirschman’s three reactionary tropes of perversity, futility, and jeopardy are on display throughout.) Lasch’s wide intellectual canvas, which incorporates Blumenberg and Löwith as well as Carlyle and Sorel, makes his simplistic agenda all the more regrettable.

But the introduction, which gives the personal background for how he came to such grouchy views, is rather touching and worth reading.

Like so many of those born in the Depression, my wife and I married early, with the intention of raising a large family. We were part of the postwar “retreat to domesticity,” as it is so glibly referred to today. No doubt we hoped to find some kind of shelter in the midst of general insecurity, but this formulation hardly does justice to our hopes and expectations, which included much more than refuge from the never-ending international emergency.

In a world dominated by suspicion and mistrust, a renewal of the capacity for loyalty and devotion had to begin, it seemed, at the most elementary level, with families and friends. My generation invested personal relations with an intensity they could hardly support, as it turned out; but our passionate interest in each other’s lives cannot very well be described as a form of emotional retreat. We tried to re-create in the circle of our friends the intensity of a common purpose, which could no longer be found in politics or the workplace.

We wanted our children to grow up in a kind of extended family, or at least with an abundance of “significant others.” A house full of people; a crowded table ranging across the generations; four-hand music at the piano; nonstop conversation and cooking; baseball games and swimming in the afternoon; long walks after dinner; a poker game or Diplomacy or charades in the evening, all these activities mixing children and adults— that was our idea of a well-ordered household and more specifically of a well-ordered education.

We had no great confidence in the schools; we knew that if our children were to acquire any of the things we set store by—joy in learning, eagerness for experience, the capacity for love and friendship—they would have to learn the better part of it at home. For that very reason, however, home was not to be thought of simply as the “nuclear family.” Its hospitality would have to extend far and wide, stretching its emotional resources to the limit.

Our failure to educate them for success was the one way in which we did not fail them—our one unambiguous success. Not that this was deliberate either; it was only gradually that it became clear to me that none of my own children, having been raised not for upward mobility but for honest work, could reasonably hope for any conventional kind of success.

The “best and brightest” were those who knew how to exploit institutions for their own advantage and to make exceptions for themselves instead of playing by the rules. Raw ambition counted more heavily, in the distribution of worldly rewards, than devoted service to a calling—an old story, perhaps, except that now it was complicated by the further consideration that most of the available jobs and careers did not inspire devoted service in the first place.

Diplomacy, really? That makes me smile.

And yet from this he drew the wrong lesson, blaming the failure of his children to integrate themselves into the world successfully (he says as much) on changing social mores, rather than on the “old story” he even cites above. He dismissively says:

Liberalism now meant sexual freedom, women’s rights, gay rights; denunciation of the family as the seat of all oppression; denunciation of “patriarchy”; denunciation of “working-class authoritarianism.”

Well, as someone who believes in some of the above, I can say that the sort of “extended family” he cites is not incompatible with them. But perhaps he saw his children rebelling and couldn’t distinguish the youthful urge to reject everything from the simultaneous, necessary push for social justice. It was probably something that was difficult to assess while it was going on. But it’s been 50 years, and I think we can separate out what is and is not compatible with his happy vision above.

Notes on The Future of Academia

This started as a comment on a post over on New Savannah, where Bill Benzon was talking about cognitive science researcher Mark Changizi’s decision to leave academia. But I think Changizi’s case is a red herring as far as the structural problems of academia go.

Changizi left because, despite his having tenure, the whole nature of grants is such that they do not allow for work on potentially paradigm-shifting ideas, since such work has too great a chance of failure. He cites Vinay Deolalikar’s valiant but seemingly wrong proof that P≠NP as an example of the sort of work that can only be done outside academia.

But I don’t think the Changizi incident reflects anything new about academia. When people talk about the problems in academia today versus those of forty or fifty years ago, Changizi isn’t running up against anything that has changed. Paradigm-shifting work has never gotten funding except when there was a clear military interest, in which case the floodgates (or cashgates) opened.

So when assessing academia, there are three interlinked but distinct factors, each varying independently by field:

  1. The Finance Factor: The ability to get funding for research in that field from anywhere other than a university.
  2. The Infrastructure Factor: The non-overhead resources (time, money, people, equipment) required for research in the field.
  3. The Prestige Factor: The field’s self-determined metric of success for research (influence, “impact,” prestige).

Literature, psychology, and computer science are affected in different ways by these factors. Even within a field there is variation, which is why Deolalikar isn’t such a great example.

People like Deolalikar wander between academia and corporate research labs quite a bit, as there’s much closer coordination between them in the computer science world, the profit motive being far more obvious. Even beyond that, Deolalikar’s capital needs are minimal: a living wage for himself, an office, etc. He didn’t need a “lab.”

Theoretical computer science issues like P=NP are akin to theoretical math, requiring little beyond pen and paper and a brain with very particular capacities.

On the other hand, applied computer science research can be tremendously expensive, so expensive that academia can’t provide the infrastructure even with funding. If you want to analyze the entirety of the internet or examine database issues with petabytes of data, acquiring and processing that amount of meaningful data is just not within the reach of a university. This may change in the future with joint efforts, but I suspect that corporations will always have some edge because the financial motive is so present (unlike, say, with supercolliders).

The financial motive is not always so immediately present, even within computer science. For things like neuroscience and psychology, where the profits are clearly possible but harder to predict, grants come into play. If you need a lab and funding for it, there will be politics to getting it, period. Research labs spend thousands of person-hours filling out grant applications in order to convince the pursestring-holders (the government, frequently) that they’re doing the “right” thing.

Where the finance factor is high, things haven’t changed that much, even with increases in bureaucracy. High-cost research will continue to be done within institutions as long as there’s profit in it. It will always be somewhat conservative because people with money want results for their research.

Where the finance factor is low, the infrastructure factor is also frequently low, because there’s nowhere to get money for infrastructure other than the university, and the university is unlikely to fund much that can’t be funded by other sources.

The exception is if the prestige factor is high. If the top people in a field have a huge impact on the world around them, then the university will invest money simply because it will draw attention and (indirectly) more money to the university. Economists, political scientists, and even (in Europe) anthropologists and philosophers: they frequently possess enough prestige outside of academia that they will continue to draw people and money because they are part of the larger society. Jürgen Habermas and Michael Ignatieff, for example. And success in these fields is partly measured by that sort of outside prestige. How could it not be?

So where things have changed is in fields that lack external sources of funding and lack external prestige. Fields meeting these criteria:

  1. Funding Factor: Low
  2. Infrastructure Factor: Low
  3. Prestige Factor: Low

These are fields in which the measurement of a researcher’s success is determined near-exclusively by people within the field, and the researchers, even the top ones, have little pull outside of academia. Many of the traditional humanities meet these criteria today.

And these fields are in trouble in a way they were not fifty years ago, when they seemed to sustain themselves comfortably. But today, we see the demand for “impact” in the British university system:

Henceforth a significant part of the assessment of a researcher’s worth – and funding – will be decided according to the impact on society that his or her work is seen to have. The problem is that impact remains poorly defined; it isn’t clear how it will be measured, and the weighting given to it in the overall assessment has been plucked out of the air. It is a bad policy: it will damage research in the sciences and corrupt it in the humanities, as academics will have a strong financial incentive to become liars.

If no one really knows what impact is, it is at least clear what it isn’t: scholarship is seen as of no significance. What the government and Hefce are interested in is work that is useful, in a crudely defined way, for business or policy-making. The effect of impact will be to force researchers to focus even more than they do already on research that pays off – or can be made to appear as if it does – within the assessment cycle, rather than on fundamental work whose significance might take years, even decades, to be appreciated.

Iain Pears, LRB

This is a problem for the sciences as well, as it corporatizes the grant process and makes immediate results far more necessary. But it is a far, far greater problem for some of the humanities, which don’t really traffic in “results” of this sort. Yet put this way, it doesn’t exactly seem surprising. Isn’t the better question why this sort of reckoning hasn’t happened until now?

The changing economic situation is obviously a factor, but there’s a social one as well. The prestige factor used to be higher. The connections between the academic humanities and the rest of the world used to be stronger. But through some process, and I think that it is not a trivial or obvious one, some of the humanities turned hermetically inward and/or the world started ignoring them, and so their prestige diminished.

Fifty years ago, there were scholarly books put out by major presses (Harper, Penguin) that no non-academic publisher would touch today. Was there an audience for them outside of academia? I don’t have a strong sense. There certainly isn’t now. Pears is a bit too specific: money and politics are certainly high-prestige forms of impact, but what impact really seems to mean is any perceived societal value outside of academia.

Low-cost research will always continue to be done by enthusiasts. Michael Ventris made huge steps in deciphering Linear B, despite being a low-level architect with no credentials. But the “impact” business seems to be a trailing indicator rather than a leading one, signifying that the more disconnected humanities have been living on borrowed time for quite a while. And I don’t see how that will reverse without a larger shift in the relation of those fields to society at large.

Profiles in Type L: Some Engineer at Microsoft

(Original typology in Battle Lines: Type L are the free-market technocrats and Type C are the conservative old boys in American society. Once more, I don’t identify with either of them.)

The always-intriguing corporate-insider blog Mini-Microsoft is the venting place for many of the R&D people dissatisfied with the state of affairs at that company. One anonymous commenter effectively summarizes the Type L’s case against the Type C, much as Paul Van Riper did. The parallels in content and attitude are very striking. I don’t get some of the terminology in the comment either, but this person’s point comes across anyway.

There are some geniuses over in Microsoft Research; somebody needs to set them free to productize.

It isn’t a lack of IC [individual contributor] talent, although that is rapidly changing. It’s the decline of technical talent and integrity at almost all levels of management.

With “trios”, no individual is charged with cross-discipline technical oversight until GM or VP level. This is not the job of a GM or VP. It *was* the job of the now-extinct Product Unit Manager. Doubtless trios was sold as a way to commoditize skills by narrowing the remit of individuals along discipline lines. Unfortunately, those with broad skill sets who can envision how to actually make a product (rather than a document or a nice report) have been pushed out. It is the age of the bureaucrat.

With trios, the notion of “product team” has vanished. A product team comprised all disciplines and (usually) met weekly with their PUM. This has been replaced by layers of tripartite committees based around the arbitrary notion of Dev, Test, PM. The meetings required have grown exponentially. A product team may only get together at a divisional all-hands.

By GM/VP level, reporting on product state has been so sanitized that the majority of issues are never even surfaced. Yes, there is of course a category of issues that should never require a VP’s intervention, but this goes way beyond that. “No bad news, ever” is the rule. Anyone who rocks the boat is one of those negative, non-team-player 10%ers who will shortly be gone.

More senior ICs are, by definition, supposed to raise broad issues by dint of their level and years of experience. The existing culture makes this a very dangerous thing to do. That’s why I left in January after 10+ years.

The various disasters/missed opportunities over the last 10 years were well known to engineers at the front line… but due to a viciously enforced policy of “no bad news, ever”, those who might have taken corrective action don’t find out until it’s too late.

There is a clear pattern of failure to execute… and it is not the doing of engineers. It’s a culture that rewards the suppression of “bad news”. It’s the lack of spine in the management chain to unpromise things that were promised, and blame their “underperforming” ICs when the crap hits the fan. Those with a spine soon find their prospects blighted.

Changing VPs won’t help much. They rely on their generals and below to garner a picture of the situation. If those generals don’t provide truthful reporting, it simply isn’t possible to execute effectively. It’ll take an IBM/GE/HP/Honeywell (etc.) style intervention to fix this problem – it won’t get fixed by those who benefit (hugely) from it.

It’s like watching the third season of The Wire!
