This started as a comment on a post over on New Savannah, where Bill Benzon was talking about cognitive science researcher Mark Changizi’s decision to leave academia. But I think Changizi’s case is a red herring as far as the structural problems of academia go.
Changizi left because, despite his having tenure, the whole nature of grants is such that they do not allow for work on potentially paradigm-shifting ideas, which carry too great a chance of failure. He cites Vinay Deolalikar’s valiant but seemingly flawed proof that P≠NP as an example of the sort of work that can only be done outside academia.
But I don’t think the Changizi incident reflects anything new about academia. When people compare the problems of academia today with those of forty or fifty years ago, Changizi isn’t running up against anything that has changed. Paradigm-shifting work has never gotten funding except when there was a clear military interest, in which case the floodgates (or cashgates) opened.
So when assessing academia, there are three interlinked but distinct factors to consider, each of which varies independently by field:
- The Finance Factor: The ability to get funding for research in that field from anywhere other than a university.
- The Infrastructure Factor: The non-overhead resources (time, money, people, equipment) required for research in the field.
- The Prestige Factor: The degree to which the field’s research commands recognition outside academia (influence, “impact,” prestige in the larger society).
Literature, psychology, and computer science are affected in different ways by these factors. Even within a field there is variation, which is why Deolalikar isn’t such a great example.
People like Deolalikar move between academia and corporate research labs quite a bit, since the two are much more closely coordinated in the computer science world, where the profit motive is far more obvious. Even beyond that, Deolalikar’s capital needs are very modest: a living wage for himself, an office, and not much else. He didn’t need a “lab.”
Theoretical computer science issues like P=NP are akin to theoretical math, requiring little beyond pen and paper and a brain with very particular capacities.
On the other hand, applied computer science research can be tremendously expensive: so expensive that academia can’t provide the infrastructure even with funding. If you want to analyze the entirety of the internet or examine database issues at the scale of petabytes, acquiring and processing that amount of meaningful data is simply not within the reach of a university. This may change in the future with joint efforts, but I suspect that corporations will always have some edge because the financial motive is so present (unlike, say, with supercolliders).
The financial motive is not always so immediately present, even within computer science. For fields like neuroscience and psychology, where profits are clearly possible but harder to predict, grants come into play. If you need a lab and funding for it, there will be politics to getting it, period. Research labs spend thousands of person-hours filling out grant applications in order to convince the purse-string holders (frequently the government) that they’re doing the “right” thing.
Where the finance factor is high, things haven’t changed that much, even with increases in bureaucracy. High-cost research will continue to be done within institutions as long as there’s profit in it. It will always be somewhat conservative because the people putting up the money want results from the research they fund.
Where the finance factor is low, the infrastructure factor is also frequently low, because there’s nowhere to get money for infrastructure other than the university, and the university is unlikely to fund much that can’t be funded by other sources.
The exception is if the prestige factor is high. If the top people in a field have a huge impact on the world around them, then the university will invest money simply because it will draw attention and (indirectly) more money to the university. Economists, political scientists, and even (in Europe) anthropologists and philosophers frequently possess enough prestige outside of academia that they will continue to draw people and money, because they are part of the larger society: Jürgen Habermas and Michael Ignatieff, for example. And success in these fields is partly measured by that sort of outside prestige. How could it not be?
So where things have changed is in fields that lack both external sources of funding and external prestige; that is, fields meeting these criteria:
- Finance Factor: Low
- Infrastructure Factor: Low
- Prestige Factor: Low
These are fields in which the measurement of a researcher’s success is determined near-exclusively by people within the field, and the researchers, even the top ones, have little pull outside of academia. Many of the traditional humanities meet these criteria today.
And these fields are in trouble in a way they were not fifty years ago, when they seemed to sustain themselves comfortably. Today, we see the demand for “impact” in the British university system:
> Henceforth a significant part of the assessment of a researcher’s worth – and funding – will be decided according to the impact on society that his or her work is seen to have. The problem is that impact remains poorly defined; it isn’t clear how it will be measured, and the weighting given to it in the overall assessment has been plucked out of the air. It is a bad policy: it will damage research in the sciences and corrupt it in the humanities, as academics will have a strong financial incentive to become liars.
>
> If no one really knows what impact is, it is at least clear what it isn’t: scholarship is seen as of no significance. What the government and Hefce are interested in is work that is useful, in a crudely defined way, for business or policy-making. The effect of impact will be to force researchers to focus even more than they do already on research that pays off – or can be made to appear as if it does – within the assessment cycle, rather than on fundamental work whose significance might take years, even decades, to be appreciated.
Iain Pears, LRB
This is a problem for the sciences as well, since it corporatizes the grant process and makes immediate results far more necessary. But it is a far, far greater problem for some of the humanities, which don’t really traffic in “results” of this sort. Put this way, though, it doesn’t exactly seem surprising. Isn’t the better question why this sort of reckoning hasn’t happened until now?
The changing economic situation is obviously a factor, but there’s a social one as well. The prestige factor used to be higher: the connections between the academic humanities and the rest of the world used to be stronger. But through some process, one I think is neither trivial nor obvious, some of the humanities turned hermetically inward and/or the world started ignoring them, and so their prestige diminished.
Fifty years ago, there were scholarly books put out by major presses (Harper, Penguin) that no non-academic publisher would touch today. Was there an audience for them outside of academia? I don’t have a strong sense. There certainly isn’t now. Pears is a bit too specific: money and politics are certainly high-prestige forms of impact, but what impact really seems to mean is any perceived societal value outside of academia.
Low-cost research will always continue to be done by enthusiasts. Michael Ventris made huge strides in deciphering Linear B despite being a low-level architect with no credentials. But the “impact” business seems to be a trailing indicator rather than a leading one, signifying that the more disconnected humanities have been living on borrowed time for quite a while. And I don’t see how that will reverse without a larger shift in the relation of those fields to society at large.