The academy is often thought of as the ideal for developing knowledge:
select the brightest minds in the country, guarantee them jobs, give
them all the resources they need to research anything, and don’t
interfere with their conclusions. On some issues, these
independent-minded academics form a consensus, and we tend to give
that consensus very heavy weight. They can’t all be wrong, can they?
And yet, in my empirical research, I find they very often are. A short
blog post is no place to do a careful study, but I can mention some
examples. The classic works in industrial relations turn out to be
complete hoaxes, yet they’ve dominated the teaching of the field for
over half a century. (See Alex Carey’s book for details.) In political
science, the most respected practitioner’s most famous work shades and
distorts his own findings to support a theory wildly at odds with the
facts. (See Who Really Rules?) The whole field of fMRI studies is so
flat-out ridiculous that journal articles are even making jokes about
it. And, maybe most blatantly today, economics is dominated by a
paradigm that believes substantive unemployment is impossible, despite
that notion having been famously and thoroughly debunked by Keynes
and, of course, reality.
How is this possible? I think the key, as in most institutional
studies, is the filter. To become a professor of X, one must
first spend several years earning an undergraduate degree in X, then
several more years going to graduate school in X, then perhaps work as
a postdoc or adjunct for a bit, before getting a tenure-track position
and working like mad to make enough of a dent in the field of X to be
seen as deserving of a prominent permanent position. When the time
comes, a panel of existing professors of X passes judgment on your
work to decide whether it passes muster. Can you imagine a better procedure
for forcing impressionable young minds to believe crazy things?
And so this process forms what I call disciplinary bubbles. Take the
case of industrial relations for a moment. The field was largely
created by the Rockefellers, who wanted research into how they could
get rid of their unions. They paid lavishly and, not surprisingly,
found people who told them what they wanted to hear: that treating
workers nicely made unions unnecessary and companies more efficient.
The studies were completely bogus but the people who conducted them
were hailed as heroes, and provided with lavish funding to continue
their research. The funding started new departments, which trained new
protégés, each of whom was taught the founding studies as
gospel. They were told to work on expanding and refining the results,
not questioning them, and so they did, becoming industrial relations
professors in their own right and continuing the cycle.
Like other bubbles, disciplinary bubbles are difficult to pop. Imagine
you do research outside a field’s incorrect assumptions. Your research
will simply be marginalized and ignored: you don’t get into the
conferences or the journals; it’s just not seen as valid work. And
even if you try to disprove the bogus assumptions directly, you get ignored.
Everyone already in the field has built their careers on those
assumptions. They’ve long rationalized them to themselves; nobody is
going to support someone who argues their life’s work is built on
sand.
Thus ignorance marches on.
Comments
I say this with some irony, but this sounds a bit like Bourdieu: http://en.wikipedia.org/wiki/Field_(Bourdieu)
posted by Joseph Reagle on October 20, 2009 #
Everything I’m about to say is related to engineering, for context.
While perhaps this used to be the case, cross-disciplinary fertilization, particularly in engineering, is quite evident now. Though I’m not lucky enough to have a permanent position, folks I’ve seen do very well perform research and handle theory that wasn’t even taught in a given department 10 years ago: effects of surface tension on self-assembly being studied by electrical engineers, biological tailoring by chemical engineers, and mechanical engineers designing complex electrochemical circuits.
In fact, those I know who endeavor to simply carry their advisor’s banner, no matter how big a deal said advisor was, often have trouble getting a job in that field, but those who think laterally and apply the concepts to a problem in a different field often find work.
As for momentum, fads come and go; some things stick (AFM, MEMS for research), some don’t (superconductors), and some hang around too long (fuel cells), but this is the case with everything. Single-source funding is probably responsible for prolonging efforts on a given topic, for better or worse, but nothing lasts forever.
There are probably one billion things wrong with academia at the moment, and this, as far as I can tell, isn’t as generalized as your essay suggests.
posted by Dan on October 20, 2009 #
I’m sure there are such disciplinary bubbles; equally, I’m sure there are bubbles outside academia (consider management). But I have to say that I’ve never encountered one in my own disciplines: doing well there does not depend on repeating accepted pieties; in fact, repeating accepted pieties is usually a rather bad idea.
posted by Sam C on October 20, 2009 #
What? You mean the fields of economics, industrial relations, and political science are full of people so hopeless at getting accurate models of their fields that they form mental-model tribes completely lacking in a realistic basis? Color me not shocked. And do add psychology to that list:
http://thelastpsychiatrist.com/2009/10/more_on_amygdala_anxiety_and_m.html
I think this is the inevitable and painfully obvious outcome of any field for which we do not possess adequate models. These are the astrologies and alchemies of today - studies of phenomena which so significantly impact our lives, yet which we have absolutely no mathematical language to deal with, nor mental tools to comprehend.
posted by Andrey Fedorov on October 20, 2009 #
Dan: I think the strength of the phenomenon Aaron is describing is probably inversely proportional to how closely the discipline is tied to uncontroversial empirical measures of success and failure. Paradigmatic cases of “uncontroversial empirical measures of success and failure” would include such engineering-specific cases as “this rocket successfully took off” and “this bridge hasn’t collapsed after ten years of heavy use”. The more controversial the measures, the less empirical they are, and the more loosely they are tied to the discipline, the more likely it is that other factors like founder effects, wishful thinking, etc. will swamp truth-tracking, leading to institutional bubbles.
Sam C: It seems to me very possible that disciplinary bubbles can coexist with the outward appearance of “not repeating accepted pieties”. Practitioners of even the most opaque and woolliest forms of Frenchified post-modernism generally took themselves to be exposing the errors of their colleagues and teachers, e.g. finding hidden phallocentrism/heteronormativity lurking even in the teachings of Derrida or Foucault. It does not follow that post-modernism of that sort therefore wasn’t a disciplinary bubble. As in this anecdote from Kierkegaard:
“Heraclitus the obscure has said: ‘One cannot pass through the same river twice.’ Heraclitus the obscure had a disciple who did not stop there; he went further and added, ‘One cannot even do it once.’”
posted by Bryan on October 20, 2009 #
I totally agree with this article. (Dan: I agree, engineering has much less of this problem. But the problem does exist in other fields.)
I once read an article about how the phone company monopolies were perpetuated by the law schools. (Wish I could remember where I saw it.) Basically, some of the professors at prestigious law schools had a hand in the various decisions that created the Baby Bell monopolies. So they taught their students why those decisions were right, and didn’t allow any students to write papers to the contrary. The students grow up and become professors, still thinking the decisions are perfect, so they teach their students…
posted by Anonymouse on October 21, 2009 #
What I love about fields like AI and machine learning is that they progress by coming up with new ways to discover, analyze, and use data, constantly questioning and improving their methods at every step. In my brief stint in the field, my data came from all sorts of places: photos of plankton, feedback from motor controllers, sounds from microphones, spike counts from neurons, and more.
What was cool was that we constantly revisited our basic assumptions: what are we trying to do, why are we doing it, how are we doing it, and so on. Testing the null hypothesis was something you just did, because you are always wary of your tools and your assumptions (statistical, mechanical, procedural, etc.).
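To give a flavor of that habit, here is a minimal sketch of a two-sample permutation test, one routine form of the null-hypothesis check described above. It is written in Python with numpy; the data, variable names, and number of shuffles are made up purely for illustration and are not taken from the comment.

    import numpy as np

    # Made-up measurements from two experimental conditions.
    baseline = np.array([4.1, 3.8, 4.4, 4.0, 3.9, 4.2])
    treated = np.array([4.6, 4.9, 4.3, 5.0, 4.7, 4.8])
    observed = treated.mean() - baseline.mean()

    # Null hypothesis: the condition labels don't matter. Simulate it by
    # repeatedly shuffling the pooled data and recomputing the difference.
    rng = np.random.default_rng(0)
    pooled = np.concatenate([baseline, treated])
    n = len(baseline)
    diffs = []
    for _ in range(10_000):
        rng.shuffle(pooled)
        diffs.append(pooled[n:].mean() - pooled[:n].mean())

    # p-value: how often chance alone produces a difference this extreme.
    p = np.mean(np.abs(diffs) >= abs(observed))
    print(f"observed difference: {observed:.2f}, p = {p:.4f}")

A small p-value here only says the difference is unlikely under shuffled labels; it does nothing to validate the instruments or the procedure, which is why the other categories of assumption still have to be checked separately.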
I don’t see this kind of “discipline” in most disciplines. What is the biology equivalent of Cohen’s “Empirical Methods in Artificial Intelligence”? Or how often are sociologists assigning books like van der Vaart’s “Asymptotic Statistics” to their grad students?
I think the disciplinary bubbles we should fear most are the undisciplined ones.
posted by Joshua Gay on October 22, 2009 #
To sum up: Institutions tend to replicate themselves. That’s about it, right?
It’s a valid and important insight (but not a particularly novel one, and leveled by the right at least as often as by the left). But don’t stop there. What conclusions should we draw from this?
You couch this insight as a broad-brush indictment of the academy as a flawed model. If you’re arguing that the academy is flawed, great. But if you’re implying that it’s useless, that it tends on the whole to perpetuate ignorance, or that its flaws outweigh its virtues, you’ve lost me.
There are two things you can do with this sort of structural critique. The first is to propose an alternate model. I have yet to encounter one that serves better than universities whose faculty possess the basics of academic freedom, though I’m all ears. The second is to use the structural critique to propose refinements to the current model that tend to counter the effects of its inherent tendencies. I’d love to see you give that a try.
Just to get you started, I’ll take a couple of stabs at it myself. One solution is to ensure a greater diversity of faculty on dissertation and tenure committees - to deliberately draw in people outside the field. There are flaws here, too - outsiders may lack the technical knowledge to fairly evaluate the work, and their own disciplinary paradigms may predispose them to find fault - yet on the whole, it would be healthy if more academics had to justify their work to people with different backgrounds. Another might be to provide more early-career development funding opportunities in the humanities and social sciences. Giving academics the financial security to spend a few years taking a risk on a new approach, and the time to validate their work, is important; it happens in the sciences, and helps increase the array of approaches to material.
What are your ideas? If you see the flaws in the present model, what solutions would you propose that don’t introduce greater flaws of their own?
posted by on November 5, 2009 #
You can also send comments by email.