Our performance as researchers is evaluated by the number of publications we have in peer reviewed journals. The higher the impact factor and the lower the acceptance rate, the more “impressive” our research output. This performance model has two severe, if not fatal, flaws: (1) much of our research is behind a paywall, meaning that only other academics have access to it and (2) even if our research were not behind a paywall, it is often written in a way that is inaccessible to an interested public.
The statistical methods we use to describe and “map” the natural and social world are so complex that only other academics have the training to interpret them. As a general rule of thumb, the simpler the statistical method used, the less likely it is to be published in a top journal. If you want to get published in the highest impact factor journals, your methods must be so sophisticated that many methodologists will not be able to follow the mathematical logic in your analyses.
The result, of course, is that we end up just talking to each other most of the time. Our debates, both theoretical and empirical, unfold in a siloed space where only we can understand the language used. My mom used to read all of my articles. Even though I consider my writing to be very accessible by academic standards, she always told me that she didn’t understand what I was saying. The problem, I explained, is that if I submitted a paper to an academic journal using words and ideas that she could understand, it would never get published.
This is a travesty. Although I understand that complex research questions require complex methods and highly developed theoretical models, at the same time I believe that these ideas must be translated and inserted into public debates. The Conversation is a step in the right direction and a model for how our research should be communicated.
Our understanding of the world can add real value to public policy and public debates. Take, for example, the current debacle surrounding the U.S. election. Social psychologists like Erving Goffman offer useful frameworks for explaining the power struggle unfolding before our eyes. Biden and Trump are leveraging their connections to “define the situation” in a way that is favorable to them. Trump claims the election was fraudulent. Biden supporters and the media counter-claim that there is no evidence of “widespread fraud.” What does “widespread” mean? How can it be defined? What is the threshold between “fraud” and “widespread fraud?” No one, not even Biden, is arguing that all 150,000,000 votes were legitimate, but were the votes “legitimate enough?” These are deeply philosophical and empirical questions that could be answered (though perhaps not definitively) by academics. We should be engaging in these public debates, and throwing light on “what’s going on here.” That is the only question that researchers really care about: what’s going on here? We spend our lives trying to answer this question, but, sadly, we offer up our answers only to each other.
I’m not suggesting that we should abandon peer reviewed research. Basic science is, has been, and always will be extremely important to humanity. But the silos in which basic science is carried out should be torn down. All academics should be expected to be public academics. That’s one of the key reasons why I launched Dire Ed.
Prof. Andrew R. Timming
This article is published under a Creative Commons 4.0 License.