There are two types of academics: the realists and the idealists. The realists recognize that the games we play to get our manuscripts published are just that: games. The idealists pretend to be above those games. Full disclosure: I am in the realist camp.
Let’s start with what we all agree on. The academic publishing system is broken. “Good” research often gets left behind, destined for the file drawer, while comparatively “bad” research can easily make it into our top journals. I don’t know about you, but when I look at the latest articles in the top journals in my field, I wonder how they were ever accepted. Yes, they are technically and econometrically impressive, but most add almost no new value to what we already know.
The key problem boils down to how we define “good” and “bad” research. This is the million-dollar question, so to speak. Obviously we, as individuals, cannot define good and bad. If we could, we would consider virtually everything we write to be good, even when it isn’t. Having discarded an individual definition of good and bad, we need to turn to “system” definitions.
What do I mean by “system” definitions? I’m talking about journal rankings. Impact factors. Citations. Journal acceptance and rejection rates. You’ll notice that all of these “system” artifacts have one thing in common: they are numerical and therefore rank-orderable.
Now, the idealists will say that they reject these metrics outright. They will point out that all of these metrics are flawed to some degree or another. They will argue that only the “impact” of the research matters. They will say that journal rankings are an artificial construct with no intrinsic meaning. I am, I confess, somewhat sympathetic to the idealists. Yes, journal metrics are imperfect and artificial, and, yes, research should matter. The problem I have is that the idealists will excoriate realists for our instrumentalism, yet at the same time submit their papers to Nature, Science, or whatever the top journal is in their field. In other words, they claim to be above the system, but they play the game just like the rest of us.
As I said above, I am a realist. Perhaps I used to be an idealist; it’s likely that age is positively correlated with realist thinking. Now that I have a few grey hairs, I’ve accepted the fact that journal metrics, rightly or wrongly, are a part of our system. I don’t particularly like them. I don’t particularly agree with them. But I use them to make decisions on where to submit, just like the idealists. The only difference is that I admit it.
No measurement is perfect. We know this as scientists. We spend huge amounts of time developing valid and reliable measures, but we never get them exactly right. Social scientists struggle to measure social variables, and even physicists can struggle to measure theirs precisely. That’s simply a reflection of the complex world in which we live.
My advice to younger researchers is to abandon idealism. Yes, be forever critical of the “system” performance metrics. Yes, do research that matters. Yes, don’t be afraid to submit occasionally to “low”-ranked journals, because, as specialist outlets, they are sometimes the best places for your research. But, at the same time, I urge you to use metrics and journal lists to guide your submission decisions. Whether you’re a realist or an idealist, you’ll be playing the game no matter what.
Professor Andrew R. Timming
This article is published under a Creative Commons 4.0 License.