Let’s just put a bright line down right now. 2016 is year 1. Everything published before 2016 is provisional. Don’t take publication as meaning much of anything, and just because a paper has been cited approvingly, that’s not enough either. You have to read each paper on its own. Anything published in 2015 or earlier is part of the “too big to fail” era: it’s potentially a junk bond supported by toxic loans, and you shouldn’t rely on it.
And a recent article from the same excellent blog, on trolls: https://putanumonit.com/2018/08/22/player-of-games/
This may be true for a lot of published science, not just psychology.
Maybe. I know in my field, the protocol is that you must satisfy the committee that you can measure what you think you can measure using a "burn" sample before you can process the full dataset. And there are "unblinding" protocols as well, where you have to show that the _other_ quantities associated with the measurement make sense. Big experiments, big teams--it would be harder to hide cheating.
That's not a solution for everybody, of course, but having teams that cross-check each other's research plans might be useful. Or they might just scratch each other's backs.
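For concreteness, here is a minimal sketch of the burn-sample and unblinding idea described above. The dataset, expected spread, and thresholds are all made-up stand-ins, not any particular experiment's actual protocol.

```python
import numpy as np

# Hypothetical burn-sample / blind-analysis workflow; all numbers are
# illustrative stand-ins for whatever a real experiment would use.

rng = np.random.default_rng(seed=1)
full_dataset = rng.normal(loc=1.0, scale=0.5, size=100_000)  # stand-in for real events

# 1. Reserve a small "burn" sample; the rest stays blinded for now.
n_burn = int(0.1 * len(full_dataset))
shuffled = rng.permutation(full_dataset)
burn_sample, blinded_sample = shuffled[:n_burn], shuffled[n_burn:]

def measure(sample):
    """The measurement under test: here just a mean with a naive uncertainty."""
    return sample.mean(), sample.std(ddof=1) / np.sqrt(len(sample))

# 2. Demonstrate on the burn sample that the procedure measures what it
#    claims to, before the full dataset is ever touched.
burn_value, burn_error = measure(burn_sample)
print(f"burn sample: {burn_value:.3f} +/- {burn_error:.3f}")

# 3. Unblinding check: an auxiliary quantity (here the spread) has to match
#    expectation before the headline number is revealed.
if abs(blinded_sample.std(ddof=1) - 0.5) < 0.05:
    value, error = measure(blinded_sample)
    print(f"unblinded result: {value:.3f} +/- {error:.3f}")
else:
    print("auxiliary check failed; do not unblind")
```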
The problems I'm thinking about have been in medicine, where a number of widely-cited studies couldn't be replicated later.
Here's a summary:
https://en.wikipedia.org/wiki/Replication_crisis
Two rules of thumb mentioned in my undergrad and grad stat classes (because they're that widespread): one is that if you're looking for something subtle in a lot of noise, run up your n until you achieve significance (a quick simulation below shows how badly that inflates false positives). The other, somewhat antithetical to the first, is that if your hypothesis wants a linear relationship, collect two data points. If you need a curve, collect a third.
The professors seemed to be tongue in cheek.
Eric Hines
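The first rule of thumb is easy to poke at with a small simulation: draw from a null (zero-effect) distribution, test after every new batch of observations, and stop as soon as p < 0.05. The parameters below are arbitrary, illustrative choices.

```python
import numpy as np
from scipy import stats

# Rough simulation of "run up your n until you achieve significance."
# All parameters are arbitrary choices for illustration.

rng = np.random.default_rng(seed=0)
n_experiments, max_n, batch, alpha = 2000, 500, 10, 0.05

false_positives = 0
for _ in range(n_experiments):
    data = []
    while len(data) < max_n:
        data.extend(rng.normal(0.0, 1.0, size=batch))  # the true effect is zero
        _, p_value = stats.ttest_1samp(data, popmean=0.0)
        if p_value < alpha:  # "significant"--stop and publish
            false_positives += 1
            break

print(f"false-positive rate with optional stopping: {false_positives / n_experiments:.2f}")
# A single fixed-n test would sit near 0.05; peeking after every batch
# pushes the rate well above that.
```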
It is encouraging that most people can learn to spot it with minimal training.