106 sats \ 4 replies \ @Rothbardian_fanatic 17 Jan \ on: The strain on scientific publishing science
Community-based reviews are a good idea as long as you can get a diverse (in opinion and method) population of reviewers, not just paradigm followers. That is the danger of community review: more groupthink. I still think the better approach is simply putting out the details of the experiment and letting people interested in it try to replicate or falsify the results.
On the fifth point, I am not trusting AI at all. Not one bit. It is nothing but a statistical correlator of words. One word statistically following another is not intelligence, and as scientists like to say so often: correlation does not equal causation. I will still trust people doing the work and drawing the correlations more than machines that carry biases from their training.
On the first point, some kind of non-binding review by fellow experts might still be useful to guide the reader in what is considered correct or not, from a methodological point of view. But indeed, the risk of groupthink is always lurking.
I agree with your assessment of the fifth point. AI really hasn't convinced me enough yet to trust it blindly.
The real problem comes in when you follow the money. If the non-binding reviewers are paid shills for someone else, their reviews are just as likely to be bogus as a groupthink review. I think that the only valid review would be re-running the experiment under the same conditions with the same methods and then reporting the results.
AI has a very, very long way to go before I could consider trusting it. I do not like the bias in training, either.
I think that the only valid review would be re-running the experiment under the same conditions with the same methods and then reporting the results.
That's indeed the only valid review. That's how science is ultimately self-correcting. In theory.
The problem is that this process can take a lot of time. Some of the experiments in my field require such specialized and expensive equipment that only a few groups in the world have the expertise to carry them out. The same goes for theory and simulations: they sometimes require such specific math and codes that, again, only a few people have the skills to reproduce previous results.
And all of this comes back to the discussion of financial or reputation incentives. Why would I spend time repeating work that has already been done?
You would spend time repeating work that has already been done to validate or falsify it. As you say, it is the only valid review.
How many times have we seen falsified data, bogus results, and other scientific hijinks in published papers that were peer-reviewed but never replicated? I realize that sometimes you cannot replicate because of expense or lack of equipment and talent, but what else can you do?