this post was submitted on 03 Feb 2024

Science

[–] PrinceWith999Enemies@lemmy.world 29 points 9 months ago (3 children)

I started noticing this trend about 15 years ago. There was this point where I suddenly started receiving solicitation spam from pay-to-publish Chinese journals. It was obvious they didn’t know who I was or what my work consisted of. It was very easy to jump to the conclusion that this was a huge push on the part of China to get their national pub counts boosted, and on the part of a large number of academics who were just looking to get their papers in print.

Whenever I see a pub in a journal I don’t know, and I’m interested enough to bother, I’ll check the impact factor (imperfect but established) and the other papers published by the author(s).

I think I’ve paid to publish all of my papers to make them open access - I’d always build that into my budgets. But this is on a whole other level. I always think of this when an outlet like the NYT compares Chinese and US science using publication counts.

There are brilliant Chinese scientists and research institutions, but there’s also a lot of gaming the system. We need a better quality metric for publications and papers.

[–] Deceptichum@kbin.social 23 points 9 months ago (2 children)

Even science is being enshittified.

[–] sik0fewl@kbin.social 11 points 9 months ago

Anywhere there's a buck to be made.

[–] moistclump@lemmy.world 1 points 9 months ago

Capitalism’s yellow brick road

[–] glomag@kbin.social 18 points 9 months ago (1 children)

The whole system is so messed up on multiple levels. You not only have to publish a result that is correct (true), but it also has to be positive (supporting your hypothesis) and sufficiently "important" to your field, or else your whole career is at risk.

I'm posting this while running an experiment at 11pm on a Saturday night trying to collect data for a grant application. Of course I'm going to lose if I'm competing against people who just make shit up.

[–] Endward23@futurology.today 1 points 9 months ago

The whole system is so messed up on multiple levels. You not only have to publish a result that is correct (true), but it also has to be positive (supporting your hypothesis) and sufficiently "important" to your field, or else your whole career is at risk.

The publication and replication crises exist for a reason.

In my opinion, the flaws of the current system are well documented and even understood to a degree. The actual problem is coming up with a new system. That system has to be objective and fair, and must measure the quality of scientists' work.

[–] Endward23@futurology.today 2 points 9 months ago

It's not a perfect metric, but one that allows us to make a quantitative comparison.

[–] CmdrS@kbin.social 10 points 9 months ago (1 children)
[–] Kissaki@feddit.de 9 points 9 months ago

48 min long

The text source linked in the video description: https://laskowskilab.faculty.ucdavis.edu/2020/01/29/retractions/

Knowing that our data were no longer trustworthy was a very difficult decision to reach, but it’s critical that we can stand behind the results of all our papers. I no longer stand behind the results of these three papers.

There have been some questions about why I (and others) didn’t catch these problems in the data sooner. This is a valid question. I teach a stats course (on mixed modeling) and even I harp on my students about how so many problems can be avoided by some decent data exploration. So let me be clear: I did data exploration. I even followed Alain Zuur’s “A protocol for data exploration to avoid common statistical problems“. I looked through the raw data, looking for obvious input errors and missing values. […]

Altogether, I was left with the conclusion that there was good variation in the data, no obvious outliers or weird groupings, and an excess of 600 values which was expected due to the study design. As a scientist, I know that I have a responsibility to ensure the integrity of our papers which is something I take very seriously, leading me to be all the more embarrassed (& furious) that my standard good practices failed to detect the problematic patterns in the data. Multiple folks have since looked at these data and came to the same conclusion that until you know what to look for, the patterns are not obvious.

[–] ebits21@lemmy.ca 8 points 9 months ago* (last edited 9 months ago) (1 children)

The Freakonomics podcast covered this topic pretty nicely just recently. Would recommend a listen! It’s not just international or low-impact journals that are having issues.

I feel like zero trust research could be a thing in the future in some areas.

So, for example, the study would be pre-registered with the expected outcome, as is starting to be done more often now. But in addition, a third party holds a private encryption key, and the experiment's data is encrypted during collection with the corresponding public key.

Obviously very much depends on the type of study, but data is very often collected with collection software of some sort that could implement this.

The scientists could not snoop on the data even if they wanted to: the public key can encrypt data, but only the private key can unlock it.

Then, once the data is uploaded to the third party, they can unlock it with their private key, and the data is made public before any analysis.

Seems to me that this would force science to be done the way it ought to be done!
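
A minimal sketch of that scheme, assuming PyNaCl (Python bindings for libsodium) and its SealedBox construction; the key names and record format here are invented for illustration, and a real protocol would still need signing, key management, and a trusted registry on top:

```python
# Sketch: lab encrypts measurements at collection time with the third
# party's public key; only the third party can ever decrypt them.
from nacl.public import PrivateKey, SealedBox

# The third party (e.g. a pre-registration registry) generates the
# keypair and publishes only the public key to the lab.
registry_key = PrivateKey.generate()
public_key = registry_key.public_key

# The lab's collection software seals each measurement as it is
# recorded. SealedBox encrypts with the public key alone, so the lab
# cannot decrypt its own ciphertexts afterwards.
sealed = SealedBox(public_key).encrypt(b"subject=17,latency_s=423")

# Only the registry, holding the private key, can open the data,
# after which it can be published ahead of any analysis.
plaintext = SealedBox(registry_key).decrypt(sealed)
assert plaintext == b"subject=17,latency_s=423"
```
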

[–] bananabenana@lemmy.world 3 points 9 months ago* (last edited 9 months ago) (1 children)

Totally unnecessary, and not how science works.

If you make data public before analysis, labs will get scooped with their own data. No one would invest in data collection.

Often things are found or worked out during the process, which can change week to week or month to month, iteratively. Experiments don't go to plan, data is cooked and can only be used in reduced ways, etc. Researchers are meant to share their raw data anyway, which should prevent this sort of thing. Basic statistical analysis on datasets usually reveals tampering.

The issue is the insane academic standards and funding bodies (public grant $) which reward high-volume and high-'impact' work. These incentives need re-evaluation, and people should not be punished for years of low activity. Sometimes science and discovery just don't work the way you think they will, and that's okay. We need a system that acknowledges what everyone in science already knows.

[–] ebits21@lemmy.ca 2 points 9 months ago* (last edited 9 months ago)

All it would do is create an audit trail of your data to keep scientists honest. You can still iterate and change course, but now you're responsible for the record: whenever you look at the data, its state at that point is recorded as-is, and a log keeps track of when you checked it (a rough sketch of such a log follows). Why did you change course, and when? Was that appropriate? The data is verified when and if you decide to review it.
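
One simple way to get that tamper-evident record is a hash chain, where each log entry commits to the previous one. A minimal sketch; the event names and record fields are invented for illustration, not any standard:

```python
# Sketch: an append-only, hash-chained audit log over a dataset.
# Editing or deleting any past entry breaks every later hash.
import hashlib, json, time

def append_event(log, event, data_bytes):
    """Record an access/change event, chained to the previous entry."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "time": time.time(),
        "event": event,  # e.g. "viewed", "appended"
        "data_hash": hashlib.sha256(data_bytes).hexdigest(),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Re-derive each hash; any edit to past entries breaks the chain."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True

log = append_event([], "viewed", b"raw measurements as of 2024-02-03")
assert verify(log)
```
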

How science is done has a problem, just suggesting a solution. I know that’s not how it’s done.

All the data is a matter of record. It makes sure the raw data is ACTUALLY the raw data without bias. It makes sure you’re not ignoring negative results (a huge issue). Statistical detection of cheating will never be as good as reviewing the raw data and changes over time.

As for scooping, it's a matter of record now: there would be data available showing that they scooped you, whereas currently there's nothing. The data doesn't have to be public until the study is published.

I think the main barrier would be scientists themselves and the incentives inherent in the system (career, money, prestige) that create the cheating in the first place.

[–] Blaze@discuss.tchncs.de 7 points 9 months ago

Thank you for sharing. This is concerning indeed.

[–] autotldr@lemmings.world 6 points 9 months ago (1 children)

This is the best summary I could come up with:


Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned.

The practice has since spread to India, Iran, Russia, former Soviet Union states and eastern Europe, with paper mills supplying fabricated studies to more and more journals as increasing numbers of young scientists try to boost their careers by claiming false research experience.

The products of paper mills often look like regular articles but are based on templates in which names of genes or diseases are slotted in at random among fictitious tables and figures.

Others are more bizarre and include research unrelated to a journal’s field, making it clear that no peer review has taken place in relation to that article.

The spokesperson added that Wiley had now identified hundreds of fraudsters present in its portfolio of journals, as well as those who had held guest editorial roles.

“We have removed them from our systems and will continue to take a proactive … approach in our efforts to clean up the scholarly record, strengthen our integrity processes and contribute to cross-industry solutions.”


The original article contains 957 words, the summary contains 186 words. Saved 81%. I'm a bot and I'm open source!

[–] Endward23@futurology.today 0 points 9 months ago
[–] Comradesexual@lemmygrad.ml 1 points 9 months ago

I'm a hobby research reader. This is sad to see. So much research is locked behind paywalls. What if the research that isn't locked away is fake, spreading misinformation with nothing to compare it against? Sad.

[–] Endward23@futurology.today -1 points 9 months ago

The impression that a few disreputable journals publish the bulk of the fake papers is not supported by research, from what I read a while ago. Some scientometricians have checked and found that even many of the "small journals" use regular peer review. It's actually a small minority of journals that take payment to publish "science spam".