this post was submitted on 17 Jun 2023
212 points (100.0% liked)

Gaming


According to its website, publications owned by GAMURS Group include:

Destructoid

The Escapist

Siliconera

Twinfinite

Dot Esports

Upcomer

Gamepur

Prima Games

PC Invasion

Attack of the Fanboy

Touch, Tap, Play

Pro Game Guides

Gamer Journalist

Operation Sports

GameSkinny

[–] Grimace@beehaw.org 8 points 1 year ago (1 children)

I just don't see ChatGPT being capable enough quite yet. These articles are going to be low quality, written in the same voice, and filled with factual errors, not to mention released at a volume that nobody will bother to keep up with. Seems like self-destruction on their part.

[–] parrot-party@kbin.social 3 points 1 year ago (2 children)

An AI writer is always going to be trash. AI can't experience anything; it can only remix preexisting content, so its output will always be a regurgitation of what others have posted. And if we keep cutting humans out, it'll eventually just be the same empty content on repeat.
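The "content on repeat" point can be illustrated with a toy simulation (a hypothetical sketch, not a real language model): a "writer" that can only resample what already exists loses diversity every time it feeds on its own output, because each generation drops some of the original articles and duplicates others.

```python
import random

random.seed(0)

# Toy stand-in for a model that can only remix existing content:
# start with 1000 distinct human-written articles.
corpus = [f"article_{i}" for i in range(1000)]

for generation in range(10):
    # Each generation "writes" a new corpus purely by resampling
    # the previous one (training on AI output, no new human input).
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(f"generation {generation}: {len(set(corpus))} distinct articles left")
```

The number of distinct articles shrinks every generation; after a few rounds most of the original variety is gone, even though the total volume of "content" stays the same.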

[–] Wiredfire@kayb.ee 6 points 1 year ago

Of course it’ll be trash. Quality isn’t the goal, just bulk: maybe fewer views per article, but pumping out so many that it adds up to more views, or rather ad impressions, overall at a much lower cost.

Problem is, it’s shortsighted. Once those sources get a reputation for trash quality, folk will learn not to bother clicking through to them.

[–] Kaldo@kbin.social 5 points 1 year ago

There's also always a chance that it's simply going to be wrong; ML can't differentiate what's true from what isn't. We already see it make easy mistakes that people wouldn't, and it's going to be even worse when these models get used for something more nuanced or complex.