this post was submitted on 10 Aug 2023
3 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Pretty soon, paying for all the APIs you need to make sure your Midjourney images are palatable will be enough to pay a human artist!

self@awful.systems 1 point 1 year ago

it goes without saying that none of the hackermen point out that if it were as simple as applying a CV model to the generated output, the company generating the output would already be doing it (and I’ve given up on the idea that a hacker news poster could begin to analyze why that is). however:

A meta question for y'all if you're willing to share. I've seen y'all launch what I think is now 5 different ideas under the same name? I want to say that I remember a platform for musicians, and a platform to automatically convert a codebase into hostable inference servers, among a few other things.

this feels like a con to me, or at least an indicator that these folks have no idea what they want to do. a post with that kind of tone would most likely get flagged dead by the mods, though, so the orange site remains a target-rich environment if you’re looking for folk who might invest in your scam/bad idea

200fifty@awful.systems 1 point 1 year ago

there actually is a comment making this point now:

Isn't this product kind of impossible? Like a compression program that compresses compressed files? If you have an algorithm for determining whether a generated image is good or bad couldn't the same logic be incorporated into the network so that it doesn't generate bad images?

the reply is a work of art:

We’re optimistic about using our own algorithms and models to evaluate another model. In theoretical computer science, it is easier to verify a correct solution than to generate a correct solution (P vs NP problem).

it's not even wrong, as they say

self@awful.systems 1 point 1 year ago

In theoretical computer science, it is easier to verify a correct solution than to generate a correct solution (P vs NP problem).

wh-what? I — there’s just so much wrong. is this the maximum information density of wrongness? the shortest string that encodes the most incorrect information? have these absolute motherfuckers just invented and then solved inverse Kolmogorov complexity?
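[For reference, and not part of the original thread: the kernel of truth the startup is mangling can be sketched. In NP problems like SAT, verifying a candidate solution is polynomial-time while finding one is believed to be exponentially harder — the opposite direction of support from what they need, and in any case "is this image palatable?" has no formal verifier at all. A minimal illustration, with a made-up toy formula:]

```python
# Toy sketch of the verify-vs-generate asymmetry behind "P vs NP" talk,
# using SAT. The formula below is an arbitrary example for illustration.
from itertools import product

# A CNF formula: list of clauses; each literal is (variable_index, is_positive).
FORMULA = [[(0, True), (1, False)],   # (x0 OR NOT x1)
           [(1, True), (2, True)],    # (x1 OR x2)
           [(0, False), (2, False)]]  # (NOT x0 OR NOT x2)

def verify(assignment):
    """Polynomial-time check: does the assignment satisfy every clause?"""
    return all(any(assignment[v] == pos for v, pos in clause)
               for clause in FORMULA)

def generate():
    """Brute-force search: exponential in the number of variables."""
    for bits in product([False, True], repeat=3):
        if verify(bits):
            return bits
    return None

solution = generate()
print(solution, verify(solution))  # a satisfying assignment, and True
```

Note the catch: this asymmetry only exists because SAT comes with a cheap, exact `verify` function. An aesthetic judgment about a generated image has no such verifier, which is why invoking P vs NP here explains nothing.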

froztbyte@awful.systems 1 point 1 year ago (last edited 1 year ago)

interestingly, the replies have the same kind of tone you often see in cryptographic kookery, so that's another strong warning signal

dgerard@awful.systems 1 point 1 year ago

that sounds like an oversimplification that was further oversimplified by at least two editors

gerikson@awful.systems 1 point 1 year ago

They used an LLM.