this post was submitted on 31 Oct 2024
65 points (98.5% liked)

technology

23375 readers
37 users here now

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 4 years ago
top 19 comments
[–] buh@hexbear.net 72 points 1 month ago (1 children)

When a compiler takes my human readable code and converts it into executable machine code, that’s AI

[–] Hexboare@hexbear.net 22 points 1 month ago

when the machine executes code, that's AI

[–] hexaflexagonbear@hexbear.net 51 points 1 month ago (1 children)
[–] GaveUp@hexbear.net 47 points 1 month ago* (last edited 1 month ago) (2 children)

I wouldn't be surprised if it's technically true but it's more like, coder starts writing out a line of code, AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code by character count that the AI autocompleted is 25% of all new code. Like same shit as Github's CoPilot that came out years ago, nothing special at all

All of that 25% of "AI generated" code was heavily initiated and carefully crafted by humans every single time to ensure it actually works

It's such a purposeful misrepresentation of labour (even though the coders themselves all want to automate away and exploit the rest of the working class too)

[–] Thordros@hexbear.net 40 points 1 month ago* (last edited 1 month ago) (1 children)

coder starts writing out a line of code, AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code by character count that the AI autocompleted is 25% of all new code.

When you dig past the clickbait articles and find out what he actually said, you're correct. He's jerking himself off about how good his company's internal autocomplete is.

[–] GaveUp@hexbear.net 22 points 1 month ago

I'm not going to read it but I bet it's nowhere near as good as he thinks it really is

I wouldn't be surprised if the statistics on "AI generated code" were like: I type 10 characters, I let the AI autocomplete the next 40 characters, but then I have to edit 20 of those characters, and the AI tool still counts all 40 as "AI generated" since that was what was accepted

Not to mention it's probably all trained on their own internal codebase with a set coding style guide, so it'd probably perform way worse for general coding if people weren't all trying to follow the exact same patterns, guidelines, and libraries
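A hypothetical sketch of the counting quirk GaveUp describes, using the 10/40/20 character numbers from the comment. The function names and formulas are invented for illustration; no vendor has published how they actually attribute characters:

```python
# Toy model: "typed" = characters the human typed, "accepted" = characters
# the AI autocompleted and the human accepted, "edited" = accepted
# characters the human later rewrote.

def naive_ai_share(typed: int, accepted: int, edited: int) -> float:
    """Count every accepted character as 'AI generated',
    even the ones the human rewrites afterwards."""
    return accepted / (typed + accepted)

def post_edit_ai_share(typed: int, accepted: int, edited: int) -> float:
    """Only count accepted characters that survive human editing."""
    return (accepted - edited) / (typed + accepted)

print(naive_ai_share(10, 40, 20))      # 0.8 -> reported as "80% AI"
print(post_edit_ai_share(10, 40, 20))  # 0.4 -> after human edits
```

Under the naive accounting, a session where the human ultimately authored more than half the final characters still gets reported as 80% AI-generated.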

[–] hexaflexagonbear@hexbear.net 13 points 1 month ago

I assume that's what it is as well. I'm guessing there's also a lot of boilerplate stuff and they're counting line counts inflated by pointless comments, and function comment templates that usually have to get fully rewritten.
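A toy illustration of the inflation hexaflexagonbear mentions: in a snippet padded with a boilerplate comment template (invented here for illustration), most of the line count carries no logic at all, yet a naive line-based metric counts all of it:

```python
# Count total lines vs. lines that actually carry logic in a
# comment-template-padded snippet.

snippet = '''\
# ------------------------------------------------------------------
# Function: add
# Purpose:  TODO describe purpose
# Args:     TODO describe args
# Returns:  TODO describe return value
# ------------------------------------------------------------------
def add(a, b):
    return a + b
'''

lines = snippet.splitlines()
logic = [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
print(len(lines), len(logic))  # 8 total lines, only 2 carry logic
```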

[–] alexandra_kollontai@hexbear.net 40 points 1 month ago (1 children)

He is probably just lying.

[–] SkingradGuard@hexbear.net 18 points 1 month ago

Yeah this is for investors

[–] dave 39 points 1 month ago

I write minified JavaScript and the AI pretty-prints it with 8-space indentation. That’s well over 25% by weight.

[–] adultswim_antifa@hexbear.net 36 points 1 month ago

Is that why search results are getting so much worse so fast and it tells you to eat rocks to stay healthy?

[–] KoboldKomrade@hexbear.net 21 points 1 month ago (1 children)

Lol, I'd bet 90% of that is the same quality you get when you measure productivity by lines written.

Another 9% is likely stolen.

The final 1% won't even compile, doesn't work right, or needs so much work you'd be better off redoing it.

The only useful results I've had with CS are asking for VERY basic programs that I then have to check the quality of. Besides that, I had ONE question that I knew would be answered in a textbook somewhere, but couldn't get a search hit for. (I think it was something about the most efficient way to insert or sort, something like that.)

Worked with it a bit at work and the output was so unreliable I gave up, took the best result it gave me, and hard-coded it so I could have something to show off. Left it as an "in the future..." thing, and last I heard it's still spinning in the weeds.

[–] AlbigensianGhoul@lemmygrad.ml 2 points 1 month ago

I often help beginners with their school programming assignments. They're often dumbfounded when I tell them "AI" is useless because they "asked it to implement quicksort and it worked perfectly".

The next batch of software engineers are going to have huge dependency problems.

[–] TheDoctor@hexbear.net 9 points 1 month ago

In what kind of workflow? Because if I start typing, my copilot generates 20 lines, and I edit those 20 lines down to 5 that will compile and bear little resemblance to what was generated, I feel like that should count as 0 AI lines, but I have a feeling it counts for more.

SCoC (source characters of code) is the only measurement worse than SLoC (source lines of code) that I can think of

[–] kittin@hexbear.net 8 points 1 month ago

When dependabot makes a pull request, that’s AI

[–] menixator@lemmy.dbzer0.com 7 points 1 month ago

Riiiight. And I bet he'd tell you that 25% of their servers were powered by cold fusion if it were the newest thing that got investors to throw bags of money at them.

[–] AlbigensianGhoul@lemmygrad.ml 2 points 1 month ago

We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.

When text editors automatically create templates for boilerplate, that's AI.

Source

[–] Antiwork@hexbear.net 1 point 1 month ago

Stfu and bring back cached websites.