5gruel

joined 1 year ago
[–] 5gruel@lemmy.world 8 points 6 days ago (2 children)

Have you even read the article?

[–] 5gruel@lemmy.world 3 points 1 week ago

Tankie or not, I giggled.

[–] 5gruel@lemmy.world 2 points 2 weeks ago (1 child)

When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

[–] 5gruel@lemmy.world 7 points 1 month ago

It's opt-in though?

[–] 5gruel@lemmy.world 2 points 2 months ago

The AI hate on Lemmy never fails to amaze me

[–] 5gruel@lemmy.world 0 points 2 months ago

If only more people understood this, we would already have world peace.

[–] 5gruel@lemmy.world -5 points 2 months ago (3 children)

The world is getting better in the long run by almost every metric; don't get sucked into the doomer mentality.

Still a fucked-up ruling, though.

[–] 5gruel@lemmy.world 7 points 2 months ago (3 children)

Oh you sweet summer child

[–] 5gruel@lemmy.world 0 points 2 months ago (1 child)

Why? That would only be the case if the original works already were the pinnacle of text quality and information density, which is quite a stretch.

[–] 5gruel@lemmy.world 1 point 3 months ago

You keep posting that, but it's wrong. Setting aside that disabling the installation of unsigned extensions is not censorship, you can install signed extensions from a file in every version of Firefox, not only Developer Edition.

Stupid artificial outrage

[–] 5gruel@lemmy.world 5 points 4 months ago

Weird responses here so far. I'll try to actually answer the question.

I've been using Copilot at work for 9 months now, and it's crazy how much it accelerates writing code. I write Class C code in C++ and Rust, and it has become a staple tool like auto-formatting. That said, it can't really handle more abstract stuff like architecture decisions.

Just try it for some time and see if it fits your use case. I'm hoping the local code models will catch up soon so I can get away from Microsoft, but until then, Copilot it is.

[–] 5gruel@lemmy.world 2 points 4 months ago (1 child)

I'm not convinced by the claim that "a human can say 'that's a little outside my area of expertise,' but an LLM cannot." I'm sure the training data contains plenty of examples of qualified answers and expressed uncertainty, so why would the model be unable to generate that kind of output? I don't see why that specifically would require "understanding." I suspect that better human reinforcement would make such answers possible.
