this post was submitted on 05 Feb 2025
202 points (82.0% liked)

Technology

top 50 comments
[–] zipzoopaboop@lemmynsfw.com 5 points 1 hour ago

I asked Gemini if the Quest has an SD card slot. It doesn't, but Gemini said it did. Checking the source, it was pulling info from the Vive user manual.

[–] rumba@lemmy.zip 6 points 5 hours ago

Yeah, and you know, I always hated this: screwdrivers make really bad hammers.

[–] gerryflap@feddit.nl 17 points 8 hours ago* (last edited 8 hours ago) (1 children)

These models don't see single characters but rather tokens representing multiple characters. While I also don't like the "AI" hype, this image is very one-dimensional hate and misrepresents the usefulness of these models by picking one adversarial example.
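The token point can be shown with a toy example. The vocabulary and the split below are hypothetical, purely for illustration, but the mechanism is the same: once a word is mapped to token IDs, the individual characters inside each token are invisible to the model.

```python
# Toy illustration: a model receives token IDs, not characters.
# This two-entry vocabulary and the resulting split are invented for the example.
toy_vocab = {"straw": 301, "berry": 412}

def toy_tokenize(word):
    # Greedy longest-match over our tiny vocabulary.
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in toy_vocab:
                tokens.append(toy_vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError("out of vocabulary")
    return tokens

print(toy_tokenize("strawberry"))  # [301, 412] -- the three r's never reach the model
```

Real tokenizers use vocabularies of tens of thousands of subwords, but the consequence is the same: "count the r's" asks about a level of representation the model never directly observes.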

Today ChatGPT saved me a fuckton of time by linking me to the exact issue on gitlab that discussed the issue I was having (full system freezes using Bottles installed with flatpak on Arch). This was the URL it came up with after explaining the problem and giving it the first error I found in dmesg: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/110

This issue is one day old. When I looked this shit up myself I found exactly nothing useful on either DDG or Google. After this, ChatGPT also provided me with the information that the LTS kernel exists and how to install it. Obviously I verified that stuff before using it, because these LLMs have their limits. Now my system works again, and figuring this out myself would've cost me hours because I had no idea what broke. Was it flatpak, Nvidia, the kernel, Wayland, Bottles, some random shit I changed in a config file 2 years ago? Well, thanks to ChatGPT, I know.

They're tools, and they can provide new insights that can be very useful. Just don't expect them to always tell the truth, or to actually be human-like

Just don't expect them to always tell the truth, or to actually be human-like

I think the point of the post is to call out exactly that: people preaching AI as replacing humans

[–] eggymachus@sh.itjust.works 24 points 12 hours ago

A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: 'Talking Dog For Sale.'

He rings the bell and the owner appears and tells him the dog is in the backyard.

The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.

"You talk?" he asks.

"Yep," the Lab replies.

After the guy recovers from the shock of hearing a dog talk, he says, "So, what's your story?"

The Lab looks up and says, "Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping. I was one of their most valuable spies for eight years running... but the jetting around really tired me out, and I knew I wasn't getting any younger, so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I'm just retired."

The guy is amazed. He goes back in and asks the owner what he wants for the dog.

"Ten dollars," the guy says.

"Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?"

"Because he's a liar. He's never been out of the yard."

[–] Grandwolf319@sh.itjust.works 32 points 13 hours ago* (last edited 13 hours ago) (1 children)

There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.

In that world, tech savvy people would embrace this tech instead of having to constantly educate people that it is in fact not intelligence.

[–] Static_Rocket@lemmy.world 2 points 4 hours ago

That was this reality, very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last 2 years?

[–] whotookkarl@lemmy.world 40 points 14 hours ago (3 children)

I've already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I showed them how it can sometimes lie and explained that it's not a primary source for anything, I just got that blank stare like I have two heads.

[–] schnurrito@discuss.tchncs.de 8 points 11 hours ago

Me too. More than once on a language learning subreddit for my first language: "I asked ChatGPT whether this was correct grammar in German, it said no, but I read this counterexample", then everyone correctly responded "why the fuck are you asking ChatGPT about this".

[–] autonomoususer@lemmy.world 2 points 7 hours ago

Skill issue

[–] VintageGenious@sh.itjust.works 59 points 17 hours ago (43 children)

Because you're using it wrong. It's good for generative text and chains of thought, not symbolic calculations such as math or linguistics.

[–] Grandwolf319@sh.itjust.works 14 points 14 hours ago

Because you're using it wrong.

No, I think you mean to say it’s because you’re using it for the wrong use case.

Well, this tool has been marketed as if it would handle such use cases.

I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

I personally think image recognition is the best use case as it pretty much does what it promises.

[–] Prandom_returns@lemm.ee 0 points 6 hours ago

So for something you can't objectively evaluate? Looking at Apple's garbage generator, LLMs aren't even good at summarising.

[–] whynot_1@lemmy.world 34 points 17 hours ago (2 children)

I think I have seen this exact post word for word fifty times in the last year.

[–] pulsewidth@lemmy.world 1 points 29 seconds ago

And apparently, they still can't get an accurate result with such a basic query.

And yet... https://futurism.com/openai-signs-deal-us-government-nuclear-weapon-security

[–] clay_pidgin@sh.itjust.works 16 points 16 hours ago (6 children)

Has the number of "r"s changed over that time?

[–] Tgo_up@lemm.ee 16 points 16 hours ago (2 children)

This is a bad example. If I ask a friend "is strawberry spelled with one or two r's?" they would think I'm asking about the last part of the word.

The question seems to be specifically made to trip up LLMs. I've never heard anyone ask how many of a certain letter are in a word. I've heard people ask how you spell a word, and whether it's with one or two of a specific letter, though.
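For contrast, counting letters the deterministic way is a one-line string operation, which is part of why the question works so well as a gotcha: the "hard" version of the task is trivial for ordinary code.

```python
# Deterministic letter counting: no prediction, no ambiguity.
word = "strawberry"
print(word.count("r"))  # 3
```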

If you think of LLMs as something with actual intelligence you're going to be very unimpressed. It's just a model to predict the next word.

[–] renegadespork@lemmy.jelliefrontier.net 25 points 15 hours ago (3 children)

If you think of LLMs as something with actual intelligence you're going to be very unimpressed. It's just a model to predict the next word.

This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.
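The "predict the next word" idea can be sketched with a toy bigram model (the corpus below is invented for illustration). It picks the statistically most frequent continuation, with no reasoning involved at any step, which is the core of the objection above even though real LLMs are vastly larger.

```python
from collections import Counter, defaultdict

# Tiny invented corpus for the sketch.
corpus = "the dog barks the dog sleeps the cat sleeps".split()

# Count which word follows which.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(word):
    # No reasoning: just return the most frequent observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # dog ("dog" followed "the" twice, "cat" once)
```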

[–] Grandwolf319@sh.itjust.works 4 points 13 hours ago (3 children)

If you think of LLMs as something with actual intelligence you're going to be very unimpressed

Artificial sugar is still sugar.

Artificial intelligence implies there is intelligence in some shape or form.

[–] Scubus@sh.itjust.works 1 points 54 minutes ago

That's because it wasn't originally called AI. It was called an LLM. Tech bros trying to sell it, and articles wanting to fan the flames, started calling it AI, and eventually that became common parlance. No one in the field seriously calls it AI; they generally reserve that term for general AI, or at least narrow AI, of which an LLM is neither.

[–] corsicanguppy@lemmy.ca 3 points 11 hours ago (1 children)

Artificial sugar is still sugar.

Because it contains sucrose, fructose or glucose? Because it metabolises the same and matches the glycemic index of sugar?

Because those are all wrong. What are your criteria?

[–] Grandwolf319@sh.itjust.works 1 points 7 hours ago

In this example, a sugar is anything that is sweet.

Another example is artificial flavours still being a flavour.

Or like artificial light being in fact light.

[–] JohnEdwa@sopuli.xyz 3 points 12 hours ago* (last edited 12 hours ago)

Something that pretends to be, or looks like, intelligence but actually isn't is a perfectly valid interpretation of the word artificial: fake intelligence.

[–] FourPacketsOfPeanuts@lemmy.world 19 points 17 hours ago (1 children)

It's predictive text on speed. The LLMs currently in vogue hardly qualify as AI, tbh.

[–] TeamAssimilation@infosec.pub 9 points 16 hours ago

Still, it’s kinda insane that two years ago we didn’t imagine we would be instructing programs like “be helpful but avoid sensitive topics”.

That was definitely a big step in AI.

[–] dan1101@lemm.ee 13 points 17 hours ago (1 children)

It's like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.
