this post was submitted on 29 Aug 2023
155 points (100.0% liked)
Technology
you are viewing a single comment's thread
You know that an LLM is a statistical word prediction thing, no? That LLMs "hallucinate", and that this is an inevitable consequence of how they work. They're designed to take in a context and then sound human, or sound formal, or sound like an excellent programmer, or sound like a lawyer, but there's no particular reason why the content they present to you would be accurate. It's just that their training data contains an awful lot of accurate data with a surprisingly large overlap in meaning.
You say that the current crop of LLMs is good at Wikipedia-style questions, but that's because their authors have trained them on some of the most reliable and easy-to-verify information on the Web, and a lot of that is Wikipedia-style stuff. That's its core knowledge, what it grew up reading, the yardstick by which it was judged. And yet it still goes off on inaccurate tangents, because there's nothing inherently accurate about statistically predicting the next word from your training and the context and content of the prompt.
Yes, LLMs sound like they understand your prompt and are very knowledgeable, but the output is fundamentally not a fact-based thing, it's a synthesized thing, engineered to sound like its training data.
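The "statistical word prediction" point can be made concrete with a toy sketch. To be clear, this is nothing like a real transformer internally (those learn from billions of words, not bigram counts), but it shows the principle: the next word comes from statistics of the training text, not from any notion of truth.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: these counts ARE the model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in training.

    Note there's no concept of correctness here, only frequency.
    """
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" most often in the corpus
```

If the corpus contained mostly accurate text, the predictions will often be accurate too; if it didn't, they won't, and the model has no way to tell the difference.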
You do not query the LLM directly. The LLM just provides the baseline language understanding: you use it to extract information out of websites and convert it into a machine-readable format. You can do that with ChatGPT today.
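A minimal sketch of what that extraction step might look like. The product text, the prompt wording, and the reply string are all hypothetical here — the reply is hand-written to stand in for what a model might return (in practice you'd get it from an API call to ChatGPT or similar); the point is just the pattern of prompting for JSON and parsing it into a machine-readable structure.

```python
import json

# Hypothetical raw product-page text (in practice, scraped HTML stripped to text).
page_text = "SuperJet 3000 inkjet printer. Price: $129.99. Prints 20 ppm, duplex."

# Prompt instructing the model to answer in machine-readable JSON only.
prompt = (
    "Extract the product name, price in USD, and features from the text below. "
    "Reply with JSON only, using keys: name, price_usd, features.\n\n" + page_text
)

def extract_product(llm_reply: str) -> dict:
    """Parse the model's JSON reply into a dict, checking the expected keys."""
    data = json.loads(llm_reply)
    for key in ("name", "price_usd", "features"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

# Illustrative reply a model might produce for the prompt above:
reply = '{"name": "SuperJet 3000", "price_usd": 129.99, "features": ["20 ppm", "duplex"]}'
product = extract_product(reply)
```

Once the answer is a plain dict, it can go into a database, a comparison table, whatever — the LLM is just the messy-text-to-structured-data layer.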
That's the power of LLMs. They aren't a better Google; they are a way to interface with the semantic information stored in human-readable text (or pictures, or sound). With that extracted information you can go and build a better Google, or just let the LLM browse the web and search for information relevant to you.
Well, sounds like you're well on your way to hand-rolling your own product comparison tool that's Powered By AI™. You could make a popular price comparison site that initially filters out all that cruft and just gives you simple, clear, easy-to-read information about products.
Version 2 could have handy links to the cheapest websites.
Once it gets super popular you could offer retailers the chance to ensure their products and prices are correct. Perhaps a nice easy AI-powered upload where you dump the info in whatever format you like, check it's understood, and go live.
You could later offer retailers the chance to host a storefront with you. Or maybe allow, initially, just one or two very tasteful, clearly marked-as-advertisement links for strictly AI-sanctioned relevant upselling: you know, offer the warranty with the product, or the printer with the fancier ink, alongside the ones that exactly matched the criteria.
Once your engagement with retailers is strong, and they know they'll be missing out on a lot of custom, you can start maximising your income from them.
Or, wait, didn't this whole cycle already repeat itself many times over, with many websites and many corporations?
Enshittification is real, and it's already AI-powered. We don't know exactly why the thing in front of us when we're online is the thing most likely to keep us scrolling, clicking, purchasing, and maximising profits, but it's reasonable to assume that on a lot of successful websites, some sort of AI system chose it for exactly those purposes.
It's nice that you feel AI will get us away from the power of the multinational corporations, but I think it's vastly more likely that the AI we use will fall under their control and they will be twenty steps ahead of us. They were the ones who popularised it in the first place!
(Personally, I tend to use some reviewing sites that I trust and, in particular for phones, a spec aggregator, so I can filter out the five-year-old products that Amazon is offering me.)