this post was submitted on 01 Aug 2024
2233 points (99.0% liked)

Technology

[–] markon@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (1 children)

I've used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models. They look much more intelligent as you, the user, become better. Obviously if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9 AM" it will be a grave disappointment.

If you first design a ball-fondling/scratching robot, use multiple instances of LLMs to help you plan it out, and so on, then you may be impressed.

I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers: they are digital and not like us. You can't make assumptions the way you can with humans. Even with humans, doing that usually just gets you something you didn't want because you weren't clear enough. We are terrible at giving instructions, and that's something I hope AI will help us learn to do better, because ultimately bad instructions or incomplete information don't let you determine anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it'll do all the work to embody itself in a robot, buy a bike, and ride it. But wait, you don't even know it did, because you never told it to record the ride...

Very few of us are good at giving computers clear instructions, and only some of the time. I've also found that simply forcing models to reason in context is powerful. You have to know to tell it to "use a drill-down, tree-style approach to problem solving; use reflection and discussion to explore and find the optimal solution to the problem." It might still give you bad results; that is why you have to experiment. It's a lot of fun if you really let your thoughts run wild, and right now it takes a lot of creative thinking to get the most out of these models. They should all be 110% open source and free for everyone. BTW, Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who's involved over there I wouldn't say I trust them, especially since they're pushing for regulatory capture.
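The "reasoning scaffold" prompt above can be wired up as a tiny helper. This is a minimal sketch, assuming the common chat-message format shared by OpenAI-compatible and Ollama-style APIs; `build_messages` and `REASONING_SCAFFOLD` are names made up for this example, not part of any library.

```python
# Hedged sketch: prepend a reasoning-scaffold system prompt to a user
# question, producing the message list most chat-completion APIs accept.

REASONING_SCAFFOLD = (
    "Use a drill-down, tree-style approach to problem solving. "
    "Use reflection and discussion to explore and find the optimal "
    "solution before giving a final answer."
)

def build_messages(question: str) -> list[dict]:
    """Return a chat message list with the scaffold prepended as the system turn."""
    return [
        {"role": "system", "content": REASONING_SCAFFOLD},
        {"role": "user", "content": question},
    ]

msgs = build_messages("Why does my shell script exit with code 127?")
print(msgs[0]["role"])  # system
print(len(msgs))        # 2
```

You would then pass `msgs` to whatever endpoint you're running (a local Llama 3.1 via Ollama, a hosted model, etc.); the scaffold travels with every request instead of being retyped.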

[–] markon@lemmy.world 0 points 4 months ago

Asking the chat models to hold a self-discussion and use/simulate metacognition really seems to help. Play around with it. Often I'm deep in a chat and I learn from its mistakes, and it kind of learns from my mistakes and feedback. It's all about working with the model, not against it, because at this point LLMs are just feed-forward neural networks trained on supercomputer clusters. We don't really know what they're fully capable of, because it's so hard to quantify, especially when you don't know exactly what has been learned.
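The self-discussion loop described above can be sketched as a critique-and-revise cycle. This is an assumed wiring, not any library's API: `refine` and `ask_model` are hypothetical names, and `ask_model` stands in for whatever chat endpoint you actually call.

```python
# Hedged sketch of a self-discussion loop: the model's previous answer is
# fed back to it for critique and revision. `ask_model` is a placeholder
# callable taking a message list and returning a reply string.

def refine(question, ask_model, rounds=2):
    """Ask once, then run `rounds` critique-and-revise passes."""
    answer = ask_model([{"role": "user", "content": question}])
    for _ in range(rounds):
        critique_prompt = (
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Critique that answer, point out any mistakes, "
            "then give an improved final answer."
        )
        answer = ask_model([{"role": "user", "content": critique_prompt}])
    return answer

# Wiring demo with a dummy model that always returns the same string:
final = refine("2+2?", lambda msgs: "4 (checked)", rounds=1)
print(final)  # 4 (checked)
```

With a real model behind `ask_model`, each pass gives it a chance to catch its own mistakes in context, which is the "learning from its mistakes mid-chat" effect the comment describes.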

Q-learning in language is also an interesting methodology I've been playing with. With an image generator, for example, you can just add "(Q-learning quality)" to the prompt and you may get more interesting, higher-quality results, which is itself very interesting to me.