this post was submitted on 12 Oct 2024
183 points (95.5% liked)

Selfhosted


Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
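If you'd rather see roughly what the script is wiring together, here's a minimal hand-rolled sketch using Docker. The image names, ports, model tag, and the OLLAMA_BASE_URL variable are the two projects' publicly documented defaults, not taken from the Self-GPT script itself, so treat it as illustrative:

```sh
# Minimal sketch of an Ollama + Open WebUI setup (assumes Docker is installed;
# image names, ports, and OLLAMA_BASE_URL are the projects' documented defaults,
# not values pulled from the Self-GPT script).

# 1. Start the Ollama backend and expose its API on port 11434.
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# 2. Pull a model sized to your hardware (smaller models run fine on CPU).
docker exec ollama ollama pull llama3.2:3b

# 3. Start the Open WebUI front end, pointed at the Ollama API.
docker run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main

# The chat UI is then available at http://localhost:3000.
```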
[–] Player2@lemm.ee 10 points 1 month ago (3 children)

Wish I could accelerate these models with an Intel Arc card; unfortunately, Ollama seems to only support Nvidia.

[–] Deckweiss@lemmy.world 18 points 1 month ago* (last edited 1 month ago) (1 children)

They support AMD as well.

https://ollama.com/blog/amd-preview

also check out this thread:

https://github.com/ollama/ollama/issues/1590

Seems like you can run llama.cpp directly on Intel Arc through Vulkan, but there are still some hurdles for Ollama.
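For reference, the Vulkan route looks roughly like this. The build flag and binary names have shifted between llama.cpp releases (older trees use LLAMA_VULKAN and a `main` binary), and the model filename below is just a placeholder, so check the repo's docs for your checkout:

```sh
# Rough sketch: building llama.cpp with the Vulkan backend for an Intel Arc GPU.
# Flag/binary names have changed between releases; the GGUF filename is a placeholder.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run a prompt against a local GGUF model, offloading all layers to the GPU.
./build/bin/llama-cli -m ./models/your-model.Q4_K_M.gguf -ngl 99 -p "Hello"
```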

[–] Player2@lemm.ee 3 points 1 month ago

Interesting, I see that is pretty new. Some of the documentation must be out of date because it definitely said Nvidia only somewhere when I tested it about a month ago. Thanks for giving me hope!

[–] possiblylinux127@lemmy.zip 3 points 1 month ago

And AMD

You should be able to get llama.cpp to run on Arc but I'm not sure what performance you will get. It may not be worth it.
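If you want to check whether it's worth it on your card, llama.cpp ships a benchmark tool alongside the main binary; a quick sketch (the model path is again a placeholder):

```sh
# Measure tokens/sec with llama.cpp's bundled benchmark tool (built with the
# Vulkan build above; the model path is a placeholder for whatever GGUF you have).
./build/bin/llama-bench -m ./models/your-model.Q4_K_M.gguf -ngl 99

# Re-run with -ngl 0 (no GPU offload) and compare the numbers to see whether
# the Arc card is actually helping.
```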