this post was submitted on 09 Aug 2023
23 points (100.0% liked)

LocalLLaMA


Community for discussing LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Stability AI released three new 3B models for coding:

  • stablecode-instruct-alpha-3b (context length 4k)
  • stablecode-completion-alpha-3b-4k (context length 4k)
  • stablecode-completion-alpha-3b (context length 16k)

I haven't tried any of them yet, since I'm waiting for the GGML files to be supported by llama.cpp, but the 16k model in particular seems interesting. If anyone wants to share their experience with it, I'd be happy to hear it!
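For anyone who wants to try them before GGML support lands, here's a minimal sketch of loading the 16k completion model with Hugging Face transformers (the model ID matches the Stability AI release; the prompt and generation settings are just my own guesses):

```python
# Minimal sketch: loading stablecode-completion-alpha-3b (16k context) with
# Hugging Face transformers. Generation settings are assumptions, not
# recommendations from the release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablecode-completion-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision where possible
    device_map="auto",   # requires the `accelerate` package
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```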

top 1 comment
[–] noneabove1182@sh.itjust.works 4 points 1 year ago

I've managed to get it running in koboldcpp; I had to add --forceversion 405 because it wasn't being detected properly. Even with q5_1 I was getting an impressive 15 T/s, and the code actually seemed decent. This might be a really good candidate for fine-tuning on large datasets and passing massive contexts: basically entire small repos, or at least several full files.
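In case it helps anyone reproduce this, a minimal sketch of launching koboldcpp with that flag from Python (the GGML filename is hypothetical, and the context size and port are assumptions):

```python
# Minimal sketch: launching koboldcpp with the --forceversion workaround from
# the comment above. The model filename is hypothetical.
import subprocess

subprocess.run([
    "python", "koboldcpp.py",
    "stablecode-completion-alpha-3b.q5_1.bin",  # hypothetical filename
    "--forceversion", "405",   # force the GGML format version when autodetection fails
    "--contextsize", "16384",  # assumption: match the 16k variant
    "--port", "5001",          # koboldcpp's default port
])
```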

It's odd they chose NeoX as their architecture; I think only CTranslate2 can offload those? I had trouble getting the GPTQ version running in AutoGPTQ... maybe the Hugging Face TGI would work better.
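If anyone wants to try the CTranslate2 route, here's a minimal sketch (the converter command and generator API are from the CTranslate2 docs; the output directory name and quantization choice are my own assumptions):

```python
# Minimal sketch: running the model through CTranslate2, assuming it was first
# converted with:
#   ct2-transformers-converter --model stabilityai/stablecode-completion-alpha-3b \
#       --output_dir stablecode-ct2 --quantization int8
import ctranslate2
import transformers

generator = ctranslate2.Generator("stablecode-ct2", device="cuda")  # "cpu" if no GPU
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b"
)

tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("def quicksort(arr):"))
results = generator.generate_batch([tokens], max_length=64)
print(tokenizer.decode(results[0].sequences_ids[0]))
```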