There was a tool, I forget the name, which abstracts a wide variety of models behind an OpenAI-compatible _API_, so that they can be used in its place from other tooling.
What was that name?
It wasn't https://localai.io/, iirc, but it's one such tool.
ollama also has OpenAI support now
Andreas wrote:
ollama also has OpenAI support now
Maybe it was ollama. But do I misremember that it could potentially run _any_ model from Hugging Face?
ollama pulls in a lot of models from Hugging Face. You can take a look at their library here: https://ollama.com/library
Andreas wrote:
ollama pulls in a lot of models from Hugging Face. You can take a look at their library here: https://ollama.com/library
Hm, interesting. Why aren't they consumed from upstream? I'm just wondering and feel like I'm missing something.
consumed from upstream?
Which would be from where?
I mean: why do they host models from Hugging Face on their own site, if that makes sense as a question? Or if it doesn't, I'm maybe starting from the wrong premise.
Or is it just a matter of having a "registry" of compatible models?
that is a good question to which I have no exact answer right now :smiley: let me know if you find out.
However, there is something on the OpenAI compatible API I just found by accident: https://github.com/ollama/ollama/blob/main/docs/openai.md
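To make that concrete, here's a minimal sketch of what a request against such an OpenAI-compatible endpoint looks like. The endpoint path `http://localhost:11434/v1/chat/completions` is ollama's documented default, but the model name `llama3` is just a placeholder; substitute whatever you have pulled locally:

```python
import json

# Assumed default ollama endpoint; the /v1 prefix is the OpenAI-compatible surface.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

# The body follows the standard OpenAI chat-completions shape, which is exactly
# why any OpenAI-compatible client or editor plugin can talk to it unchanged.
payload = {
    "model": "llama3",  # hypothetical local model name
    "messages": [
        {"role": "user", "content": "Say hello in one word."},
    ],
}

body = json.dumps(payload)
print(body)

# To actually send it (requires a running ollama server):
# import urllib.request
# req = urllib.request.Request(
#     ENDPOINT, data=body.encode(), headers={"Content-Type": "application/json"}
# )
# print(urllib.request.urlopen(req).read().decode())
```

The point of the compatibility layer is that only `ENDPOINT` changes between providers; the payload stays the same.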
I dunno, but I know Perplexity's API is OpenAI-compatible, which is one of the reasons I decided to pull the trigger on a Pro subscription.
Here's a list: https://kleiber.me/blog/2024/01/07/six-ways-running-llm-locally/
But I can't recognize the thing I had in mind. Lost knowledge. :cry:
The problem is that this space moves so fast that it almost becomes irrelevant if you knew something existed three months ago
Yeah, that is true! I guess from a user perspective (in my case: editor support), it's important to look for the emerging standard API and choose tools wisely so that they aren't a one-way door. Ideally, your pick will be one you can grow with inside that ecosystem.
Yeah! Burn them out!
Where is that one from?
Oh, I don't remember, but it's emblematic of misaligned expectations in those ecosystems.
yes the development of the open source generative A.I. landscape is fast and chaotic right now
https://github.com/janhq/nitro
It wasn't that one, but it seems promising. Small, based on llama.cpp.
Even ROCm support seems to be on its way! :confetti: https://github.com/janhq/nitro/issues/323
I might try it. But it looks more or less like a lightweight ollama. However, the problem is that in order to run it, I need the 20+ GB Docker image from AMD for ROCm. So the ollama vs. nitro difference pales in comparison.
This also seems to be taken seriously, from a Rust perspective, even if it's shapeshifting quite a bit atm: https://github.com/rustformers/llm
Re GGUF, see also this choice of gpt4all:
GPT4All v2.5.0 and newer only supports models in GGUF format (.gguf). Models used with a previous version of GPT4All (.bin extension) will no longer work.
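For what it's worth, GGUF files are easy to tell apart from the older `.bin` format: they start with the ASCII magic bytes `GGUF`, followed by a little-endian version number. A small sketch (the helper name is my own invention):

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every .gguf file

def looks_like_gguf(path):
    """Return (is_gguf, version); version is None if the magic doesn't match."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != GGUF_MAGIC:
        return False, None
    # The four bytes after the magic hold the format version (little-endian uint32).
    (version,) = struct.unpack("<I", header[4:8])
    return True, version
```

Handy for checking whether an old model download will still load before pointing GPT4All at it.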
Hints are condensing into knowledge, it appears
Last updated: Nov 15 2024 at 11:45 UTC