ylai@lemmy.ml to Free Open-Source Artificial Intelligence@lemmy.world · English · 1 year ago
Meet Mistral 7B, Mistral’s first LLM that beats Llama 2 (dataconomy.com)
cross-posted to: [email protected]
VicFic!@iusearchlinux.fyi · 1 year ago
It’s a good model, but it still requires 24 GB of VRAM. I’m waiting until something like llama.cpp is made for this.
ylai@lemmy.ml (OP) · edited · 1 year ago
Not true. See — or actually nothing to be seen here, since “it just works”:
https://github.com/ggerganov/llama.cpp/discussions/3368
https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF
And here is someone describing how to do the quantization yourself:
https://advanced-stack.com/resources/running-inference-using-mistral-ai-first-released-model-with-llama-cpp.html
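For anyone wanting to see what “it just works” means concretely, here is a minimal sketch using the llama-cpp-python bindings — an assumption on my part, since the thread only mentions llama.cpp itself — pointed at one of the quantized GGUF files from the TheBloke repos linked above:

```python
# Minimal sketch (assumption: llama-cpp-python bindings installed via
# `pip install llama-cpp-python`; model file downloaded from the
# TheBloke/Mistral-7B-Instruct-v0.1-GGUF repo linked above).
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # ~4 GB at 4-bit, far below 24 GB
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # 0 = CPU-only; raise to offload layers to a GPU
)

# Mistral's instruct models use the [INST] ... [/INST] prompt format.
output = llm(
    "[INST] Why does quantization reduce VRAM requirements? [/INST]",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

The point of the 4-bit quantization is exactly the VRAM objection above: the weights shrink to roughly 4 GB, so the model runs on modest GPUs or even CPU-only.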
Ooh, thanks. 🤗
Mechanize@feddit.it · 1 year ago
AFAIK Mistral does already work in llama.cpp, or am I misunderstanding something? I’ve yet to try it.