boem@lemmy.world to Technology@lemmy.world · English · 11 months ago
People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality (arstechnica.com)
156 comments · cross-posted to: [email protected], [email protected], [email protected]
kamenLady.@lemmy.world · 11 months ago
NotMyOldRedditName@lemmy.world · edited · 11 months ago
Gonna look into that - thanks
Check this out
https://github.com/oobabooga/text-generation-webui
It has a one-click installer and can use llama.cpp.
From there you can download models and try things out.
If you don’t have a really good graphics card, maybe start with 7B models. Then you can try 13B and compare performance and results.
Llama.cpp will split the load between the CPU and as much GPU as you have available, controlled by the number of model layers you offload to the GPU (which you can set with a slider in the web UI).
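If you’d rather skip the web UI, the same layer-offload idea works when running llama.cpp directly. A rough sketch (the model filename is a placeholder, not from this thread; `--n-gpu-layers` is llama.cpp’s flag for how many layers to push onto the GPU):

```shell
# Run llama.cpp's CLI, offloading 20 of the model's layers to the GPU;
# the rest stay on the CPU. Raise the number until you run out of VRAM.
# The .gguf path is an example -- point it at whichever model you downloaded.
./main -m models/llama-2-7b.Q4_K_M.gguf --n-gpu-layers 20 -p "Hello"
```

With 0 layers offloaded it runs entirely on CPU, so you can experiment even without a good graphics card, just slower.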