• CeeBee@lemmy.world · 9 months ago

    It’s getting there. In the next few years as hardware gets better and models get more efficient we’ll be able to run these systems entirely locally.

    I’m already doing it, but I have some higher-end hardware.

      • CeeBee@lemmy.world · 9 months ago

        Stable Diffusion’s SDXL Turbo model running in Automatic1111 for image generation.
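        If you want to drive it from a script instead of the browser, Automatic1111 exposes a local REST API when launched with the --api flag. A minimal sketch, assuming a default local install on port 7860 (the prompt and sampling settings are just placeholder values):

        ```python
        # Minimal sketch: txt2img through Automatic1111's REST API.
        # Assumes the webui was started with --api on the default port.
        import base64
        import requests

        payload = {
            "prompt": "a watercolor fox in a snowy forest",  # placeholder prompt
            "steps": 4,        # SDXL Turbo is distilled for very few steps
            "cfg_scale": 1.0,  # Turbo models expect little to no guidance
            "width": 512,
            "height": 512,
        }

        resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
        resp.raise_for_status()

        # Images come back as base64-encoded strings
        with open("output.png", "wb") as f:
            f.write(base64.b64decode(resp.json()["images"][0]))
        ```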

        Ollama with Ollama-webui for an LLM. I like the Solar:7b model. It’s lightweight, fast, and gives really good results.
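        Ollama also answers plain HTTP on its default port, so once a model is pulled (e.g. with `ollama pull solar`) you can query it without the webui at all. A minimal sketch, assuming a stock local install; the prompt is just a placeholder:

        ```python
        # Minimal sketch: one-shot generation against a local Ollama server.
        # Assumes Ollama is running on its default port and the model is pulled.
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "solar",                # the model tag mentioned above
                "prompt": "Explain RAID 5 in two sentences.",  # placeholder
                "stream": False,                 # single JSON reply, not a stream
            },
        )
        resp.raise_for_status()
        print(resp.json()["response"])
        ```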

        I run it on some beefy hardware, but that isn’t strictly necessary.

      • Ookami38@sh.itjust.works · 9 months ago

        Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation, though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it slowly on a 1060 until I upgraded), it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.
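        If you’d rather skip the webui entirely, Hugging Face’s diffusers library can run the same kind of model from a few lines of Python. A minimal sketch, assuming diffusers and torch are installed and you have a CUDA GPU with enough VRAM; the checkpoint is the SDXL Turbo model mentioned upthread:

        ```python
        # Minimal sketch: local text-to-image with diffusers instead of the webui.
        # Assumes `pip install diffusers transformers accelerate torch` and a CUDA GPU.
        import torch
        from diffusers import AutoPipelineForText2Image

        pipe = AutoPipelineForText2Image.from_pretrained(
            "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
        )
        pipe.to("cuda")

        # Turbo is distilled for single-step generation with no guidance
        image = pipe(
            prompt="a watercolor fox in a snowy forest",  # placeholder prompt
            num_inference_steps=1,
            guidance_scale=0.0,
        ).images[0]
        image.save("fox.png")
        ```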