• LionyxML@alien.top (OP) · 11 months ago

    Here’s another addition to my aspiring blog: https://www.rahuljuliato.com/posts/ellama

    This post outlines the process of setting up Ollama as a server and Ellama as an Emacs client. The result is a fully local AI companion that enhances your Emacs experience.

    Embracing the world of Emacs has been a delightful journey, and I can’t wait to share it with all of you!
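
    As a taste of what’s in the post, the Ellama side boils down to just a few lines. Here’s a minimal sketch, assuming a local Ollama server is already running with a pulled model (the model name “zephyr” is only an example):

    ```emacs-lisp
    ;; Minimal Ellama + Ollama wiring (setopt needs Emacs 29+).
    (use-package ellama
      :init
      (setopt ellama-language "English")
      (require 'llm-ollama)              ; Ollama provider from the llm package
      (setopt ellama-provider
              (make-llm-ollama
               :chat-model "zephyr"      ; any model you have pulled locally
               :embedding-model "zephyr")))
    ```

    The post itself walks through the Ollama server side as well.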

    • wonko7@alien.top · 11 months ago

      that’s so cool, thanks!

      I think I’m gonna need new hardware if I want to play with this…

      • LionyxML@alien.top (OP) · 11 months ago

        Nice!

        Ollama runs in CPU-only mode if you don’t have a GPU, and there are models that can run in as little as ~4 GB of RAM.

        Things won’t be fast, but for experimenting it’s enough :)
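
        For example, a small ~3B model such as orca-mini should fit in roughly 4 GB. After an `ollama pull orca-mini`, pointing Ellama at it is a two-line change (the model name is illustrative, not from the post):

        ```emacs-lisp
        ;; Hypothetical low-RAM variant: swap the provider to a small model.
        (setq ellama-provider
              (make-llm-ollama :chat-model "orca-mini"
                               :embedding-model "orca-mini"))
        ```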

  • walseb@alien.top · 11 months ago

    Is there any way to have it use company-mode so that it suggests ways to complete what you are typing? Just having it show the next most likely word would be great. That’s a feature I’d really like to try in Emacs, but I haven’t seen any packages that provide it.

    • s-kostyaev@alien.top · 11 months ago

      Not right now, and personally I don’t think it would be useful. But I can see how it could be done: the hardest part is collecting good context for the completion. Without that, it would just be a dumb T9.
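
      For the curious, here’s a toy sketch of the shape it might take: a synchronous company-mode backend that sends the text before point straight to Ollama’s REST API. Everything below is hypothetical and not part of Ellama:

      ```emacs-lisp
      ;; Toy sketch: a synchronous company-mode backend that asks a
      ;; local Ollama server to continue the text before point.
      (require 'cl-lib)
      (require 'subr-x)
      (require 'json)
      (require 'url)
      (require 'url-http)
      (require 'company)

      (defvar my/ollama-endpoint "http://localhost:11434/api/generate"
        "Ollama's default REST endpoint for one-shot generation.")

      (defvar my/ollama-model "zephyr"
        "Illustrative model name; use whatever you have pulled.")

      (defun my/ollama-continue (context)
        "Ask Ollama to continue CONTEXT by a few tokens; return the reply."
        (let ((url-request-method "POST")
              (url-request-extra-headers '(("Content-Type" . "application/json")))
              (url-request-data
               (encode-coding-string
                (json-encode `((model . ,my/ollama-model)
                               (prompt . ,context)
                               (stream . :json-false)
                               (options . ((num_predict . 8)))))
                'utf-8)))
          (with-current-buffer (url-retrieve-synchronously my/ollama-endpoint t t 10)
            (goto-char url-http-end-of-headers)   ; skip the HTTP headers
            (let ((reply (json-parse-buffer :object-type 'alist)))
              (kill-buffer)
              (string-trim (alist-get 'response reply ""))))))

      (defun my/company-ollama (command &optional arg &rest _ignored)
        "Toy `company-mode' backend offering one LLM-generated candidate."
        (interactive (list 'interactive))
        (cl-case command
          (interactive (company-begin-backend 'my/company-ollama))
          (prefix (company-grab-symbol))
          (candidates
           ;; The hard part mentioned above: choosing good context.
           ;; Naive version: the last 1000 characters before point.
           (let* ((context (buffer-substring-no-properties
                            (max (point-min) (- (point) 1000))
                            (point)))
                  (continuation (my/ollama-continue context)))
             (unless (string-empty-p continuation)
               (list (concat arg continuation)))))
          (no-cache t)))
      ```

      You’d enable it with (add-to-list 'company-backends #'my/company-ollama). A serious version would return candidates via company’s asynchronous protocol instead of blocking Emacs on every keystroke, and would collect context far more carefully than “the last 1000 characters”.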