Example generators made with this plugin:

See the plugin page for more. There will probably be issues/bugs! Thank you in advance to the pioneers who test this and report them in these first few days/weeks 🫡

(It was actually possible to discover this plugin a few days ago, but no one made it through all the clues lol ^^ some people did at least figure out the first step)

  • VioneT@lemmy.worldM · 1 year ago

    Just made ai-text-recipes and this template for testing.

    I assume that if the AI is generating multiple paragraphs, those paragraphs are ‘chunks’? Also, can we just use the onChunk() function instead of render(), since both are applied on each chunk?

    • perchance@lemmy.worldOPM · 1 year ago

      Nice! Thank you for playing around with it.

      The chunks are basically words, or small groups of words, but they can be larger than that. E.g. the first chunk is your startWith text if you specified that, and then each subsequent chunk is generally a little piece of text - the same pieces that are being appended to the output element several times per second.
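
      Roughly speaking, the stream might look like this (the chunk boundaries below are invented for illustration - the plugin/model decides the actual ones):

      ```javascript
      // Illustrative only - the actual chunk boundaries are not guaranteed:
      // chunk 1: "Once upon a time"   <- your startWith text, if you specified one
      // chunk 2: " there was a"
      // chunk 3: " quiet village"
      // ...each new chunk is appended to the output element several times per second.
      ```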

      The render function is specifically for transforming the output into some different form. Whatever you return from that function is what gets displayed - like in this example where we ask the AI for asterisks around actions (since that would be easy for it to generate) but then “render” that text so that the asterisked parts are italicized via HTML. Getting the AI itself to generate HTML is okay, but it has been trained mostly on text, rather than HTML, so it’s probably better to get it to use a “syntax” that it’s more accustomed to, and then we handle the transformation to HTML ourselves with render.
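
      As a rough sketch of that pattern (render is the option discussed here, but the generator call name, the shape of the data argument, and the regex are just assumptions for illustration):

      ```javascript
      // Hypothetical sketch: ask the AI to wrap actions in *asterisks*, then
      // convert them to <i> tags ourselves before the text is displayed.
      // `render` is the plugin option discussed above; `aiTextGen`, `data`,
      // and `fullTextSoFar` are placeholder names for this illustration.
      aiTextGen({
        instruction: "Write a short scene. Wrap character actions in *asterisks*.",
        render: (data) => {
          // Whatever we return here is what actually gets displayed.
          return data.fullTextSoFar.replace(/\*(.+?)\*/g, "<i>$1</i>");
        },
      });
      ```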

      onChunk doesn’t have any effect on the display of the output unless you specifically write some code to do that. It just allows you to run whatever custom code you want every time a new chunk is received.

      But yeah, you can definitely just use onChunk if you want to manage the “rendering” yourself (e.g. onChunk: (data) => outputEl.innerHTML = data.fullTextSoFar.replace(...)), or if you don’t want to change what is displayed but instead want to do something else for every chunk.
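
      For example, a minimal sketch of that “manage the rendering yourself” approach (onChunk and data.fullTextSoFar are the names used above; the generator call name and outputEl are placeholders):

      ```javascript
      // Hypothetical sketch: take over rendering entirely via onChunk.
      // `onChunk` and `data.fullTextSoFar` are the names used in this thread;
      // `aiTextGen` and `outputEl` are placeholders for illustration.
      const outputEl = document.getElementById("output");

      aiTextGen({
        instruction: "Write a short scene. Wrap character actions in *asterisks*.",
        onChunk: (data) => {
          // Runs every time a new chunk arrives; we decide what to do with it.
          outputEl.innerHTML = data.fullTextSoFar.replace(/\*(.+?)\*/g, "<i>$1</i>");
        },
      });
      ```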

      Thanks for the question! I’ve just updated the plugin page with some details on most of the options that are currently available.

      • VioneT@lemmy.worldM · 1 year ago

        The list on the plugin page is really helpful! Thanks again for the explanation!

    • perchance@lemmy.worldOPM · 1 year ago

      Can’t wait to see what you create with this one! Your text-to-image-plugin creations (esp. realistic portraits) are amazing. Let me know if there are any extra prompt options that would make certain common use cases easier (akin to hideStartWith - which I guessed would be something people would ask for, but it was just a guess).

      • VioneT@lemmy.worldM · 1 year ago

        Thanks! Just started testing it, and I’m hitting some hiccups - network failure errors keep happening.

        • perchance@lemmy.worldOPM · 1 year ago

          Whoops! Thanks. Should be fixed now. Please keep me updated with any other issues you run into.