"Some game developers are turning to artificial intelligence to make the creative process faster and easier—and cheaper, too. At Google Cloud Next in San Francisco, startup Hiber announced the integration of Google’s generative AI technology in its Hiber3D development platform, which aims to simplify the process of creating in-game content.

Hiber said the goal of adding AI is to help creators build more expansive online worlds, which are often referred to as metaverse platforms. Hiber3D is the tech that powers the company’s own HiberWorld virtual platform, which it claims already contains over 5 million user-created worlds using its no-code-needed platform.

By typing in prompts via its new generative AI tool, Hiber CEO Michael Yngfors says creators can employ natural language to tell the Hiber3D generator what kind of worlds they want to create, and can even generate worlds based on their mood or to match the vibe of a film. […]"

Once this is refined, it could be very neat! It’s only environments right now, not characters and whatnot, but maybe eventually we’d be able to dynamically generate some anthro-populated worlds to explore.

  • KoboldCoterie@pawb.socialOP

    Yeah, I see it more as having potential as a foundation to build on for other applications. Most AI-generated content started out kind of shitty, but it’s got neat implications for what could be possible in the near future.

    • zaplachi@lemmy.ca

      I hope you’re right, but VR worlds still seem far off to me. I can see it working if either synthetic data turns out to be better than expected, or they have the funding to create the training data manually.

      To my knowledge, there aren’t large free repositories of VR worlds like there are for text and images, so I expect progress to be a lot slower. Still cool tech nonetheless; I wouldn’t have thought it to be possible before reading this article.

      • KoboldCoterie@pawb.socialOP

        There doesn’t necessarily need to be a repository of virtual worlds to train a model on; I don’t know exactly how this works, but it could be generating the data as something other than a 3D environment, then running it through algorithms to convert it to a 3D environment after the fact. We already have software that can convert static images to 3D models, so this isn’t even far-fetched.
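
        Just to illustrate the kind of two-stage pipeline I mean, here’s a minimal sketch; every name in it is a made-up placeholder (a fake heightmap generator standing in for whatever model actually does this), not anything from Hiber3D or Google:

        ```python
        # Hypothetical sketch: generate intermediate (non-3D) data from a text
        # prompt, then convert it into 3D geometry after the fact.
        # All names here are placeholders, not a real Hiber3D or Google API.
        from dataclasses import dataclass

        @dataclass
        class Heightmap:
            """Stand-in intermediate data: a 2D grid of terrain heights."""
            values: list  # rows of floats

        @dataclass
        class Mesh:
            """Stand-in 3D output: just vertex positions, for illustration."""
            vertices: list  # (x, y, z) tuples

        def prompt_to_heightmap(prompt: str, size: int = 4) -> Heightmap:
            # Placeholder for a generative model; here we just derive
            # deterministic fake heights from the prompt text.
            seed = sum(ord(c) for c in prompt)
            rows = [[((seed + x * y) % 10) / 10.0 for x in range(size)]
                    for y in range(size)]
            return Heightmap(values=rows)

        def heightmap_to_mesh(hm: Heightmap) -> Mesh:
            # The "after the fact" conversion of the 2D data into 3D points.
            verts = [(x, hm.values[y][x], y)
                     for y in range(len(hm.values))
                     for x in range(len(hm.values[y]))]
            return Mesh(vertices=verts)

        def prompt_to_world(prompt: str) -> Mesh:
            # Stage 1: text -> intermediate representation (not a 3D scene yet).
            hm = prompt_to_heightmap(prompt)
            # Stage 2: intermediate representation -> 3D geometry.
            return heightmap_to_mesh(hm)

        if __name__ == "__main__":
            world = prompt_to_world("a foggy forest village at dusk")
            print(len(world.vertices), "vertices generated")
        ```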

        I agree with you that the progress will likely be slow, but just the fact that it’s being actively done in any capacity is encouraging - attention being paid to it is the first step in improving it!