• frog 🐸@beehaw.org
    5 months ago

    I also suspect, based on the accuracy of AIs we have seen so far, that their interpretation of the deceased’s personality would not be very accurate, and would likely hallucinate memories or facts about the person, or make them “say” things they never would have said when they were alive. At best it would be very Uncanny Valley, and at worst would be very, very upsetting for the bereaved person.

    • Zaktor@sopuli.xyz
      5 months ago

      This is a very patronizing view of people who all seem to be well informed about what this is and isn’t, and who have already acknowledged that they will put it aside if it scares them. No one is foisting this on the bereaved wife, and the husband has preemptively said it’s OK if she or her children never use it.

      This might fail in all the ways you think it will. That’s a very small dataset, so it’s likely either to be an overcomplicated recording or to need training data beyond what he personally said, but it’s not your place to tell her what’s best for her personal grieving process.

      • frog 🐸@beehaw.org
        5 months ago

        Given the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them into the “vulnerable” category, and the owner of this technology approached them while they were in this vulnerable state. So yes, I have concerns, and the fact that the owner is allegedly a friend of the family (which just means they were the first vulnerable couple he had easy access to, in order to experiment on) doesn’t change the fact that there are valid concerns about the exploitation of grief.

        With the way AI techbros have been behaving so far, I’m not willing to give any of them the benefit of the doubt about claims of wanting to help rather than make money - such as using a vulnerable couple to experiment on while making a “proof of concept” that can be used to sell this to other vulnerable people.

        • Zaktor@sopuli.xyz
          5 months ago

          So just more patronizing. It’s their life, you don’t know better than them how to live it, grief or no.

          • frog 🐸@beehaw.org
            5 months ago

            Nope, I’m just not giving the benefit of the doubt to the techbro who responded to a dying man’s farewell posts online with “hey, come use my untested AI tool!”

    • trev likes godzilla@beehaw.org
      5 months ago

      I have no doubts about that either, myself. Though even if such an abomination of a doppelganger were to exist, and it seems that these companies are hellbent on making it so, it would be worse for the reasons you described previously: prolonging and molesting the grieving process that human beings have evolved to go through. All in the name of a dollar. I apologize for being so bitter about this (this bitterness is not directed at you, frog), but this entire “AI” phenomenon fucking disgusts and repulses me so much I want to scream.

      • frog 🐸@beehaw.org
        5 months ago

        I absolutely, 100% agree with you. Nothing I have seen about the development of AI so far has suggested otherwise: the vast majority of its uses are grotesque, and the few edge cases where it is useful and helpful don’t outweigh the massive harm it’s doing.

    • intensely_human@lemm.ee
      5 months ago

      I think it would be the opposite of upsetting, but in an unhealthy way. I think it would snap them out of their grief into a place of strangeness, and they’d stop feeling their feelings.

      There is no cell of my gut that likes this idea.

      • frog 🐸@beehaw.org
        5 months ago

        Yeah, I think you could be right there, actually. My instinct from the start has been that it would prevent the grieving process from completing properly. There’s a thing called the gestalt cycle of experience: a normal, natural mechanism a person goes through with every new experience, whether good or bad. You need to complete the cycle for everything that happens in your life, reaching closure so that you’re ready for the next experience to begin (that’s the most basic explanation), and a lot of unhealthy behaviour patterns stem from a part of that cycle being interrupted. When it doesn’t complete properly, it creates patterns that influence everything that happens afterwards.

        Now I suppose, theoretically, there’s a possibility that being able to talk to an AI replication of a loved one might give someone a chance to say things they couldn’t say before the person died, which could aid in gaining closure… but we already have methods for that, like talking to a photo of them or to their grave, or writing them a letter. Because the AI creates the sense of the person still being “there”, it seems more likely to prevent closure, because that concrete ending is blurred.

        Also, your username seems really fitting for this conversation. :)