The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

  • shagie@programming.dev
    link
    fedilink
    arrow-up
    15
    ·
    1 year ago

    Web of trust is the solution. Show me vote totals that only count people I trust, 90% of people they trust, 81% of people they trust, etc. (0.9 multiplier should be configurable if possible!)
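    Roughly, in Python (the graph, names, and votes here are all made up; `decay` is the configurable 0.9 multiplier):

```python
# Sketch of decayed transitive trust: direct trust counts at 0.9,
# second-hop at 0.81, and so on. Breadth-first walk of a hypothetical
# trust graph; keeps the highest weight found per user.

def trust_weights(graph, me, decay=0.9):
    weights = {me: 1.0}
    frontier = [me]
    while frontier:
        nxt = []
        for user in frontier:
            for trusted in graph.get(user, []):
                w = weights[user] * decay
                if w > weights.get(trusted, 0.0):
                    weights[trusted] = w
                    nxt.append(trusted)
        frontier = nxt
    return weights

# Hypothetical data: who I trust, and who voted on some object.
graph = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
}
votes = {"alice": +1, "carol": +1, "dave": -1, "stranger": +1}

w = trust_weights(graph, "me")
# Untrusted voters get weight 0, so "stranger" contributes nothing.
score = sum(w.get(u, 0.0) * v for u, v in votes.items())
```

    The raw total here would be +2, but the trusted score is 0.9 + 0.81 − 0.729 ≈ 0.98, and the stranger's vote simply doesn't count.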

    If this was implemented on the server, that implies a significant amount of information about said web of trust to be stored by the server admins. Furthermore, it would imply that that trust web is also federated out as if you’re a first tier trust of mine, for me (or a server calculating on my behalf) to evaluate the value of the likes and dislikes it would need your web of trust too… and transitively out.

    If this was implemented on the client, that means effectively revealing the origins of all the likes and dislikes on an object. Aside from the “this can be a lot of data to send over the wire whenever someone looks at an active post” it also means that you wouldn’t need to be a server admin to see that data.

    Either way, with ActivityPub as the underlying protocol, this approach would entail privacy violations and opportunities for bullying; anything Threads could do (a concern raised in other discussions) would pale in comparison to what existing bad actors could do with it.

    P.S. Don’t trust me because I trust JohnDoe and he works for marketing at BigCo and might be persuaded to list some of his paid clients highly.

    • interdimensionalmeme@lemmy.ml
      link
      fedilink
      arrow-up
      4
      ·
      1 year ago

      The client must compute everything from the raw data. Every individual moderation action (vote, block, subscribe) would be public by default, with stealth as an opt-in.

      Only user-led moderation has a future: it all has to be transparent, public, client-side, optional, and consensual.

    • sparr@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      1 year ago

      It could be implemented on both the server and the client, with the client trusting the server most of the time and spot checking occasionally to keep the server honest.

      The origins of upvotes and downvotes are already revealed on objects on Lemmy and most other fediverse platforms. However, this is not an absolute requirement; there are cryptographic solutions that allow verifying vote aggregation without identifying vote origins, but they are computationally expensive.

        • interdimensionalmeme@lemmy.ml
          link
          fedilink
          arrow-up
          7
          ·
          1 year ago

          It’s nothing. You don’t recompute everything on each page refresh. Your client slurps up the data, computes reputation totals over time, and discards old raw data when your local cache is full.

          Historical daily data gets packaged, compressed, and cross signed by multiple high reputation entities.

          When there are doubts about a user’s history, your client drills down into those historical packages and reconstitutes the history to recalculate their reputation.

          Whenever a client does that work, they publish the result and sign it with their private keys and that becomes a web of trust data point for the entire network.

          Only clients and the network matter, servers are just untrustworthy temporary caches.
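          A hedged sketch of what “publish the result and sign it” could look like. A real client would use an asymmetric signature (e.g. Ed25519) so anyone could verify it against the publisher's public key; the stdlib HMAC here is only a stand-in so the sketch stays self-contained, and every name and value is made up:

```python
import hashlib
import hmac
import json

# Stand-in for an asymmetric keypair; a real client would sign with a
# private key and publish the public key. Values are illustrative only.
SECRET = b"local-demo-key"

def attest(user, reputation, as_of):
    """Package a computed reputation result as a signed attestation."""
    body = json.dumps(
        {"user": user, "reputation": reputation, "as_of": as_of},
        sort_keys=True,
    )
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(att):
    """Check that the attestation body hasn't been tampered with."""
    expected = hmac.new(SECRET, att["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])
```

          The point is just that the signed result is a portable data point: any other client can check it without trusting the server that relayed it.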

        • Opafi@feddit.de
          link
          fedilink
          arrow-up
          2
          ·
          1 year ago

          Any solution that only works because the platform is small and that doesn’t scale is a bad solution though.

    • sugar_in_your_tea@sh.itjust.works
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      That sounds a bit hyperbolic.

      You can externalize the web of trust with a decentralized system, and then just link it to accounts at whatever service you’re using. You could use a browser extension, for example, that shows you whether you trust a commenter or poster.

      That list wouldn’t get federated out; it could live in its own ecosystem and update your local instance so that it provides a separate list of votes for people in your web of trust. So only your admin (which could be you!) would know who you trust, and it would send two sets of vote totals to your client (or maybe three if you wanted to know how many votes it got from your instance alone).

      So no, I don’t think it needs to be invasive at all.

      • shagie@programming.dev
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        The single layer web of trust on the server wouldn’t be terribly difficult.

        A single layer web of trust on a client would mean that the client is getting sufficient information about all the votes to be able to weight them. This means that instead of receiving “+4 −1”, the client would receive “shagie liked the object, JohnDoe liked the object, BadGuy liked the object, SomeoneElse liked it, and YetAnotherPerson disliked it.” That implies revealing a lot more information to a client than many would be comfortable with.
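        To make the difference between the two payloads concrete (every name and trust value below is made up):

```python
# Aggregated payload: all the client can do is display it.
aggregated = {"up": 4, "down": 1}

# Per-voter payload: enough to weight votes, but it reveals every voter.
per_voter = [
    ("shagie", +1), ("JohnDoe", +1), ("BadGuy", +1),
    ("SomeoneElse", +1), ("YetAnotherPerson", -1),
]

# Hypothetical local web of trust; everyone else weighs in at 0.
trust = {"shagie": 1.0, "JohnDoe": 0.9}

weighted = sum(trust.get(user, 0.0) * vote for user, vote in per_voter)
# Raw score would be +3; the trusted score is 1.0 + 0.9 = 1.9.
```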

        Granted, all of that is available if you federate with a system and poke around in the database. It’s there. But this would make it really easy to get at.

        A transitive web of trust implies not only that you’re getting those votes and considering that “shagie liked the object,” but also that the facts that you trust me, and that I in turn trust JohnDoe, are available to whatever is making that vote weighting calculation.

        And while that single layer on the server isn’t too eyebrow raising, getting the transitive listing gets into the Facebook level of social graph building - but for all to see. I’m not sure that people would be comfortable with that degree of nakedness of personal information.

        Consider also the data payload sizes. This post (rather mundane and not viral) has 243 comments. Some of them have over a hundred votes. How big a payload do you want to send to the vote weigher (and back)?
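        A quick back-of-envelope using those figures (the average votes per comment and bytes per vote record are assumptions, not measurements):

```python
# Back-of-envelope payload estimate for shipping raw per-voter data.
comments = 243          # from the post above
avg_votes = 50          # assumed average votes per comment
bytes_per_vote = 60     # assumed: actor URL plus vote direction

payload = comments * avg_votes * bytes_per_vote
# 729,000 bytes, i.e. roughly 0.7 MB of raw vote data per page load,
# before any transitive trust information is included at all.
```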

        Consider the load for… say… https://lemm.ee/post/843533

        And for bad actors, all they have to do is cast a couple hundred votes on each comment (until they’re defederated and the database cleaned up by the admin) to DDoS the vote weigher.

        • sugar_in_your_tea@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          My point is you can have a mixed system. For example:

          • server stores list of “special interest” users (followed users, WoT, mods, etc)
          • server stores who voted for what (already does)
          • client updates the server’s list of “special interest” users with WoT data
          • when retrieving metadata about a post, you’d get:
            • total votes
            • votes from “special interest” users
            • total votes from your instance

          That’s not a ton of data, and the “special interest” users wouldn’t need to be synchronized to any other instance. The client would store the WoT data and update the server as needed (this way the server doesn’t need any transitive logic, the client handles it).
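          A sketch of what that per-post response might look like (all field names and counts are hypothetical):

```python
# Hypothetical vote metadata under the mixed scheme above. The server
# resolves "special interest" membership; the client only ever sees
# three small counters, never the raw voter list.
post_votes = {
    "total": {"up": 153, "down": 12},
    "special_interest": {"up": 9, "down": 1},   # WoT, followed, mods
    "local_instance": {"up": 40, "down": 3},
}

def display_score(votes, mode="special_interest"):
    """Client-side choice of which counter to render."""
    v = votes[mode]
    return v["up"] - v["down"]
```

          The client could then toggle between the raw total and the trusted total without the server ever learning the transitive structure of the web of trust.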

        • sugar_in_your_tea@sh.itjust.works
          link
          fedilink
          arrow-up
          2
          ·
          1 year ago

          I think that could work well. At the very least, I want the feature where I can see how many times I’ve upvoted/downvoted a given individual when they post.

          That wouldn’t/shouldn’t give you transitive data, IMO, because voting for something doesn’t mean you trust the poster, just that the content is valuable (e.g. it could be a useful bot).