Reddit CEO Steve Huffman is standing by Reddit’s decision to block companies from scraping the site without an AI agreement.

Last week, 404 Media noticed that search engines other than Google were no longer listing recent Reddit posts in results. This was because Reddit updated its Robots Exclusion Protocol (robots.txt) file to block bots from scraping the site. The file reads: “Reddit believes in an open Internet, but not the misuse of public content.” Since the news broke, OpenAI announced SearchGPT, which can show recent Reddit results.
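
For readers unfamiliar with the mechanism, the Robots Exclusion Protocol is nothing more than a plain-text file of crawler rules served at /robots.txt, which well-behaved crawlers are expected to consult before fetching pages. The sketch below uses Python’s standard-library parser on an illustrative blanket-disallow rule set; the comment line is the one quoted above, but the rules themselves are an assumption for illustration, not the exact contents of reddit.com/robots.txt.

```python
# Minimal sketch of how a compliant crawler interprets the Robots Exclusion
# Protocol. The rules are illustrative, not the real reddit.com/robots.txt.
from urllib.robotparser import RobotFileParser

EXAMPLE_RULES = """\
# Reddit believes in an open Internet, but not the misuse of public content.
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_RULES.splitlines())

# A well-behaved crawler calls can_fetch() before requesting a URL.
for bot in ("Googlebot", "Bingbot", "GPTBot"):
    allowed = parser.can_fetch(bot, "https://www.reddit.com/r/technology/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Compliance is entirely voluntary, though: a scraper that never consults the file can still request the pages, which is why keeping non-compliant crawlers out takes more active effort on Reddit’s side.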

The change came a year after Reddit began its efforts to stop free scraping, which Huffman initially framed as an attempt to stop AI companies from making money off Reddit content without paying for it. That effort also led Reddit to begin charging for API access (the high pricing led many third-party Reddit apps to shut down).

In an interview with The Verge today, Huffman stood by the changes that led to Google temporarily being the only search engine able to show recent discussions from Reddit. Reddit and Google signed an AI training deal in February said to be worth $60 million a year. It’s unclear how much Reddit’s OpenAI deal is worth.

Huffman said:

Without these agreements, we don’t have any say or knowledge of how our data is displayed and what it’s used for, which has put us in a position now of blocking folks who haven’t been willing to come to terms with how we’d like our data to be used or not used.

“[It’s been] a real pain in the ass to block these companies,” Huffman told The Verge.

  • spongebue@lemmy.world · 3 months ago

    Honestly, my biggest issue with LLMs is how they source their training data to create “their own” stuff. A meme calling it a plagiarism machine struck a chord with me. Almost anyone else I’d sympathize with, but fuck Spez.

    • Wirlocke@lemmy.blahaj.zone · 3 months ago

      What resonated with me is people calling LLMs and Stable Diffusion “copyright laundering”. If copyright ever swung in AI’s favor it would be super easy to train an AI on stuff you want to steal, add in some generic training, and now you have a “new” piece of art.

      LLMs and Stable Diffusion are just compression algorithms for abstract patterns, only one level above data.

      • Echo Dot@feddit.uk · 3 months ago

        The real takeaway of all of this is that copyright law is massively out of date and not fit for purpose in the 21st century or frankly the late 20th.

        The current state of copyright law cannot deal with the internet, let alone AI.

    • markon@lemmy.world · 3 months ago

      Yep, they now get paid for the data we gave them. I have no sympathy lol. At least these models can’t actually store it all losslessly by any stretch of the imagination. The compression factor would have to be 100-200x+ beyond anything we’ve ever been able to achieve before; the numbers don’t work out (the rough numbers are sketched below). The models do encode a lot though, and some of it is going to include actual full-text data etc., but it’ll still be kinda fuzzy.

      I think we do need ALL OPEN SOURCE. Not just for AI, but I know on that point I’m preaching to the choir here lol
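
To put numbers on the compression point above, here is a rough back-of-envelope calculation. Every figure below (corpus size, bytes per token, parameter count, weight precision) is an assumed round number for illustration, not a measurement of any particular model.

```python
# Back-of-envelope check of the "models can't store their training data
# losslessly" claim. All figures are illustrative assumptions.

train_tokens = 15e12      # assumed training corpus: ~15 trillion tokens
bytes_per_token = 4       # assume ~4 bytes of raw text per token
params = 70e9             # assumed model size: 70 billion parameters
bytes_per_param = 2       # assume 16-bit (2-byte) weights

corpus_bytes = train_tokens * bytes_per_token
model_bytes = params * bytes_per_param

print(f"corpus : {corpus_bytes / 1e12:.0f} TB of raw text")
print(f"model  : {model_bytes / 1e9:.0f} GB of weights")
print(f"ratio  : {corpus_bytes / model_bytes:.0f}x")
# Roughly 430x under these assumptions, versus single-digit ratios for
# lossless text compressors, so the weights can at best hold a lossy,
# "fuzzy" encoding of the corpus, even if individual passages are memorized.
```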