• 2 Posts
  • 21 Comments
Joined 11 months ago
Cake day: September 28th, 2023

  • Post from July, tweet from today:

    It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream

    There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”

    Embarrassing to be this uninformed about such a high-profile issue, no less one that you’re choosing to write about derisively.

  • Yeah, and applying the Yggy rubric, I’d bet that he started earlier, he posted more consistently, and he didn’t let ignorance of a subject or even mockery of past failures slow him down.*

    And if there are a few other rats with more hustle that he’s overshadowed, well, sure, give him some points for talent, and a few more for luck.

    • He did famously quit when the NYT made clear they were doing a real profile on him instead of PR puffery, but he couldn’t stay away long.

  • I think you are overestimating how much of SlateScott’s success comes from his brilliance, and how much of each post even his dedicated readers understand (or even properly read). He’s a poster in a tight-knit network of posters, many of whom know each other socially, and all of whom heap praise on the leading lights as high-IQ geniuses. Being influenced by SlateScott is self-flattering to a certain type, so you get many testimonials.

    This may be a bit of a stretch, but I really liked this essay on Matt Yglesias, which is really about the banality of posting success: https://maxread.substack.com/p/matt-yglesias-and-the-secret-of-blogging

    There are all kinds of things you can do to develop and retain an audience – break news, loudly talk about your own independence, make your Twitter avatar a photo of a cute girl – but the single most important thing you can do is post regularly and never stop.

    …it’s the best time there’s ever been to be somebody who can write something coherent quickly. Put things out. Let people yell at you. Write again the next day.

  • Short answer: “majority” is hyperbolic, sure. But it is an elite conviction espoused by leading lights like Nick Beckstead. You say the math is “basically always” based on flesh-and-blood humans, but when the exception is the ur-texts of the philosophy, counting statistics may be insufficient. You can’t really get more inner sanctum than Beckstead.

    Hell, even 80,000 Hours (an org meant to be a legible and appealing gateway to EA) has openly grappled with whether global health should be deprioritized in favor of so-called suffering-risks, exemplified by that episode of Black Mirror where Don Draper indefinitely tortures a digital clone of a woman into subjugation. I can’t find the original post, formerly linked from their home page, but they do still link to this talk presenting that original scenario as a grave issue demanding present-day attention.


  • less than 1%…on other long-term…which presumably includes simulated humans.

    Oh, it’s way more than this. The linked stats are already way out of date, but even in 2019 you can see existential risk rapidly accelerating as a cause, and, as you admit, much more so with the hardcore EA set.

    As for what simulated humans have to do with existential risk, you have to look to their utility functions: they explicitly count the future pleasure of these now-hypothetical simulations as outweighing the suffering of any and all present or future flesh bags.