• 34 Posts
  • 6 Comments
Joined 1 year ago
Cake day: September 8th, 2023

  • Wonder who at NIST is actually going to leave over this (if anyone).

    Critics of so-called “AI doomers” have warned that focusing on potentially overblown talk of hypothetical killer AI systems or existential AI risks may stop humanity from addressing current, perceived harms from AI, including environmental, privacy, ethics, and bias issues.

    As a doomer, honestly I can never tell whether this sort of thing is “AI ESG trying to defect against us even though we’ve been trying very hard to play nice with them” or “AI accelerationists trying to drive a wedge into AI safety.”

    Kaushik cautioned, however, that “if there’s truth to NIST scientists threatening to quit” over Christiano’s appointment, “obviously that would be serious if true.”

    In other words, “big, if true”.

    Timnit Gebru, who founded the Distributed Artificial Intelligence Research Institute after Google fired her from its Ethical AI research team for speaking out against discrimination, criticized Christiano’s blog on X.

    “What’s better, that he wrote a blog on a cult forum, or that he just pulled random numbers out of his behind for this apocalyptic prediction?” Gebru wrote. “As they say, why not both.”

    Okay I take it back, this is … wait, she’s not even defecting, she’s just shitting on him randomly for lolz. Destroy Twitter when?

    Christiano has said that a pause isn’t necessary because “the current level of risk is low enough that I think it is defensible for companies or countries to continue AI development if they have a sufficiently good plan for detecting and reacting to increasing risk.”

    Well let’s all hope he’s right about that.

  • It just sounds like the creator made a thing that wasn’t what people wanted.

    It just feels like the question to ask then isn’t “how do I get them to choose the thing despite it not being what they want?” but rather “how do I make a thing they actually want?”

    “Hard work goes to waste when you make a thing that people don’t want” is … true. But I would say it’s a stretch to call it a “problem”. It’s just an inescapable reality. It’s almost tautological.

    Look at houses. You made a village with a diverse bunch of houses, but nobody wants to live in more than half of them. Then “how do I get people to live in my houses?” “Build houses that people actually want to live in.” Sure, you can pay people money to live in your weird houses; I just feel like you’ve somewhat missed the point of being an architect.

  • Can you judge whether the model is being truthful or untruthful by looking at something like |states · honesty_control_vector|? Or dynamically chart its mood through a conversation?

    Can you keep a model chill by actively correcting the anger vector coefficient once it exceeds a given threshold?

    Can you chart truthfulness layer by layer to see whether the model is being glibly vs cleverly dishonest? With glibly = “decides to be dishonest early”, cleverly = “decides to be dishonest late”.
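
    A minimal sketch of what the first two might look like, assuming you already have per-layer hidden states for the current token and a unit-norm control vector per layer. Everything named here (honesty, corrected_coef, the threshold and step values) is hypothetical; real vectors would come from contrastive-pair extraction, repeng-style, not random noise.

        import torch

        torch.manual_seed(0)
        n_layers, d = 32, 4096

        # Stand-ins for real data: per-layer hidden states for the current
        # token, plus a unit-norm "honesty" direction per layer.
        hidden = [torch.randn(d) for _ in range(n_layers)]
        honesty = [v / v.norm() for v in (torch.randn(d) for _ in range(n_layers))]

        # Per-layer honesty score: signed projection of the state onto the
        # control vector. Tracking this token by token gives the mood chart;
        # the first layer where it goes negative hints at where the model
        # "decides" to be dishonest (glib = early, clever = late).
        scores = [float(h @ v) for h, v in zip(hidden, honesty)]
        first_neg = next((i for i, s in enumerate(scores) if s < 0), None)
        print("first layer with negative honesty projection:", first_neg)

        # Keeping the model chill: back off the steering coefficient whenever
        # the anger projection exceeds a threshold. A crude proportional
        # controller; threshold and step are invented numbers.
        def corrected_coef(coef, anger_proj, threshold=2.0, step=0.25):
            return coef - step if anger_proj > threshold else coef

    Whether those projections actually track “truthfulness”, rather than some correlated surface feature the vector happened to pick up, is of course the open question.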