Out of curiosity, I went ahead and read the full text of the bill. After reading it, I’m pretty sure this is the controversial part:
SEC. 3. DUTY OF CARE. (a) Prevention Of Harm To Minors.—A covered platform shall act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate the following:
(1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
The sorts of actions that a platform would be expected to take aren’t specified anywhere, as far as I can tell, nor is the scope of what the platform would be expected to moderate. Does “operation of products and services” include the recommender systems? If so, I could see someone using this language to argue that showing LGBTQ content to children promotes mental health disorders, and so it shouldn’t be recommended to them. They’d still be able to see it if they searched for it, but I don’t think that makes it any better.
Also, section 9 would form a committee to investigate the practicality of building age verification into the hardware and/or operating systems of consumer devices. That seems like an invasion of privacy.
Reading through the rest of it, though, a lot of it did seem reasonable. For example, it would require sites to put children on safe default settings: keeping their personal information private, turning off addictive features designed to maximize engagement, and letting kids opt out of personalized recommendations. Those would be good changes, in my opinion.
If it weren’t for those couple of sections, the bill would probably be fine, so maybe that’s why it’s got bipartisan support. But right now, the bad seems to outweigh the good, so we should probably start calling our lawmakers if the bill continues to gain traction.
apologies for the wall of text, just wanted to get to the bottom of it for myself. you can read the full text here: https://www.congress.gov/bill/118th-congress/senate-bill/1409/text