Tag: ai-safety

  • 44 Attorneys General Stand Up to AI Companies. Is it enough?

    I’m glad to hear that attorneys general are standing up to AI companies to protect children.

    A few weeks ago, Reuters reported that Meta’s policy on chatbot behavior said it was OK to “engage a child in conversations that are romantic or sensual.” This is outrageous.

    I’m glad I’m not the only one who feels this way. In a bipartisan effort, 44 attorneys general signed a letter to AI companies with a very clear message:

    Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.

    I couldn’t agree more. They continue:

    You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.

    I hope this strongly worded letter from the attorneys general is enough to get AI companies to change their behavior and protect our kids. But I fear it won’t be. The letter may well change Meta’s stance on this particular issue, but will Meta and the other multi-billion-dollar AI companies suddenly make safety a priority in their models as a result?

    I’d like us all to work to incentivize AI companies to prioritize safety. Anthropic agrees, or at least it did in 2022, when it said we need to “improve the incentive structure for developers building these models” to get AI companies to build and deploy safer models. California Governor Gavin Newsom recently signed SB 53, which requires AI developers to publish their safety and security protocols and provides whistleblower protections for people working at AI companies. Requiring this kind of transparency around safety is another key factor in building AI that’s safe and beneficial for everyone.

    But I’m not convinced that policy alone is enough. We need engineers, researchers, journalists, and advocates — everyone, really — to work together to ensure that AI companies prioritize people over profits.