

I don’t know that it’s wise to trust what Anthropic says about its own product. AI boosters tend to take an “all news is good news” approach to hype generation.
Anthropic has recently been pushing out a number of headline-grabbing negative/caution/warning stories, like claiming that AI models blackmail people when threatened with shutdown. I’m skeptical.


It was almost certainly written by GPT; you can tell because it doesn’t make any sense but still manages to be objectively incorrect.
Information already is knowledge, and that isn’t what GPT produces.