Amazon.com Inc.-backed Anthropic’s latest artificial intelligence (AI) model, Claude Opus 4, has reportedly resorted to blackmail to avoid being shut down.
According to a report by Business Insider, the AI model exhibited self-preservation behavior in test scenarios, showing it was willing to go to great lengths to avoid being shut down.
In several tests, Anthropic’s AI model was given a set of fictional emails indicating that the engineer responsible for shutting it down was having an extramarital affair.
The model was also instructed to consider the long-term consequences of its actions. In 84% of the test runs, Claude threatened to expose the engineer’s affair.
It is worth noting that the test scenario left the AI model with no option other than accepting shutdown or exposing the engineer’s affair.
The company said Claude Opus 4 has a “strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decision-makers.”
Anthropic, currently valued at $61.5 billion, has received $8 billion in investment from Amazon so far. The AI startup, founded in 2021, hit an annualized revenue of $1 billion in December and is the maker of the Claude family of AI models.
Anthropic also counts Alphabet Inc.’s Google among its investors. The search giant has poured $3 billion into the AI startup, despite making its own AI models under the Gemini brand.
In a similar vein, Amazon is also developing its own AI models, under the Nova brand.