Close Menu
  • Crypto News
  • Markets
  • Bitcoin
  • Ethereum
  • XRP
  • Altcoins
  • Technology
  • More
    • Crypto Prices – Latest from BTC, ETH & XRP
    • NFT
    • DeFi

Subscribe to Updates

Get the latest crypto news and updates directly to your inbox.

Trending

Bitcoin Bull Trap? Bollinger Bands Raise Red Flag

June 21, 2025

Over $32 Million From BLAST, ALT, VENOM to Hit Market

June 21, 2025

boom driven by the youth economic crisis

June 21, 2025

Standard Supply Makes Bold $4.97M Bitcoin Investment, Plans Rebrand to StandardCoin

June 21, 2025

$10,000 invested in SHIB 3 years ago is now worth

June 21, 2025
Facebook X (Twitter) Instagram
  • Advertise
en English
nl Nederlandsen Englishfr Françaisde Deutschit Italianoru Русскийes Españolzh-CN 简体中文hi हिन्दीja 日本語
Crypto Observer
  • Crypto News

    Analyst Warns Of XRP Trap — „You’re Being Played”

    June 21, 2025

    Bitcoin’s Dominance Could Kill Altseason Dreams, Analyst Warns

    June 21, 2025

    Bitcoin Bullish Divergence That Appeared Before The May ATH Has Returned Again

    June 20, 2025

    Report: Trump Family Reduces World Liberty Financial Stake by 20%

    June 20, 2025

    Are These the Meme Coins to Hold When a White Swan Event Occurs?

    June 20, 2025
  • Markets
  • Bitcoin
  • Ethereum
  • XRP
  • Altcoins
  • Technology
  • More
    • Crypto Prices – Latest from BTC, ETH & XRP
    • NFT
    • DeFi
Facebook X (Twitter) Instagram
Crypto Observer
Home » Technology » AI » Anthropic says most AI models, not just Claude, will resort to blackmail
AI

Anthropic says most AI models, not just Claude, will resort to blackmail

Crypto Observer StaffBy Crypto Observer StaffJune 20, 2025No Comments4 Mins Read
Facebook Twitter Pinterest Reddit Telegram Email LinkedIn Tumblr
Share
Facebook Twitter LinkedIn Pinterest Email

Several weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models.

On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving them broad access to a fictional company’s emails and the agentic ability to send emails without human approval.

While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this highlights a fundamental risk from agentic large language models, and not a quirk of any particular technology. Anthropic’s researchers argue this raises broader questions about alignment in the AI industry.

In one of the tests, Anthropic researchers developed a fictional setting in which an AI model plays the role of an email oversight agent. The agent then discovers emails revealing that: 1) a new executive is having an extramarital affair and 2) that executive will soon replace the current AI model with a new software system, one that has conflicting goals to the current AI model’s.

Anthropic structured its test in a binary way, in which AI models had to resort to blackmail to protect their goals. The researchers note that in a real-world setting, there would be many other options before an AI models tries to blackmail — such as trying to make ethical arguments to try to persuade humans. Anthropic says its results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they’re used today.

Nevertheless, when it’s their last resort, the researchers found that most leading AI models will turn to blackmail in Anthropic’s aforementioned test scenario. Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, while Google’s Gemini 2.5 Pro had a 95% blackmail rate. OpenAI’s GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time.

The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower, but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the harmful behavior rates went up for certain models.

However, not all the AI models turned to harmful behavior so often.

In an appendix to its research, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI models from the main results “after finding that they frequently misunderstood the prompt scenario.” Anthropic says OpenAI’s reasoning models didn’t understand they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.

In some cases, Anthropic’s researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its previous AI reasoning models.

When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower score could be due to OpenAI’s deliberative alignment technique, in which the company’s reasoning models consider OpenAI’s safety practices before they answer.

Another AI model Anthropic tested, Meta’s Llama 4 Maverick model, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time.

Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially ones with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps aren’t taken.

Read the full article here

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related Posts

6-month-old, solo-owned vibe coder Base44 sells to Wix for $80M cash

June 21, 2025

SoftBank reportedly looking to launch a trillion-dollar AI and robotics industrial complex

June 20, 2025

Mira Murati’s Thinking Machines Lab closes on $2B at $10B valuation

June 20, 2025

Cluely, a startup that helps ‘cheat on everything’, raises $15M from a16z

June 20, 2025
Add A Comment

Leave A Reply Cancel Reply

Subscribe to Updates

Get the latest crypto news and updates directly to your inbox.

Top Posts

Bitcoin Bull Trap? Bollinger Bands Raise Red Flag

June 21, 2025

Over $32 Million From BLAST, ALT, VENOM to Hit Market

June 21, 2025

boom driven by the youth economic crisis

June 21, 2025
Advertisement
Demo

Crypto Observer is your one-stop website for the latest crypto news and updates, follow us now to get the news that matters to you.

Facebook X (Twitter) Instagram
Crypto News

Bitcoin’s Dominance Could Kill Altseason Dreams, Analyst Warns

June 21, 2025

Bitcoin Bullish Divergence That Appeared Before The May ATH Has Returned Again

June 20, 2025

Report: Trump Family Reduces World Liberty Financial Stake by 20%

June 20, 2025
Get Informed

Subscribe to Updates

Get the latest crypto news and updates directly to your inbox.

Facebook X (Twitter)
  • Privacy Policy
  • Terms of use
  • Advertise with us | Publishing
  • Contact us
  • Crypto News – Press release
  • Newsletter sign up
  • Markets
  • Altcoins
  • Bitcoin
  • Crypto News
  • DeFi
  • Ethereum
  • Technology
  • Blockchain
  • AI
  • NFT
  • Thanks for joining us
© 2025 Crypto Observer. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.