Open Letter from Anthropic Employees


Current and former Anthropic employees are asking leadership to make a public, binding commitment to pause frontier model development conditionally — if other major frontier AI labs credibly agree to do the same.

Our Ask

We ask that Anthropic make a binding commitment to pause frontier model development if the other major labs credibly agree to do the same, with the pause lasting until:

  1. broad scientific consensus that superintelligence can be developed safely and controllably, and
  2. strong public buy-in.

To Anthropic Leadership:

As current and former employees, we share the belief that advanced AI systems pose catastrophic risks to society, and we are proud of Anthropic's history of prioritizing safety.

We have read the rationale behind the Responsible Scaling Policy (RSP) v3.0. We understand that binding, unilateral pause commitments have been removed because they can place the company at a competitive disadvantage in an aggressive market and create pressure to downplay model capabilities.

However, their absence leaves us unprotected against the very catastrophic risks those pre-commitments to pausing were originally designed to mitigate.

If a unilateral pause is an ineffective way to protect humanity, Anthropic must instead strive toward a multilateral pause, coordinated among all frontier AI developers across all countries.

At Davos, Google DeepMind's Demis Hassabis said he would support a pause if all other companies and countries also agreed to it, while Anthropic CEO Dario Amodei said he was confident that a competition between only himself and Demis was something they could "work out". Anthropic should therefore build on Demis Hassabis's openness to a conditional pause to help catalyze a broader agreement.

We ask that Dario Amodei and Anthropic leadership make a public, binding commitment to pause frontier model development conditionally. Specifically, Anthropic would pause if the leaders of the other major frontier AI labs credibly agree to do the same, with the pause lasting until there is broad scientific consensus that superintelligence (an AI system that significantly outperforms all humans on essentially all cognitive tasks) can be developed safely and controllably, and there is strong public buy-in.

We recognize the geopolitical reality that labs are also competing on a US-China axis. However, the only sustainable solution to global escalation will ultimately be a robust international treaty or mutually assured agreement. A multilateral commitment from leading Western labs is one first step toward establishing that global framework.

By pre-committing to a conditional pause, Anthropic can maintain its competitive position while making its stance clear and creating the space in public conversation required for international coordination.

Signed,

Signatories

Be the first to sign this letter.

Add Your Name

Current and former employees of Anthropic are invited to sign. You may sign anonymously. All signatures are verified before being published.


Frequently Asked Questions

When would the pause be triggered?

To ground this in existing metrics, one threshold for triggering the conditional pause should be the moment AI systems are able to "compress two years of 2018–2024 AI progress into a single year" — the exact threshold used in Anthropic's RSP v3.0 to define highly capable models with catastrophic risk potential.

What do you mean by "superintelligence"?

We define superintelligence as an AI system that significantly outperforms all humans on essentially all cognitive tasks, which is the definition used in the superintelligence statement, signed by more than 130,000 people.

What about China? Won't they just keep going?

A multilateral commitment from leading Western labs is a first step toward establishing a global framework, not a final solution. The only sustainable answer to global escalation is ultimately a robust international treaty or mutually assured agreement, similar to how nuclear arms control began with bilateral agreements before expanding. A credible, coordinated pause among Western labs creates the leverage and precedent needed to bring other nations, including China, to the table. Without that first step, there is no path to international coordination at all.