OpenAI suggests it may relax its AI safety requirements if competitors release high-risk systems

OpenAI said it would consider adjusting its safety requirements if a competing company releases a high-risk artificial intelligence model without protections. The company wrote in its "Preparedness Framework" report that if another company launches a model that poses a threat, it could do the same after "rigorously" confirming that the "risk landscape" has changed.
The document explains how the company tracks, evaluates, forecasts, and protects against the catastrophic risks posed by artificial intelligence models. "If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements," OpenAI wrote in a blog post published on Tuesday.
"However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective."
Before releasing a model, OpenAI assesses whether it could cause severe harm by identifying plausible, measurable, severe, and novel risks that cannot be remedied, and by building safeguards against them. It then classifies these risks as low, medium, or critical.
Some of the risks the company already tracks are its models' capabilities in the fields of biology, chemistry, cybersecurity, and self-improvement. The company says it is also evaluating new risks, such as whether an AI model can operate for long periods without human intervention, self-replicate, and what threat it could pose in the nuclear and radiological fields.
"Persuasion risks", such as the use of ChatGPT for political campaigns or lobbying, will be handled outside the framework and instead studied through the Model Spec, the document that defines ChatGPT's behaviour.
Quietly reducing safety commitments
Stephen Adler, a former OpenAI researcher, said the updates to the company's Preparedness Framework show that it is "quietly reducing its safety commitments." In his post, he pointed to the company's earlier commitment to test "fine-tuned versions" of its AI models, but noted that OpenAI will now shift to testing only the models whose trained parameters, or "weights", will be released.
"People can completely disagree about whether testing fine-tuned models is necessary, and it is better for OpenAI to remove a commitment than to keep it and simply not follow it," Adler wrote. "But in either case, I would like OpenAI to be clearer about having backed off this previous commitment."
The news comes after OpenAI this week launched a new family of AI models, called GPT-4.1, apparently without a system card or safety report. Euronews Next asked OpenAI about the safety report but had not received a response by the time of publication.
It also comes after 12 former OpenAI employees filed a brief in the case brought by Elon Musk against OpenAI, which alleges that the shift to a for-profit company could lead to cuts in safety work.