AI Developers Failed Readiness Testing for Superintelligence Threats — FLI Research

The largest artificial intelligence companies received very low scores for existential safety and for their readiness to handle the potential threats posed by so-called superintelligence. Their planning and risk-management practices fall well short of what is needed, according to a recent analytical report from the Future of Life Institute (FLI).

This is reported by Finway.

  • None of the companies evaluated received a score higher than D in the area of existential safety.
  • OpenAI and Google DeepMind also drew criticism for their risk-management practices.
  • Experts compared the situation to launching a nuclear power plant without a disaster prevention plan.

AI Safety Index. Data: FLI.

Main Findings of the AI Safety Index

The Future of Life Institute assessed the readiness of the seven largest developers of large language models (LLMs) for the emergence of artificial general intelligence (AGI): OpenAI, Google DeepMind, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek. No company scored above a D for existential safety planning, indicating critically inadequate preparation.

The index evaluated the companies across six areas, ranging from current risks to readiness for the catastrophic consequences of AI escaping human control. The highest overall grade, a C+, went to the startup Anthropic; OpenAI and Google DeepMind received a C and a C- respectively, while the remaining companies performed worse still.

One reviewer emphasized that none of the organizations have “anything resembling a consistent, implemented plan” for controlling AI.

Artificial Intelligence: Rising Risks and Calls to Action

As noted by FLI co-founder and Massachusetts Institute of Technology professor Max Tegmark, leading companies are creating systems potentially comparable in danger to nuclear facilities, yet they lack even basic, publicly available plans for preventing disasters. He added that AI is advancing far faster than expected: where experts once spoke of decades until the emergence of AGI, the companies themselves now put the timeline at just a few years.

The report also noted that progress has accelerated since the February AI summit. In particular, the new Grok 4 and Gemini 2.5 models and the Veo 3 video generator demonstrate significant gains in capability and functionality.

Alongside FLI’s research, the organization SaferAI published its own report, which likewise found “unacceptably weak” risk management among AGI developers. Experts insist that companies must urgently reassess their approaches to safety.

Commenting on the situation, a representative from Google DeepMind stated that the FLI report did not take into account all the measures the company is implementing. Other developers have refrained from commenting on the results for now.

It was previously reported that Meta has recruited specialists from OpenAI to work on projects related to building superintelligence.
