
Google Warns AI Could Reach Human-Level Intelligence by 2030—and Potentially ‘Destroy Mankind’

The paper outlines a range of potential dangers, including the possibility that such technology could “permanently destroy humanity.”

Demis Hassabis, CEO of DeepMind

A recent study by Google DeepMind suggests that artificial general intelligence (AGI), a form of AI with human-like capabilities, could emerge as early as 2030.

While the study does not describe specific ways AGI might trigger human extinction, it raises broad concerns about the severe risks such a system could pose, including the possibility that it could "permanently destroy humanity."

The authors underline that decisions about whether harms are “severe” should not be left solely to companies like Google DeepMind, but rather to society as a whole, “guided by its collective risk tolerance and conceptualisation of harm.”

Four Main Categories of AGI Risk

The paper categorizes AGI-related threats into four major areas: misuse, misalignment, mistakes, and structural risks. Among these, misuse is highlighted as a particularly pressing danger—wherein individuals or groups could intentionally deploy AGI to cause harm.

Google DeepMind emphasizes the importance of prevention strategies that directly address this misuse potential. The researchers do not present these risks as hypothetical or distant; instead, they treat them as near-future possibilities that require immediate action from AI developers, regulators, and global institutions.

The language in the paper makes it clear that severe harm, including existential threats, falls within the realm of realistic outcomes if development continues without proper oversight.


A Call for International Governance

In a statement made earlier this year, Demis Hassabis, CEO of DeepMind, reiterated the need for a global approach to AGI governance.

He advocated for “a kind of CERN for AGI” — a central international research initiative focused on safely advancing artificial general intelligence.

According to Hassabis, such a body should be complemented by “a kind of an institute like IAEA, to monitor unsafe projects,” referencing the nuclear oversight body.

He also proposed a “supervening body” involving governments from around the world, to determine how AGI systems are deployed, comparing it to “a kind of like UN umbrella.”

What Makes AGI Different from Traditional AI?

Unlike traditional AI, which is designed for specific tasks, AGI aims to emulate the flexibility of human intelligence. It would have the ability to understand, learn, and solve problems across various domains without being restricted to a particular function.

In essence, AGI is envisioned as a machine capable of adapting to new environments and objectives in the same way that humans can.

This distinction is at the heart of both the optimism and anxiety surrounding AGI. While it promises transformative benefits, its versatility and autonomy introduce unpredictable elements that current safety measures may not be equipped to handle.

As the paper stresses, the potential for “severe harm” makes it essential that the development of AGI is approached with both caution and a coordinated global response.
