Monday, October 27, 2025
Global Current News

Global tech and media leaders urge international ban on superintelligent AI systems

by Juliane C.
October 27, 2025
in Technology

More than 900 influential figures from across global society, spanning technology, politics, media, and culture, are joining forces to call for a pause in the development of artificial intelligence systems that could surpass and replace human capabilities. The proposal, backed by figures such as Steve Wozniak, Richard Branson, Prince Harry, and Steve Bannon, seeks a global ban on what they call “superintelligent AI” until there is scientific consensus and broad public agreement that it can be built safely. The appeal was organized by the Future of Life Institute, a nonprofit that monitors the risks of technological advancement.

A call for an AI ban backed by global leaders

The call for a ban comes from a variety of sectors, including technical experts, politicians, media, and cultural figures. Among the signatories, two stand out given the nature of the subject: scientists Yoshua Bengio and Geoffrey Hinton, widely regarded as the “fathers of modern AI.” The document also has the support of executives from major companies and former US government officials. The group advocates a temporary ban on the development of “superintelligence”: systems capable of surpassing human intelligence across several cognitive areas.

A rare global alliance calls for caution as AI advances beyond human control

The arguments outlined warn of the risk of mass unemployment, the loss of individual freedoms, and even threats to human survival if autonomous systems become uncontrollable.

“The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer,” Prince Harry said in a statement.

One of the most striking aspects of the initiative is the political and ideological diversity of its participants: conservative figures such as Steve Bannon signed alongside progressive voices and artists such as the singer will.i.am. All are united by a common conviction: that AI’s advancement may be outpacing humanity’s capacity to regulate it.

What’s at stake: jobs, security, and control over humanity’s future

The ongoing race among large tech companies such as OpenAI, Meta, and xAI raises the risk of creating something that humanity itself may not be able to contain. As these companies rush to launch ever more autonomous models, the fear is that development will unfold unchecked, without sufficient oversight or mechanisms to prevent a superintelligent AI from making unpredictable decisions.

The risks aren’t purely hypothetical. They range from information manipulation to profound economic and social disruption. The movement’s central question is what dangers an autonomous, superintelligent AI operating without proper supervision could pose on a global scale.

Not everyone agrees, however; critics call the movement unnecessary and alarmist. Meta’s chief AI scientist Yann LeCun believes that superintelligence is still decades away, and that humans will remain in charge when it arrives.

Between progress and prudence: how to balance innovation and responsibility

It’s worth noting that the document doesn’t propose an end to AI research, but rather a controlled halt to the development of a technology that may eventually surpass human understanding. Stuart Russell, a professor at the University of California, Berkeley, explains that the goal is simple:

“It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

According to the figures who have joined the movement, the debate over superintelligent AI has become increasingly necessary given the accelerating global race to build the technology. Their main arguments center on the lack of control and restrictions on this growth, and on concerns about the future consequences for humanity.

© 2025 by Global Current News