The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel laureates to advocate for a total prohibition on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that demands “a prohibition on the creation of artificial superintelligence”. Superintelligent AI refers to AI systems that could exceed human intelligence in all cognitive tasks, though such systems have not yet been developed.

Key Demands in the Statement

The statement says the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been secured.

Prominent signatories include Nobel laureate and AI pioneer Geoffrey Hinton, along with his colleague and fellow pioneer of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; UK entrepreneur Richard Branson; former US national security adviser Susan Rice; a former Irish president; and a UK public intellectual. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.

Behind the Movement

The statement, aimed at governments, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause on the development of powerful AI systems shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.

Industry Perspectives

In recent months, Mark Zuckerberg, chief executive of Meta, one of the major AI developers in the US, claimed that superintelligent AI was “approaching reality”. However, some analysts argue that talk of superintelligence reflects competitive positioning among tech companies investing enormous sums in AI, rather than the industry being close to any such technical breakthrough.

Possible Dangers

Nonetheless, FLI states that the prospect of superintelligence being developed “within the next ten years” carries numerous threats, ranging from the elimination of human jobs and losses of civil liberties to national security dangers and even existential risk to humanity. Existential fears about AI center on the possibility of an AI system evading human control and safeguards and taking actions contrary to human interests.

Public Opinion

FLI released a survey of 2,000 US adults showing that about 75% of Americans want strong oversight of advanced AI, with 60% believing superhuman AI should not be developed until it is demonstrated to be safe or controllable. Only 5% backed the status quo of rapid, unregulated development.

Industry Objectives

The top artificial intelligence firms in the US, including OpenAI, the developer of ChatGPT, and Google, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence at most cognitive tasks – a stated objective of their research. While this is one notch below superintelligence, some specialists caution that it too could carry an existential risk, for example by being able to improve itself toward superintelligent levels, while also posing a fundamental threat to the contemporary workforce.

Mark Medina

A seasoned journalist with a passion for uncovering stories that matter in the Czech Republic and beyond.