

In The Global Race To Regulate AI, No One Is Quite Sure How

• BY: CRIDDLE, ESPINOZA, LIU

"Mitigating the risk of extinction from AI should be a global priority," it said, "alongside other societal-scale risks such as pandemics and nuclear war."

That single sentence invoking the threat of human eradication, signed by hundreds of chief executives and scientists from companies including OpenAI, Google's DeepMind, Anthropic and Microsoft, made global headlines.

Driving all of these experts to speak up was the promise, but also the risk, of generative AI, a form of the technology that can process and generate vast amounts of data.

The release of ChatGPT by OpenAI in November spurred a rush of feverish excitement by demonstrating the ability of large language models, the underlying technology behind the chatbot, to conjure up convincing passages of text, whether drafting an essay or improving an email.

It set off a race among companies in the sector to launch their own consumer-facing generative AI tools capable of producing text and realistic imagery.

The hype around the technology has also heightened awareness of its dangers: the potential to create and spread misinformation as democratic elections approach; its capacity to replace or transform jobs, especially in the creative industries; and the less immediate risk of it surpassing human intelligence and superseding humans.

Brussels has drafted tough measures on the use of AI that would put the burden on tech groups to ensure their models do not break the rules. Its groundbreaking AI Act is expected to be fully approved by the end of the year, but it includes a grace period of about two years after becoming law for companies to comply.

