Debates rage at World Economic Forum over need to regulate AI

DAVOS: Artificial intelligence was a hot topic at the World Economic Forum in Davos, which concluded on Friday, as political and business leaders discussed a wide variety of issues relating to the technology, from its potential to be an equalizing force to its future role in geopolitics.

Nick Clegg, the president of global affairs of Facebook parent company Meta, and a former deputy UK prime minister, pointed out that it is impossible to regulate something that cannot be detected.

Speaking during a panel discussion titled “Hard Power of AI” on Thursday, he advocated for platforms to develop a system of “invisible watermarking” for AI-generated content.

Ireland’s prime minister, Leo Varadkar, who recently fell victim to the misuse of AI in a deepfake video that appears to show him promoting investment in cryptocurrency, said he is concerned about such misuse of a technology that is only going to get more sophisticated and effective.

He agreed with Clegg that it is important to establish ways to detect the use of AI, but said that people and societies will also have to adapt to the technology.

The potential threats associated with AI go far beyond deepfakes and disinformation, officials and experts warned, particularly in times of war, when use of the technology can be deadly.

“You usually need up to 10 artillery rounds to hit one target but if you have a drone connected to an AI-powered platform, you just need one shot,” said Dmytro Kuleba, Ukraine’s minister of foreign affairs.

The development of nuclear weapons “completely changed the way humanity understands security,” he said, then warned that AI will “have even bigger consequences.”

As worrying as the misuse of AI might be, Mustafa Suleyman, the co-founder and CEO of Inflection AI, highlighted its inherent benefits.

Suleyman, who co-founded DeepMind, which Google acquired in 2014, and worked for Google as its vice-president of AI products and AI policy, said that as applications of the technology become more useful they will get cheaper and easier to use.

He said there is a fine line between regulating the technology itself and regulating the associated risks that might arise from misuse. Like any other technology, it can be used for good or bad, he added, and AI models must not be allowed to make it easier for anyone to develop or build something that is “illegal and terrible for the world.”

Generative AI is still in its early stages, Suleyman said, but eventually many versions of the technology will be available, so users will have to be aware of, and question, the core business models of the developers behind the systems they interact with.

“If the business model of the organization providing that AI is to sell ads, then the primary customer of that piece of software is in fact the advertiser and not the user,” he said.

Meta’s business model is advertising-based but that does not mean its platforms do not strive to serve their users, Clegg said.

He also questioned the notion that people get a “richer menu of ideological political inputs” from TV news reports and newspapers than from online sources of news.

“We sometimes over-romanticize the non-online world as if it’s one which is replete with lots of diverse opinions,” he said.

On the other hand, Karoline Edtstadler, minister for the EU and Constitution at the Austrian Chancellery, said that unlike traditional TV and print journalism, the online space is dominated by echo chambers and the use of algorithms to promote content.

“You can find every opinion on the internet if you’re searching for it but you shouldn’t underestimate algorithms, and we often find people on the internet in echo chambers where they get their own opinion reflected again and again,” she said.
