What exactly are the risks posed by artificial intelligence?

In late March, more than 1,000 technology leaders, researchers, and other experts working in and around artificial intelligence signed an open letter warning that AI technologies pose “profound risks to society and humanity.”

The group, which included Elon Musk, CEO of Tesla and owner of Twitter, urged AI labs to halt development of their most powerful systems for six months so they could better understand the risks behind the technology.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.

The letter, which now has more than 27,000 signatures, was brief. Its language was broad. And some of the names behind it appear to have an ambivalent relationship to AI. Mr. Musk, for example, is building his own artificial intelligence company, and is a major donor to the organization that wrote the letter.

But the letter represents a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco startup OpenAI, could cause harm to society. They believe future versions of the technology will be even more dangerous.

Some of the risks have already arrived. Others won’t for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful AI systems is very weak,” said Yoshua Bengio, a professor and artificial intelligence researcher at the University of Montreal. “So we need to be very careful.”

Dr. Bengio is perhaps the most important person to have signed the letter.

Working with two other academics (Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief artificial intelligence scientist at Meta, the owner of Facebook), Dr. Bengio has spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the three researchers received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft, and OpenAI began building neural networks that learned from huge amounts of digital text. These are called large language models, or LLMs.

By identifying patterns in that text, LLMs learn to generate text on their own, including blog posts, poems, and computer programs. They can even carry on a conversation.
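To make that concrete, here is a deliberately tiny sketch in Python of the statistical idea behind such systems: count which words tend to follow which in the training text, then sample from those patterns to produce new text. The toy corpus and the word-pair counting are illustrative assumptions; real LLMs like GPT-4 learn far richer patterns with neural networks containing billions of parameters, but the underlying task, predicting the next word, is the same.

```python
# A toy next-word predictor. Real LLMs use neural networks rather
# than word-pair counts, but the objective is the same: learn which
# words tend to follow which, then sample to generate new text.
import random
from collections import defaultdict

# Illustrative training text, not real data.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn the patterns: for each word, record the words that follow it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # e.g. "the mat and the cat sat on the mat"
```

Even this toy version shows the behavior the experts describe: the output is a fluent-looking recombination of patterns in the training text, with no built-in notion of whether what it says is true.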

This technology can help computer programmers, writers, and other workers generate ideas and work more quickly. But Dr. Bengio and other experts also warn that LLMs can learn unwanted and unexpected behaviors.

These systems can generate untruthful, biased, and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”

Companies are working to solve these problems. But experts like Dr. Bengio worry that as researchers make these systems more powerful, they will introduce new risks.

Because these systems present information with what appears to be complete confidence, it can be difficult to separate fact from fiction when they are used. Experts worry that people will rely on these systems for medical advice, emotional support, and the raw information they use to make decisions.

“There is no guarantee that these systems will be correct in whatever task you assign them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.

Experts are also concerned that people will abuse these systems to spread disinformation. Because they can speak in human-like ways, they can be surprisingly persuasive.

“We now have systems that can interact with us through natural language, and we can’t distinguish between the real and the fake,” said Dr. Bengio.

Experts worry that the new artificial intelligence could kill jobs. For now, technologies such as GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including the people who moderate online content.

They cannot yet replicate the work of lawyers, accountants, or doctors. But they could replace paralegals, personal assistants, and translators.

A paper written by OpenAI researchers estimated that 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by LLMs, and that 19 percent of workers could see at least 50 percent of their tasks affected.

“There is an indication that rote jobs will go away,” said Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, a research lab in Seattle.

Some of the people who signed the letter also believe that artificial intelligence can slip out of our control or destroy humanity. But many experts say this is greatly exaggerated.

The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behavior from the vast amounts of data they analyze, they can cause serious and unexpected problems.

They worry that as companies plug LLMs into other internet services, these systems could gain unanticipated powers because they could write their own computer code. They say developers will create new risks if they allow powerful AI systems to run their own code.
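To show the pattern that worries them, here is a minimal, purely illustrative sketch: a program that executes whatever code a model hands back. The `model_generated_code` string is a hypothetical stand-in; nothing here calls a real AI service, and no real deployment is this simple.

```python
# Illustrative sketch of the risky pattern: executing model-written
# code without review. `model_generated_code` is a placeholder for
# whatever a connected LLM might return; no AI service is called.
model_generated_code = 'import os; print(os.listdir("."))'

# exec() runs the string as Python. Code written by the model inherits
# every permission of the host process: its files, network access, and
# credentials. A harmful payload would run just as readily as this one.
exec(model_generated_code)
```

Once a system can write and run code, in other words, its behavior is no longer limited to producing text, which is the source of the unanticipated powers the group describes.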

“If you take a less likely scenario — where things really take off, where there’s no real governance, where these systems get stronger than we thought they would be — then things get really crazy,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute.

Dr. Etzioni said the talk of existential risk was hypothetical. But he said other risks — most notably misinformation — were no longer speculation.

“Now we have some real problems,” he said. “They are bona fide. They require some responsible reaction. They may require regulation and legislation.”
