
Bill Gates, AI developers push back against Musk and Wozniak’s open letter


Win McNamee | Getty Images

If you’ve heard a lot of pro-AI chatter in recent days, you’re not alone.

AI developers, leading AI ethicists, and even Microsoft co-founder Bill Gates have spent the past week defending their work. That’s in response to an open letter published last week by the Future of Life Institute and signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, which calls for a six-month halt to work on AI systems that can compete with human-level intelligence.

The letter, which now has more than 13,500 signatures, expressed concern that the “dangerous race” to develop software like OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have negative consequences if left unchecked, from widespread disinformation to the ceding of human jobs to machines.

But large swaths of the tech industry, including at least one of its biggest luminaries, are pushing back.

“I don’t think asking a particular group to pause solves the challenges,” Gates told Reuters on Monday. He added that it would be difficult to enforce a pause across a global industry — though he agreed that the industry needs more research to “identify the difficult areas.”

That’s what makes the debate so interesting, experts say: The open letter may point to some legitimate concerns, but the proposed solution seems almost impossible to achieve.

Here’s why, and what could happen next, from government regulation to a potential robot uprising.

The concerns of the open letter are relatively straightforward: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”


AI systems often come with programming biases and potential privacy problems. They can spread disinformation far and wide, especially when used maliciously.

And it’s easy to imagine companies trying to save money by replacing human jobs — from personal assistants to customer service representatives — with AI language systems.

Italy has already temporarily banned ChatGPT over privacy concerns stemming from an OpenAI data breach. The UK government published regulatory recommendations last week, and the European Consumer Organisation has called on lawmakers across Europe to ramp up regulations, too.

In the United States, some members of Congress have called for new laws to regulate AI technology. Last month, the Federal Trade Commission issued guidance for companies developing such chatbots, a sign that the federal government is keeping a close eye on AI systems that scammers could use.

And multiple state privacy laws passed last year aim to force companies to disclose when and how their AI products work, and to give customers a chance to opt out of providing personal data for AI-automated decisions.

These laws are currently in effect in the states of California, Connecticut, Colorado, Utah, and Virginia.

At least one AI safety and research company isn’t worried yet: Existing technologies “do not pose an imminent concern,” San Francisco-based Anthropic wrote in a blog post last month.

Anthropic, which received a $400 million investment from Alphabet in February, has its own AI chatbot. In its blog post, the company noted that future AI systems could become “more robust” over the next decade, and that building guardrails now can “help reduce risks” in the future.


The problem, Anthropic writes: No one is quite sure what those guardrails could or should look like.

The open letter’s ability to spark conversation around the topic is useful, a company spokesperson told CNBC Make It. The spokesperson did not say whether Anthropic supports the six-month pause.

In a tweet on Wednesday, OpenAI CEO Sam Altman acknowledged that an “effective global regulatory framework including democratic governance” and “adequate coordination” between leading artificial general intelligence (AGI) companies could help.

But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing’s AI chatbot, did not specify what those policies might entail, or respond to CNBC Make It’s request for comment on the open letter.

Some researchers raise another issue: Pausing research could stifle progress in a fast-moving industry, allowing authoritarian countries to develop their own AI systems to get ahead.

Highlighting potential threats from AI could encourage bad actors to adopt the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of AI-powered search engine startup You.com.

Exaggerating the urgency of those threats also feeds unnecessary hysteria around the topic, Socher says. He adds that the open letter’s proposals are “impossible to implement, and address the problem at the wrong level.”

The muted response to the open letter from AI developers seems to indicate that tech giants and startups alike are unlikely to stop their work voluntarily.

The letter’s call for increased government regulation seems more likely, especially since lawmakers in the US and Europe are already pushing for transparency from AI developers.

In the US, the FTC could also establish rules requiring AI developers to train new systems only with data sets that exclude misinformation and implicit bias, and to increase testing of those products before and after they’re released to the public, according to a December advisory from law firm Alston & Bird.


Stuart Russell, a computer scientist at the University of California, Berkeley, and a leading AI researcher who co-signed the open letter, says such efforts need to happen before the technology advances any further.

The pause could also give tech companies more time to prove that their advanced AI systems “do not present an undue risk,” Russell told CNN on Saturday.

Both sides seem to agree on one thing: worst-case scenarios for the rapid development of AI deserve to be prevented. In the short term, that means providing users of AI products with transparency and protecting them from scammers.

In the long term, that could mean preventing AI systems from surpassing human-level intelligence, and preserving the ability to control them effectively.

“Once you start making machines that rival and outsmart humans, it will be very difficult for us to survive,” Gates told the BBC back in 2015. “It’s just an inevitability.”

