Microsoft advocates AI rules to reduce risk

Microsoft endorsed a raft of regulations for artificial intelligence on Thursday, as the company works to address concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations that include a requirement that systems used in critical infrastructure can be fully shut down or slowed down, similar to an emergency braking system on a train. The company also called for laws clarifying when additional legal obligations apply to an AI system, and for labels making clear when an image or video was generated by a computer.

“Companies have to step up,” Microsoft president Brad Smith said in an interview about the push for regulations. “The government needs to move faster.” He presented the proposals to an audience that included lawmakers at an event in downtown Washington on Thursday morning.

The call for regulation comes amid an AI boom, with the launch of the ChatGPT chatbot in November generating a flurry of interest. Companies including Microsoft and Alphabet, Google’s parent company, have raced ever since to integrate the technology into their products. That has raised concerns that companies are sacrificing safety to get to the next big thing before their competitors.

Lawmakers have publicly expressed concerns that these AI products, which can generate text and images on their own, will lead to an avalanche of disinformation, be used by criminals, and put people out of work. Washington regulators have vowed to be vigilant about fraudsters who use artificial intelligence and cases where systems perpetuate discrimination or make decisions that break the law.

In response to this scrutiny, AI developers have increasingly called for shifting some of the burden of overseeing the technology to the government. Sam Altman, CEO of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The maneuvering echoes calls for new privacy and social media laws by internet companies such as Google and Meta, Facebook’s parent. In the US, lawmakers have moved slowly on such calls, with few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said Microsoft was not trying to shirk responsibility for managing the new technology, because it was offering specific ideas and pledging to carry out some of them regardless of whether the government took action.

“There is not an iota of abdication of responsibility,” he said.

He supported the idea, which Mr. Altman endorsed during congressional testimony, that a government agency should require companies to obtain licenses to deploy “high-powered” AI models.

“It means you notify the government when the testing starts,” Mr. Smith said. “You must share the results with the government. Even when it is licensed for deployment, it is your duty to continue to monitor it and report to the government if unexpected problems arise.”

Microsoft, which generated more than $22 billion from its cloud computing business in the first quarter, said these high-risk systems should be allowed to operate only in “authorized AI data centers.” Mr. Smith acknowledged that the company would not be “in a bad position” to offer such services, but said several US competitors could also provide them.

Microsoft added that governments should classify some AI systems used in critical infrastructure as “high risk” and require them to have “safety brakes.” The company compared the feature to “braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains.”

Microsoft said that in some sensitive cases, companies that provide AI systems should be required to know certain information about their customers. To protect consumers from deception, the company said, AI-generated content should be required to carry a special label.

Mr. Smith said companies should bear legal “liability” for AI-related damages. In some cases, he said, the responsible party could be the developer of an app like Microsoft’s Bing search engine that uses someone else’s underlying AI technology. He added that cloud computing companies may be responsible for complying with security regulations and other rules.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible spokesperson,” said Mr. Smith. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”
