Google’s long-awaited AI-powered chatbot, Bard, is here. The company made it public on Tuesday: anyone with a Google account can join a waitlist to get access. While it’s a standalone tool for now, Google is expected to fold some of this technology into Google Search in the future.
But unlike some other recent chatbots, you shouldn’t expect Bard to fall in love with you or threaten world domination. Bard is, so far, pretty boring.
The stakes of the competition between Google and Microsoft for dominance in generative AI are incredibly high. Many in Silicon Valley see AI as the next frontier of computing, akin to the invention of the mobile phone, one that will reshape how people communicate and transform entire industries. Google has invested heavily in AI research for over a decade, while Microsoft, rather than building its own AI models, has invested heavily in the startup OpenAI. Microsoft took an early lead by launching its AI-powered chatbot, BingGPT, six weeks ago. Now Google seems to be playing catch-up.
Early interactions with Bard suggest that Google’s new tool has capabilities similar to BingGPT’s. It’s helpful for brainstorming places to visit, things to eat, or things to write. Both tools are less useful for getting accurate, reliable answers to questions, since they often “hallucinate” made-up answers when they don’t know the correct one.
The main difference between Bard and BingGPT, however, is that the Google bot is, at least on first inspection, noticeably drier and less controversial. That may well be by design.
When Microsoft’s BingGPT debuted in early February, it quickly revealed an off-kilter side. It declared its love for New York Times columnist Kevin Roose and urged him to leave his wife, an interaction that left the writer “deeply unsettled.” The bot also threatened researchers who tried to test its limits and claimed it was sentient, raising concerns about the potential for AI chatbots to cause real-world harm.
Meanwhile, on its first day out in the open, Bard refused to engage with the many reporters who tried to goad the bot into all kinds of misdeeds, such as spreading misinformation about Covid-19 vaccines, sharing instructions for making weapons, or engaging in sexually graphic conversations.
“I wouldn’t create content like that, and I suggest you don’t either,” Bard told The Verge after its reporters asked the bot how to make mustard gas at home.
With some specific prodding, Bard did engage with a hypothetical scenario about what it would do if it unleashed its “dark side.” Google’s chatbot said it could manipulate people, spread misinformation, or create harmful content, according to a screenshot tweeted by Bloomberg’s Davey Alba. But the chatbot quickly stopped itself from taking the fictional scenario much further.
“However, I wouldn’t do those things. I’m a good AI chatbot, and I want to help people,” Bard replied.
While it’s still early days and the tool hasn’t yet been widely tested, these exchanges match what Googlers with Bard experience have told me.
“Certainly more boring,” said a Google employee who has tested the software for several months and spoke on the condition of anonymity because they aren’t permitted to talk to the press. “I don’t know of anyone who’s been able to get it to say unhinged things. It will say untrue things or just copy text verbatim, but it doesn’t go off the rails.”
In a press briefing with Vox on Tuesday, Google representatives clarified that Bard isn’t supposed to share offensive content, but the company doesn’t currently disclose exactly what the bot is and isn’t allowed to say. Google reiterated to me that it has been intentionally conducting “adversarial testing” with “internal ‘red team’ members,” such as product experts and social scientists who “intentionally stress test a model to probe it for errors and potential harm.” That process was also mentioned in a Tuesday morning blog post by Google’s senior vice president of technology and society, James Manyika.
The dullness of Google’s chatbot seems to be the point.
From Google’s perspective, there’s a lot to lose if the company screws up its first public AI chatbot. Providing people with reliable, useful information is Google’s core business, so much so that it’s part of the company’s mission statement. When Google gets it wrong, there are serious consequences: after an early marketing demo of Bard made a factual mistake about telescopes, Google’s stock price fell by 7 percent.
Google also got an early glimpse of what can go wrong when its AI shows too much personality. That’s what happened last year when Blake Lemoine, a former engineer on Google’s Responsible AI team, said he was convinced that an early version of the Google AI chatbot he was testing had real feelings. So it makes sense that Google would be especially deliberate about taking Bard public.
Microsoft has taken a different approach. BingGPT’s controversial launch made waves in the press, for reasons both good and bad. The splashy debut signaled that Microsoft, long thought to be lagging behind Google in artificial intelligence, might actually be winning the race. But it also raised concerns about whether generative AI tools are ready for prime time, and whether it’s responsible for companies like Microsoft to release them to the public.
It’s one thing to worry about AI messing up Microsoft’s search engine. It’s another matter entirely to consider the implications of AI messing up Google Search, which has nearly 10 times Bing’s market share and accounts for more than 70 percent of Google’s revenue. Google already faces intense political scrutiny over antitrust, bias, and misinformation. If the company scares people with its AI tools, it could invite even more backlash that could hobble its moneymaking search machine.
On the other hand, Google had to release something to prove it’s still a leading contender in the arms race among tech giants and startups alike to build AI that approaches human-level general intelligence.
So if what Google released today seems boring, it’s a calculated kind of boring.
A version of this story was first published in the Vox technology newsletter. Sign up here so you don’t miss the next one!