SAN FRANCISCO (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.
The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing a long-standing policy on safeguarding information.
Chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers have found that similar AI can reproduce the data it absorbed during training, creating a leak risk.
Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.
Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aims to be transparent about the limitations of its technology.
The concerns show how Google wants to avoid business harm from the software it has launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers, OpenAI and Microsoft Corp (MSFT.O), are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.
Google’s caution also reflects what has become a security standard for corporations: warning personnel against using publicly available chat programs.
A growing number of businesses around the world have set up guardrails on AI chatbots, the companies told Reuters, among them Samsung (005930.KS), Amazon.com (AMZN.O) and Deutsche Bank (DBKGn.DE). Apple (AAPL.O), which did not respond to requests for comment, has reportedly done so as well.
Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from major U.S.-based companies, conducted by the networking site Fishbowl.
Google had asked employees testing Bard before its launch not to give it internal information, Insider reported in February. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.
Google told Reuters it had held detailed talks with Ireland’s Data Protection Commission and is addressing regulators’ questions, following a Politico report Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s impact on privacy.
Concerns about sensitive information
Such technology can draft emails, documents, even software itself, promising to speed up tasks vastly. This content, however, can include misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel.
Google’s privacy notice, updated on June 1, also states: “Do not include confidential or sensitive information in your Bard conversations.”
Some companies have developed software to address such concerns. For instance, Cloudflare (NET.N), which defends websites against cyberattacks and offers other cloud services, markets the capability for businesses to tag and restrict some data from flowing externally.
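Tools of this kind generally work by scanning outbound text for tagged patterns before it leaves the corporate network. Below is a minimal, purely illustrative sketch of that idea in Python; it is not Cloudflare’s actual product, and the patterns, codename and function name are hypothetical.

```python
import re

# Hypothetical patterns a company might tag as sensitive (illustrative only).
SENSITIVE_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS-style access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security numbers
    re.compile(r"(?i)\bproject\s+nightingale\b"),  # a made-up internal codename
]

def screen_outbound_prompt(prompt: str) -> str:
    """Redact tagged data before a prompt is sent to an external chatbot."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the Q3 plan for project Nightingale, key AKIAABCDEFGHIJKLMNOP."
    print(screen_outbound_prompt(raw))
    # -> Summarize the Q3 plan for [REDACTED], key [REDACTED].
```

A real deployment would sit at a network gateway and use far richer detection than regular expressions, but the principle is the same: flagged data is stopped or masked before it reaches the chatbot.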
Google and Microsoft also offer conversational tools to business customers that come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ chat history, which users can opt to delete.
Yusuf Mehdi, Microsoft’s chief consumer marketing officer, said it “makes sense” that companies would not want their employees to use public chatbots for work.
“Companies are duly taking a conservative stance,” Mehdi said, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much stricter.”
Microsoft declined to comment on whether it had a blanket ban on employees entering confidential information into public AI programs, including its own, although another executive there told Reuters he had personally restricted its use.
Typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records,” Cloudflare CEO Matthew Prince said.
Reporting by Jeffrey Dastin and Anna Tong in San Francisco; Editing by Kenneth Li and Nick Zieminski
Our standards: Thomson Reuters Trust Principles.