In a move to protect sensitive information, Alphabet, the parent company of Google, has instructed its employees to refrain from entering confidential data into Bard, its generative AI chatbot. The warning extends to other chatbots, including ChatGPT from Microsoft-backed OpenAI, amid concerns that workers could inadvertently leak internal data through these tools. The company's caution reflects the risks that accompany chatbots' rapid rise: in recent months they have drawn significant interest for their ability to converse in a human-like manner, write essays and reports, and even pass academic tests.
One of the primary concerns is that human reviewers may read the conversations users have with the chatbots, posing a risk to personal privacy and potentially exposing trade secrets. Additionally, because the chatbots are partially trained on users' text exchanges, a suitably crafted prompt could cause the tool to reproduce confidential information from those conversations to members of the public.
Like ChatGPT, Bard is now freely available for anyone to try, and its webpage warns users not to include information in their conversations that could identify them or others. Google collects data on Bard conversations, product usage, and location, and uses it to improve the Google products and services that incorporate Bard. The company has also recently broadened its warning to employees, cautioning engineers against the direct use of computer code generated by chatbots.
The risks associated with these chatbots are not limited to Alphabet: companies such as Samsung, Apple, and Amazon have enacted internal policies to prevent the sharing of sensitive information. Samsung, for instance, recently issued a similar instruction to its workers after several of them fed sensitive semiconductor-related data into ChatGPT, and Apple and Amazon have reportedly adopted comparable restrictions.
As chatbots continue to evolve and improve, companies are taking steps to ensure that their employees do not inadvertently compromise sensitive information. Alphabet's warning to its own workforce underscores the importance of maintaining confidentiality and security in the era of AI-powered chatbots. Given that these tools can be put to both beneficial and harmful uses, it is essential for companies to take proactive measures to protect their data and preserve the privacy of their employees and customers.