Google warns staff about chatbots

Alphabet, Google's parent company, is cautioning employees against sharing confidential information with AI chatbots, including its own Bard.

AI language models are often trained on user data, which could leak confidential information if a bot reproduces it in later chats.

Alphabet cited the risk of data leakage and the fact that human reviewers may have access to chat entries.

  • Alphabet has also warned its engineers against using computer code generated by chatbots directly.
  • The company acknowledged that Bard can make undesired code suggestions, but said the tool still helps programmers and that it aims to be transparent about its limitations.

In May, Samsung temporarily barred employees from using generative AI tools after an engineer unintentionally uploaded internal source code to ChatGPT.

  • A January survey by Fishbowl found that 43% of professionals use AI tools such as ChatGPT at work, and nearly 70% of them do so without telling their bosses.
