Today, many people rely on neural network-based language models such as ChatGPT for their jobs. A Kaspersky survey in Russia revealed that 11% of respondents have utilised chatbots, with nearly 30% believing they may replace jobs in the future. Other surveys indicate that 50% of Belgian office workers and 65% of those in the UK rely on ChatGPT. Moreover, the prominence of the search term “ChatGPT” in Google Trends suggests pronounced weekday usage, likely tied to work-related tasks.
However, the growing integration of chatbots in the workplace prompts a crucial question: can they be entrusted with sensitive corporate data? Kaspersky researchers have identified four key risks associated with employing ChatGPT for business purposes.
Data leak or hack on the provider’s side.
Although LLM-based chatbots are operated by major tech companies, even they are not immune to hacking or accidental leaks. For example, there was an incident in which ChatGPT users could see messages from other users’ chat histories.
Data leak through chatbots.
Theoretically, chats with chatbots might be used to train future models. LLMs are susceptible to “unintended memorisation,” whereby they remember unique sequences, such as phone numbers, that don’t enhance model quality but pose privacy risks. As a result, any data that ends up in the training corpus may be extracted from the model by other users, whether inadvertently or intentionally.
Malicious client.
This is a particular concern in regions where official services like ChatGPT are blocked. Users may resort to unofficial alternatives, such as programs, websites, or messenger bots, and risk downloading malware disguised as a non-existent client or app.
Account hacking.
Attackers can break into employee accounts and access their data through phishing attacks or credential stuffing. Moreover, Kaspersky Digital Footprint Intelligence regularly finds posts on dark web forums selling access to chatbot accounts.
An example of a dark web post offering access to a corporate ChatGPT and Synthesia account for $40.
To summarise the above, data loss is a major privacy concern for both users and businesses using chatbots. Responsible developers outline in their privacy policies how data is used for model training. Kaspersky’s analysis of popular chatbots, including ChatGPT, ChatGPT API, Anthropic Claude, Bing Chat, Bing Chat Enterprise, You.com, Google Bard, and Genius App by Alloy Studios, shows that the B2B sector maintains higher security and privacy standards, given the greater risks of corporate information exposure. Consequently, the terms and conditions for data usage, collection, storage, and processing are more focused on safeguarding than in the B2C sector. The B2B solutions in this study typically don’t automatically save chat histories, and in some cases no data is sent to the company’s servers at all, as the chatbot operates locally in the customer’s network.
“After examining the potential risks tied to using LLM-based chatbots for work purposes, we’ve found that the risk of sensitive data leakage is highest when employees use personal accounts at work. This makes raising staff awareness of the risks of using chatbots a top priority for companies. On the one hand, employees need to understand what data is confidential or personal, or constitutes a trade secret, and why it must not be fed to a chatbot. On the other, the company must spell out clear rules for using such services, if they are allowed at all,” comments Anna Larkina, security and privacy expert at Kaspersky.
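One technical control that complements the awareness measures described above is to strip obvious personal identifiers from prompts before they ever reach an external chatbot. A minimal sketch in Python, using hypothetical regex patterns (a real deployment would need far broader coverage and proper data-classification rules):

```python
import re

# Hypothetical patterns for illustration only; real deployments need
# much broader coverage (names, addresses, internal project codes, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

For example, `redact("Contact jane.doe@example.com or +1 555 123 4567")` returns the text with the email and phone number replaced by `[EMAIL]` and `[PHONE]` placeholders, so the original identifiers never leave the corporate network.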
To reap the benefits of chatbots while staying safe, Kaspersky experts also recommend:
- Use Strong, Unique Passwords: Create complex passwords for each of your accounts, and avoid using easily guessable information like birthdays or names.
- Beware of Phishing: Be cautious of unsolicited emails, messages, or calls asking for personal information. Verify the sender’s identity before sharing any sensitive data.
- Educate Your Employees: Employees should stay informed about the latest online threats and best practices for staying safe online.
- Keep Software Updated: Regularly update your operating system, apps, and antivirus programs. These updates often contain security patches.
- Limit Corporate Information Sharing: Be cautious about sharing corporate or personal information on social media or public forums. Only provide it when absolutely necessary.
- Verify URLs and Websites: Double-check the URL of websites you visit, especially before entering login credentials or making purchases.
- Use a Corporate Security Solution: To prevent employees from independently consulting untrusted chatbots for work purposes, you can use a security solution with cloud service analytics. Among its features, Kaspersky Endpoint Security Cloud includes Cloud Discovery for managing cloud services and assessing the risks of using them.
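To illustrate the first recommendation above, a strong, unique password can be generated with Python’s standard `secrets` module. The 16-character length and the requirement of one character from each class are illustrative choices, not a Kaspersky specification:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing lower- and upper-case
    letters, digits, and punctuation, using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented at least once.
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd

print(generate_password())
```

Because `secrets` draws from the operating system’s cryptographically secure random source, the result avoids the guessable patterns (birthdays, names) the recommendation warns against; a password manager can then store a distinct password per account.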