Content has been re-crowned king, and it's worth acknowledging the technologies we lean on every day to create it. AI tools such as ChatGPT and Google's Bard can help you do your job more efficiently, whether you need some creative inspiration or a head start on a document when written communication isn't your strong suit. It's important to remember, though, that this kind of technology also calls for extra vigilance around the protection of sensitive data.
AI tools like ChatGPT operate on a simple principle: they generate responses based on the data fed into them. This has a critical implication: if you inadvertently input sensitive data, you risk that data being disclosed, replicated and misused. For instance, if you converse with an AI tool about a customer's private details or discuss confidential projects, that information may be used to enhance the next version of the tool; everything you submit can become part of its next set of training data. Part of this process is RLHF, or reinforcement learning from human feedback, where your input fine-tunes the way the chatbot converses so that it better mimics humans. If something were to go wrong and that information were breached, the impact on business integrity and customer trust could be far-reaching, and your input could even give the bot a trained bias.
To put this in perspective, the risk of 'oversharing' extends beyond interactions with AI tools. Imagine a situation where you're seeking help from a third-party support agent, partner or vendor. The impulse to provide as much information as possible to resolve an issue quickly is understandable, but oversharing can lead to overexposure of sensitive data, even when it is unintentional on both sides. Oversharing with a vendor can also open the door to vendor email compromise (VEC).

Online forums can pose a similar risk. These platforms are often used to seek solutions to professional challenges, and sharing too much detail about internal procedures, specific customer cases or business operations can inadvertently make private data public, resulting in potential data leakage.
A separate risk is whether you can trust the output you receive from chatbots. No provider currently guarantees the veracity of its outputs, so it's still up to you to proofread, edit and check sources. The bot can and will make up data or 'facts' to fit the content.
All of your interactions with organisational or private data should be handled with integrity and with security paramount, not just from a legal perspective but also to maintain the trust that clients place in you. Reputational risk often follows financial risk and compounds it.
It's not currently known exactly how these AI bots store or use your data, or whether they remember conversations and use that input to inform answers given to other users. There are anecdotal stories of code being 'leaked' by GPT-4 and then resurfacing in another user's answer, although the source of those reports is tenuous. As best practice, treat a chatbot as you would a stranger: you wouldn't reveal more than is necessary, and you wouldn't ask them to overextend themselves in your favour.
Here's how we can collectively mitigate these risks:
Security Awareness Training: Understand what constitutes sensitive data and the implications of sharing it inappropriately. Defining what is sensitive, confidential or secret is a good first step, followed by training on how to recognise and categorise each type (a minimal sketch of such a check appears after this list). Procedures for handling each type of data in practice, in addition to general security awareness training, should also be standard.
Strict Data Governance: Adhere to your data governance policies, which should control access to and distribution of sensitive data within the organisation. Our logging and auditing mechanisms are there to monitor how data is handled.
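As a minimal illustration of the "recognise and categorise" step above, the sketch below shows what an automated pre-submission check might look like: it flags obvious identifiers (email addresses, phone numbers, card-like numbers) before a draft is pasted into a chatbot or support ticket. The patterns and the redact_before_sharing helper are illustrative assumptions only, not part of any specific tool, and no simple pattern list can replace your organisation's own data classification policy.

```python
import re

# Hypothetical patterns for obvious identifiers; real classification
# should follow your organisation's data governance policy.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_before_sharing(text: str) -> tuple[str, list[str]]:
    """Mask obvious identifiers and report which categories were found."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, found

draft = "Customer Jane Doe (jane.doe@example.com, +44 7700 900123) reports a billing error."
safe_draft, categories = redact_before_sharing(draft)
print(safe_draft)   # identifiers replaced with [REDACTED ...] markers
print(categories)   # e.g. ['email', 'phone'] - a prompt to pause before sharing
```

A check like this only catches the most obvious patterns; context (customer names, project codenames, commercial terms) still needs human judgement before anything is shared.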
The potential of AI-driven technologies like ChatGPT is unquestionable, but we must balance that innovation with a commitment to data privacy. Sharing mindfully, adhering strictly to data governance policies, and promptly reporting any issues are key to staying safe online.