
In all my workshops, the question of AI safety and data protection consistently comes up. Many professionals in the not-for-profit sector are eager to harness AI’s potential but are rightly concerned about what happens to their data when they use tools like ChatGPT and Perplexity. These concerns are valid, and understanding the risks and best practices is crucial.
What Do the Terms and Conditions Say?
Understanding the terms and conditions of AI tools is vital to protecting your organisation’s data. Here are three key points from the T&Cs of popular tools:
- Data Retention: OpenAI’s ChatGPT and Perplexity may store user inputs to enhance system performance. However, enterprise and premium plans often provide stricter data handling policies.
- Ownership and Privacy: User inputs may be used for AI model improvement, but these tools generally do not claim ownership of your data.
- Deletion and Control: Some services allow users to delete inputs or request data removal. Always check if your chosen AI tool offers this feature.
👉 View ChatGPT’s Terms and Conditions
👉 View Perplexity’s Terms and Conditions
💡 Pro-tip: If you have a specific privacy question, copy the link to the Terms and Conditions and ask an AI tool to summarise the relevant sections for you. It can save time navigating complex legal documents.
What Happens to the Data You Share?
When you use AI tools like ChatGPT or Perplexity, your input may be stored and analysed to improve the service. According to OpenAI’s terms and conditions, inputs may be retained for training purposes unless you opt for an enterprise or premium plan. Similarly, Perplexity states that user queries may be logged to enhance system performance.
It’s crucial to understand that using AI systems is not fundamentally different from using online platforms like social media. Whenever you interact with online services, there’s a risk your information may be stored or shared. The key difference is the type of data and how it’s used.
Practical Tips for Protecting Your Data
- Avoid Sensitive Information: Never share personal, confidential, or proprietary information when using AI tools. This includes names, addresses, financial details, and any information protected under privacy laws.
- Anonymise Inputs: Use general terms instead of specifics, for example, refer to a 'client' instead of using a name. (If you are comfortable with a little scripting, the sketch after this list shows one way to partly automate this.)
- Check the Terms and Conditions: Familiarise yourself with the AI tool’s data policies. Some platforms offer privacy-focused options, so consider upgrading if needed.
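For teams with a little scripting experience, part of the anonymisation step can be automated before anything is pasted into an AI tool. The Python sketch below is a minimal illustration, not a complete solution: the patterns and the names in the example are hypothetical, and it does not replace a human check of the text.

```python
import re

def anonymise(text: str) -> str:
    """Replace a few common types of personal detail with placeholders."""
    # Replace email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
    # Replace phone-like sequences of 7 or more digits (spaces/dashes allowed)
    text = re.sub(r"\+?\d[\d\s-]{6,}\d", "[phone]", text)
    # Replace names you know appear in the text (supply your own list)
    for name in ["Jane Smith", "John Doe"]:  # hypothetical examples
        text = re.sub(re.escape(name), "[client]", text, flags=re.IGNORECASE)
    return text

example = "Jane Smith (jane.smith@example.org, 0412 345 678) asked about her grant."
print(anonymise(example))
# -> "[client] ([email], [phone]) asked about her grant."
```

Even with a script like this, re-read what you are about to paste: automated patterns miss context (job titles, case details, rare names) that can still identify a person.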
What to Do if You’ve Shared Too Much
If you accidentally input sensitive information, here’s what you can do:
- Contact the Service Provider: Reach out to the AI tool’s support team and request data deletion. Some platforms have processes for handling such requests. You can contact them here:
OpenAI Support: privacy@openai.com
Perplexity Support: support@perplexity.ai
- Review Data Privacy Options: Explore ways to control or erase your data within the tool’s settings.
- Seek Legal Advice: For highly sensitive data, consulting a legal professional may be necessary.
Comparing AI Tools to Social Media and Online Services
Using AI tools is similar to using platforms like Facebook, Twitter, or Gmail: the data you share can be stored, processed, and used to improve the service. However, AI tools often provide options to reduce data retention. Awareness and intentional use are key to protecting sensitive information.
Final Thoughts
AI tools can be transformative, but understanding data safety is vital. By applying good data hygiene practices, NGO workers can leverage AI’s power without compromising privacy or security.