Recent research found that ChatGPT, as well as Google's Gemini and Microsoft's Copilot, is rife with security vulnerabilities — so say goodbye to your data.
Since ChatGPT launched in late 2022 and made artificial intelligence (AI) mainstream, everyone has been trying to ride the AI wave — tech and non-tech companies, incumbents and start-ups — flooding the market with all sorts of AI assistants and trying to get our attention with the next “flashy” application or upgrade.
With tech leaders promising that AI will do everything and be everything for us, AI assistants have become our business and marriage consultants, our advisors, therapists, companions, and confidants — listening as we share business plans, personal information, and other private secrets and thoughts.
The providers of these AI-powered services are aware of how sensitive these conversations are, and they assure us that they are taking active measures to keep our information from being exposed. But are we really being protected?