Nearly Half of Survey Respondents Fooled by ChatGPT in Phishing Schemes

A recent survey conducted by Beyond Identity has revealed alarming statistics about how effectively ChatGPT, a generative artificial intelligence (AI) tool, can be used to trick individuals. The survey set out to measure how convincing AI-generated scams could be across phishing emails, text messages, and social media posts. The results indicate that the general public remains highly vulnerable to such scams.

High susceptibility to phishing messages

According to the survey, 39% of respondents admitted that they would fall victim to at least one of the phishing messages generated by ChatGPT. The most common types of scams that fooled respondents were social media post scams at 21% and text message scams at 15%. This raises concerns about the increasing sophistication of AI-generated phishing attempts and the need for better public awareness.


For those who did not fall for any of the phishing attempts, the top factors that aroused suspicion were suspicious links, strange requests, and requests for unusual sums of money. These are traditional red flags in phishing schemes, but the survey suggests that a significant portion of the population still falls for more advanced tactics.

The ChatGPT app confusion

One of the most startling findings was that 49% of respondents were tricked into downloading a fake ChatGPT app when presented with six real, but copycat, options. This suggests that even when people are aware of the risks, distinguishing genuine apps from fraudulent ones remains a challenge. The report also noted that individuals who had fallen victim to app fraud in the past were much more likely to fall for it again, underscoring the importance of education and awareness in preventing future scams.

AI-generated passwords: a double-edged sword

Interestingly, 13% of respondents have used AI to generate passwords. While this may seem like a secure method, the survey revealed a potential vulnerability: ChatGPT can take easily accessible personal information and generate lists of probable passwords that attackers can then use to attempt account breaches. This is particularly concerning for the one in four respondents who use personal information in their passwords. The survey found that 35% use birth dates and 34% use pet names, information that can often be found on social media profiles, business listings, or phone directories.
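The mechanics behind this risk are easy to illustrate. The following is a minimal, hypothetical Python sketch, not anything described in the Beyond Identity report, showing how a handful of publicly discoverable details such as a pet name and a birth date can be expanded into a list of likely password candidates of the sort an attacker might try:

```python
from itertools import product

# Hypothetical, publicly discoverable details about a target
# (the pet name and dates here are made up for illustration).
personal_tokens = ["rex", "Rex"]             # pet name variants
date_tokens = ["1990", "0714", "07141990"]   # birth date fragments
suffixes = ["", "!", "123", "1!"]            # common filler characters

# Combine the tokens the way many people actually build passwords:
# name + date + simple suffix, with or without a separator.
candidates = set()
for name, date, suffix in product(personal_tokens, date_tokens, suffixes):
    candidates.add(f"{name}{date}{suffix}")
    candidates.add(f"{name}_{date}{suffix}")

print(f"{len(candidates)} candidates, e.g. {sorted(candidates)[:5]}")
```

Even this tiny example produces dozens of candidates from two personal details; an attacker would scale the same pattern to thousands, which is why personal information makes such weak password material.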

The real-life impact

Despite the high susceptibility to scams indicated by the survey, 93% of respondents said they had never had their information stolen through an unsafe app in real life. This could suggest a gap between perceived risk and actual incidents, but it also serves as a warning that many may be unaware of the risks they are taking.

The Beyond Identity survey paints a concerning picture of the public’s vulnerability to AI-generated scams. With nearly half of the respondents fooled by a fake ChatGPT app and a significant percentage susceptible to phishing scams, there is an urgent need for better education and awareness programs. Individuals should be cautious of suspicious links, strange requests, and unusual monetary demands. Additionally, the use of personal information in passwords should be avoided, given the ability of AI tools like ChatGPT to exploit such data.

Beyond Identity is a leading cybersecurity firm that focuses on identity management and authentication solutions. Their recent survey aims to shed light on the evolving landscape of AI-generated scams and the public’s susceptibility to them.
