A research team at the University of Notre Dame has found that the general public has considerable difficulty telling humans apart from artificial intelligence during political discussions on online platforms.
The study, run in three rounds on a customized Mastodon platform, found that participants mistook AI bots for humans 58% of the time.
AI bots blur the line between human and machine in political discourse, study finds
The research team, led by Dr. Paul Brenner of the Center for Research Computing at the University of Notre Dame, deployed AI bots built on large language models including GPT-4, Llama-2-Chat, and Claude 2.
The bots were designed with diverse personalities and cast as political actors commenting on world affairs, taking on human traits and joining a range of political discussions.
Notably, each bot was given a character with a detailed profile and instructed to relate personal experiences to broader world events.
Even though participants knew they were interacting with a mix of humans and bots, about half of the sample group could not tell the artificial intelligence apart from real people.
This suggests that AI bots can closely imitate human conversational patterns, increasing the likelihood of their being used to promote misinformation.
AI’s role in misinformation
Brenner raised concerns about the deployment of such AI bots and their likely cascade effect: the opportunity for AI-powered bots to sway public opinion and spread misinformation.
He explained that the AI was so effective in the simulations that almost anyone on the internet could be deceived about the original source of information.
The study also evaluated the different AI models and found that participants' detection accuracy varied little across the various LLMs. This indicates that even less sophisticated models can generate convincingly human-like conversation in social media interactions.
Brenner therefore argued for a multidimensional approach to preventing the spread of AI-driven false information, combining initiatives such as education campaigns, legislative action, and stricter social media account validation. The team also intends to examine how AI influence affects adolescent psychological health and to develop strategies to counter that impact.
The project highlights the growing tension between AI and the integrity of online information on social networks, underscoring the need for integrated strategies to address it.
Original content from: University of Notre Dame