Meta, the parent company of Facebook, has rolled out its new AI chatbot, Meta AI, across all of its social media platforms. The chatbot is billed as an encyclopedia, a guidebook, a counselor, an illustrator, and more. The move, however, has raised concerns about the influence artificial intelligence could have on social media experiences.
AI’s growing presence on social media
Generative AI is coming to social media. TikTok, for example, has a large engineering team developing large language models capable of recognizing and generating text, and the company is even hiring writers and reporters to annotate content and improve those models' performance.
Meta's help page already hints at this, noting that the company may use the messages people share with the chatbot to train its AI models and improve the technology. AI experts say social media users should brace for more changes that could reshape their experiences, for better or worse.
"Everyone does all they can to get noticed." Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, points to this as the most important application for AI in social media apps today: making platforms as sticky as possible for consumers. Apps like Instagram are working out how to keep users as captive as possible, because more time on the platform means more time for advertising.
Striking a balance: Regulation and moderation
During Facebook's first-quarter earnings call, Mark Zuckerberg acknowledged that it would be some time before the company saw returns on its investments in the chatbot and other AI applications. But the technology is already changing user experiences on the platforms Meta owns: about 30% of the content in Facebook's feed is served by AI recommendation systems, and more than half of the content on Instagram is surfaced the same way.
While AI can personalize the user experience and even generate new content, such as portraits and music, experts warn of possible pitfalls. Jaime Sevilla, director of Epoch, a research institute focused on current and future trends in AI technology, expressed concern about AI's power of persuasion and the potential spread of misinformation.
One study, led by AI researchers at the Swiss Federal Institute of Technology Lausanne, found that GPT-4 was 81.7% more effective than humans at persuading someone to agree in an argument. The study has not yet been peer-reviewed, but Sevilla has already sounded the alarm, particularly about how the technology could amplify scams and fraud.
As the United States hurtles toward yet another politically rancorous election season, lawmakers would be wise to heed the dangers of AI's role in the distribution of misinformation. Bindu Reddy, CEO and co-founder of Abacus.AI, proposes something more subtle than a sledgehammer approach such as an outright ban on AI within social media platforms, noting that bad actors were spreading misinformation online long before AI was invented.
Reddy does, however, call for a ban on AI-generated human or human-like deepfakes, while criticizing sweeping restrictions like those imposed by the European Union. The United States, she argues, should not fall behind other countries in AI development.
As AI chatbots and similar social media innovations continue to develop, innovation must be balanced with responsible regulation to protect both users' experiences and democratic principles around the world.