Top AI Models Lack Transparency, New Index Shows

Researchers at Stanford’s Center for Research on Foundation Models (CRFM) have found that the biggest AI foundation models lack transparency, a cause for concern given how powerful the technology is and how quickly it is spreading across the world.

Top AI Models are Lacking in Transparency

To score the transparency of the top 10 foundation models, the researchers teamed up with experts at MIT and Princeton to develop the “Foundation Model Transparency Index.” The index evaluates 100 aspects of transparency, covering how a company builds a foundation model, how the model works, and how it is used downstream.


The index ranked Meta’s Llama 2 highest at 54%, followed by BigScience’s BLOOMZ (53%) and OpenAI’s GPT-4 (48%). Google’s PaLM 2 received a transparency rating of 40%, while the least transparent model was Amazon’s Titan Text at 12%.

“This is a pretty clear indication of how these companies compare to their competitors, and we hope will motivate them to improve their transparency,” said Rishi Bommasani, Society Lead at CRFM.

Why Does It Matter?

Transparency in AI models is imperative for protecting consumers and for guaranteeing that these tools are safe to use.

A lack of transparency makes it harder for businesses to know whether they can safely build applications on commercial foundation models, and for academics to rely on the models for research.

Transparency is also a major priority for regulators. It is essential for rooting out bias and privacy violations, and it helps policymakers around the world formulate effective rules for AI models.

“If you don’t have transparency, regulators can’t even pose the right questions, let alone take action in these areas,” Bommasani said. 

Some countries, including Australia, have already begun enforcing measures to ensure the transparency and integrity of AI models. In September, Cryptopolitan reported that the Australian government had empowered citizens with the right to request meaningful information about how automated decisions are made.

The government demands that AI companies provide this information in a clear and comprehensible manner, ensuring that citizens can understand how AI influences their lives.
