On the bright side, the company says there’s little risk it’ll become sentient and begin updating itself.
OpenAI’s GPT-4o artificial intelligence model poses a “medium risk” of swaying human political opinions through generated text, according to information published by the company on Aug. 8.
In a document called a “System Card,” OpenAI detailed its safety testing of the top-tier GPT-4o model, which powers the company’s flagship ChatGPT service.
According to OpenAI, GPT-4o is relatively safe when it comes to potential harms related to cybersecurity, biological threats, and model autonomy. Each of these is labeled “low risk,” indicating that the company thinks it’s unlikely ChatGPT will become sentient and harm humans directly.