Elon Musk recently claimed AI would surpass humans by 2025, yet Google’s models are so inaccurate they’re being tuned by hand.
A recently unveiled artificial intelligence feature from search giant Google, called “AI Overview,” has been handing out inaccurate and dangerous summaries in response to user searches, and Google doesn’t appear to have an actual fix for the problem.
As of this writing, Google has disabled certain queries for its “AI Overview” feature after widespread reports that the system was generating erroneous and potentially harmful outputs.
Reports began circulating across social and news media of a user query asking the search engine how to keep cheese on pizza, to which the AI system reportedly responded that the user should use glue. In another batch of apparent blunders, the system purportedly told users that at least two dogs owned hotels, pointing to a non-existent dog statue as evidence.