In the realm of life’s mishaps, there are minor errors, and then there are unforgivable blunders. The recent resignation of Anthony Rota, MP and Speaker of the House of Commons, serves as a stark reminder of the latter. Rota’s departure came amid tremendous pressure, and the reason behind it left Canada and the international community in shock.
The pitfalls of blindly trusting AI
Anthony Rota had invited a Second World War veteran to a parliamentary event, only to discover later that this veteran had been a member of a military unit that fought alongside the Nazis. The shocker came when the entire Canadian Parliament, unaware of this grim history, applauded the veteran. This event has now become a source of international embarrassment for Canada, sparking outrage from various political factions, including Liberal MPs, the opposition, and the NDP.
The questions that linger include whether Rota’s actions were a grave oversight, a result of ignorance, or a miscalculated political move gone awry. Whatever the underlying reason, one thing remains clear: a significant portion of this debacle can be attributed to a fundamental failure of vetting, a failure to ask basic questions about who this person was and what his history entailed. Among the facts worth establishing about a guest of Parliament, whether he had affiliations with the Nazis is undoubtedly paramount.
As we dissect this incident, it’s hard not to reflect on the evolving landscape of information dissemination, which presents its own set of challenges. With the advent of artificial intelligence (AI), we now have access to what appears to be a reliable source of information, but appearances can be deceiving. AI is rapidly infiltrating various facets of our lives, from research to spreadsheet and presentation creation. In this context, the importance of information vetting cannot be overstated, lest we find ourselves ensnared in minor or major embarrassments.
The primary concern arises from the growing trend of deploying tools that analyze data or provide information based on prompts. The tech industry giants, including Microsoft, Google, Meta (formerly Facebook), and Amazon, are making significant investments in AI. OpenAI, the creator of ChatGPT, one of the most renowned AI technologies, has garnered substantial funding, particularly from Microsoft. Microsoft, in turn, is integrating AI assistants into its core products, including the ubiquitous Office suite.
However, the name “artificial intelligence” can be misleading. At its core, contemporary AI relies on large language models (LLMs). These LLMs excel at recognizing patterns in language, drawing from the vast pool of content available online. Consequently, when you ask an AI assistant to generate a travel itinerary or craft a presentation, it can perform these tasks remarkably well. Essentially, it aggregates and synthesizes online content into coherent responses.
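To make that “pattern recognition” idea concrete, here is a deliberately tiny sketch in Python. It is not how a production LLM works internally, and every name in it is invented for illustration; but even this toy word-level model, which only memorizes which word tends to follow which, can chain together fluent-sounding text with no notion of whether that text is true.

```python
import random
from collections import defaultdict

def build_model(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(model, start, length=12):
    """Chain together statistically plausible words: fluency, not truth."""
    word, output = start, [start]
    for _ in range(length):
        options = model.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# A toy "training corpus"; a real model ingests a large share of the web.
corpus = ("the speaker invited the veteran to parliament and "
          "the parliament applauded the veteran and the speaker resigned")

print(generate(build_model(corpus), "the"))
# Prints something grammatical-looking, e.g. "the veteran and the speaker resigned",
# even though the program has no idea what a speaker or a veteran is.
```

The gap between producing plausible text and actually knowing anything is exactly where the trouble starts.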
Yet there is a critical limitation: AI does not possess true understanding. It operates by recognizing what a correct answer tends to look like, not by knowing whether an answer is actually true. This inherent limitation is why AI assistants so often provide incorrect information, offering, for instance, erroneous mathematical explanations or outdated data.
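The practical defence is the same one you would use with any unreliable narrator: check the claim independently before acting on it. As a minimal sketch, assuming a hypothetical ask_assistant helper that stands in for whichever chatbot or office assistant you actually use, an AI-supplied calculation takes only a few lines to verify:

```python
import re

def ask_assistant(prompt: str) -> str:
    """Hypothetical stand-in for a real AI service; here it returns a
    plausible-sounding but wrong total, as assistants sometimes do."""
    return "The total of 19.99, 4.50 and 3.25 is 28.74"

reply = ask_assistant("Add 19.99 + 4.50 + 3.25 for my expense report.")
claimed_total = float(re.findall(r"\d+\.\d+", reply)[-1])

# Recompute the figure yourself before it lands in a spreadsheet or deck.
actual_total = round(19.99 + 4.50 + 3.25, 2)

if claimed_total != actual_total:
    print(f"Assistant claimed {claimed_total}, but the real total is {actual_total}.")
```

It is a trivial example, but the habit scales: whenever the output matters, treat the assistant’s answer as a draft to be checked, not a fact to be forwarded.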
AI’s unreliability becomes even more concerning as it becomes increasingly pervasive. Microsoft’s recent announcement of Copilot, an AI assistant for Windows and Office, is a testament to this trend. While Copilot can be immensely helpful in tasks like spreadsheet calculations and slide design, relying on it for information retrieval, such as analyzing sales data or incorporating web data into presentations, can lead to precarious situations.
In a nutshell, AI is inherently unreliable and prone to inaccuracies. Over-relying on it for work that affects other people, and for which you are being paid, carries a high risk of error and embarrassment.
AI in the workplace: A double-edged sword
In literature, the concept of the unreliable narrator is a commonly used trope—a character whose narrative cannot be trusted for various reasons. As an educator, I often instruct my students that, while it may be frustrating, critical reading is essential. Blindly trusting an authoritative voice is not a prudent approach.
This principle applies equally to AI. The practical consequences of AI’s inaccuracies, biases, and lapses in judgment are tangible and far-reaching. While society grapples with how to respond to this new technological landscape, the advice for individuals is clear and straightforward:
In this rapidly evolving era of AI, stay vigilant, think critically, and lean on the technology with caution. The allure of AI as a quick and efficient source of information should not overshadow the importance of verifying and cross-referencing what it tells you. In the pursuit of accuracy and trustworthiness, we must remember that AI, while powerful, is not infallible.