In a recent interview, IBM CEO Arvind Krishna called for greater accountability in the AI industry, arguing that both the companies developing AI and those misusing it should be held liable for harms caused by the technology. This position sets IBM apart from industry peers who are advocating for lighter-touch AI regulation.
Krishna emphasized the importance of legal liability as a driving force for accountability, drawing upon centuries of economic history to support his argument. According to him, holding AI developers responsible for flaws in their systems that result in real-world harm is a necessary step towards ensuring the responsible use of artificial intelligence. Furthermore, Krishna believes that companies deploying AI should also bear responsibility when their use of the technology leads to problems, such as employment discrimination charges.
A call for accountability in AI
Krishna’s stance on AI regulation diverges from the approach taken with social media platforms, which have enjoyed sweeping legal protections since the early days of the internet. He contends that imposing legal liability on AI companies would give them a stronger incentive to build safer systems that comply with existing law, including copyright and intellectual property protections. His vision for the AI industry is one in which accountability drives both innovation and compliance.
Krishna has actively engaged in discussions about AI regulation in Washington, recently participating in a forum alongside CEOs from companies like Meta, Google, Amazon, and X. Their aim was to advise senators on how to effectively regulate AI, reflecting the industry’s growing awareness of the need for responsible governance. IBM has also joined a White House pledge to build safe AI models, underscoring their commitment to the responsible development and deployment of AI technologies.
While Krishna acknowledges that his call for AI creators to be held accountable may not make him popular with everyone, he argues that for critical infrastructure and other high-stakes use cases, the bar for deploying AI should be set higher. He also concedes that IBM itself would face legal risk under the rules he supports. However, he points out that IBM primarily builds AI models for other companies, which have a vested interest in complying with the law. By contrast, models like OpenAI’s ChatGPT or Meta’s Llama 2 are publicly accessible and therefore exposed to a far broader, less predictable range of users.
Enforcing accountability and addressing challenges
While Krishna did not delve into specific enforcement mechanisms for accountability, his comments align with IBM’s broader policy recommendations for AI, as outlined in a recent blog post. He cautioned regulators against requiring licenses to develop AI technology, breaking ranks with competitors like Microsoft. Additionally, he urged against granting blanket immunity to AI creators and deployers, emphasizing the importance of holding them responsible for any negative consequences arising from AI use.
In Krishna’s view, while companies like IBM can advocate for accountability, it is ultimately lawmakers who must write the rules governing AI. He encourages Congress and the federal government to define the framework for accountability in the AI industry.
IBM recently announced a significant move: it will indemnify business customers against claims that its generative AI unintentionally infringed copyright or other intellectual property rights. The decision mirrors a similar initiative by Microsoft and aims to ease concerns about generative AI models drawing on vast datasets to create images, text, and other content.
Krishna believes that this move will accelerate market growth and set a crucial precedent in the industry. He stresses that accountability should be a shared responsibility across the AI ecosystem, emphasizing the need for ongoing discussions on how to enforce it effectively.
Arvind Krishna’s call for accountability in the AI industry highlights the evolving landscape of AI regulation and governance. As technology continues to advance, the discussion around AI ethics, legal liability, and responsible deployment becomes increasingly vital. The path forward will require collaboration between industry leaders, regulators, and policymakers to strike the right balance between innovation and accountability in the AI ecosystem.