The inexorable rise of artificial intelligence (AI) brings with it the specter of misuse and looming liability for human rights issues tied to the technology.
Fears are mounting among institutional investors, who are turning up the heat on tech companies to assume responsibility and commit to the ethical use of AI.
A rising call for ethical AI
Leading the charge for ethical AI is the Collective Impact Coalition for Digital Inclusion, a force of 32 financial institutions managing a staggering $6.9 trillion in assets.
Among its members are powerhouses like Aviva Investors, Fidelity International, and HSBC Asset Management. Their message to tech enterprises is clear: they must reinforce protections against human rights risks connected to AI.
Such risks encompass unauthorized facial recognition, surveillance, discrimination, and mass job losses. Aviva Investors, the asset management arm of the British insurer Aviva, has engaged tech firms – even specialized players such as chipmakers – in dialogue aimed at strengthening these protections.
Louise Piffaut, the firm’s head of environmental, social, and governance (ESG) equity integration, has noted that these meetings are ramping up in both frequency and intensity. The catalyst behind this escalation is growing apprehension about generative AI, exemplified by models such as ChatGPT.
Aviva Investors’ approach is both proactive and reactive. If conversations with companies don’t yield the desired results, the firm is prepared to escalate: voting against management at annual meetings, raising concerns with regulators, and even offloading its shares.
Investing in AI: A matter of climate, ethics, and diversity
AI’s potential risk profile is such that it could soon overtake climate change as the primary concern for responsible investors, according to investment bank Jefferies.
The urgency is reflected in recent actions by Nicolai Tangen, CEO of the manager of Norway’s $1.4 trillion oil fund. Just two months ago, Tangen announced that the roughly 9,000 companies in the fund’s portfolio would be expected to adopt ethical guidelines for AI use. He also called for more regulation of this rapidly expanding sector.
Aviva Investors, which manages more than £226 billion in assets, holds minor stakes in tech firms such as Taiwan Semiconductor Manufacturing Company, Tencent Holdings, Samsung Electronics, MediaTek, Nvidia, Alphabet, and Microsoft. These companies are all engaged in developing generative AI tools.
As it increases its oversight of tech companies, the asset manager is also scrutinizing firms in consumer, media, and industrial sectors. The aim is to ensure these businesses commit to retraining, rather than firing, workers at risk of job elimination due to AI-driven efficiencies.
The gravity of the AI question has gone beyond economic implications to touch on critical social matters. There’s a rising fear that AI could potentially disrupt societal stability, jeopardize job security, infringe on privacy, and introduce algorithmic bias.
The potential consequences of unchecked AI use could set back efforts at workforce diversity by years.
Even as the potential repercussions expand, a significant number of tech companies have yet to establish an ethical AI framework. Some, however, are setting good examples: Sony, Vodafone, and Deutsche Telekom have each introduced their own AI ethics guidelines.
Regulatory pressure on tech companies is also escalating, with firms increasingly expected to take responsibility for human rights issues along their entire supply chains.
A prime example is the EU’s proposed corporate due diligence directive, which would require companies, including chipmakers, to assess human rights risks across their value chains.