A group of Microsoft investors has filed a formal shareholder proposal calling on the company’s board to issue a report on the material risks of its generative artificial intelligence technology facilitating misinformation and disinformation, as well as assessing the effectiveness of Microsoft’s efforts to remediate those harms.
The proposal raises concerns that AI tools such as ChatGPT “may dramatically increase misinformation and disinformation globally, posing serious threats to democracy and democratic principles,” citing numerous studies and reports alleging that ChatGPT “could be the most powerful tool in widely spreading misinformation.”
With questions about defamation liability for AI-generated content still unresolved and ChatGPT already under investigation by European and Canadian data protection authorities, shareholders are concerned that Microsoft may be exposing itself to significant litigation and regulatory risk.
Misinformation and disinformation enabled by AI hallucinations (instances where an AI returns blatantly false information) also stand to have a deeply deleterious effect on democratic elections, which is of particular concern going into the 2024 election season both in the United States and globally.
While Microsoft has led the industry in developing AI business principles, the company’s decision to roll out a product like ChatGPT with full knowledge that the application regularly produces results that “sound convincing but are incomplete, inaccurate, or inappropriate” raises doubts that Microsoft has adequately assessed the threats to public welfare or material risks to shareholders.
Shareholders filing the proposal are represented by lead filer Arjuna Capital, along with co-filers Open MIC, Eko, and Azzad Asset Management.
“It’s not a question of whether ChatGPT generates misinformation; it does. The question is to what extent the technology can and will be manipulated to spread disinformation globally, causing political and societal harm,” said Natasha Lamb, Managing Partner at Arjuna Capital. “The next question is how Microsoft plans to address such an immense business risk.”
“Microsoft is well aware of how frequently ChatGPT spouts blatant falsehoods, and how easily it could be deployed by malicious actors to undermine trust in elections,” said Michael Connor, Executive Director of Open MIC. “It’s not enough for the company to claim it has mitigated the threats to democracy and risks to investors—Microsoft needs to show its work.”
“At a time when misinformation and disinformation already create significant social harms and challenge our ability to set responsible guardrails, the rush to deploy fallible tools such as ChatGPT raises serious questions,” noted Christina O’Connell, Senior Manager of Investments and Shareholder Engagement at Eko. “Our member shareholders cannot properly evaluate their investments without a transparent look at the risks posed by an uncertain regulatory environment, unknown social impacts, and unclear plans for harm mitigation.”
Assuming Microsoft does not take legal action to attempt to block it, investors will have the opportunity to vote on the shareholder proposal at Microsoft’s annual meeting, which will likely be held in December.