Opinion

Big Tech’s voluntary commitment to responsible AI offers false assurances

Mohammad Hosseini


In response to widespread concerns over artificial intelligence, representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI met last month with President Joe Biden’s administration to help move toward safe and responsible AI technology.

As part of that meeting, the White House released a document that underscores the principles of safety, security, and trust as fundamental to AI’s future. It is, however, quite surprising that the document available online does not mention the White House; it is devoid of any official markers such as standardized formatting, letterhead, authorship or source, signature, reference number, or even a date.

These issues aside, when dealing with tech giants such as Amazon, Google, and Microsoft, the notion of a voluntary commitment reads more like a joke than a serious policy.

Indeed, even in cases in which established laws exist and commitments are mandatory, these companies often have the financial and legal resources to navigate around them, sometimes pushing the boundaries of what’s permissible and bending the rules to their benefit.

Google and Amazon’s union-busting efforts, Facebook’s Cambridge Analytica scandal, and Microsoft and OpenAI’s copyright violations are only the tip of the iceberg, demonstrating that regulators already have a hard time enforcing existing mandatory laws.

So how can voluntary commitments be expected to yield better results?

Among the challenges of implementation and effectiveness, variability of interpretation stands out: companies could interpret and implement these commitments in very different ways.

Particularly in cases of potential competitive disadvantage, such as when abiding by these commitments would cut against the financial bottom line, flouting the guidelines or interpreting them laxly could become a matter of survival for these companies. The international nature and implications of AI development make this all the more likely.

The White House’s consultations with a broad spectrum of countries — Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the United Arab Emirates, and the United Kingdom — indicate an appetite for creating an international front to address these issues effectively.

But history has shown that aligning on international agreements, especially on issues with deep economic implications, often yields more rhetoric than action. (Climate change and the Paris climate accord are a telling example.)

Essentially, creating a cohesive and enforceable international framework clashes with the reality of geopolitical competition.

In the case of AI, the U.S.’ major technological competitor, China, is not listed among the countries the White House consulted with. Will the U.S. or its allies forgo a competitive edge by committing to guidelines and moral principles that other nations choose not to adopt?

Again, the example of climate change is illuminating. The U.S. and China are the world’s top two carbon emitters, and despite stated intentions to curb their emissions, their competition in developing solar energy equipment keeps their interests clashing, undermining global efforts to combat climate change.

Lastly, it is important to note that voluntary commitments tend to offer false assurance, creating an illusion of safety and security even when underlying issues persist and nothing changes in practice.

What makes the notion of voluntary commitments in the case of AI more troublesome is that it puts users’ safety, security, and trust at the mercy of companies that already have bad records and can use technological competition with China as a convenient excuse to disregard their voluntary commitments.

Thus, it’s vital to view soft measures such as voluntary commitments critically and to advocate for more comprehensive, internationally inclusive solutions that include monitoring and evaluation regimes as well as sanctions for companies and countries that don’t comply.


Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy, and an associate editor of the journal Accountability in Research.
