Vatican’s AI ethics plan lacks the legal restrictions it needs to be effective

Microsoft and IBM have added a divine touch to their AI ethics efforts by signing a new pledge endorsed by His Holiness the Pope.

The so-called “Rome Call for AI Ethics” promises to develop technologies that protect the planet and all its people by honoring six principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.

These noble values are already common in corporate AI ethics initiatives, including Microsoft’s own. But critics argue that these programs are designed to avoid government regulation by showing that tech giants can police themselves, voluntarily following codes of practice they’ve written for themselves.

The Rome Call was co-signed by Microsoft president Brad Smith, IBM executive vice-president John Kelly, and Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life.

Archbishop Paglia said their partnership emerged from the friendships he had developed with the leaders of the tech giants. The Rome Call suggests that their views on AI ethics are conveniently well-aligned.

The document does encourage “new forms of regulation” for AI, but it proposes no specific mechanisms for achieving this, nor any restrictions on high-risk technologies such as facial recognition.

This approach reflects big tech’s growing recognition that new rules are inevitable but can still be molded to fit the needs of the industry.

The recent public pronouncements of Facebook founder Mark Zuckerberg epitomize this shift in stance.

As news spread that the EU was preparing new regulations on AI and US lawmakers were threatening to break up Facebook, Zuckerberg publicly called for more regulation of Big Tech, with his own helpful suggestions on the form this could take. US Congressman David Cicilline responded in a tweet that “Mark Zuckerberg doesn’t get to make the rules anymore.”

More AI ethics-washing?

AI ethics programs have become widespread in Silicon Valley, but the guidelines typically consist of vague promises rather than clear rules and mechanisms to enforce them.

This has led to accusations that the initiatives are primarily attempts at “ethics washing” that can deflect criticism and ward off government regulation.

The transparency these initiatives ostensibly aim to uphold is often lacking in the companies’ own operations. Microsoft, for example, claims that as a result of recommendations from its AI ethics committee, “significant sales have been cut off,” but the company has offered no details on which technologies had been curtailed.

If these initiatives are to provide any real ethical benefit, they need practical mechanisms that turn principles into practice, independent external oversight, and genuine transparency about decision-making procedures.

In a message read to participants in a workshop titled “The ‘Good’ Algorithm?” Pope Francis said that “a critical contribution can be made by the principles of the Church’s social teaching” to the ethical development of AI.

The Vatican may be able to give IBM and Microsoft moral guidance, but unless its principles are backed by legal restrictions, trusting that they will be followed is an act of blind faith.
