When it comes to AI research, Google has put in place measures designed to help the firm navigate potentially sensitive topics, such as race and religion, more effectively. But some have suggested that the level of caution exercised by the company could amount to censorship.
According to a Reuters report, Google has added an extra layer of review for research produced by its experts, who now have to consult legal, policy and public relations teams before pursuing sensitive topics such as facial recognition. On a few occasions, researchers were also advised to “take great care to strike a positive tone”.
It’s not clear when Google started enforcing the new policy, but people familiar with the matter say it began sometime in June.
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” reads a document provided to research staff.
Managers behind the new policy have said it doesn’t mean researchers should “hide from the real challenges” posed by the use of AI.
But, speaking to Reuters, senior scientist Margaret Mitchell warned of the dangers of such a policy.
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” she said.
Google has yet to make an official statement on the matter.