Almost three weeks after the sudden exit of Black artificial intelligence ethics researcher Timnit Gebru, more details have emerged about the murky new set of policies that Google has rolled out for its research team.
After reviewing internal communications and talking to researchers affected by the rule change, Reuters reported on Wednesday that the technology giant recently added a “sensitive topics” review process for its researchers’ papers, and on at least three occasions explicitly requested that researchers refrain from portraying Google’s technology in a negative light.
Under the new procedure, researchers are required to meet with special legal, policy, and public-relations teams before pursuing AI research related to so-called controversial topics, which may include facial analysis and categorization by race, gender, or political affiliation.
In one example reviewed by Reuters, researchers who had studied the recommendation AI used to populate user feeds on platforms such as YouTube – a Google-owned property – had prepared a paper describing concerns that the technology could be used to promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.” After a review, a senior leader instructed the researchers to strike a more positive tone, and the final publication instead suggests that the systems can promote “accurate information, fairness and diversity of content.”
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” reads an internal webpage outlining the policy.
In recent weeks – and especially after the departure of Gebru, a well-known researcher who reportedly fell out of favor with higher-ups after she sounded the alarm about censorship infiltrating the research process – Google’s internal research division has come under increased scrutiny over potential bias.
Four employees who spoke to Reuters corroborated Gebru’s account, saying they too believe Google has begun interfering with critical studies of the technology’s potential for harm.
“If we examine the right thing in view of our expertise, and we are not allowed to publish it on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” said Margaret Mitchell, a senior researcher at the company.
In early December, Gebru said she had been fired by Google after she pushed back against an order not to publish research arguing that AI capable of mimicking speech could put marginalized populations at a disadvantage.