Google reportedly asked employees to ‘strike a positive tone’ in research papers

Google has added a layer of review for research papers on sensitive topics, including gender, race, and political ideology. A senior leader also instructed researchers to “strike a positive tone” in a paper this summer. The news was first reported by Reuters.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” the policy reads. Three employees told Reuters the rule started in June.

The company has also asked employees to “refrain from casting its technology in a negative light” on several occasions, Reuters said.

Employees working on a paper about recommendation AI, which is used to personalize content on platforms like YouTube, were told to “take great care to strike a positive tone,” according to Reuters. The authors later updated the paper to “remove all references to Google products.”

Another paper, on the use of AI to understand foreign languages, “softened a reference to how the Google Translate product made mistakes,” Reuters wrote. The change came in response to a request from reviewers.

Google’s standard review process is designed to ensure that researchers do not inadvertently disclose trade secrets. But the review of “sensitive topics” goes beyond that. Employees wishing to evaluate Google’s own services for bias are asked to consult the legal, public relations, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.

The search giant’s publication process has been in the spotlight since the firing of AI ethicist Timnit Gebru in early December. Gebru says she was fired over an email she sent to the Google Brain Women and Allies listserv, an internal group for Google AI employees. In it, she spoke about Google executives pushing her to retract a paper on the dangers of large language models. Jeff Dean, Google’s head of AI, said she had submitted it too close to the deadline. But Gebru’s own team pushed back on this claim, saying the policy was applied unevenly and in a discriminatory manner.

Gebru reached out to Google’s PR and policy team in September regarding the paper, according to The Washington Post. She knew the company might take issue with certain aspects of the research, since Google uses large language models in its search engine. The deadline for making changes to the paper was not until the end of January 2021, giving researchers ample time to respond to any concerns.

One week before Thanksgiving, Megan Kacholia, vice president of Google Research, asked Gebru to retract the paper. The following month, Gebru was fired.

Google did not immediately respond to a request for comment from The Verge.