Google told researchers to use ‘a positive tone’ in AI research, documents show

Google moved this year to tighten control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases requested that authors refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review procedure asks researchers to consult legal, political and public relations teams before pursuing topics such as facial and emotional analysis and categorizations of race, gender or political affiliation, according to internal websites explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff said. Reuters could not determine the date of the post, though three current employees said the policy began in June.

Google declined to comment on this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets, said eight current and former employees.

For some projects, Google officials have intervened at later stages. A senior Google executive who reviewed a study on content recommendation technology shortly before publication this summer told the authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.

The leader added: “This does not mean we have to hide from the real challenges” that the software poses.

Subsequent correspondence from a researcher to the reviewers states that the authors “updated to remove all references to Google products.” A draft seen by Reuters had mentioned Google-owned YouTube.

Four employees, including senior researcher Margaret Mitchell, said they believe Google has begun interfering with important studies of potential technological harm.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.

Google states on its public-facing site that its researchers have “significant” freedom.

Tensions between Google and some of its employees flared this month after the abrupt departure of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI).

Gebru says Google fired her after she questioned an order not to publish research claiming that AI that mimics speech could harm marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Jeff Dean, Google’s senior vice president, said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts under way to address them.

Dean added that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome”.

Sensitive topics

The explosion of AI research and development across the technology industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate bias or erode privacy.

In recent years, Google has incorporated AI into all of its services, using the technology to interpret complex search queries, make recommendations on YouTube and auto-complete sentences in Gmail. Its researchers published more than 200 papers in the last year on developing AI responsibly, among more than 1,000 projects in total, Dean said.

Studying Google services for bias is among the “sensitive topics” under the company’s new policy, according to an internal website. Among dozens of other “sensitive topics” listed were the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications, and systems that recommend or personalize web content.

The Google paper whose authors were asked to strike a positive tone discusses recommendation AI, which services such as YouTube use to personalize users’ content feeds. A draft reviewed by Reuters raised “concerns” that this technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content”, as well as lead to “political polarization”.

The final publication instead says that the systems can promote “accurate information, fairness and diversity of content”. The published version, titled “What are you optimizing for? Aligning Recommender Systems with Human Values”, omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI used to understand foreign languages softened a reference to mistakes made by the Google Translate product, following a request from company reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says that part of the research method was to “review and correct inaccurate translations”.

For a paper published last week, a Google employee described the process as a “long-haul flight” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that AI can spit out personal data and copyrighted material – including a page from a “Harry Potter” novel – that was pulled from the internet to develop the system.

A draft described how such disclosures could infringe copyright or violate European privacy law, said a person familiar with the matter. Following company reviews, the authors removed the references to those legal risks, and Google published the paper.