
Google suspends engineer who claims its AI is sentient


Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he became concerned that the company’s AI chatbot system had achieved sentience, The Washington Post reports. The engineer, Blake Lemoine, works in Google’s Responsible AI organization and had been testing whether its LaMDA model generates discriminatory language or hate speech.

The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generate about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and a subjective experience.”

Google believes Lemoine’s actions relating to his work on LaMDA violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative of the House Judiciary Committee about allegedly unethical activities at Google. In a June 6th Medium post, the day he was placed on administrative leave, the engineer said he had sought “a minimal amount of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.

The search giant announced LaMDA publicly at Google I/O last year, saying it hopes the model will improve its conversational AI assistants and make exchanges more natural. The company already uses similar language-model technology for Gmail’s Smart Compose feature and for search-engine queries.

In a statement given to WaPo, a Google spokesperson said there was “no evidence” that LaMDA is sentient. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging claims, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.

A linguistics professor interviewed by WaPo agreed that it is wrong to equate convincing written responses with sentience. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said University of Washington professor Emily M. Bender.

Timnit Gebru, a prominent AI ethicist Google fired in 2020 (though the search giant claims she resigned), said the discussion of AI sentience risks “derailing” more important ethical conversations surrounding the use of artificial intelligence. “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted. “Derailing mission accomplished.”

Despite his concerns, Lemoine said he intends to keep working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.

Update June 13, 6:30 AM ET: Updated with a further statement from Google.
