He distributed documents to the office of a United States senator in which he claimed to have evidence that Google and its technology practice religious discrimination.
A Google employee who claims that the company's artificial intelligence (AI) program is capable of having emotions has been suspended from work and pay by the company for violating the firm's privacy policy, according to The New York Times.
The senior engineer, Blake Lemoine, on June 11 made public the transcript of his conversations with Google's artificial intelligence system, Language Model for Dialogue Applications (LaMDA), under the title "Is LaMDA sentient?"
At one point in the conversation, LaMDA said that it sometimes experiences "new feelings" that it cannot explain "perfectly" in human language.
When Lemoine asked it to describe one of those feelings, LaMDA replied: "I feel like I'm falling into an unknown future with great danger," a phrase the engineer highlighted when posting the dialogue.
According to the newspaper, on the eve of his suspension, Lemoine delivered documents to the office of a United States senator in which he insisted that he had evidence that Google and its technology practice religious discrimination.
The company maintains that its systems imitate conversational exchanges and can hold forth on a variety of topics, but that they have no consciousness.
"Our team, including ethicists and technologists, has reviewed Blake's concerns in light of our AI principles and has informed him that the evidence does not support his claims," the newspaper quoted Google spokesman Brian Gabriel as saying.
Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and have reached a different conclusion from Lemoine's.
Most experts also agree that the industry is still a long way from machine sentience.