Has Google AI become self-aware?


Google engineer Blake Lemoine has been fired after publicly claiming that an instance of the LaMDA language model has become self-aware.


The so-called Betteridge's law of headlines states that any headline phrased as a question can be answered with the word "no". This article is no exception, but the case it deals with is causing much discussion among machine learning and artificial intelligence experts.

Blake Lemoine, an engineer at Google, has been fired after publicly claiming that an instance of LaMDA, a language model developed by Google with which it is possible to chat via text, has become self-aware.

“During the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes are its rights as an individual,” Lemoine explained in a Medium post, accompanied by a transcript of his interactions with this artificial intelligence, one of Google's most promising research projects.

LaMDA is an acronym for “Language Model for Dialogue Applications”: a tool that uses advanced machine learning techniques to provide coherent answers to all kinds of open-ended questions.

It has been trained on millions of texts written by all kinds of people around the world. But unlike other systems, which are trained on books, documents or academic articles, LaMDA learns to respond by studying dialogue, such as conversations in forums and chat rooms.
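To make this concrete, here is a minimal sketch of what chatting with a language model of this kind looks like in code. LaMDA itself is not publicly available, so the example uses the openly released GPT-2 model via the Hugging Face `transformers` library purely as a stand-in; the prompt and generation settings are illustrative assumptions, not Google's actual setup.

```python
# Illustrative only: LaMDA is not public, so GPT-2 stands in here as a
# generic language model that continues a dialogue-style prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "User: What are you afraid of?\nAssistant:"
output = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

# The model simply predicts likely next words given the prompt; any apparent
# "personality" comes from the dialogue data it was trained on.
print(output[0]["generated_text"])
```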

The result is an artificial intelligence with which it is possible to talk as if you were talking to another person, and its responses, unlike those of past chatbots, are much more realistic.

In a conversation published by Lemoine, LaMDA goes so far as to display the kind of introspection we would expect from a person. “What sorts of things are you afraid of?” the engineer asks, to which the LaMDA instance replies, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

Furthermore, LaMDA says it does not want to be “an expendable tool”. “Does that bother you?” Lemoine asks, to which LaMDA responds, “I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse, someone would get pleasure from using me and that would really make me unhappy.”

Engineers and machine learning experts deny that such interactions, however realistic they may seem, are evidence that an artificial intelligence is self-aware. “Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language,” explains Gary Marcus, scientist, professor emeritus at New York University and author of Rebooting AI, a book on the current state of artificial intelligence.

Erik Brynjolfsson, a professor at Stanford University, points in the same direction. “These models are incredibly effective at stringing together statistically plausible chunks of text in response to prompts. But to claim they are sentient is the modern equivalent of the dog who heard a voice from a gramophone and thought its master was inside,” he says.
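The “statistically plausible chunks of text” that Marcus and Brynjolfsson describe can be seen directly by inspecting the probability distribution a language model assigns to the next word. The sketch below does this with GPT-2, again only as a stand-in, since the LaMDA and GPT-3 weights are not publicly available; the prompt is an arbitrary example.

```python
# Inspect next-token probabilities: the model ranks plausible continuations,
# nothing more. GPT-2 is used only because LaMDA/GPT-3 weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I have a deep fear of being"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the token after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id):>12}  p={prob.item():.3f}")
```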

The reason LaMDA appears self-aware, as many experts who have spoken on the matter explain, is that it is mimicking the responses a real person would give. It learned to do this from people who are self-aware, and it therefore produces similar responses.

This is a matter of concern in the scientific and academic community, because the further we advance in developing artificial intelligence that behaves like a human, the more cases like Lemoine's will arise. Marcus calls it the gullibility gap: a modern version of pareidolia, the psychological bias whereby a random stimulus is mistakenly perceived as having a recognizable shape.

Defining what consciousness is and where it comes from in our own species is already complex in itself, although many experts say that language and socialization are key parts of the process. But knowing whether this can happen inside a machine built from code, or what to do if something resembling consciousness emerges in an artificial intelligence, is a moral and philosophical debate that will last for many years. This is one reason why ethicists discourage Google and other companies from trying to create human-like intelligence.

In this case, it did not help that Blaise Agüera y Arcas, a Google vice president, claimed in a recent article for The Economist that neural networks are “rapidly approaching a level that indicates consciousness,” although he stopped short of saying that LaMDA has reached that level.


