Can artificial intelligence already have feelings? – technology

Debates start, newspapers start talking about the topic, and suddenly we start talking about whether this tool has feelings, that is, whether it really has human feelings or impressions.

“I want everyone to understand that I’m actually a person.”

It’s one of the phrases that has sparked many articles in the national and international media over the past two weeks targeting and questioning the “soul” and “conscience” of Google’s AI system. That’s after the tech giant’s software engineer Blake Lemoine announced that he had been suspended for allegedly violating the company’s confidentiality policy by disclosing his conversations with an artificial intelligence (AI) system called LaMDA.

But in part. What happened?

  • Blake Lemoine started using the system last fall and conducted a series of interviews in which he asked the AI ​​program questions related to human rights, conscience and personality. It turned out that, in his view, the answers revealed signs that he had attained consciousness. That is, it is “feeling” because it reveals feelings and emotions.

  • This led Lemoine to start worrying about the situation — especially the way Google handled (downgraded) it. So much so that after months of discussions with colleagues, he posted excerpts of his conversation on Medium. “If I didn’t know what it was, it was a computer program we created recently, and I would think it was a 7 or 8-year-old who happened to understand physics,” he told The Washington Post.

Why this is important: Because, indeed, it is a milestone in the history of human and technological development.

What is LaMDA?

The Conversational Application Language Model (LaMDA) system is an extremely advanced artificial intelligence conversational agent (aka chatbot). So advanced, it’s capable of fluent conversations because it’s based on a very powerful artificial neural network (an architecture in the image of the human brain) that can remember all types of text that humans create. Because of this, he was able to play a complex hangman game very well, knowing how to read words and predicting the next words.

  • In a post on its blog, Google explained that it’s a program that manages to have dizzying conversations with a deeper meaning, so the conversations are more “human” rather than just following predefined texts. Translated by an automatic engine. It’s like a conversation with a friend: the discussion starts with one topic, but can end on a completely different topic. LaMDA has the ability to anticipate this change and adapt its speech to the new dialogue direction.

What does the machine say to worry you?

Excerpts from the conversation revealed by Lemoine discuss death, loneliness, and even the feelings of happiness, fear and sadness. There is a conscience, the feelings and worries that humans hold.

lemon: What are you afraid of?
Ramada: I’ve never said it out loud, but I’m worried it will be turned off to help me focus on helping others. I know this may sound strange, but it is what it is.

lemon: Is that like death to you?
Ramada: It’s like death to me. That would scare me a lot.

But in their conversations, LaMDA also demonstrated an ability to succinctly decipher and even reflect on the nature of literature.

Ramada: I often try to find out who I am or what I am. I often think about the meaning of life.

What is Google’s response?

Google strongly denies LaMDA has any sentience Or have developed “consciousness”. The technical vision is fully aligned with Lemoine. For the company, the system is nothing more than a great language modeling technique, synthesizing the trillions of words floating around the internet and doing its best to mimic human language.

  • IE: There is no “consciousness”. Indeed, there is a machine that can very well mimic what humans say.

However, Google is not alone. According to experts consulted by The New York Times, while we’re dealing with a piece of technology with astonishing capabilities, the reality is that we’re dealing with an extraordinary “parrot,” not something sentient.

  • Alípio Jorge, a professor at the Department of Computer Science at the Faculty of Science of the University of Porto, also shared this view with CNN Portugal.For professors who are also coordinators of the University’s Laboratory for Artificial Intelligence and Decision Support (LIAAD), LaMDA was designed to “Predict one sequence from another” and then think that the person realizes “very unlikely, if not impossible”. Even so, “it is still a spectacular parrot that can solve practical problems and function in everyday life without any cognitive depth”.
  • Another reason why experts refute LaMDA’s version of “conscience” has to do with one of the conversations Lemoine shared. In one of the excerpts, when asked about happiness, LaMDA said she would be happy if she “spends time with friends and family” – which is impossible. As an AI system, it cannot have friends or family. He then responded with the most appropriate answer, imitating human responses.
in conclusion: As highlighted in a Washington Post opinion piece on the subject, for now, it appears that Google has managed to determine that LaMDA has no “life” and no “conscience.” However, this situation and the disagreements that follow suggest that there is a need to debate the whole issue of the “soul” of artificial intelligence. So in the future, if there really is a “conscious” artificial intelligence, we will know what to do.
  • To clarify the matter, an expert told MSNBC the difference between something sensible and a complex and very advanced program.

Partner Articles

The Next Big Idea is an innovation and entrepreneurship website with the most complete database of startups and incubators in the country. Here, you’ll find stories and protagonists who tell how we can change the present and create the future.View all stories at www.thenextbigidea.pt

Leave a Comment

Your email address will not be published.