3/20/2024 - technology-and-innovation

ELIZA and PARRY, the grandparents of ChatGPT

By Melissa Bargman


On November 30, 2022, the world was shaken by the launch of a new AI system: the famous ChatGPT, developed by the American company OpenAI. In just five days it reached one million users, and within two months, 100 million. There was no sphere of human life left out of the controversy over the risks and opportunities of this new and powerful technology.

What can I do for you today?

The name ChatGPT stands for Chat Generative Pre-trained Transformer. To understand exactly what it is, we asked the system directly. It replied:

“ChatGPT is an artificial intelligence system. It uses natural language processing technology to understand and generate responses in written conversations. It works through machine learning and is based on a neural network architecture called GPT (Generative Pre-trained Transformer), specifically GPT-3.5 in my case. This model can hold conversations, answer questions, generate coherent text, and perform a variety of tasks related to natural language.”

In order to understand this definition, we must first understand the concepts it evokes and, above all, travel back in time to discover that ChatGPT is no more than a prodigious version of combinations of powerful processing systems created decades ago.

The non-humans around us

To understand the foundation of ChatGPT, we have to know that technically it is a chatbot. Conversational bots are software programs that "are taught to hold a conversation with a person and provide automatic answers to user queries." To achieve this artificial communication, there are standard programming techniques that select from a limited set of responses based on the keywords received from people. They simulate the reactions and phrases of a human interlocutor, carrying the conversation forward with a certain logic, to the point that the user may not be able to tell whether a machine or a person is responding. Smart assistants such as Siri, Alexa, and Google Assistant, along with the chats we usually interact with in support services, are based on Natural Language Processing, a field of computer science whose goal is the formulation and research of effective mechanisms for communication between people and machines through human language (in an ever-growing number of languages).
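The keyword-selection technique described above can be sketched in a few lines. This is a minimal, hypothetical illustration; the keywords and canned replies are invented for the example and do not come from any real assistant:

```python
import random
import re

# Keyword -> canned responses. A classic chatbot scans the user's message
# for known keywords and picks one of a limited set of prepared replies.
RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "sad":    ["I am sorry to hear you are sad.",
               "Why do you think you feel sad?"],
    "hello":  ["Hello! What would you like to talk about?"],
}
DEFAULT = ["Please, go on.", "Can you elaborate on that?"]

def reply(message: str) -> str:
    """Return a canned response triggered by the first keyword found."""
    words = re.findall(r"[a-z']+", message.lower())
    for word in words:
        if word in RULES:
            return random.choice(RULES[word])
    return random.choice(DEFAULT)

print(reply("Hello there"))
print(reply("I feel sad today"))
```

Note that the program understands nothing: it only matches strings, which is exactly why a patient user can quickly push it outside its limited repertoire.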

An adventure for the human mind

As with any innovative technology, the first domains in which it arises or is applied tend to be health or, as with the Internet in its origins, weapons and tools for war. AI is no exception, so it is no accident that the first chatbot developments were related to health and, above all, to mental health. If we consider that artificial intelligence aims to develop systems capable of thinking and acting like humans, we must first understand how human beings think and how brain processes work, not only under normal conditions but also in their deviations, and why those deviations arise.

Eliza, the first portable psychologist

Between 1964 and 1966, Joseph Weizenbaum, a computer science professor at the Massachusetts Institute of Technology (MIT), developed what would become the first chatbot in history: ELIZA. The project consisted of a system that simulated a trained Rogerian psychotherapist, holding a logical conversation with a person through keyword analysis. Since it was one of the first technological developments that required no technical knowledge to try out, anyone could use it. Within a few months ELIZA spread through academic institutions across the US and even became a toy in certain expert circles. In his 1976 book Computer Power and Human Reason: From Judgment to Calculation, Weizenbaum highlights three points that, in his words, "shocked" him in his analysis of the person-machine relationship:
  • Many professional psychiatrists seriously claimed that the program could become an almost automatic mental-health specialist, ready to interact with patients in the near future.
  • "I was surprised by how quickly and how deeply emotionally engaged people became when talking to the system, and how unmistakably they anthropomorphized it." The author recounts that his own secretary, who had watched him work tirelessly on the program and knew it was merely computer code, began to talk with ELIZA. After a few sessions, she asked Weizenbaum to leave the room while she interacted with the system. In fact, people came to confide extremely sensitive personal matters to the machine.
  • Weizenbaum could not understand how so many scholars considered that ELIZA had solved the problem of machine interpretation of human language. Quite the opposite: the system demonstrated that communication must always be analyzed and studied within its context and the social structure in which it circulates.
Seen this way, achieving communication between machines and humans is an (almost) impossible mission, because there are as many social conventions as there are people in history. That is why we talk about the "hallucinations" of systems like ChatGPT.
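The Rogerian trick attributed to ELIZA, reflecting the patient's own statement back as a question, can be sketched as follows. This is a hypothetical illustration, not Weizenbaum's original script; the patterns and pronoun table are invented for the example:

```python
import re

# Swap first- and second-person words so "my mother" becomes "your mother".
# A tiny, invented subset of what a real ELIZA-style script would contain.
PRONOUN_SWAP = {"i": "you", "me": "you", "my": "your",
                "am": "are", "you": "I", "your": "my"}

def reflect(statement: str) -> str:
    """Swap pronouns in a statement to mirror it back at the speaker."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAP.get(w, w) for w in words)

def rogerian_reply(statement: str) -> str:
    """Turn 'I am X'-style statements into a reflective question."""
    m = re.match(r"(?i)i am (.+)", statement.rstrip(".!?"))
    if m:
        return f"Why do you think you are {m.group(1)}?"
    return f"You say that {reflect(statement)}?"

print(rogerian_reply("I am unhappy"))        # Why do you think you are unhappy?
print(rogerian_reply("My mother hates me"))  # You say that your mother hates you?
```

The effect Weizenbaum observed comes entirely from this mirroring: the system contributes no understanding of its own, yet the reflected question feels attentive.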

PARRY, the elementary schizophrenic of AI

Another iconic AI programming project was developed by psychiatrist Kenneth Colby at Stanford University in 1972. PARRY simulated a patient suffering from paranoid schizophrenia. In an experiment to test its effectiveness, 33 professional psychiatrists were divided into two groups: some gave therapy to PARRY, others to real patients. Evaluators were then asked to identify in which cases it was a machine-therapist interaction and in which a real patient-therapist one. The psychiatrists identified correctly only 48% of the time, roughly chance level. Conclusion: AI, in its most archaic versions, was able to deceive specialists entirely dedicated to mental health.

So that illusion does not become hallucination

No one doubts that these automatic interaction systems have simplified many processes that previously required reading manuals, taking notes in diaries, or searching directories. However, when it comes to looking up sensitive information or news, or producing one's own material, it is always necessary to check the data and go to the sources. Although they take on human form, in many cases even with personifications and names of their own that greet us and smile, they are no more than thousands of lines of code that search for keywords and replicate automatic behaviors. It is in our human nature to relate to whatever confronts us as human-like... but let us not hand our feelings over to code that may hallucinate or, better said, deceive us.


Melissa Bargman

Hi, I'm Melissa, a social communicator and professor of Comparative Legislation at the Faculty of Social Sciences of the University of Buenos Aires. I research new technologies and their application in everyday life, their regulation, and their opportunities and risks for society.
