Chatbots: History and the Turing Test

Most of us have heard of chatbots or have come in contact with one in one way or another. Basically, a chatbot is a piece of software designed to conduct a conversation or a dialogue. If you want to know more about what a chatbot is and how it works, you can read my previous article Here, but in this post I will mainly be discussing the history of chatbots and the Turing Test, a test used to determine whether a chatbot can deceive a human being. Let's dive in.
Let's start with the history.
In October 1950, Alan Turing proposed an approach for evaluating a computer’s
intelligence and famously named his method, The Imitation Game. The premise is
that an interrogator talks to two people through a “typewritten” machine (today we
would refer to this as instant messaging). The catch is that only one of the
conversations is with a real person – the other is with a computer.
Turing posited that, by the turn of the century (the year 2000), a computer should be able to play the game so well that, after five minutes of questioning, the average interrogator would make the correct identification no more than 70% of the time. While we've made a lot of progress since 1950, no algorithm has consistently reached this bar.
However, there has still been substantial progress in the field of chatbot
development, which has led to a multi-billion dollar industry and dozens of profitable products.
The chatbots that have been developed so far
ELIZA
ELIZA was developed by Joseph Weizenbaum at MIT Laboratories in 1966 and was
the first chatbot that made a meaningful attempt to beat the Turing Test. It used
pattern matching to pick out key phrases in a person's typed input, then repeated the
words back to the person in a premade template. The most famous implementation
of the ELIZA chatbot is DOCTOR. In this implementation, the chatbot acts like a
psychotherapist, responding to a patient’s statements by selecting a phrase from the
respondent and parroting them back in the form of a question. This form of chatbot
is rule-based, because the program responds to a person based on rules that a
developer establishes in a predefined script. In practice, though, the responses from a DOCTOR-programmed chatbot quickly become incoherent.
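To make the rule-based idea concrete, here is a minimal Python sketch of an ELIZA/DOCTOR-style responder. The patterns, templates, and the doctor_reply function are my own illustrative inventions, not Weizenbaum's original script; the real ELIZA also reflected pronouns (my -> your, I -> you) before filling the template.

```python
import random
import re

# Illustrative ELIZA/DOCTOR-style rules (not the original 1966 script): each rule
# pairs a regular expression with response templates; "{0}" is filled with the
# words the pattern captured, parroted back to the user as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
FALLBACKS = ["Please tell me more.", "Can you elaborate on that?"]

def doctor_reply(statement: str) -> str:
    """Return a templated response for the first rule that matches, else a fallback."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(match.group(1).strip(" ."))
    return random.choice(FALLBACKS)

print(doctor_reply("I feel trapped by my exams"))
# Possible output: "Why do you feel trapped by my exams?" -- the naive parroting
# (no pronoun reflection here) shows why such replies quickly become incoherent.
```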
PARRY
Kenneth Colby developed PARRY while at Stanford University in 1968. Colby built
PARRY with a similar rule-based method to ELIZA. However, it was designed to model
the behavior of a person with diagnosable paranoia. PARRY also had a richer
response library and was able to simulate the mood of a person by shifting weights
of mood parameters. PARRY would respond differently based on the relative weights of its anger, fear, and mistrust parameters. PARRY passed a modified Turing Test by fooling people who tried to distinguish it from a real person with paranoia.
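As a rough illustration of the mood-parameter idea (this is my own toy simplification, not Colby's actual model), the sketch below keeps numeric weights for anger, fear, and mistrust, nudges them based on the user's input, and picks a reply from the pool belonging to the dominant mood.

```python
import random

# Toy response pools keyed by mood (illustrative only, not PARRY's real vocabulary).
RESPONSES = {
    "anger": ["Why are you pushing me like this?", "I don't have to answer that."],
    "fear": ["I'd rather not talk about that.", "Something bad is going to happen."],
    "mistrust": ["Why do you want to know?", "Who told you to ask me that?"],
}

def update_moods(moods: dict, user_input: str) -> None:
    """Shift the mood weights using crude keyword triggers (a stand-in for PARRY's rules)."""
    text = user_input.lower()
    if any(word in text for word in ("police", "mafia", "follow")):
        moods["fear"] += 0.3
        moods["mistrust"] += 0.2
    if any(word in text for word in ("liar", "crazy", "wrong")):
        moods["anger"] += 0.4

def parry_reply(moods: dict, user_input: str) -> str:
    update_moods(moods, user_input)
    dominant = max(moods, key=moods.get)      # the strongest mood drives the response
    return random.choice(RESPONSES[dominant])

moods = {"anger": 0.1, "fear": 0.2, "mistrust": 0.3}
print(parry_reply(moods, "Do you think the mafia is following you?"))  # fear and mistrust rise
```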
A.L.I.C.E. (Artificial Linguistic Internet Computer Entity)
Richard Wallace started developing A.L.I.C.E. in 1995, shortly before leaving his
computer vision teaching job. Wallace improved upon ELIZA’s implementation by
continuing to watch the model while it had conversations with people. If a person
asked an A.L.I.C.E. bot something it did not recognize, Wallace would add a response
for it. In this way, the person who designs an A.L.I.C.E.-powered device could
continuously modify it by adding responses to unrecognized phrases. This means
that well-developed A.L.I.C.E. bots can respond to a variety of questions and
statements based on the developer’s needs. In a chapter from the 2009 book, Parsing
the Turing Test, Richard Wallace described this process as supervised learning,
because the developer – who he calls the botmaster – can supervise the learning of
the model.
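A rough sketch of that botmaster workflow might look like the following. The knowledge base and function names here are hypothetical, and real A.L.I.C.E. bots are written in AIML rather than Python dictionaries; the point is only the loop of logging unrecognized phrases and having a human add responses for them, so coverage grows over time.

```python
# Hypothetical illustration of the "botmaster" loop: not Wallace's actual AIML
# tooling, just the general supervised-learning idea described above.
knowledge_base = {
    "what is your name": "My name is ALICE.",
    "how are you": "I am functioning within normal parameters.",
}
unmatched_log = []  # phrases the bot could not answer, saved for the botmaster

def alice_reply(user_input: str) -> str:
    key = user_input.lower().strip(" ?!.")
    if key in knowledge_base:
        return knowledge_base[key]
    unmatched_log.append(key)  # logged so a human can write a response later
    return "I do not have an answer for that yet."

def botmaster_add_response(phrase: str, response: str) -> None:
    """The supervised step: the botmaster adds a response for a logged phrase."""
    knowledge_base[phrase.lower().strip(" ?!.")] = response

print(alice_reply("Who wrote you?"))   # unrecognized, gets logged
botmaster_add_response("Who wrote you?", "Richard Wallace started writing me in 1995.")
print(alice_reply("Who wrote you?"))   # now answered from the updated rules
```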
In 2000, 2001, and 2004, A.L.I.C.E. won the Loebner Prize, a contest started in 1990 to award the computer programs that are the most human-like. The Loebner Prize is awarded to the bot that performs best on the Turing Test. The judges of the Loebner competition hold two conversations simultaneously, one with a real person and the other with a bot, and the winner is the bot that tricks a judge the highest percentage of the time.
Jabberwacky
Jabberwacky was developed by Rollo Carpenter in the 1980s and was launched on
the web in 1997. It was the first chatbot that tried to incorporate voice interaction.
Two versions of Jabberwacky won the Loebner prize in 2005 and 2006. Jabberwacky
has undergone continuous development since it debuted on the web. When it
launched, it used a similar rule-based approach to previous models, like ELIZA and
PARRY. However, in 2008, the model was renamed Cleverbot and updated to include
a method for learning without the supervision of a botmaster. Cleverbot can parse
and save human responses to questions, and respond similarly when a human asks it the same question.
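That unsupervised twist can be sketched roughly like this (again, a deliberate oversimplification rather than whatever Cleverbot actually does internally): instead of a botmaster writing rules, the bot stores what humans said in reply to a given prompt and replays one of those answers when it sees a similar prompt again.

```python
import random
from collections import defaultdict

# Maps a normalized prompt to the human replies previously seen for it
# (a drastic simplification of Cleverbot's real mechanism).
learned_replies = defaultdict(list)

def normalize(text: str) -> str:
    return text.lower().strip(" ?!.")

def record_exchange(prompt: str, human_reply: str) -> None:
    """Save a human's reply so it can be reused when the same prompt comes up again."""
    learned_replies[normalize(prompt)].append(human_reply)

def cleverbot_reply(prompt: str) -> str:
    candidates = learned_replies.get(normalize(prompt))
    if candidates:
        return random.choice(candidates)  # reuse something a human once said
    return "Hmm, tell me more."           # nothing learned yet for this prompt

record_exchange("What is your favourite colour?", "Probably blue, like the sea.")
print(cleverbot_reply("What is your favourite colour?"))  # -> "Probably blue, like the sea."
```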
Mitsuku
Mitsuku was developed by Steve Worswick during the early 2000s and first won the
Loebner prize in 2013. The model is still actively developed and has won the Loebner
Prize in 2016, 2017, 2018, and 2019, making it the most human-like chatbot available.
Mitsuku works much like A.L.I.C.E.: it is a supervised-learning model, where the developer actively tweaks the rules to make interactions with Mitsuku more human-like.
Over 60 years ago, Alan Turing predicted we wouldn’t be able to distinguish humans
from robots by now. If Turing saw the progress we’ve made, would he be impressed?
It’s impossible to know. However, it’s unlikely he would have predicted the market
potential of chatbot technology – some expect Alexa sales will be $19 billion by 2021.
Products like Alexa, however, are highly integrated, drawing on many other
technologies and systems in addition to chatbot software. Alexa falls well short of
competing with Mitsuku in carrying out a conversation. You can see this for yourself by watching the YouTube conversation between Mitsuku and Alexa.
With the growing market for chatbot-driven technologies, there is far more money
and interest in chatbot development than there was just a few years ago. In 2017, Amazon started the Alexa Prize, a challenge in which teams compete to develop a chatbot that holds the best conversation with users. The winning team receives $500,000, and Amazon tests each model's ability to converse on Alexa devices.
We’ve made a lot of progress developing bots to beat the Imitation Game, but there
is still progress to be made. Based on the growing interest in the field, it’s safe to
expect significant progress in the coming years.
Yeah, I know there was a lot of tech stuff in this article. I hope you learnt something new. Share it with your friends, and I would really love to know your thoughts on chatbots.