DEEPAK Dox
A PROJECT REPORT
Submitted to the
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
DECEMBER, 2022
CERTIFICATE
This is to certify that the project report entitled “INFORMATION OF
CHATBOT” is the bonafide record submitted by DEEPAK MULLAPUDI
(20HU1A4435), in partial fulfilment of the requirements for the award of the
Degree of Bachelor of Technology in Computer Science and
Engineering [DS] from CHEBROLU ENGINEERING COLLEGE,
CHEBROLU.
A chatbot can be described as software that can chat with people using artificial intelligence.
Such software is used to perform tasks such as quickly responding to users, informing them,
helping them purchase products, and providing better service to customers. In this report, we
present the general working principle and the basic concepts of artificial-intelligence-based
chatbots and related concepts, as well as their applications in various sectors such as
telecommunication, banking, health, customer call centers and e-commerce. Additionally, the
results of an example chatbot for a donation service, developed for a telecommunication
service provider, are presented using the proposed architecture.
DEEPAK MULLAPUDI
20HU1A4435
ACKNOWLEDGEMENT
I wish to record my thanks to Mr. V. DINESH BABU, H.O.D. of Computer Science and
Engineering, for his constant support, enthusiasm and motivation.
I wish to express my thanks to all the staff members of the Department of Computer
Science and Engineering, CHEBROLU ENGINEERING COLLEGE, for their valuable
support throughout this project. I also thank all my friends for their moral support and
suggestions for this work.
Finally, I thank one and all who have rendered help, directly or indirectly, towards the
completion of this work.
MULLAPUDI DEEPAK
20HU1A4435
CHATBOT
Chatbots are used in dialog systems for various purposes including customer
service, request routing, or information gathering. While some chatbot applications use
extensive word-classification processes, natural-language processors, and
sophisticated AI, others simply scan for general keywords and generate responses
using common phrases obtained from an associated library or database.
They can be classified into usage categories that include: commerce (e-commerce via chat),
education, entertainment, finance, health, news, and productivity.
Types of Chatbots
Hundreds of thousands of businesses worldwide are developing diverse forms of chatbots
intending to enhance customer service. This section explains the various types of chatbots,
what they are used for, and which chatbot software could be the most beneficial to your
company.
1. Voice bots
A voice bot is a voice-to-text and text-to-speech communication channel powered by AI and
natural language understanding (NLU). AI technology aids in identifying key speech signals
and determining the optimal conversational response. The text-to-speech (TTS) engine
subsequently completes the interaction by converting the message into audio or voice.
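As a minimal sketch of this pipeline (assuming the third-party SpeechRecognition and pyttsx3 packages are installed, network access for Google's free recognizer, and with reply_to as a hypothetical stand-in for the bot's real dialogue logic), a single turn of a voice bot might look like:

import speech_recognition as sr   # speech-to-text
import pyttsx3                    # text-to-speech (TTS)

def reply_to(text):
    # Hypothetical dialogue logic; a real voice bot would call an NLU model here
    return "You said: " + text

recognizer = sr.Recognizer()
tts_engine = pyttsx3.init()

with sr.Microphone() as mic:
    audio = recognizer.listen(mic)            # capture the user's speech
text = recognizer.recognize_google(audio)     # convert speech to text
response = reply_to(text)                     # decide on a conversational response
tts_engine.say(response)                      # the TTS engine completes the loop
tts_engine.runAndWait()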
2. Hybrid chatbots
A hybrid chatbot is a harmonious blend of chatbot and live chat that combines the best of
both worlds. A customer service representative will be available in live chat to answer any
customer’s questions, which may be too complex or nuanced for automation alone.
An AI component in a chatbot replicates conversations based on how it is programmed and
the needs of the conversation. A hybrid chatbot, on the other hand, initiates an automated
chat conversation and attempts to resolve the user's query as quickly and simply as possible.
If the bot does not function as expected, a customer service representative can intervene at
any moment, or take over in the subject area where the chatbot cannot complete the task, as
sketched below.
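A minimal sketch of that hand-off logic (the names bot and human_queue and the threshold value are hypothetical illustrations, not any specific product's API):

CONFIDENCE_THRESHOLD = 0.75  # below this, the bot hands the chat to a human

def handle_message(message, bot, human_queue):
    # The AI component tries an automated answer first, with a confidence score
    answer, confidence = bot.answer(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # Too complex or nuanced for automation alone: escalate to a live agent
    human_queue.put(message)
    return "Let me connect you with one of our support agents."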
3. Social messaging chatbots
With the rise of new social media interfaces, organizations can now deploy an AI algorithm
across all of their customers' preferred messaging platforms. This includes Facebook
Messenger, Twitter, Instagram, as well as messaging apps like WhatsApp and WeChat.
4. Menu-based chatbots
The most rudimentary type of chatbot in use is one that is based on menu-driven navigation.
Most of the time, these chatbots follow a fixed decision tree that is displayed to the consumer
in the form of clickable buttons. These chatbots (like the automated dial pad menus on
telephones that we use regularly) ask the user to make several choices and click on suitable
options to get to the final solution.
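For illustration, such a fixed decision tree can be written as a plain Python dictionary (a sketch; the menu content here is invented for the example):

# Each node maps a numbered option either to a sub-menu (dict) or a final answer (str)
MENU = {
    "1. Billing": {
        "1. View invoice": "Your latest invoice has been emailed to you.",
        "2. Payment issue": "Please check your card details and try again.",
    },
    "2. Technical support": "A technician will contact you shortly.",
}

def run_menu(node):
    while isinstance(node, dict):          # keep drilling down until we reach a leaf
        options = list(node)
        for option in options:
            print(option)
        choice = int(input("Choose an option: ")) - 1
        node = node[options[choice]]
    print(node)                            # the leaf is the final solution

run_menu(MENU)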
5. Skills chatbots
A skills chatbot is another kind of bot that can perform a specific set of tasks once you have
extended its capabilities using pre-defined skills software. For example, the chatbot may be
able to provide weather information or turn off your room lights.
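A sketch of how such pre-defined skills can be wired together as plug-in functions (the skill names and replies are invented; real skills would call a weather or smart-home API):

skills = {}   # registry mapping a command word to a skill function

def skill(name):
    def register(func):
        skills[name] = func
        return func
    return register

@skill("weather")
def weather_skill():
    return "It is sunny today."            # placeholder for a weather API call

@skill("lights")
def lights_skill():
    return "Room lights turned off."       # placeholder for a smart-home API call

def handle(command):
    func = skills.get(command)
    return func() if func else "Sorry, I don't have that skill yet."

print(handle("weather"))
print(handle("lights"))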
6. Keyword-based chatbots
Unlike menu-based chatbots, keyword-based chatbots can listen to what visitors type and
respond appropriately. These chatbots use customizable keywords and NLP to detect action
triggers in the conversation and work out how to respond to the consumer.
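A minimal sketch of keyword-trigger matching (the keywords and canned replies are invented for illustration; a production bot would layer NLP on top of this):

# Map groups of trigger keywords to canned responses
TRIGGERS = {
    ("price", "cost", "how much"): "Our plans start at $10 per month.",
    ("refund", "money back"): "Refunds are processed within 5 business days.",
}

def respond(message):
    text = message.lower()
    for keywords, reply in TRIGGERS.items():
        if any(word in text for word in keywords):
            return reply
    return "Could you rephrase that?"

print(respond("How much does the premium plan cost?"))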
The following code is an example of a chatbot: a Keras memory-network model that learns to answer yes/no questions about short stories, trained on a dataset of (story, question, answer) triples:
import pickle
import numpy as np
In [2]:
with open("../input/chat-bot-data/train_qa.txt", "rb") as fp: # Unpickling
train_data = pickle.load(fp)
In [3]:
with open("../input/chat-bot-data/test_qa.txt", "rb") as fp: # Unpickling
test_data = pickle.load(fp)
In [4]:
type(test_data)
Out[4]:
list
In [5]:
type(train_data)
Out[5]:
list
In [6]:
len(test_data)
Out[6]:
1000
In [7]:
len(train_data)
Out[7]:
10000
In [8]:
train_data[0]
Out[8]:
(['Mary',
'moved',
'to',
'the',
'bathroom',
'.',
'Sandra',
'journeyed',
'to',
'the',
'bedroom',
'.'],
['Is', 'Sandra', 'in', 'the', 'hallway', '?'],
'no')
In [9]:
' '.join(train_data[0][0])
Out[9]:
'Mary moved to the bathroom . Sandra journeyed to the bedroom .'
In [10]:
' '.join(train_data[0][1])
Out[10]:
'Is Sandra in the hallway ?'
In [11]:
train_data[0][2]
Out[11]:
'no'
In [13]:
all_data = test_data + train_data
In [14]:
# Build the vocabulary: the union of every word in every story and question
# (for the union of sets, see: https://www.programiz.com/python-programming/methods/set/union)
vocab = set()
for story, question, answer in all_data:
    vocab = vocab.union(set(story))
    vocab = vocab.union(set(question))
In [15]:
vocab.add('no')
vocab.add('yes')
In [16]:
vocab
Out[16]:
{'.',
'?',
'Daniel',
'Is',
'John',
'Mary',
'Sandra',
'apple',
'back',
'bathroom',
'bedroom',
'discarded',
'down',
'dropped',
'football',
'garden',
'got',
'grabbed',
'hallway',
'in',
'journeyed',
'kitchen',
'left',
'milk',
'moved',
'no',
'office',
'picked',
'put',
'the',
'there',
'to',
'took',
'travelled',
'up',
'went',
'yes'}
In [17]:
vocab_len = len(vocab) + 1  # we add an extra index to hold a 0 for Keras's pad_sequences
In [18]:
max_story_len = max([len(data[0]) for data in all_data])
In [19]:
max_story_len
Out[19]:
156
In [20]:
max_question_len = max([len(data[1]) for data in all_data])
In [21]:
max_question_len
Out[21]:
6
In [23]:
# Reserve 0 for pad_sequences
vocab_size = len(vocab) + 1
In [24]:
# Imports needed for tokenization and padding in the cells below
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
In [25]:
# integer encode sequences of words
tokenizer = Tokenizer(filters=[])
tokenizer.fit_on_texts(vocab)
In [26]:
tokenizer.word_index
Out[26]:
{'took': 1,
'left': 2,
'went': 3,
'sandra': 4,
'up': 5,
'apple': 6,
'moved': 7,
'dropped': 8,
'picked': 9,
'mary': 10,
'football': 11,
'discarded': 12,
'grabbed': 13,
'down': 14,
'there': 15,
'travelled': 16,
'back': 17,
'hallway': 18,
'john': 19,
'the': 20,
'yes': 21,
'daniel': 22,
'bathroom': 23,
'to': 24,
'?': 25,
'milk': 26,
'bedroom': 27,
'garden': 28,
'put': 29,
'got': 30,
'.': 31,
'kitchen': 32,
'journeyed': 33,
'office': 34,
'in': 35,
'no': 36,
'is': 37}
In [27]:
train_story_text = []
train_question_text = []
train_answers = []

# Separate each component of the training data into its own list
for story, question, answer in train_data:
    train_story_text.append(story)
    train_question_text.append(question)
    train_answers.append(answer)
In [28]:
train_story_seq = tokenizer.texts_to_sequences(train_story_text)
In [29]:
len(train_story_text)
Out[29]:
10000
In [30]:
len(train_story_seq)
Out[30]:
10000
Functionalize Vectorization
In [31]:
def vectorize_stories(data, word_index=tokenizer.word_index,
                      max_story_len=max_story_len, max_question_len=max_question_len):
    # X = STORY
    X = []
    # Xq = QUERY/QUESTION
    Xq = []
    # Y = CORRECT ANSWER
    Y = []
    for story, query, answer in data:
        # Grab the word index for every word in the story and the query
        x = [word_index[word.lower()] for word in story]
        xq = [word_index[word.lower()] for word in query]
        # The answer is a one-hot vector over the vocabulary ('yes'/'no')
        y = np.zeros(len(word_index) + 1)
        y[word_index[answer]] = 1
        X.append(x)
        Xq.append(xq)
        Y.append(y)
    # Pad every sequence out to a fixed length
    return (pad_sequences(X, maxlen=max_story_len),
            pad_sequences(Xq, maxlen=max_question_len),
            np.array(Y))
In [32]:
inputs_train, queries_train, answers_train = vectorize_stories(train_data)
In [33]:
inputs_test, queries_test, answers_test = vectorize_stories(test_data)
In [34]:
inputs_test
Out[34]:
array([[ 0, 0, 0, ..., 20, 27, 31],
[ 0, 0, 0, ..., 20, 28, 31],
[ 0, 0, 0, ..., 20, 28, 31],
...,
[ 0, 0, 0, ..., 20, 6, 31],
[ 0, 0, 0, ..., 20, 28, 31],
[ 0, 0, 0, ..., 6, 15, 31]], dtype=int32)
In [35]:
queries_test
Out[35]:
array([[37, 19, 35, 20, 32, 25],
[37, 19, 35, 20, 32, 25],
[37, 19, 35, 20, 28, 25],
...,
[37, 10, 35, 20, 27, 25],
[37, 4, 35, 20, 28, 25],
[37, 10, 35, 20, 28, 25]], dtype=int32)
In [36]:
answers_test
Out[36]:
array([[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]])
In [37]:
sum(answers_test)
Out[37]:
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 497.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 503., 0.])
In [38]:
tokenizer.word_index['yes']
Out[38]:
21
In [39]:
tokenizer.word_index['no']
Out[39]:
36
In [40]:
# Keras building blocks used to assemble the memory network below
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (Embedding, Input, Activation, Dense,
                                     Permute, Dropout, add, dot, concatenate, LSTM)
In [41]:
input_sequence = Input((max_story_len,))
question = Input((max_question_len,))
Input Encoder m
In [42]:
# Input gets embedded to a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,output_dim=64))
input_encoder_m.add(Dropout(0.3))
Input Encoder c
In [43]:
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,output_dim=max_question_len))
input_encoder_c.add(Dropout(0.3))
Question Encoder
In [44]:
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
output_dim=64,
input_length=max_question_len))
question_encoder.add(Dropout(0.3))
Encode the sequences
In [45]:
# Encode the input sequence and the question (integer indices) into sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
Use the dot product to compute the match between the first input vector sequence and the query
In [46]:
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)
Add this match matrix to the second input vector sequence
In [47]:
response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)
Concatenate
In [48]:
answer = concatenate([response, question_encoded])
In [49]:
answer
Out[49]:
<KerasTensor: shape=(None, 6, 220) dtype=float32 (created by layer 'concatenate')>
In [50]:
# Reduce with RNN (LSTM)
answer = LSTM(32)(answer) # (samples, 32)
In [51]:
# Regularization with Dropout
answer = Dropout(0.5)(answer)
answer = Dense(vocab_size)(answer) # (samples, vocab_size)
In [52]:
answer = Activation('softmax')(answer)
In [53]:
# Build the full model from its two inputs and compile it before inspecting the summary
model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Model: "model"
______________________________________________________________________________
____________________
Layer (type) Output Shape Param # Connected to
==============================================================================
====================
input_1 (InputLayer) [(None, 156)] 0
______________________________________________________________________________
____________________
input_2 (InputLayer) [(None, 6)] 0
______________________________________________________________________________
____________________
sequential (Sequential) (None, None, 64) 2432 input_1[0][0]
______________________________________________________________________________
____________________
sequential_2 (Sequential) (None, 6, 64) 2432 input_2[0][0]
______________________________________________________________________________
____________________
dot (Dot) (None, 156, 6) 0 sequential[0]
[0]
sequential_2[
0][0]
______________________________________________________________________________
____________________
activation (Activation) (None, 156, 6) 0 dot[0][0]
______________________________________________________________________________
____________________
sequential_1 (Sequential) (None, None, 6) 228 input_1[0][0]
______________________________________________________________________________
____________________
add (Add) (None, 156, 6) 0 activation[0]
[0]
sequential_1[
0][0]
______________________________________________________________________________
____________________
permute (Permute) (None, 6, 156) 0 add[0][0]
______________________________________________________________________________
____________________
concatenate (Concatenate) (None, 6, 220) 0 permute[0][0]
sequential_2[
0][0]
______________________________________________________________________________
____________________
lstm (LSTM) (None, 32) 32384 concatenate[0
][0]
______________________________________________________________________________
____________________
dropout_3 (Dropout) (None, 32) 0 lstm[0][0]
______________________________________________________________________________
____________________
dense (Dense) (None, 38) 1254 dropout_3[0][
0]
______________________________________________________________________________
____________________
activation_1 (Activation) (None, 38) 0 dense[0][0]
==============================================================================
====================
Total params: 38,730
Trainable params: 38,730
Non-trainable params: 0
______________________________________________________________________________
____________________
In [54]:
# train
history = model.fit([inputs_train, queries_train], answers_train, batch_size=16,
                    epochs=64, validation_data=([inputs_test, queries_test], answers_test))
Epoch 1/64
625/625 [==============================] - 9s 7ms/step - loss: 1.0687 - accuracy: 0.4781 - val_loss: 0.6935 - val_accuracy: 0.4970
Epoch 2/64
625/625 [==============================] - 4s 7ms/step - loss: 0.6991 - accuracy: 0.5009 - val_loss: 0.6934 - val_accuracy: 0.5030
Epoch 3/64
625/625 [==============================] - 3s 6ms/step - loss: 0.6954 - accuracy: 0.4946 - val_loss: 0.6937 - val_accuracy: 0.4970
Epoch 4/64
625/625 [==============================] - 3s 6ms/step - loss: 0.6946 - accuracy: 0.5066 - val_loss: 0.6938 - val_accuracy: 0.4970
Epoch 5/64
625/625 [==============================] - 4s 6ms/step - loss: 0.6954 - accuracy: 0.4939 - val_loss: 0.6933 - val_accuracy: 0.5030
Epoch 6/64
625/625 [==============================] - 3s 6ms/step - loss: 0.6942 - accuracy: 0.5117 - val_loss: 0.6934 - val_accuracy: 0.4970
Epoch 7/64
625/625 [==============================] - 3s 6ms/step - loss: 0.6947 - accuracy: 0.5005 - val_loss: 0.6951 - val_accuracy: 0.4970
Epoch 8/64
625/625 [==============================] - 4s 6ms/step - loss: 0.6947 - accuracy: 0.5007 - val_loss: 0.6872 - val_accuracy: 0.5610
Epoch 9/64
625/625 [==============================] - 4s 6ms/step - loss: 0.6257 - accuracy: 0.6501 - val_loss: 0.4553 - val_accuracy: 0.8070
Epoch 10/64
625/625 [==============================] - 4s 6ms/step - loss: 0.4416 - accuracy: 0.8128 - val_loss: 0.4154 - val_accuracy: 0.8210
Epoch 11/64
625/625 [==============================] - 4s 7ms/step - loss: 0.3915 - accuracy: 0.8414 - val_loss: 0.3925 - val_accuracy: 0.8280
Epoch 12/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3762 - accuracy: 0.8514 - val_loss: 0.3852 - val_accuracy: 0.8430
Epoch 13/64
625/625 [==============================] - 3s 6ms/step - loss: 0.3561 - accuracy: 0.8596 - val_loss: 0.3930 - val_accuracy: 0.8400
Epoch 14/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3490 - accuracy: 0.8544 - val_loss: 0.3663 - val_accuracy: 0.8320
Epoch 15/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3289 - accuracy: 0.8639 - val_loss: 0.3694 - val_accuracy: 0.8400
Epoch 16/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3295 - accuracy: 0.8565 - val_loss: 0.3696 - val_accuracy: 0.8220
Epoch 17/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3328 - accuracy: 0.8582 - val_loss: 0.3611 - val_accuracy: 0.8450
Epoch 18/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3142 - accuracy: 0.8645 - val_loss: 0.3791 - val_accuracy: 0.8350
Epoch 19/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3227 - accuracy: 0.8602 - val_loss: 0.3590 - val_accuracy: 0.8430
Epoch 20/64
625/625 [==============================] - 4s 7ms/step - loss: 0.3196 - accuracy: 0.8661 - val_loss: 0.3491 - val_accuracy: 0.8400
Epoch 21/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3122 - accuracy: 0.8656 - val_loss: 0.3509 - val_accuracy: 0.8360
Epoch 22/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3078 - accuracy: 0.8647 - val_loss: 0.3472 - val_accuracy: 0.8430
Epoch 23/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3052 - accuracy: 0.8721 - val_loss: 0.3475 - val_accuracy: 0.8440
Epoch 24/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3095 - accuracy: 0.8667 - val_loss: 0.3408 - val_accuracy: 0.8410
Epoch 25/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3096 - accuracy: 0.8609 - val_loss: 0.3473 - val_accuracy: 0.8480
Epoch 26/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3097 - accuracy: 0.8646 - val_loss: 0.3442 - val_accuracy: 0.8430
Epoch 27/64
625/625 [==============================] - 3s 6ms/step - loss: 0.2969 - accuracy: 0.8725 - val_loss: 0.3491 - val_accuracy: 0.8450
Epoch 28/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3020 - accuracy: 0.8680 - val_loss: 0.3443 - val_accuracy: 0.8350
Epoch 29/64
625/625 [==============================] - 4s 7ms/step - loss: 0.3097 - accuracy: 0.8646 - val_loss: 0.3540 - val_accuracy: 0.8400
Epoch 30/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2999 - accuracy: 0.8708 - val_loss: 0.3465 - val_accuracy: 0.8420
Epoch 31/64
625/625 [==============================] - 3s 6ms/step - loss: 0.2899 - accuracy: 0.8717 - val_loss: 0.3542 - val_accuracy: 0.8420
Epoch 32/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2938 - accuracy: 0.8758 - val_loss: 0.3471 - val_accuracy: 0.8380
Epoch 33/64
625/625 [==============================] - 3s 6ms/step - loss: 0.3031 - accuracy: 0.8654 - val_loss: 0.3583 - val_accuracy: 0.8440
Epoch 34/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3037 - accuracy: 0.8629 - val_loss: 0.3682 - val_accuracy: 0.8370
Epoch 35/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3013 - accuracy: 0.8658 - val_loss: 0.3610 - val_accuracy: 0.8480
Epoch 36/64
625/625 [==============================] - 3s 6ms/step - loss: 0.2947 - accuracy: 0.8671 - val_loss: 0.3717 - val_accuracy: 0.8410
Epoch 37/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2953 - accuracy: 0.8704 - val_loss: 0.3543 - val_accuracy: 0.8400
Epoch 38/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2957 - accuracy: 0.8725 - val_loss: 0.3529 - val_accuracy: 0.8440
Epoch 39/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3002 - accuracy: 0.8635 - val_loss: 0.3701 - val_accuracy: 0.8440
Epoch 40/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2931 - accuracy: 0.8705 - val_loss: 0.3398 - val_accuracy: 0.8460
Epoch 41/64
625/625 [==============================] - 4s 6ms/step - loss: 0.3020 - accuracy: 0.8633 - val_loss: 0.3744 - val_accuracy: 0.8400
Epoch 42/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2971 - accuracy: 0.8744 - val_loss: 0.3542 - val_accuracy: 0.8360
Epoch 43/64
625/625 [==============================] - 3s 6ms/step - loss: 0.2897 - accuracy: 0.8740 - val_loss: 0.3605 - val_accuracy: 0.8460
Epoch 44/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2918 - accuracy: 0.8750 - val_loss: 0.3742 - val_accuracy: 0.8410
Epoch 45/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2802 - accuracy: 0.8799 - val_loss: 0.3582 - val_accuracy: 0.8510
Epoch 46/64
625/625 [==============================] - 4s 7ms/step - loss: 0.2920 - accuracy: 0.8701 - val_loss: 0.3567 - val_accuracy: 0.8370
Epoch 47/64
625/625 [==============================] - 4s 7ms/step - loss: 0.2844 - accuracy: 0.8817 - val_loss: 0.3567 - val_accuracy: 0.8380
Epoch 48/64
625/625 [==============================] - 3s 6ms/step - loss: 0.2812 - accuracy: 0.8783 - val_loss: 0.3834 - val_accuracy: 0.8410
Epoch 49/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2858 - accuracy: 0.8773 - val_loss: 0.3502 - val_accuracy: 0.8410
Epoch 50/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2707 - accuracy: 0.8861 - val_loss: 0.3516 - val_accuracy: 0.8510
Epoch 51/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2813 - accuracy: 0.8803 - val_loss: 0.3492 - val_accuracy: 0.8430
Epoch 52/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2633 - accuracy: 0.8912 - val_loss: 0.3304 - val_accuracy: 0.8700
Epoch 53/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2753 - accuracy: 0.8912 - val_loss: 0.3337 - val_accuracy: 0.8620
Epoch 54/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2626 - accuracy: 0.8954 - val_loss: 0.3405 - val_accuracy: 0.8640
Epoch 55/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2528 - accuracy: 0.8952 - val_loss: 0.3277 - val_accuracy: 0.8690
Epoch 56/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2500 - accuracy: 0.8975 - val_loss: 0.3092 - val_accuracy: 0.8650
Epoch 57/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2473 - accuracy: 0.8965 - val_loss: 0.2837 - val_accuracy: 0.8700
Epoch 58/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2256 - accuracy: 0.9070 - val_loss: 0.2815 - val_accuracy: 0.8760
Epoch 59/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2191 - accuracy: 0.9123 - val_loss: 0.2876 - val_accuracy: 0.8840
Epoch 60/64
625/625 [==============================] - 4s 6ms/step - loss: 0.2067 - accuracy: 0.9165 - val_loss: 0.2673 - val_accuracy: 0.8980
Epoch 61/64
625/625 [==============================] - 4s 6ms/step - loss: 0.1970 - accuracy: 0.9156 - val_loss: 0.2652 - val_accuracy: 0.8980
Epoch 62/64
625/625 [==============================] - 4s 6ms/step - loss: 0.1916 - accuracy: 0.9192 - val_loss: 0.2362 - val_accuracy: 0.9060
Epoch 63/64
625/625 [==============================] - 4s 6ms/step - loss: 0.1883 - accuracy: 0.9252 - val_loss: 0.2682 - val_accuracy: 0.9020
Epoch 64/64
625/625 [==============================] - 4s 7ms/step - loss: 0.1823 - accuracy: 0.9278 - val_loss: 0.2387 - val_accuracy: 0.9070
In [55]:
import matplotlib.pyplot as plt
In [58]:
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.grid()
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [60]:
test_data[0][0]
Out[60]:
['Mary',
'got',
'the',
'milk',
'there',
'.',
'John',
'moved',
'to',
'the',
'bedroom',
'.']
In [61]:
story = ' '.join(word for word in test_data[0][0])
print(story)
Mary got the milk there . John moved to the bedroom .
In [66]:
my_story = "John left the kitchen . Sandra dropped the football in the garden ."
my_story.split()
Out[66]:
['John',
'left',
'the',
'kitchen',
'.',
'Sandra',
'dropped',
'the',
'football',
'in',
'the',
'garden',
'.']
In [67]:
my_question = "Is the football in the garden ?"
In [68]:
my_question.split()
Out[68]:
['Is', 'the', 'football', 'in', 'the', 'garden', '?']
In [69]:
mydata = [(my_story.split(),my_question.split(),'yes')]
In [70]:
my_story,my_ques,my_ans = vectorize_stories(mydata)
In [71]:
pred_results = model.predict([my_story, my_ques])
val_max = np.argmax(pred_results[0])
# Map the winning index back to its word to read off the predicted answer
print("Predicted answer is:", tokenizer.index_word[val_max])
print("Probability of certainty was:", pred_results[0][val_max])
24×7 Availability:
In the present era, organizations work round the clock to help their clients and explore
new areas. Companies hire large numbers of employees to answer client messages and
phone calls so that no customer goes unnoticed. Despite this, customers often have to
wait for responses, which can lead to dissatisfaction, and answering clients' questions
24×7 is difficult for human staff. A chatbot, being an automated program designed to
answer customer questions, can respond at any hour without fatigue, keeping the
service responsive.
Reduce Costs:
By replacing a human with a chatbot, you can minimize your operational cost, since it is
difficult for a corporation to hire employees for every role. A single operator can only
handle one or two customers at a time, whereas a chatbot can handle several interactions
at once, far more than your service or sales team can.
Customer Engagement:
It's critical to keep your customers engaged with your website if you want to grow
your business. Using a chatbot for branding purposes can enhance consumer
engagement and keep customers interested, resulting in more conversions and sales.
Because of their adaptability, chatbots are easy to integrate with a variety of
platforms, resulting in increased customer interaction.
Lead Generation:
A chatbot is an effective lead-generation tool because it can suggest everything in your
basket, helping to capture highly targeted leads. Chatbots are capable of asking relevant
questions, persuading customers, and generating qualified leads, ensuring that the
conversation flows in the right direction to produce high-converting leads.
Needs Analyzing:
To ensure that the chatbot provides the correct information to the customer, it must be
analyzed and maintained continuously. It's natural for users' and businesses' goals to vary
as a result of their engagements, so the chatbot must be updated with the correct
information to meet client demands.
Limited Understanding of Language:
People in today's world use shortcut keys to speed up responses and increase efficiency.
Chatbots, however, cannot fully adapt their language to that of humans, so slang,
misspellings, and sarcasm are frequently misunderstood by bots. This makes a chatbot
poorly suited for a friendly, open-ended discussion.
Higher Misunderstanding:
Sometimes customers cannot find the information they need, or are unable to connect
with a support executive, because they did not enter the correct command. Since a
chatbot is programmed software with specified responses, it can fail to help when a
consumer does not provide the expected command.
Conclusion:
The advantages and disadvantages of chatbots are important to weigh when developing
chatbots for any business. Chatbots are more effective than people at reaching a big
audience via messaging apps, and they have the potential to become a useful
information-gathering tool in the near future. Chatbots are artificial-intelligence software
that can provide highly tailored communication to the user while minimizing the
workload of service-provider teams.