
Mental Health AI Bot

A PROJECT REPORT

Submitted by
ANKIT SINGH DASAUNI [Reg No: RA2111003010465]

DANISH RAJA [Reg No: RA2111003010478]

YENDURI TUSHAAR [Reg No: RA2111003010481]

Under the Guidance of

DR. S. PADMINI
Assistant Professor, Department of Computing Technologies
In partial fulfillment of the requirements for the degree of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING

DEPARTMENT OF COMPUTING TECHNOLOGIES


COLLEGE OF ENGINEERING AND TECHNOLOGY
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
KATTANKULATHUR – 603 203
APRIL 2024
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

KATTANKULATHUR – 603 203

BONAFIDE CERTIFICATE

Certified that this B.Tech project report titled “Mental Health AI Bot” is the bonafide work of MR. ANKIT SINGH DASAUNI [Reg. No. RA2111003010465], MR. DANISH RAJA [Reg. No. RA2111003010478], and MR. TUSHAAR YENDURI [Reg. No. RA2111003010481], who carried out the project work under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion for this or any other candidate.

DR. S. PADMINI, Ph.D.
PANEL HEAD
Assistant Professor
Department of Computing Technologies
SRM Institute of Science and Technology

DR. M. PUSHPALATHA, Ph.D.
HEAD OF THE DEPARTMENT
Professor
Department of Computing Technologies
SRM Institute of Science and Technology

Own Work Declaration Form

Degree / Course : B.Tech in Computer Science and Engineering

Student Names : ANKIT SINGH DASAUNI, TUSHAAR YENDURI, DANISH RAJA

Registration Numbers : RA2111003010465, RA2111003010481, RA2111003010478

Title of Work : Mental Health AI Bot

We hereby certify that this assessment complies with the University’s Rules and Regulations relating to Academic misconduct and plagiarism, as listed in the University Website, Regulations, and the Education Committee guidelines.

We confirm that all the work contained in this assessment is our own except where indicated, and that we have met the following conditions:

● Clearly referenced / listed all sources as appropriate
● Referenced and put in inverted commas all quoted text (from books, web, etc.)
● Given the sources of all pictures, data, etc. that are not my own
● Not made any use of the report(s) or essay(s) of any other student(s), either past or present
● Acknowledged in appropriate places any help that I have received from others (e.g. fellow students, technicians, statisticians, external sources)
● Complied with any other plagiarism criteria specified in the Course handbook / University website

I understand that any false claim in respect of this work will be penalized in accordance with the University policies and regulations.

DECLARATION:

I am aware of and understand the University’s policy on Academic misconduct and plagiarism and I certify that this assessment is my / our own work, except where indicated by referencing, and that I have followed the good academic practices noted above.

If you are working in a group, please write your registration numbers and sign with the date for every student in your group.


ACKNOWLEDGEMENT

We express our humble gratitude to Dr. C. Muthamizhchelvan, Vice-Chancellor, SRM

Institute of Science and Technology, for the facilities extended for the project work and his

continued support.

We extend our sincere thanks to Dean-CET, SRM Institute of Science and Technology, Dr.

T.V.Gopal, for his invaluable support.

We wish to thank Dr. Revathi Venkataraman, Professor & Chairperson, School of Computing, SRM Institute of Science and Technology, for her support throughout the project work.

We are incredibly grateful to our Head of the Department, Dr. M. Pushpalatha, Professor, Department of Computing Technologies, SRM Institute of Science and Technology, for her suggestions and encouragement at all stages of the project work.

We want to convey our thanks to our Project Coordinator, Dr. P. Rajasekar, Assistant Professor, and our Panel Head, Dr. S. Padmini, Assistant Professor, Department of Computing Technologies, SRM Institute of Science and Technology, for their inputs and support during the project reviews.


We register our immeasurable thanks to our Faculty Advisors, Dr. Vinod D, Assistant

Professor, and Dr. M Uma Devi, Associate Professor, Department of Computing

Technologies, SRM Institute of Science and Technology, for leading and helping us to

complete our course.

Our inexpressible respect and thanks to our guide, Dr. V. Arun, Assistant Professor,

Department of Computing Technologies, SRM Institute of Science and Technology, for

providing us with an opportunity to pursue our project under his mentorship. He provided us

with the freedom and support to explore the research topics of our interest. His passion for

solving problems and making a difference in the world has always been inspiring.

We sincerely thank the staff and students of the Department of Computing Technologies, SRM Institute of Science and Technology, for their help during our project. Finally, we would like to thank our parents, family members, and friends for their unconditional love, constant support, and encouragement.

ANKIT SINGH DASAUNI [RA2111003010465]

DANISH RAJA [RA2111003010478]

TUSHAAR YENDURI [RA2111003010481]


TABLE OF CONTENTS

C.NO.   TITLE                                   PAGE NO.
        Abstract                                viii
        List of Figures                         ix
1.      INTRODUCTION                            10
1.1     MOTIVATION                              10
1.2     PROBLEM STATEMENT                       11
1.3     OBJECTIVES                              11
1.4     SCOPE                                   11
2       LITERATURE REVIEW                       11
3       PROPOSED SYSTEM                         12
3.1     PROPOSED SYSTEM                         12
3.1.1   BLOCK DIAGRAM                           12, 13
3.2     IMPLEMENTATION                          13, 14
3.2.1   ALGORITHM/FLOWCHART                     15, 16
4       RESULTS                                 22
5       CONCLUSION                              23
6       FUTURE SCOPE/REFERENCES                 25
ABSTRACT

This report presents an approach to mental health support using AI chatbots with sequential learning capabilities. These chatbots use sequence models to capture the temporal patterns of user interactions, allowing them to provide contextually relevant and empathetic responses.

By analyzing the temporal context of conversations, the chatbot retains memory of past interactions and maintains consistency and empathy across dialogues. It uses this contextual understanding to deliver personalized interventions and tailor its responses to users' emotional states and needs.
LIST OF FIGURES
3.1 Proposed system 3

4.1 Block Diagram 3

4.2 Implementation 4

4.3 Algorithm/Flowchart 6

4.4 Data sets 7

4.5 Pseudo code 7


5.1 Result analysis 10
CHAPTER 1

INTRODUCTION
In today's rapidly developing technological environment, the integration of artificial intelligence (AI) into
various aspects of human life has become increasingly common. One area where AI holds promise
is supporting mental health. As mental health challenges increase worldwide, there is a growing need for
accessible and effective tools to provide support and assistance to people suffering from mental
health problems. In response to this need, our project aims to develop an AI-based chat platform specifically
designed to provide mental health support and guidance.

Motivation

Our project is motivated by the alarming increase in mental health disorders worldwide, coupled with the limited availability and affordability of professional mental health services. Many people hesitate to seek help because of stigma, lack of awareness, or logistical constraints. Using the power of artificial intelligence and natural language processing (NLP), we aim to bridge this gap by creating a user-friendly, easily accessible, and stigma-free platform where people can seek support and guidance on their mental well-being. Traditional methods of seeking help, such as seeing a therapist or counselor, can be intimidating or impractical for many people. Additionally, a shortage of mental health professionals makes it harder to get help in a timely manner. Our goal is to develop an AI-powered chatbot that can provide immediate help, guidance, and resources to people with mental health issues, thus complementing existing support systems.

Objectives

The main objective of our project is to design and implement an AI-based chatbot that can provide personalized mental health support to users. In particular, we aim to:

Develop a chat interface that allows users to interact with the chatbot in a natural and intuitive way.
Implement natural language processing algorithms that analyze user input, detect emotional signals, and provide empathetic responses.
Build a knowledge base of mental health resources, including treatment strategies, self-help techniques, and referral information.
Train the chatbot using machine learning techniques, including deep learning models such as LSTM, to improve its effectiveness and responsiveness over time.
Scope

Our project covers the design, development, and evaluation of an AI-based chatbot platform for mental health support. Key focus areas include:

Designing the chat interface and user experience to ensure ease of use and engagement.
Integrating natural language processing algorithms so that the chatbot can understand and effectively respond to user queries.
Building a robust database of mental health resources and information to support users on their path to wellness.
Training and tuning the chatbot's machine learning models on suitable datasets to improve accuracy and performance.
Conducting user tests and evaluations to assess the chatbot's effectiveness, user satisfaction, and impact on mental health outcomes.

2. REVIEW OF LITERATURE
In the area of technological innovation, several studies have explored the application of advanced technologies in areas ranging from healthcare to social media, education, and biometrics. Among them, the search for reliable solutions to prevent the spread of fake news on social media platforms led to the development of advanced models such as SA-Bi-LSTM, proposed by W. Jian et al. [2]. This model, which combines LSTM and bidirectional LSTM with a self-attention mechanism, represents a significant advance in detecting fake news, ensuring data integrity and increasing trust in online content. Similarly, in the context of educational institutions, the arrival of chatbots in technical universities has revolutionized information access and processing. Attigeri, Agrawal, and Kolekar [3] discuss the development and benchmarking of advanced NLP models for knowledge bots in technical universities that use neural networks and LSTM to enable conversational AI and semantic analysis. Additionally, the healthcare industry has made significant strides in adopting AI-based medical chatbots, such as the model of S. Chakraborty et al. [1]. Using LSTM algorithms combined with machine learning techniques, these chatbots facilitate natural language processing and consultation, improving access to healthcare and more accurately predicting infectious diseases. In addition, significant progress has been made in biometrics by adopting LSTM-based models for tasks such as ECG-based heartbeat classification and person recognition [4], [5]. These models use the temporal dynamics captured by LSTM to analyze electrocardiographic signals, enabling accurate classification and biometric identification. Together, these studies highlight the versatility and effectiveness of LSTM models in solving complex challenges in many fields, paving the way for transformative advances in technology and society.
3. PROPOSED SYSTEM

Proposed System: AI-Based Mental Health Chatbot

Conversational Interface:

The proposed system will feature a basic conversational interface, accessible through text-based interactions.
Users will interact with the chatbot through a simple command-line interface or messaging platform, such as
SMS or instant messaging apps. This minimalist interface ensures accessibility for users across various devices
without the need for complex web or mobile applications.

Natural Language Processing (NLP):

To enable effective communication, the chatbot will utilize natural language processing (NLP) techniques. NLP
algorithms will analyze user input, interpret the semantic meaning of messages, and extract relevant
information. This allows the chatbot to understand users' queries, concerns, and emotional states, facilitating
empathetic and contextually relevant responses.
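To make the NLP step concrete, the short sketch below lowercases a user message and splits it into word and punctuation tokens using the same token pattern that the implementation code later applies to questions and answers; the function name preprocess_message is our own illustrative choice, not part of the project code.

import re

def preprocess_message(text):
    """Lowercase a user message and split it into word/punctuation tokens."""
    text = text.lower().strip()
    # Same token pattern used later when vectorizing questions and answers
    tokens = re.findall(r"[\w']+|[^\s\w]", text)
    return tokens

print(preprocess_message("I've been feeling anxious lately."))
# ["i've", 'been', 'feeling', 'anxious', 'lately', '.']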

Emotion Detection:

An integral feature of the chatbot will be its ability to detect users' emotions from text-based messages. Emotion
detection algorithms will be integrated to identify emotional cues expressed in user input accurately. By
recognizing emotions such as sadness, anxiety, or stress, the chatbot can provide tailored support and guidance
to address users' mental health needs effectively.
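The report does not commit to a specific emotion-detection algorithm, so the snippet below is only a minimal keyword-lexicon sketch of the idea; the lexicon contents and the function detect_emotion are illustrative assumptions rather than the project's actual method.

# Minimal keyword-based emotion detection sketch (illustrative only).
EMOTION_LEXICON = {
    "sadness": {"sad", "down", "hopeless", "crying"},
    "anxiety": {"anxious", "worried", "nervous", "panic"},
    "stress": {"stressed", "overwhelmed", "pressure", "burnout"},
}

def detect_emotion(tokens):
    """Return the emotion whose keywords appear most often, or 'neutral'."""
    scores = {emotion: sum(t in words for t in tokens)
              for emotion, words in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion(["i", "feel", "so", "anxious", "and", "worried"]))  # anxiety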

Knowledge Base:

The chatbot will be equipped with a knowledge base containing valuable mental health resources and
information. This repository will include details on common mental health disorders, symptoms, coping
strategies, self-help techniques, and relevant helplines or support services. Users can access this information by
querying the chatbot for assistance on specific topics or concerns.
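As a rough illustration of how such a knowledge base might be organized, the sketch below uses a plain Python dictionary keyed by topic; the topics, entries, and the helper lookup_resources are placeholders, not the project's actual resource data.

# Hypothetical knowledge-base lookup; topics and entries are placeholders.
KNOWLEDGE_BASE = {
    "anxiety": {
        "coping": ["deep breathing", "grounding exercises"],
        "helpline": "contact a local mental health helpline",
    },
    "stress": {
        "coping": ["short breaks", "physical activity", "sleep hygiene"],
        "helpline": "contact a local mental health helpline",
    },
}

def lookup_resources(topic):
    """Return coping strategies and referral info for a topic, if available."""
    return KNOWLEDGE_BASE.get(topic.lower(), {"coping": [], "helpline": None})

print(lookup_resources("Anxiety")["coping"])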

Machine Learning Models:

The chatbot will leverage machine learning models, including deep learning techniques like Long Short-Term
Memory (LSTM) networks. These models will be trained on datasets of conversational data and mental health
content to enhance the chatbot's accuracy, responsiveness, and empathy over time. By continuously learning
from user interactions, the chatbot can improve its ability to provide personalized support and guidance.

Personalized Support:

The chatbot will offer personalized support tailored to each user's needs and preferences. Through ongoing
interactions and feedback, the chatbot will adapt its responses and recommendations to better address users'
unique mental health challenges. This personalized approach ensures that users receive relevant and empathetic
assistance, fostering a supportive and non-judgmental environment.

Privacy and Security:


Ensuring user privacy and data security will be paramount in the proposed system. The chatbot will
adhere to strict privacy protocols, with robust encryption mechanisms in place to protect user data and
ensure confidentiality. User interactions will be anonymized and stored securely, with adherence to
relevant data protection regulations and guidelines.
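One common way to anonymize stored interactions is to replace raw user identifiers with salted hashes before logging; the snippet below sketches that idea with Python's standard hashlib and is only an assumption about how the storage layer could work, not the project's implementation.

import hashlib
import os

SALT = os.urandom(16)  # per-deployment secret salt (illustrative)

def anonymize_user_id(user_id: str) -> str:
    """Replace a raw user identifier with a salted SHA-256 digest before storage."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

# Store only anonymized identifiers and metadata, not raw message text
log_entry = {"user": anonymize_user_id("alice@example.com"), "message_length": 42}
print(log_entry)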

Evaluation and Feedback:

The effectiveness of the chatbot will be evaluated through user testing and feedback collection. Users will have
the opportunity to provide feedback on their experience with the chatbot, including its responsiveness, accuracy,
and helpfulness. Continuous improvement efforts will be guided by user feedback, enabling the chatbot to
evolve and adapt to users' changing needs over time.

Overall, the proposed system aims to provide a user-friendly, accessible, and empathetic platform for individuals
to seek mental health support and guidance. By leveraging AI, NLP, and machine learning technologies, the
chatbot will offer personalized assistance, promote mental health awareness, and contribute to improving users'
overall well-being.

3.1.1 Block Diagram

3.2 Implementation
Importing Essential Modules:
To start the development of the Mental Health AI Chatbot project, we import the essential modules needed for its implementation. These include libraries for natural language processing (NLP), machine learning, and user interface design. In addition, we include specialized libraries such as NumPy for numerical computation and TensorFlow for deep learning tasks.

Building the user interface:


The project starts by building a user-friendly interface where users can interact with the chatbot. Using libraries like tkinter, we initialize the window with input and output components. These components allow users to enter their questions or express their feelings, while the chatbot provides answers or suggestions in the output area.
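A minimal tkinter layout along these lines might look like the sketch below; the widget names and the placeholder reply are ours, and in the actual project the output area is wired to the trained model's responses instead.

import tkinter as tk

def send_message():
    user_text = entry.get()
    entry.delete(0, tk.END)
    output.insert(tk.END, "You: " + user_text + "\n")
    # Placeholder reply; the real chatbot calls the trained model here
    output.insert(tk.END, "Bot: I'm here to listen.\n\n")

root = tk.Tk()
root.title("Mental Health AI Chatbot")

output = tk.Text(root, height=20, width=60)   # output area for chatbot replies
output.pack(padx=10, pady=10)

entry = tk.Entry(root, width=50)              # input field for user messages
entry.pack(side=tk.LEFT, padx=10, pady=10)

tk.Button(root, text="Send", command=send_message).pack(side=tk.LEFT, pady=10)

root.mainloop()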

Defining input and output components:


In the user interface, we define input and output fields that facilitate smooth communication between the user and the chatbot. Users can enter their questions or feelings in text-based input fields, while the chatbot's answers are displayed in the corresponding output field. These components are designed to manage text input/output efficiently and maintain a smooth user experience.

Natural Language Processing (NLP) Integration:


To improve the chatbot's understanding of user queries and emotions, we integrate NLP algorithms and techniques. NLP modules analyze user inputs, extract relevant information, and identify emotional signals expressed in the text. This allows the chatbot to provide empathetic and contextually appropriate responses to users' mental health needs.

Application of machine learning models:


Advanced machine learning models such as Long Short-Term Memory (LSTM) networks are applied to improve the chatbot's skills. These models are trained using TensorFlow on datasets containing conversational data and mental health content. The training process runs for several epochs, typically around 2000, to optimize model performance.

Visualize training progress with graphs:


During the training process, we visualize model performance using graphs generated with matplotlib. These graphs show metrics such as loss and accuracy across training epochs, allowing us to monitor model convergence and detect over- or under-fitting problems. This visual feedback helps optimize model parameters to improve performance.

Integration with Chatbot Interface:


After the machine learning models are trained and optimized, we integrate them into the chatbot interface. Users interact with the chatbot by entering their questions or emotions, and the chatbot uses the trained models to generate appropriate responses. This enables seamless integration between the trained models and the user-facing interface.
ARCHITECTURE DIAGRAM:

3.2.1 Algorithm/Flowchart

Algorithm:

Step 1: Choose the preferred input method (text or speech) to express mental health concerns.
Step 2: Enter the concern in the space provided, ensuring clarity and relevance.
Step 3: Analyze the input using natural language processing techniques to understand emotional signals.
Step 4: Use advanced models such as LSTM to understand the user's concerns.
Step 5: Generate empathetic and contextually appropriate responses to the user's mental health needs.
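Tying these steps together, a simplified top-level flow could look like the following sketch; the keyword check and the generate_response callable are placeholders standing in for the emotion-detection and LSTM components described above.

# Illustrative end-to-end flow for one user turn; helpers are simplified stand-ins.
import re

def handle_user_turn(user_text, generate_response):
    tokens = re.findall(r"[\w']+|[^\s\w]", user_text.lower())              # Step 3: NLP analysis
    anxious = any(t in {"anxious", "worried", "nervous"} for t in tokens)  # Step 3: emotional cue
    reply = generate_response(user_text)                                   # Step 4: model-based response
    if anxious:
        reply = "It sounds like you may be feeling anxious. " + reply      # Step 5: empathetic framing
    return reply

# Usage with a placeholder generator standing in for the trained LSTM model:
print(handle_user_turn("I am worried about exams",
                       lambda text: "Would you like some coping tips?"))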
3.2.2 Data Set and Modules Used

• Tkinter module as the GUI interface
• ctypes library
• PIL library (Python Imaging Library)
• tkinter.messagebox as tkMessageBox
• SpeechRecognition library for speech input (a usage sketch follows below)
• pyttsx3, a text-to-speech conversion library
• Threading library
• GoogleTranslator from the deep_translator module for translation
• gTTS module for converting text to audio
• pydub, a Python library for working with audio files
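Since the list above includes speech input and text-to-speech libraries, the sketch below shows one common way SpeechRecognition and pyttsx3 are used together; it assumes a working microphone and default recognizer settings, and it is not taken verbatim from the project code.

import speech_recognition as sr
import pyttsx3

def listen_once():
    """Capture one utterance from the microphone and return it as text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""

def speak(text):
    """Read a chatbot reply aloud using the local text-to-speech engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak("Hello, how are you feeling today?")
print("You said:", listen_once())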
CODE:

import re
import random
import pandas as pd
import numpy as np
pd.set_option('mode.chained_assignment', None)
data=pd.read_csv("/content/mentalhealth.csv",nrows=20)

data.head()
for i in range(data.shape[0]):
    data['Answers'][i] = re.sub(r'\n', ' ', data['Answers'][i])
    data['Answers'][i] = re.sub(r'\(', '', data['Answers'][i])
    data['Answers'][i] = re.sub(r'\)', '', data['Answers'][i])
    data['Answers'][i] = re.sub(r',', '', data['Answers'][i])
    data['Answers'][i] = re.sub(r'-', '', data['Answers'][i])
    data['Answers'][i] = re.sub(r'/', '', data['Answers'][i])
pairs = []

for i in range(data.shape[0]):
    question = data['Questions'][i]
    answer = data['Answers'][i]
    pairs.append((question, answer))
pairs
input_docs = []
target_docs = []
input_tokens = set()
target_tokens = set()

for line in pairs:
    input_doc, target_doc = line[0], line[1]

    # Appending each input sentence to input_docs
    input_docs.append(input_doc)

    # Splitting words from punctuation
    target_doc = " ".join(re.findall(r"[\w']+|[^\s\w]", target_doc))

    # Redefine target_doc below and append it to target_docs
    target_doc = '<START> ' + target_doc + ' <END>'
    target_docs.append(target_doc)

    for token in re.findall(r"[\w']+|[^\s\w]", input_doc):
        if token not in input_tokens:
            input_tokens.add(token)
    for token in target_doc.split():
        if token not in target_tokens:
            target_tokens.add(token)

input_tokens = sorted(list(input_tokens))
target_tokens = sorted(list(target_tokens))
num_encoder_tokens = len(input_tokens)
num_decoder_tokens = len(target_tokens)

input_docs

target_docs
# Include start and end tokens in the dictionaries
input_features_dict = dict([(token, i + 2) for i, token in enumerate(input_tokens)])
target_features_dict = dict([(token, i + 2) for i, token in enumerate(target_tokens)])

# Add start and end tokens to the dictionaries


input_features_dict['<START>'] = 0
input_features_dict['<END>'] = 1
target_features_dict['<START>'] = 0
target_features_dict['<END>'] = 1

# Create reverse dictionaries


reverse_input_features_dict = dict((i, token) for token, i in input_features_dict.items())
reverse_target_features_dict = dict((i, token) for token, i in target_features_dict.items())

input_features_dict

max_encoder_seq_length = max([len(re.findall(r"[\w']+|[^\s\w]", input_doc)) for input_doc in input_docs])


max_decoder_seq_length = max([len(re.findall(r"[\w']+|[^\s\w]", target_doc)) for target_doc in target_docs])

encoder_input_data = np.zeros(
(len(input_docs), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_docs), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_docs), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')

for line, (input_doc, target_doc) in enumerate(zip(input_docs, target_docs)):

    for timestep, token in enumerate(re.findall(r"[\w']+|[^\s\w]", input_doc)):
        # Assign 1. for the current line, timestep, & word in encoder_input_data
        if input_features_dict[token] < num_encoder_tokens:
            encoder_input_data[line, timestep, input_features_dict[token]] = 1.

    for timestep, token in enumerate(target_doc.split()):
        # Assign 1. for the current line, timestep, & word in decoder_input_data and decoder_target_data
        if target_features_dict[token] < num_decoder_tokens:
            decoder_input_data[line, timestep, target_features_dict[token]] = 1.
            if timestep > 0:
                decoder_target_data[line, timestep - 1, target_features_dict[token]] = 1.

encoder_input_data

decoder_target_data
from tensorflow import keras
from keras.layers import Input, LSTM, Dense
from keras.models import Model
dimensionality = 256 # Dimensionality
batch_size = 10 # The batch size and number of epochs
epochs = 2000

#Encoder
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_lstm = LSTM(dimensionality, return_state=True)
encoder_outputs, state_hidden, state_cell = encoder_lstm(encoder_inputs)
encoder_states = [state_hidden, state_cell]

#Decoder
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(dimensionality, return_sequences=True, return_state=True)
decoder_outputs, decoder_state_hidden, decoder_state_cell = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
training_model = Model([encoder_inputs, decoder_inputs], decoder_outputs) # Compiling
training_model.summary()
from tensorflow.keras.utils import plot_model

plot_model(training_model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)  # plot model
training_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'],
sample_weight_mode='temporal')#Training
history1=training_model.fit([encoder_input_data, decoder_input_data], decoder_target_data, batch_size =
batch_size, epochs = epochs, validation_split = 0.2)
training_model.save('training_model.h5')
import matplotlib.pyplot as plt

acc = history1.history['accuracy']
val_acc = history1.history['val_accuracy']
loss=history1.history['loss']
val_loss=history1.history['val_loss']

plt.figure(figsize=(16,8))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.xlabel("epochs")
plt.ylabel("accuracy")

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel("epochs")
plt.ylabel("loss")
plt.show()
from keras.models import load_model
training_model = load_model('training_model.h5')
encoder_inputs = training_model.input[0]
encoder_outputs, state_h_enc, state_c_enc = training_model.layers[2].output
encoder_states = [state_h_enc, state_c_enc]
encoder_model = Model(encoder_inputs, encoder_states)

latent_dim = 256
decoder_state_input_hidden = Input(shape=(latent_dim,))
decoder_state_input_cell = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_hidden, decoder_state_input_cell]

decoder_outputs, state_hidden, state_cell = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)


decoder_states = [state_hidden, state_cell]
decoder_outputs = decoder_dense(decoder_outputs)

decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)



def decode_response(test_input):
    # Getting the output states to pass into the decoder
    states_value = encoder_model.predict(test_input)

    # Generating empty target sequence of length 1
    target_seq = np.zeros((1, 1, num_decoder_tokens))

    # Setting the first token of the target sequence with the start token
    target_seq[0, 0, target_features_dict['<START>']] = 1.

    # A variable to store our response word by word
    decoded_sentence = ''

    stop_condition = False
    while not stop_condition:
        # Predicting output tokens with probabilities and states
        output_tokens, hidden_state, cell_state = decoder_model.predict([target_seq] + states_value)

        # Choosing the one with the highest probability
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_token = reverse_target_features_dict[sampled_token_index]
        decoded_sentence += " " + sampled_token

        # Stop if hit max length
        if len(decoded_sentence) > max_decoder_seq_length:
            stop_condition = True

        # Update the target sequence
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [hidden_state, cell_state]

        # Stop if the end token is predicted
        if sampled_token == '<END>':
            stop_condition = True

    return decoded_sentence

class ChatBot:
    negative_responses = ("no", "nope", "nah", "naw", "not a chance", "sorry")
    exit_commands = ("quit", "pause", "exit", "goodbye", "bye", "later", "stop")

    # Method to start the conversation
    def start_chat(self):
        user_response = input("Hi, I'm a chatbot trained on random dialogs. AMA!\n")
        if user_response in self.negative_responses:
            print("Ok, have a great day!")
            return
        self.chat(user_response)

    # Method to handle the conversation
    def chat(self, reply):
        while not self.make_exit(reply):
            reply = input(self.generate_response(reply) + "\n")

    # Method to convert user input into a matrix
    def string_to_matrix(self, user_input):
        tokens = re.findall(r"[\w']+|[^\s\w]", user_input)
        user_input_matrix = np.zeros(
            (1, max_encoder_seq_length, num_encoder_tokens),
            dtype='float32')
        for timestep, token in enumerate(tokens):
            if token in input_features_dict:
                user_input_matrix[0, timestep, input_features_dict[token]] = 1.
        return user_input_matrix

    # Method that will create a response using the seq2seq model we built
    def generate_response(self, user_input):
        input_matrix = self.string_to_matrix(user_input)
        chatbot_response = decode_response(input_matrix)
        # Remove <START> and <END> tokens from chatbot_response
        chatbot_response = chatbot_response.replace("<START>", '')
        chatbot_response = chatbot_response.replace("<END>", '')
        return chatbot_response

    # Method to check for exit commands
    def make_exit(self, reply):
        for exit_command in self.exit_commands:
            if exit_command in reply:
                print("Ok, have a great day!")
                return True
        return False

chatbot = ChatBot()
chatbot.start_chat()
4. RESULT ANALYSIS

An analysis of the results of the mental health AI chatbot shows promising progress in its performance. Thanks to rigorous training and optimization, the accuracy of the chatbot's responses has improved significantly. Users now receive more relevant and helpful guidance tailored to their mental health issues. Although the chatbot is effective in providing answers, there is still room for improvement. By further refining the machine learning models and adding more training data, we can further improve the accuracy and responsiveness of the chatbot. This ongoing effort underlines our commitment to continuously improving its effectiveness in supporting the mental well-being of users.

TECHNOLOGY:
1. Natural Language Processing (NLP): NLP techniques are utilized for analyzing and understanding user
input, extracting relevant information, and generating contextually appropriate responses.
2. Long Short-Term Memory (LSTM) Networks: LSTM networks, a type of recurrent neural network
(RNN), are employed for sequence modeling and understanding the context of user queries and responses.
3. TensorFlow: TensorFlow, an open-source machine learning framework, is utilized for implementing and
training deep learning models, including LSTM networks, to enhance the chatbot's capabilities.
4. NumPy: NumPy, a fundamental package for scientific computing in Python, is used for numerical
computations and data manipulation tasks involved in preprocessing and training the chatbot models.
5. Matplotlib: Matplotlib, a plotting library for Python, is utilized for visualizing training progress, including
metrics such as loss and accuracy, to monitor the performance of the chatbot models.
6. Google Colab: Google Colab, a cloud-based platform for running Python code, is utilized for training and
experimenting with machine learning models, providing access to powerful computing resources and
collaborative tools.
7. VS Code: Visual Studio Code is used as the code editor.

MODULES: USER MODULE
5. CONCLUSION

The LSTM-based mental health chatbot has undergone extensive training and evaluation, showing promising results. During training, the accuracy of the model steadily improved, demonstrating its ability to learn and adapt to the complexity of mental health conversations. As training progressed, the model's accuracy showed a consistent upward trend, indicating effective convergence towards optimal performance. Furthermore, the validation results reflected this improvement, further confirming the effectiveness of the model in generalizing to unseen data. In each epoch, the model refined its understanding of mental health contexts, improving its ability to generate meaningful and empathetic responses to user questions. This trajectory highlights the potential of the chatbot to provide valuable support and assistance in mental health domains. As such, the LSTM-based chatbot is a promising tool for using artificial intelligence to improve mental health services and promote positive outcomes for people seeking support. Continuous improvement and validation work is paramount to increase the effectiveness of the chatbot and ensure its usability in real-world situations.

6. FUTURE SCOPE

Additional improvements to the mental health AI chatbot include the development of mobile applications with offline functionality and the integration of emotion recognition features.
Offline functionality:
Implementing offline functionality in mobile applications allows users to access important features and resources even without an internet connection. With this feature, users can continue to interact with the chatbot, access self-help resources, and receive guidance and support regardless of their internet connection status. This is especially valuable for people in remote areas with limited internet access, or in situations where network connectivity is unreliable, such as when traveling or in an emergency.
Emotion detection:
Integrating emotion detection features into the chatbot enables a deeper understanding of, and a more empathetic response to, users' emotional states during interactions. Using advanced algorithms, the chatbot can analyze text or voice recordings to detect emotional signals such as stress, anxiety, sadness, or happiness. By detecting users' emotions, the chatbot can adjust its responses and interventions accordingly, providing personalized and empathetic support. In addition, this capability allows the chatbot to monitor users' mental well-being over time, identify patterns or changes in emotional states, and offer proactive actions or direct users to professional help when necessary.
By implementing offline functionality and emotion recognition features, mental health AI chatbot apps for mobile phones aim to greatly improve usability, efficiency, and user experience. These advancements allow users to access mental health support anytime, anywhere, and to receive personalized help sensitive to their emotional needs and well-being. Performance improvements and user satisfaction resulting from these enhancements can be measured through user feedback, engagement metrics, and analysis of the chatbot's effectiveness in providing support and guidance to users in different emotional states and conditions.

REFERENCES:

[1] S. Chakraborty et al., "An AI-Based Medical Chatbot Model for Infectious Disease Prediction," IEEE Access, vol. 10, pp. 128469-128483, 2022, doi: 10.1109/ACCESS.2022.3227208.

[2] W. Jian et al., "SA-Bi-LSTM: Self Attention With Bi-Directional LSTM-Based Intelligent Model for Accurate Fake News Detection to Ensured Information Integrity on Social Media Platforms," IEEE Access, vol. 12, pp. 48436-48452, 2024, doi: 10.1109/ACCESS.2024.3382832.

[3] G. Attigeri, A. Agrawal and S. V. Kolekar, "Advanced NLP Models for Technical University Information Chatbots: Development and Comparative Analysis," IEEE Access, vol. 12, pp. 29633-29647, 2024, doi: 10.1109/ACCESS.2024.3368382.

[4] A. Rana and K. K. Kim, "ECG Heartbeat Classification Using a Single Layer LSTM Model," 2019 International SoC Design Conference (ISOCC), Jeju, Korea (South), 2019, pp. 267-268, doi: 10.1109/ISOCC47750.2019.9027740.

[5] D. Jyotishi and S. Dandapat, "An LSTM-Based Model for Person Identification Using ECG Signal," IEEE Sensors Letters, vol. 4, no. 8, pp. 1-4, Aug. 2020, Art. no. 6001904, doi: 10.1109/LSENS.2020.3012653.
