Crypto Currency Prediction
Submitted by
ASWANTHYGA J (20ITR007)
HARSHINI K B (20ITR029)
KEERTHANA M (20ITR047)
KOWSHIGA C (20ITR050)
BACHELOR OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
VELALAR COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS)
ERODE-638012
APRIL 2023
BONAFIDE CERTIFICATE
Certified that the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion.
SIGNATURE                                    SIGNATURE
Ms. T. NITHYA, M.E.,                         Dr. V. K. MANAVALASUNDARAM, M.E., Ph.D.,
SUPERVISOR,                                  HEAD OF THE DEPARTMENT,
Assistant Professor,                         Professor,
Department of IT,                            Department of IT,
Velalar College of Engineering               Velalar College of Engineering
and Technology, Erode-12.                    and Technology, Erode-12.
ACKNOWLEDGEMENT
ABSTRACT
CRYPTO CURRENCY PREDICTION
Bitcoin forecasting is a challenging task owing to the highly noisy,
nonparametric, complex and chaotic nature of the Bitcoin price time series.
With a simple eight-trigram feature engineering scheme for inter-day
candlestick patterns, we construct a novel ensemble machine learning
framework for daily Bitcoin pattern prediction, combining traditional
candlestick charting with the latest artificial intelligence methods.
Several machine learning techniques, including deep learning
methods, are applied to Bitcoin data to predict the direction of the closing price.
This framework can suggest a suitable machine learning prediction method for
each pattern based on the trained results.
The investment strategy is constructed according to the ensemble
machine learning techniques. Measures such as big data, feature
standardization, and the elimination of abnormal data can effectively reduce data
noise. An investment strategy based on our forecasting framework excels, in theory,
in both individual Bitcoin and portfolio performance.
However, transaction costs have a significant impact on investment.
Additional technical indicators can improve the forecast accuracy to varying
degrees; momentum indicators in particular improve forecasting accuracy
in most cases.
TABLE OF CONTENTS

ABSTRACT
1 INTRODUCTION
2 SYSTEM ANALYSIS
3 SYSTEM DESIGN
3.1 Modules
3.2 Input Design
3.3 Output Design
3.4 Feasibility Study
4 SYSTEM TESTING
4.1.1 Unit Testing
4.1.2 Integration Testing
4.1.3 Validation Testing
5 SYSTEM MAINTENANCE
6.1 Conclusion
7 APPENDICES
8 REFERENCES

CHAPTER 1
INTRODUCTION
The forecasting of Bitcoin is an important objective in the financial world
and remains one of the most challenging problems due to the non-linear and
chaotic nature of financial markets. Investments in Bitcoin are often guided by
prediction methods, which can be divided into two groups: technical analysis and
fundamental analysis. The fundamental analysis approach is concerned with the
underlying entity, drawing on its economic standing, employees, yearly
reports, financial status, balance sheets, income reports and so on.
On the other hand, technical analysis, also called charting, predicts the future by
studying trends in the historical data. Bitcoin price prediction based on K-line patterns
is the essence of candlestick technical analysis. However, there is some dispute in
academia over whether K-line patterns have predictive power. To help resolve the debate,
this paper uses the data mining methods of pattern recognition, pattern clustering, and
pattern knowledge mining to research the predictive power of K-line patterns. A
similarity-match model and a nearest-neighbor clustering algorithm are proposed for
solving the problems of similarity matching and clustering of K-line series, respectively.
The experiment tests the predictive power of the Three Inside Up and Three Inside Down
patterns on a dataset of K-line series for the Shanghai 180 index constituents over the
most recent 10 years. Experimental results show that the predictive power of a pattern
varies a great deal across different shapes, and that each of the existing K-line patterns
requires further classification based on shape features to improve prediction performance.
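As an illustration of the similarity-match and nearest-neighbor ideas, the following minimal Java sketch compares normalized windows of candlestick (OHLC) data using Euclidean distance. The class and method names, the normalization by the first close, and the distance measure are our own illustrative assumptions, not the exact models from the cited research.

// Minimal sketch of K-line (candlestick) similarity matching.
// Each candlestick is reduced to its OHLC values, each window is
// normalized by its first close, and windows are compared with
// Euclidean distance; the nearest historical window is the "match".
public class KLineSimilarity {

    // Distance between two equal-length windows of OHLC rows.
    static double distance(double[][] a, double[][] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < 4; j++) {      // open, high, low, close
                double d = a[i][j] - b[i][j];
                sum += d * d;
            }
        return Math.sqrt(sum);
    }

    // Scale a window so that its first close equals 1.0.
    static double[][] normalize(double[][] w) {
        double base = w[0][3];
        double[][] out = new double[w.length][4];
        for (int i = 0; i < w.length; i++)
            for (int j = 0; j < 4; j++)
                out[i][j] = w[i][j] / base;
        return out;
    }

    // Index of the historical window nearest to the query window.
    static int nearestNeighbor(double[][][] history, double[][] query) {
        double[][] q = normalize(query);
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int k = 0; k < history.length; k++) {
            double d = distance(normalize(history[k]), q);
            if (d < bestDist) { bestDist = d; best = k; }
        }
        return best;
    }
}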
A time series is a series of observations listed in time order. It is among the most
commonly encountered data types, touching almost every aspect of human life: for example,
meteorological time series; time series of Bitcoin prices (Bitcoin time series for short),
composed of Bitcoin price observations; and personal health time series, which consist of
observations of blood pressure, temperature, white blood cell counts, and so forth.
Research shows that time series have two important features. First, historical
information affects the future trend; that is, the historical values of observations
exert an influence on future values of the series. This influence can be described by
the series' periodicity, non-stationarity, varying volatility, and so on.
Second, history repeats itself; that is, some special subseries recur throughout the
entire series. Because of these two features, time series forecasting of all kinds has
become an active research area, one branch of which is the prediction of Bitcoin time
series (Bitcoin prediction for short). As a typical time series, the Bitcoin time series
not only has the general features of time series, but its price trend is also directly
related to people's vital interests. Therefore, Bitcoin prediction has aroused the
interest of a wide variety of researchers.
BITCOIN
Bitcoin is a decentralized digital currency that can be transferred on the peer-to-peer
Bitcoin network. Bitcoin transactions are verified by network nodes through cryptography
and recorded in a public distributed ledger called a blockchain. The cryptocurrency was
invented in 2008 by an unknown person or group of people using the name Satoshi Nakamoto.
The currency began use in 2009 when its implementation was released as open-source
software. Bitcoins are created as a reward for a process known as mining. They can be
exchanged for other currencies, products, and services. Bitcoin has been criticized for
its use in illegal transactions, the large amount of electricity (and thus carbon
footprint) used by mining, price volatility, and thefts from exchanges. Some investors
and economists have characterized it as a speculative bubble at various times.
Network nodes can validate transactions, add them to their copy of the ledger, and then
broadcast these ledger additions to other nodes. To achieve independent verification of
the chain of ownership, each network node stores its own copy of the blockchain. At
varying intervals of time, averaging every 10 minutes, a new group of accepted
transactions, called a block, is created, added to the blockchain, and quickly published
to all nodes, without requiring central oversight. This allows Bitcoin software to
determine when a particular bitcoin was spent, which is needed to prevent double-spending.
A conventional ledger records the transfers of actual bills or promissory notes that exist
apart from it, but the blockchain is the only place where bitcoins can be said to exist,
in the form of unspent transaction outputs.
BITCOIN FORECASTING
Bitcoin prediction is the act of trying to determine the future value of Bitcoin or
another financial instrument traded on an exchange. The successful prediction of
Bitcoin's future price could yield significant profit. The efficient-market hypothesis
suggests that prices reflect all currently available information, and that any price
changes not based on newly revealed information are therefore inherently unpredictable.
Others disagree, and those with this viewpoint possess myriad methods and technologies
which purportedly allow them to gain future price information.
The efficient-market hypothesis posits that prices are a function of information and
rational expectations, and that newly revealed information about an asset's prospects is
almost immediately reflected in the current price. This would imply that all publicly
known information, which obviously includes the price history, is already reflected in
the current price of Bitcoin. Accordingly, changes in the Bitcoin price reflect the
release of new information, changes in the market generally, or random movements around
the value that reflects the existing information set. Burton Malkiel, in his influential
1973 work A Random Walk Down Wall Street, claimed that prices could therefore not be
accurately predicted by looking at price history. As a result, Malkiel argued, prices are
best described by a statistical process called a "random walk", meaning each day's
deviation from the central value is random and unpredictable. This led Malkiel to
conclude that paying financial services persons to predict the market actually hurts,
rather than helps, net portfolio return. A number of empirical tests support the notion
that the theory applies generally, as most portfolios managed by professional predictors
do not outperform the market average return after accounting for the managers' fees.
Fundamental analysis is built on the belief that human society needs capital to make
progress, and that if a company operates well it should be rewarded with additional
capital, resulting in a surge in price. Fundamental analysis is widely used by fund
managers because it is the most reasonable and objective approach, drawing on publicly
available information such as financial statement analysis.
CHAPTER 2
SYSTEM ANALYSIS
EXISTING SYSTEM
Forecasting the Bitcoin price with high accuracy is difficult, and reliable
individual indicators for Bitcoin are hard to identify.
Logistic Regression is one of the most basic machine learning algorithms. A
Logistic Regression model returns an equation that determines the relationship
between the independent variables and the dependent variable.
First, the model computes a linear function of the inputs, then converts the result
into a probability, and finally converts the probability into a label. Economic data
is large and complex, so it is extremely difficult to delineate its complicated inner
relationships with an econometric model.
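For reference, the three steps just described can be written as follows (using the standard sigmoid link; the 0.5 threshold is the usual default):

z = w^{\top} x + b, \qquad p = \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \hat{y} = \mathbf{1}[p \ge 0.5]

where x is the vector of independent variables, w and b are the learned parameters, p is the predicted probability, and \hat{y} is the predicted label (e.g. price up or down).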
Machine learning models are universal algorithms able to capture complex nonlinear
relationships within data, which makes them appealing for use in financial modeling.
A great deal of effort has been directed at applying machine learning to Bitcoin price
prediction, though with varying degrees of success.
Overfitting: machine learning models are susceptible to overfitting, which means the
model fits the training data too closely and generalizes poorly to new, unseen data.
Overfitting leads to inaccurate predictions and reduced model performance.
Data quality: the accuracy of machine learning models depends on the quality of the
data. If the data used to train the model is incomplete or contains errors, the result
can be biased or inaccurate predictions.
PROPOSED SYSTEM
LSTM is used as the proposed methodology. The forecasting of Bitcoin is an important
objective in the financial world and remains one of the most challenging problems due
to the non-linear and chaotic nature of financial data.
The data comprises the price history and trading volumes of the fifty constituents of
the NIFTY 50 index from the NSE (National Stock Exchange), India. All datasets are at
day level, with pricing and trading values split across .CSV files for each instrument,
along with a metadata file containing some macro-information about the instruments
themselves.
Nonlinear relationships: financial data, including Bitcoin prices, is known for its
nonlinear and chaotic nature. LSTM is capable of learning complex nonlinear
relationships between input features and output variables, allowing it to capture
nonlinear patterns and dependencies in the data more accurately than traditional
linear models.
Handling missing data: time-series data often contains missing values, which can be
challenging for traditional models to handle. LSTM can handle missing data by using
the previous time step's output as input where a value is missing, allowing it to
continue making predictions even when the data is incomplete.
Scalability: LSTM can be scaled up to handle large and complex datasets, making
it well-suited for financial data with multiple input features and large amounts of
historical data.
SYSTEM SPECIFICATION
SOFTWARE DESCRIPTION
HARDWARE DESCRIPTION
SOFTWARE DESCRIPTION
FRONT END: JAVA
The software requirement specification is created at the end of the analysis task. The
function and performance allocated to software as part of system engineering are refined
by establishing a complete description of information as a functional representation, a
representation of system behavior, an indication of performance requirements and design
constraints, and appropriate validation criteria.
FEATURES OF JAVA
Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
The Java API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped
into libraries (packages) of related components.
The following figure depicts a Java program, such as an application or applet, running
on the Java platform. As the figure shows, the Java API and Virtual Machine insulate the
Java program from hardware dependencies.
SOCKET OVERVIEW:
A network socket is a lot like an electrical socket. Various plugs around the network
have a standard way of delivering their payload. Anything that understands the standard
protocol can “plug in” to the socket and communicate.
Internet Protocol (IP) is a low-level routing protocol that breaks data into small
packets and sends them to an address across a network; it does not guarantee delivery
of those packets to the destination.
Transmission Control Protocol (TCP) is a higher-level protocol that provides reliable
data transmission. A third protocol, User Datagram Protocol (UDP), sits next to TCP and
can be used directly to support fast, connectionless, but unreliable transport of packets.
CLIENT/SERVER:
A server is anything that has some resource that can be shared. There are
compute servers, which provide computing power; print servers, which manage a collection
of printers; disk servers, which provide networked disk space; and web servers, which store
web pages. A client is simply any other entity that wants to gain access to a particular server.
A server process is said to “listen” to a port until a client connects to it. A
server is allowed to accept multiple clients connected to the same port number, although each
session is unique. To manage multiple client connections, a server process must be
multithreaded or have some other means of multiplexing the simultaneous I/O.
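A minimal sketch of such a multithreaded server is shown below; the port number and the echo behavior are illustrative assumptions.

import java.io.*;
import java.net.*;

// Minimal multithreaded echo server: the main thread listens on one
// port and hands each accepted client to its own worker thread, so
// several sessions can share the same port number concurrently.
public class EchoServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(5000);   // illustrative port
        while (true) {
            Socket client = server.accept();            // one client per accept
            new Thread(() -> {
                try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(
                         client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null)
                        out.println(line);              // echo back to the client
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}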
RESERVED SOCKETS:
Once connected, a higher-level protocol ensues, which depends on which port you are
using. TCP/IP reserves the lower 1,024 ports for specific protocols. Port 21 is for FTP,
23 for Telnet, 25 for e-mail, 79 for finger, 80 for HTTP, 119 for netnews, and so on. It
is up to each protocol to determine how a client should interact with the port.
INETADDRESS:
The InetAddress class is used to encapsulate both the numerical IP address and the
domain name for that address. You interact with this class by using the name of an IP
host, which is more convenient and understandable than its IP address; the InetAddress
class hides the number inside. As of Java 2, version 1.4, InetAddress can handle both
IPv4 and IPv6 addresses.
FACTORY METHODS:
The InetAddress class has no visible constructors. To create an InetAddress object,
you use one of the available factory methods. Factory methods are merely a convention
whereby static methods in a class return an instance of that class. This is done in lieu
of overloading a constructor with various parameter lists, since having unique method
names makes the results much clearer.
Three commonly used InetAddress factory methods are:
1. static InetAddress getLocalHost() throws UnknownHostException
2. static InetAddress getByName(String hostName) throws UnknownHostException
3. static InetAddress[] getAllByName(String hostName) throws UnknownHostException
INSTANCE METHODS:
The InetAddress class also has several other methods, which can be used on the
objects returned by the methods just discussed. Here are some of the most commonly
used:
1. boolean equals(Object other) - Returns true if this object has the same Internet
address as other.
2. byte[] getAddress() - Returns a byte array that represents the object's Internet
address in network byte order.
3. String getHostAddress() - Returns a string that represents the host address
associated with the InetAddress object.
4. String getHostName() - Returns a string that represents the host name associated
with the InetAddress object.
5. boolean isMulticastAddress() - Returns true if this Internet address is a multicast
address; otherwise, it returns false.
6. String toString() - Returns a string that lists the host name and the IP address for
convenience.
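The following short example exercises the factory and instance methods listed above; the host name is illustrative, and working name resolution is required for getByName to succeed.

import java.net.*;

// Demonstrates the InetAddress factory and instance methods.
public class InetAddressDemo {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println(local);                       // host name / address

        InetAddress addr = InetAddress.getByName("www.example.com");
        System.out.println(addr.getHostName());
        System.out.println(addr.getHostAddress());
        System.out.println(addr.isMulticastAddress());   // false for a normal host

        for (InetAddress a : InetAddress.getAllByName("www.example.com"))
            System.out.println(a);                       // all addresses for the name
    }
}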
The creation of a Socket object implicitly establishes a connection between the client
and server. There are no methods or constructors that explicitly expose the details of
establishing that connection. Here are two constructors used to create client sockets:
Socket(String hostName, int port) - Creates a socket connecting the local host to the
named host and port; can throw an UnknownHostException or an IOException.
Socket(InetAddress ipAddress, int port) - Creates a socket using a preexisting
InetAddress object and a port; can throw an IOException.
A socket can be examined at any time for the address and port information
associated with it, by use of the following methods:
InetAddress getInetAddress() - Returns the InetAddress associated with the Socket
object.
int getPort() - Returns the remote port to which this Socket object is connected.
int getLocalPort() - Returns the local port to which this Socket object is bound.
Once the Socket object has been created, it can also be examined to gain access to the
input and output streams associated with it. Each of these methods can throw an
IOException if the socket has been invalidated by a loss of connection on the Net.
InputStream getInputStream() - Returns the InputStream associated with the invoking
socket.
OutputStream getOutputStream() - Returns the OutputStream associated with the invoking
socket.
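Putting the constructors, accessors and stream methods together, a minimal client socket sketch might look as follows; the host, port and the simple HTTP request are illustrative assumptions.

import java.io.*;
import java.net.*;

// Minimal TCP client: connects, inspects the socket, and reads from
// its input stream.
public class SocketDemo {
    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("example.com", 80)) {
            System.out.println("Remote: " + s.getInetAddress()
                               + ":" + s.getPort());
            System.out.println("Local port: " + s.getLocalPort());

            PrintWriter out = new PrintWriter(s.getOutputStream());
            BufferedReader in = new BufferedReader(
                new InputStreamReader(s.getInputStream()));

            out.print("HEAD / HTTP/1.0\r\n\r\n");   // simple HTTP request
            out.flush();

            String line;
            while ((line = in.readLine()) != null)
                System.out.println(line);           // print the server's reply
        }
    }
}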
URL:
The Web is a loose collection of higher-level protocols and file formats, all unified in
a web browser. One of the most important aspects of the Web is that Tim Berners-Lee
devised a scalable way to locate all of the resources of the Net. The Uniform Resource
Locator (URL) is used to name anything and everything reliably.
The URL provides a reasonably intelligible form to uniquely identify or address
information on the Internet. URLs are ubiquitous; every browser uses them to identify
information on the Web.
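A short example of how a URL names a resource, using Java's URL class to pull the name apart; the address is illustrative.

import java.net.*;

// Parses a URL into its components.
public class URLDemo {
    public static void main(String[] args) throws MalformedURLException {
        URL url = new URL("http://www.example.com:80/index.html");
        System.out.println("Protocol: " + url.getProtocol()); // http
        System.out.println("Host: "     + url.getHost());     // www.example.com
        System.out.println("Port: "     + url.getPort());     // 80
        System.out.println("File: "     + url.getFile());     // /index.html
    }
}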
CHAPTER 3
SYSTEM DESIGN
MODULES
• Input dataset
• LSTM implementation
• Performance analysis
INPUT MODULE
In the input module, the NIFTY or Sensex dataset is given as the input for the
prediction process, and the number of years must be selected so that both the
candlestick chart and the box plot formats are maintained and the performance graph
can be produced. A Bitcoin time series dataset is used as input.
LSTM IMPLEMENTATION
Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture. It was
proposed as a solution to the limitations of standard RNNs, using memory cells which
consist of three components: an input gate, an output gate and a forget gate. The gates
control the interactions between neighboring memory cells and the memory cell itself.
The input gate controls the input state, while the output gate controls the output
state, which serves as input to other memory cells. The forget gate can choose to
remember or forget the cell's previous state.
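For reference, a standard formulation of the LSTM cell just described is (notation is ours; W, U and b denote the learned weights and biases of each gate):

i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)    (input gate)
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)    (forget gate)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)    (output gate)
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)    (candidate cell state)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t    (cell state update)
h_t = o_t \odot \tanh(c_t)    (hidden/output state)

where x_t is the input at time t, h_t the hidden state, \sigma the logistic sigmoid, and \odot element-wise multiplication.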
PERFORMANCE ANALYSIS
If the prediction is correct, as validated by the empirical data, we record a profit in
the amount of the price change. If the prediction is incorrect, we record a loss in the
amount of the price change. We also construct a long-only strategy as a limited version
of the above, recording a profit only when the price rises and the prediction correctly
predicts the rise. Performance analysis here is underpinned by systematic observation,
which provides valid, reliable and detailed information about how the forecasting
framework behaves.
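A minimal sketch of the long-only evaluation rule described above; the class, method and variable names and the sample figures are our own illustrative assumptions.

// Long-only backtest: a position is taken only when a rise is
// predicted; the realized price change is then the profit (if
// positive) or loss (if negative). Other days are skipped.
public class LongOnlyBacktest {

    // predicted[t] is +1 for an expected rise, -1 otherwise;
    // priceChange[t] is the actual change over day t.
    static double profit(int[] predicted, double[] priceChange) {
        double total = 0.0;
        for (int t = 0; t < predicted.length; t++) {
            if (predicted[t] == +1)          // long position taken
                total += priceChange[t];     // gain if positive, loss if negative
        }
        return total;
    }

    public static void main(String[] args) {
        int[]    pred   = { +1, -1, +1, +1 };            // illustrative signals
        double[] change = { 120.0, -50.0, -30.0, 80.0 }; // illustrative moves
        System.out.println("Strategy P&L: " + profit(pred, change)); // 170.0
    }
}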
INPUT DESIGN
Input design is a part of overall system design which requires careful attention. The
input of data is designed to be user-friendly and easy. Input design is the process of
converting the user-oriented description of the input to the computer-based information
system into a programmer-oriented specification. The objective of input design is to
create an input layout that is easy to follow and prevents operator errors.
Input Data Set: the dataset is collected from the UCI Machine Learning Repository. The
real-life data set named Wisconsin Breast Cancer is used. The data set is publicly
available on the UCI Machine Learning Repository and consists of 699 instances with nine
continuous attributes. A very unbalanced distribution was formed by removing some
malignant instances. The resultant data set had 483 instances (39 malignant (8 percent)
and 444 benign (92 percent) instances). The nine continuous attributes are not
transformed into categorical attributes.
OUTPUT DESIGN
The output design refers to the results and information generated by the system for its
end users. Efficient and intelligent output design improves the system's relationship
with the user and helps in decision making. The output of the system is in the form of a
report. Where outliers are present among the Wisconsin cancer samples, the distribution
of gene expression values in the cancer samples will have three sets: the upper set
corresponds to activated-attribute results, while the lower indicates
inactivated-attribute results. A kernel set, which is a subset of the original data set,
is able to describe the original data set both in terms of data structure and of
obtained results. Consequently, the outlier issue can be addressed through the idea of
detecting a "change point" or "break point" in the ordered gene expression values of the
cancer group. A model based on least-squares fitting should be effective for this goal.
A remarkable note should be made of the definition of this new set, called the kernel
set, which has been demonstrated to be able to generate the "same" output results in
terms of the rough outlier set, with computational time benefits.
FEASIBILITY STUDY
The preliminary investigation examines project feasibility: the likelihood that the
system will be useful to the organization. The main objective of the feasibility study
is to test the technical, operational and economical feasibility of adding new modules
and debugging the old running system. Any system is feasible given unlimited resources
and infinite time. There are three aspects in the feasibility study portion of the
preliminary investigation:
Technical Feasibility
Operation Feasibility
Economical Feasibility
TECHNICAL FEASIBILITY
The technical issue usually raised during the feasibility stage of the investigation
includes the following:
Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the data required to
use the new system?
Will the proposed system provide adequate response to inquiries, regardless of the
number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier, no system existed to cater to the needs of the 'Secure Infrastructure
Implementation System'. The current system developed is technically feasible. It is a
web-based user interface for audit workflow at a DB2 database, and thus provides easy
access to the users. The database's purpose is to create, establish and maintain a
workflow among various entities in order to facilitate all concerned users in their
various capacities or roles. Permission would be granted to users based on the roles
specified.
Therefore, it provides the technical guarantee of accuracy, reliability and security.
The software and hardware requirements for the development of this project are few and
are either already available in-house at NIC or available free as open source. The work
for the project is done with the current equipment and existing software technology.
The necessary bandwidth exists for providing fast feedback to the users irrespective of
the number of users using the system.
OPERATIONAL FEASIBILITY
Proposed projects are beneficial only if they can be turned into information systems
that will meet the organization's operating requirements. Operational feasibility
aspects of the project are to be taken as an important part of the project
implementation. Some of the important issues raised to test the operational feasibility
of a project include the following:
Is there sufficient support for the management from the users?
Will the system be used and work properly if it is being developed and implemented?
Will there be any resistance from the user that will undermine the possible application
benefits?
This system is targeted to be in accordance with the above-mentioned issues.
Beforehand, the management issues and user requirements were taken into consideration,
so there is no question of resistance from the users that could undermine the possible
application benefits.
The well-planned design will ensure the optimal utilization of the computer resources
and will help improve the performance status.
ECONOMIC FEASIBILITY
A system that can be developed technically, and that will be used if installed, must
still be a good investment for the organization. In the economical feasibility study,
the development cost of creating the system is evaluated against the ultimate benefit
derived from the new system. Financial benefits must equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or
software. Since the interface for this system is developed using the existing resources
and technologies available at NIC, expenditure is nominal and economical feasibility is
certain.
DATAFLOW DIAGRAM
CHAPTER 4
SYSTEM TESTING
System testing is the stage of implementation, which is aimed at ensuring that the
system works accurately and efficiently before live operation commences. Testing is vital to
the success of the system. System testing makes a logical assumption that if all the parts of
the system are correct, the goal will be successfully achieved. The candidate system is subject
to a variety of tests.
A series of tests are performed for the proposed system before the system is ready for
user acceptance testing.
The testing steps are:
Unit testing
Integration testing
Validation testing
Output testing
User acceptance testing
UNIT TESTING
Unit testing focuses verification efforts on the smallest unit of software design: the
module. This is also known as "module testing". The modules are tested separately. This
testing is carried out during the programming stage itself. In this testing step, each
module is found to be working satisfactorily with regard to the expected output from
the module.
INTEGRATION TESTING
Data can be lost across an interface; one module can have an adverse effect on others;
and sub-functions, when combined, may not produce the desired major functions.
Integration testing is a systematic technique for constructing the program structure
while at the same time conducting tests to uncover errors associated with the
interfaces. The objective is to take unit-tested modules, combine them, and test the
result as a whole. Here, correction is difficult because the vast expanse of the entire
program complicates the isolation of causes. In this integration-testing step, all the
errors encountered are corrected before the next testing step.
VALIDATION TESTING
Verification testing runs the system in a simulated environment using simulated data.
This simulated test is sometimes called alpha testing. It primarily looks for errors
and omissions regarding end-user and design specifications that were specified in the
earlier phases but not fulfilled during construction.
Validation refers to the process of using software in a live environment in order to
find errors. The feedback from the validation phase generally produces changes in the
software to deal with errors and failures that are uncovered. Then a set of user sites
is selected that puts the system into use on a live basis. These are called beta tests.
The beta test sites use the system in day-to-day activities. They process live
transactions and produce normal system output. The system is live in every sense of the
word, except that the users are aware they are using a system that can fail. But the
transactions that are entered and the persons using the system are real. Validation may
continue for several months. During the course of validating the system, failures may
occur and the software will be changed. Continued use may reveal additional failures
and the need for still more changes.
OUTPUT TESTING
After performing validation, the next step is output testing of the proposed system,
since no system can be useful if it does not produce the required output in the
specified format. The output generated or displayed by the system under consideration
is tested by asking the users about the format they require. The output format is
considered in two ways: on screen and in printed format.
USER ACCEPTANCE TESTING
User acceptance of a system is the key factor for the success of any system. The system
under consideration is tested for user acceptance by constantly keeping in touch with
the prospective system users at the time of development and making changes whenever
required. This is done with regard to the following points:
An acceptance test has the objective of selling the user on the validity and
reliability of the system. It verifies that the system's procedures operate to system
specifications and that the integrity of important data is maintained. Performance of
an acceptance test is actually the user's show. User motivation is very important for
the successful performance of the system. Afterwards, a comprehensive test report is
prepared. This report shows the system's tolerance, performance range, error rate and
accuracy.
CHAPTER 5
MAINTENANCE
SYSTEM MAINTENANCE
The objective of this maintenance work is to make sure that the system keeps working
at all times without any bugs. Provision must be made for environmental changes which
may affect the computer or software system. This is called maintenance of the system.
Nowadays there is rapid change in the software world, and the system should be capable
of adapting to these changes. In this project, processes can be added without affecting
other parts of the system.
Maintenance plays a vital role. The system is liable to accept any modification after
its implementation. This system has been designed to favor all new changes; doing so
will not affect the system's performance or its accuracy.
Maintenance is necessary to eliminate errors in the system during its working life and
to tune the system to any variations in its working environment. It has been seen that
there are always some errors found in the system that must be noted and corrected. It
also means reviewing the system from time to time.
The review of the system is done for:
Knowing the full capabilities of the system.
TYPES OF MAINTENANCE:
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Preventive maintenance
CORRECTIVE MAINTENANCE
Corrective maintenance means changes made to a system to repair flaws in its design,
coding or implementation. The design of the software may be changed. Corrective
maintenance is applied to correct errors that occur during operation. For example, if
the user enters an invalid file type while submitting information in a particular
field, corrective maintenance will display an error message to the user so the error
can be rectified.
Maintenance is a major income source. Nevertheless, even today many organizations
assign maintenance to unsupervised beginners and less competent programmers.
The user's problems are often caused by the individuals who developed the product, not
the maintainer, and the code itself may be badly written. Maintenance is despised by
many software developers, but unless good maintenance service is provided, the client
will take future development business elsewhere. Maintenance is the most important
phase of software production, the most difficult, and the most thankless.
ADAPTIVE MAINTENANCE:
Adaptive maintenance means changes made to a system to evolve its functionality in
response to changing business needs or technologies. If there is any modification in
the modules, the software will adopt those modifications. If the user changes the
server, the project will adapt to those changes, and the modified server works as the
existing one did.
PERFECTIVE MAINTENANCE:
Perfective maintenance means changes made to a system to add new features or to improve
performance. Perfective maintenance takes measures to maintain the system's special
features. It means enhancing performance or modifying the programs to respond to the
users' changing needs. The proposed system can easily be extended with additional
functionality; in this project, if the user wants to improve performance further, the
software can be easily upgraded.
PREVENTIVE MAINTENANCE:
Preventive maintenance involves changes made to a system to reduce the chance of future
system failure. Possible occurrences of errors are forecasted and prevented with
suitable preventive measures. If the user wants to improve the performance of any
process, new features can be added to the system for this purpose.
EXPERIMENTAL SETUP
This study makes contributions in four aspects. Firstly, this article combines
traditional candlestick charting with the latest artificial intelligence methods to
enrich Bitcoin forecasting research: by examining one-day candlestick patterns under
different machine learning methods, we combine traditional technical analysis with AI
technology. The results show that the return rate based on our research method is
better than the market performance, and the effect would be more prominent if Bitcoin
could be sold short. However, after considering transaction costs, the advantage is
significantly reduced.
ALGORITHM    ACCURACY (%)
LSTM         98
CNN          95
NB           84

(Bar chart: accuracy comparison of the LSTM, CNN and NB algorithms.)
AUTOCORRELATION FUNCTION PLOT
Autocorrelation function (ACF) plot: autocorrelation refers to how correlated a time
series is with its past values, and the ACF is the plot used to see the correlation
between the points at each lag, up to and including a chosen maximum lag.
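A minimal sketch of computing the sample autocorrelation at each lag, using the standard estimator; the class and variable names and the sample series are illustrative.

// Sample autocorrelation function (ACF): correlation of a series
// with itself shifted by k steps, for lags k = 0..maxLag.
public class Autocorrelation {

    static double[] acf(double[] x, int maxLag) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;

        double var = 0.0;                     // lag-0 autocovariance (unnormalized)
        for (double v : x) var += (v - mean) * (v - mean);

        double[] r = new double[maxLag + 1];
        for (int k = 0; k <= maxLag; k++) {
            double cov = 0.0;
            for (int t = 0; t < n - k; t++)
                cov += (x[t] - mean) * (x[t + k] - mean);
            r[k] = cov / var;                 // r[0] == 1.0 by construction
        }
        return r;
    }

    public static void main(String[] args) {
        double[] series = { 1, 2, 3, 4, 5, 4, 3, 2, 1, 2 }; // illustrative data
        for (double v : acf(series, 3))
            System.out.println(v);
    }
}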
CONCLUSION