Written Assignment 7

University of the People

CS 4407: Data Mining and Machine Learning

Unit 7: Artificial Neural Networks – Part 2

<Name Removed>

For Unit 7, you will report the results of the neural network training you completed in Unit 6. Write a short paper explaining the process of developing the network provided, including:

• Details on how many iterations of network designs were evaluated.

• What results were obtained.

• What alternatives were tested to determine the best approach for training a network that would yield accurate results in the minimum number of training steps.

When training the artificial neural network, it is preferable to use a larger number of learning steps rather than a smaller one, because the learning algorithm has more opportunities to reduce the error in its predictions as the number of learning steps grows. Looking at the error progress graph, we can see that with fewer training steps there is much wider spacing between the data points, indicating a considerable margin of error from one point to the next. As the network takes more learning steps, the curve extends further to the right and the room for error in predicting the final outcome steadily shrinks.
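To make this concrete, the sketch below records the error after every learning step so that it can be plotted as an error progress curve. It assumes a toy single-unit sigmoid model and made-up data, not the actual network trained in Unit 6, so the numbers are purely illustrative.

import numpy as np

# Minimal sketch (toy data and a single sigmoid unit, not the Unit 6 network):
# record the mean squared error after every learning step so the resulting
# list can be plotted as an error progress curve.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))         # toy inputs (illustrative)
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy binary targets (illustrative)

w = rng.uniform(-1, 1, size=2)               # weights drawn from the range [-1, 1]
b = 0.0
lr = 0.3                                     # learning rate used in this paper

errors = []
for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid prediction for each example
    errors.append(np.mean((y - p) ** 2))     # error recorded at this learning step
    grad = 2 * (p - y) * p * (1 - p)         # per-example gradient of the squared error
    w -= lr * (X.T @ grad) / len(X)          # full-batch gradient descent update
    b -= lr * grad.mean()

# The recorded error shrinks as the number of learning steps grows.
print(errors[99], errors[999], errors[4999])

Plotting the recorded errors against the step index reproduces the kind of error progress graph described above.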

In terms of network design iterations, the first pass begins with an arbitrary number of training steps and the default learning rate, momentum, and weight range: a learning rate of 0.3, a momentum of 0.9, and a weight range of -1 to 1. Even with 1000 training steps, there is still a large amount of space between the data points, and hence a noticeable margin of error. To find the lowest number of training steps required, we keep increasing the step count until the error progress graph shows the data points just beginning to touch one another. In our scenario, raising the number of training steps to 2500 shows a small fraction of the data points clustering very closely together toward the bottom of the graph.
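The starting configuration above can be expressed roughly in code as follows. The single-hidden-layer architecture, its layer sizes, and the update helper are illustrative assumptions rather than the settings of the actual training tool; only the learning rate, momentum, and weight range are taken from the text.

import numpy as np

# Sketch of the starting configuration described above: learning rate 0.3,
# momentum 0.9, and weights initialised uniformly in the range [-1, 1].
# The two-layer architecture and its sizes are illustrative assumptions.
rng = np.random.default_rng(1)

learning_rate = 0.3
momentum = 0.9
weight_range = (-1.0, 1.0)

n_inputs, n_hidden, n_outputs = 2, 4, 1
W1 = rng.uniform(*weight_range, size=(n_inputs, n_hidden))   # input -> hidden weights
W2 = rng.uniform(*weight_range, size=(n_hidden, n_outputs))  # hidden -> output weights
V1 = np.zeros_like(W1)   # momentum ("velocity") buffer for W1
V2 = np.zeros_like(W2)   # momentum ("velocity") buffer for W2

def momentum_step(W, V, grad):
    """Apply one gradient-descent-with-momentum update in place:
    V = momentum * V - learning_rate * grad, then W += V."""
    V *= momentum
    V -= learning_rate * grad
    W += V

With these defaults fixed, the only quantity varied between design iterations is the number of training steps.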



We can also raise the number of training steps further, to 5000, and use the error progress graph's tick marks to note where the data points begin to cluster together as training proceeds. A setting of 5000 steps works well: the large error margin is still visible in the left half of the graph, while the clustered data points do not extend so far to the right that the full graph becomes hard to read, as happens with a very large step count such as 10,000.

Figure 1: Example of error progress graph at 1000 training steps

Figure 2: Example of error progress graph at 10,000 training steps



Figure 3: Example of error progress graph at 5000 training steps
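The comparison shown in Figures 1 through 3 could also be reproduced programmatically. The sketch below reuses the same toy single-unit setup as the earlier sketch: it trains once for 10,000 steps, records the error at every step, and marks where the shorter 1000, 2500, and 5000 step budgets would stop. Because full-batch gradient descent is deterministic, a shorter run is simply a prefix of a longer one; the matplotlib plot is likewise only illustrative.

import numpy as np
import matplotlib.pyplot as plt

# Sketch (same toy single-unit setup as before, not the actual Unit 6 network):
# train once for 10,000 steps, record the error at every step, and mark where
# the shorter 1000 / 2500 / 5000 step budgets would stop.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))          # toy inputs (illustrative)
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy targets (illustrative)
w, b = rng.uniform(-1, 1, size=2), 0.0

curve = []
for _ in range(10_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    curve.append(np.mean((y - p) ** 2))
    grad = 2 * (p - y) * p * (1 - p)
    w -= 0.3 * (X.T @ grad) / len(X)
    b -= 0.3 * grad.mean()

# Each shorter budget's error progress graph is a prefix of the recorded curve.
plt.plot(curve, linewidth=1, label="error progress")
for budget in (1000, 2500, 5000):
    plt.axvline(budget, linestyle="--", color="grey")
    print(f"{budget:>6} steps -> error {curve[budget - 1]:.4f}")
plt.xlabel("training step")
plt.ylabel("error")
plt.legend()
plt.show()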

We must also decide what level of uncertainty is acceptable in the approximated results. In many circumstances an acceptable margin of error is 5% or less of the predicted outcome, and based on the error progress graph we can conclude that the minimum number of training steps required lies between 2500 and 3000. Running 5000 training steps, rather than 10,000 or an arbitrarily chosen number, yields comparable results while keeping the whole graph readable, and it points to the same minimum of roughly 2500 to 3000 steps. In my case, I went through three cycles of network design evaluation to arrive at these results.
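The 5% criterion can also be checked directly against a recorded error curve, as in the minimal sketch below. The smoothly decaying stand-in curve is only a placeholder for the per-step errors a real run would record, and the 0.05 tolerance mirrors the 5% figure discussed above.

# Sketch: given a per-step error curve, find the first training step whose
# error falls within the accepted 5% margin. The exponentially decaying
# stand-in curve below is a placeholder for a genuinely recorded curve.
def min_steps_for_tolerance(error_curve, tolerance=0.05):
    """Return the 1-based index of the first step at or below `tolerance`,
    or None if the curve never reaches it."""
    for step, err in enumerate(error_curve, start=1):
        if err <= tolerance:
            return step
    return None

# Placeholder curve: starts at 0.5 and decays smoothly (illustrative only).
stand_in_curve = [0.5 * 0.999 ** t for t in range(5000)]
print(min_steps_for_tolerance(stand_in_curve))   # first step within the 5% margin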


Conclusion

Despite the initial challenge of grasping the factors that drive the design of meaningful and effective neural networks, the project greatly expanded my knowledge of the subject. The most essential lesson I took away from this experience was an understanding of how strongly the neural network architecture influences the output. Throughout the many attempts to set up the network, I observed radically different behaviors that I am still not sure how to manage.



