Neuromorphic Computing
A seminar report
Submitted by
SAYAN ROY
1706453
in partial fulfilment of the requirements for the degree of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
At
TABLE OF CONTENTS
1. Introduction
2. Von Neumann vs Neuromorphic
3. System Level Understanding
4. Device Level Understanding
5. Performance
6. Proposed Implementation
7. Existing Systems
8. Neuromorphic Concepts
9. Open Issues
10. Steps Involved in Building a Well-Defined Neuromorphic System
11. Conclusion
1. INTRODUCTION
Computers have become essential to all aspects of modern life, from process
control, engineering, and science to entertainment and communications, and they
are now omnipresent across the globe. Currently, an estimated 5–15% of the
world's energy is spent on some form of data manipulation, transmission, or
processing.
While conventional computational devices have achieved notable feats, they fail
at some of the most basic tasks that biological systems have mastered, such as
speech and image recognition. Hence the idea that taking cues from biology
might lead to fundamental improvements in computational capabilities. Since
that time, we have witnessed unprecedented progress in semiconductor
technology, resulting in systems significantly more power-efficient than once
imagined: systems have been mass-produced with over 5 billion transistors per
die, and feature sizes are now approaching 10 nm. These advances made possible
a revolution in parallel computing. Today, parallel computing is commonplace,
with hundreds of millions of cell phones and personal computers containing
multiple processors, and the largest supercomputers having CPU counts in the
millions.
Neuromorphic computing systems aim to address these needs. They promise much
lower power consumption than conventional processors, and they are explicitly
designed to support dynamic learning in the context of complex and unstructured
data. Early signs of this need show up in the Office of Science portfolio with
the emergence of machine-learning-based methods applied to problems where
traditional approaches are inadequate.
5. PERFORMANCE
The performance gap between neuromorphic systems and general-purpose silicon
machines stems from a variety of factors: incomplete understanding of the
neuronal model, connection topology, coding scheme, and learning algorithm; a
discrete state space with limited precision; and weak external support in
benchmark datasets, computing platforms, and programming tools, none of which
are as mature as their machine-learning counterparts. Currently, researchers in
the different sub-domains of neuromorphic computing usually pursue distinct
optimization objectives, such as reproducing cell-level or circuit-level neural
behaviors, emulating brain-like functionality at the macro level, or simply
reducing execution cost from a hardware perspective. Without a clear common
target, and judged on application accuracy alone, neuromorphic computing still
lags behind machine learning models.
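To make the "limited precision" point above concrete, the Python sketch below
shows how uniformly quantizing a set of trained weights to the few discrete
levels typical of emerging synaptic devices introduces error that grows rapidly
as the bit width shrinks. This is a toy illustration with entirely made-up
numbers, not a model of any particular device.

    import numpy as np

    def quantize(weights, bits):
        # Uniform quantization to 2**bits levels spanning the weight range,
        # mimicking a synaptic device with a limited number of states.
        levels = 2 ** bits
        lo, hi = weights.min(), weights.max()
        step = (hi - lo) / (levels - 1)
        return lo + np.round((weights - lo) / step) * step

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)  # hypothetical trained weights

    for bits in (8, 4, 2):
        err = np.abs(w - quantize(w, bits)).mean()
        print(f"{bits}-bit synapses: mean absolute weight error = {err:.4f}")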
Therefore, we need to rethink the true advantages of the human brain (e.g.,
strong generalization, multi-modal processing and association, memory-based
computing) and the goal of neuromorphic computing, rather than forcing a
head-to-head confrontation with machine learning. We should make efforts to
understand and bridge the current "gap" between neuromorphic computing and
machine learning.
6. PROPOSED IMPLEMENTATION
Ultimately, an architecture that can scale neuromorphic systems to "brain
scale" and beyond is needed. A brain-scale system integrates approximately
10^11 neurons and 10^15 synapses into a single system. The high-level
neuromorphic architecture illustrated in Figure 1 consists of several
large-scale synapse arrays connected to soma arrays, such that flexible
layering of neurons (including recurrent networks) is possible, and off-chip
communication uses the address event representation (AER) approach, in which
digital packets link spiking analog circuits. Currently, most neuromorphic
designs implement synapses and somata as discrete sub-circuits connected via
wires that implement dendrites and axons. In the future, new materials and new
devices are expected to enable integrated constructs as the basis for neuronal
connections in large-scale systems. For this, progress is needed in each of the
discrete components, with the primary focus on identifying materials and
devices that would dramatically improve the implementation of synapses and
somata.
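As a rough illustration of the AER idea, the Python sketch below encodes each
spike as a (timestamp, source-address) event and uses a routing table to
deliver it to its target synapses. The event format and routing table here are
assumptions made for illustration, not the protocol of any actual chip.

    from collections import namedtuple

    # An AER event: the address of the neuron that spiked, and when.
    Event = namedtuple("Event", ["timestamp_us", "neuron_addr"])

    # Hypothetical routing table: source address -> (target address, weight)
    # pairs, standing in for hard-wired axons and dendrites.
    ROUTES = {
        0: [(100, 0.8), (101, -0.3)],
        1: [(100, 0.5)],
    }

    def route(events):
        # Deliver each spike, in time order, to all of its targets.
        deliveries = []
        for ev in sorted(events, key=lambda e: e.timestamp_us):
            for target, weight in ROUTES.get(ev.neuron_addr, []):
                deliveries.append((ev.timestamp_us, target, weight))
        return deliveries

    for t, target, w in route([Event(10, 0), Event(12, 1), Event(15, 0)]):
        print(f"t={t} us: weight {w:+.1f} delivered to neuron {target}")

Because only addresses travel between chips, the wiring cost grows with the
number of neurons rather than the (far larger) number of connections.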
One might imagine a generic architectural framework that separates the
implementation of the synapses from the soma, so that alternative materials and
devices for synapses can be tested with common learning/spiking circuits (see
Figure 6). A reasonable progression for novel materials test devices would be:
(1) single synapse-dendrite-axon-soma feasibility test devices; (2) chips with
dozens of neurons and hundreds of synapses; followed by (3) demonstration chips
with hundreds of neurons and tens of thousands of synapses. Once hundreds of
neurons and tens of thousands of synapses have been demonstrated in a novel
system, it may be straightforward to scale these building blocks to systems
competitive with the largest CMOS implementations.
State-of-the-art neural networks that support object and speech recognition can
have tens of millions of synapses, with thousands of inputs and thousands of
outputs. The simple street-scene recognition needed for autonomous vehicles
requires hundreds of thousands of synapses and tens of thousands of
neurons. The largest networks that have been published—using over a billion
synapses and a million neurons—have been used for face detection and object
recognition in large video databases.
Fig 4. Block diagram of Neuromorphic Computing
7. EXISTING SYSTEMS
Recently, Intel Corp. delivered fifty million artificial neurons to Sandia
National Laboratories, roughly equivalent to the brain of a small mammal. The
shipment is the first in a three-year series, by the end of which the number of
experimental neurons in the final model is expected to reach 1 billion or more.
The collaboration aims to advance neuromorphic computing while prototyping the
software, algorithms, and architectures. “With a neuromorphic computer of this
scale, we have a new tool to understand how brain-based computers can do
impressive feats that we cannot currently do with ordinary computers,” said
Craig Vineyard, project leader at Sandia. Researchers believe that improved
algorithms and computer circuitry can create broader applications for
neuromorphic computers. They also hope to determine whether brain-inspired
processors can handle information at something approaching the processing power
of the human brain. With these developments, let us further explore
neuromorphic computers and how they aim to revolutionise AI application areas.
Many companies and projects are leading applications in this space. For
instance, as part of its Loihi project, Intel has created a chip with 130,000
neurons and 130 million synapses that excels at self-learning. Because the
hardware is optimised specifically for spiking neural networks (SNNs), it
supports dramatically accelerated learning in unstructured environments for
systems that require autonomous operation and continuous learning, with
extremely low power consumption plus high performance and capacity.
8. NEUROMORPHIC CONCEPTS
The following concepts play an important role in the operation of a system that
imitates the brain. It should be noted that the definitions listed below are
sometimes used in slightly different ways by different investigators.
Spiking - Signals are communicated between neurons through voltage or current
spikes. This differs both from current digital systems, in which signals are
binary, and from analogue implementations, which rely on the manipulation of
continuous signals. Spiking signals are time-encoded and transmitted as "action
potentials", as in the sketch below.
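As a concrete (if highly simplified) picture of time-encoded spiking, the
Python sketch below implements a leaky integrate-and-fire neuron: the membrane
potential integrates input current, leaks toward rest, and emits a spike
whenever it crosses a threshold. All parameter values are illustrative.

    import numpy as np

    def lif_spike_times(input_current, dt=1e-3, tau=20e-3,
                        v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        # Leaky integrate-and-fire: dv/dt = (v_rest - v + I) / tau,
        # with a spike and reset whenever v crosses v_thresh.
        v, spikes = v_rest, []
        for step, i_in in enumerate(input_current):
            v += (dt / tau) * (v_rest - v + i_in)
            if v >= v_thresh:
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # A constant drive above threshold produces a regular spike train.
    print(lif_spike_times(np.full(100, 1.5)))

Note that the output is a list of spike times: the information lies in when
the neuron fires, not in a continuous signal level.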
Plasticity - A conventional device has a unique response to a particular
stimulus or input. In contrast, a typical neuromorphic architecture relies on
changing the properties of an element or device depending on its past history.
Plasticity is the key property that allows complex neuromorphic circuits to be
modified ("learn") as they are exposed to different signals.
Fan-in/fan-out - In conventional computational circuits, individual devices are
generally interconnected by only a few connections. In the brain, however, the
number of connections per neuron (dendrites) is several orders of magnitude
larger (e.g., ~10,000). Further research is needed to determine how essential
this is to the fundamental computing model of neuromorphic systems.
Hebbian learning/dynamical resistance change - Long-term changes in synapse
resistance following repeated spiking by the presynaptic neuron. This is also
sometimes referred to as spike-timing-dependent plasticity (STDP). An
alternative characterization of Hebbian learning is "devices that fire
together, wire together".
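A common mathematical idealization of STDP is the pair-based rule sketched
below in Python: a synapse is strengthened when the presynaptic spike precedes
the postsynaptic one and weakened otherwise, with exponentially decaying
windows. The amplitudes and time constant are illustrative, not values taken
from any measured device.

    import math

    A_PLUS, A_MINUS = 0.01, 0.012   # illustrative learning amplitudes
    TAU = 20e-3                     # learning-window time constant (s)

    def stdp_dw(t_pre, t_post):
        # Pair-based STDP: potentiate if pre fires before post, else depress.
        dt = t_post - t_pre
        if dt > 0:
            return A_PLUS * math.exp(-dt / TAU)
        if dt < 0:
            return -A_MINUS * math.exp(dt / TAU)
        return 0.0

    for dt_ms in (5, 20, -5, -20):
        print(f"post - pre = {dt_ms:+d} ms -> dw = {stdp_dw(0.0, dt_ms/1000):+.5f}")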
Adaptability - Biological brains generally start with multiple connections out of
which, through a selection or learning process, some are chosen and others
abandoned. This process may be important for improving the fault tolerance of
individual devices as well as for selecting the most efficient computational path.
In contrast, in conventional computing the system architecture is rigid and fixed
from the beginning.
Criticality - The brain typically operates close to a critical point, at which
the system is plastic enough to be switched from one state to another while
remaining neither extremely stable nor highly volatile. At the same time, it
may be important for the system to be able to explore many closely lying
states. In materials-science terms, for example, the system may sit close to
some critical state such as a phase transition.
Accelerators - The ultimate construction of a neuromorphic-based thinking
machine requires intermediate steps, working toward small-scale applications
based on neuromorphic ideas. Some of these applications require combining
sensors with some limited computation.
9. OPEN ISSUES
As we consider building large-scale systems from neuron-like building blocks, a
large number of challenges must be overcome. In particular, several critical
issues remain for the physical implementation of a system that even partially
resembles a brain-like architecture:
1. What are the minimal physical elements needed for a working artificial
structure: dendrite, soma, axon, and synapse?
2. What are the minimal characteristics of each one of these elements
needed in order to have a first proven system?
3. What are the essential conceptual ideas needed to implement a minimal
system: spike-timing-dependent plasticity, learning, reconfigurability,
criticality, short- and long-term memory, fault tolerance, co-location of memory and
processing, distributed processing, large fan-in/fan-out, dimensionality?
Can we organize these in order of importance?
4. What are the advantages and disadvantages of a chemical vs. a solid-state
implementation?
5. What features must neuromorphic architecture have to support critical
testing of new materials and building block implementations?
6. What intermediate applications would best be used to prove the concept?
These and certainly additional questions should be part of a coherent approach
to investigating the development of neuromorphic computing systems. The field
could also use a comprehensive review of what has been achieved already in the
exploration of novel materials, as there are a number of excellent groups that are
pursuing new materials and new device architectures. Many of these activities
could benefit from a framework that can be evaluated on simple applications. At
the same time, there is a considerable gap in our understanding of what it will
take to implement state-of-the-art applications on neuromorphic hardware in
general. To date, most hardware implementations have been rather specialized
to specific problems and current practice largely uses conventional hardware for
the execution of deep learning applications and large-scale parallel clusters with
accelerators for the development and training of deep neural networks. Moving
neuromorphic hardware out of the research phase into applications and end use
would be helpful. This would require advances that support training on the
device itself and demonstrated performance above that of artificial neural
networks already implemented in conventional hardware. These improvements are
necessary with regard to both power efficiency and ultimate performance.
11. CONCLUSION
The conclusions we derived from our study of neuromorphic systems are as
follows:
1. Creating the architectural design for neuromorphic computing requires an
integrative, interdisciplinary approach between computer scientists,
engineers, physicists, and materials scientists.
2. Creating a new computational system will require developing new system
architectures to accommodate all needed functionalities.
3. One or more reference architectures should be used to enable comparisons of
alternative devices and materials.
4. The devices to be used in these new computational systems require the
development of novel nano- and meso-structured materials; this will be
accomplished by unlocking the properties of quantum materials based on new
materials physics.
5. Identifying the most promising materials requires fundamental understanding
of strongly correlated materials; understanding the formation and migration of
ions, defects, and clusters; developing novel spin-based devices; and/or
discovering new quantum functional materials.
6. Fully realizing the open opportunities requires designing systems and
materials that exhibit self- and external healing, three-dimensional
reconstruction, distributed power delivery, fault tolerance, co-location of
memory and processors, and multistate behaviour, i.e., systems in which the
present response depends on past history and on multiple interacting state
variables that define the present state.
7. The development of a new brain-like computational system will not evolve in
a single step; it is important to implement well-defined intermediate steps that
give useful scientific and technological information.