
NEUROMORPHIC COMPUTING

A seminar report

Submitted by

SAYAN ROY

1706453

in partial fulfillment for the award of

the degree of

BACHELOR OF TECHNOLOGY

IN

INFORMATION TECHNOLOGY
AT

SCHOOL OF COMPUTER ENGINEERING

KIIT Deemed to be University


BHUBANESWAR
JANUARY 2021
CERTIFICATE
This is to certify that the seminar report entitled Neuromorphic
Computing is a bona fide work done by Sayan Roy (1706453) in partial
fulfillment of the requirements for the award of the degree of Bachelor of
Technology in Information Technology.

Prof. Sital Dash


Seminar Supervisor
ABSTRACT
Computation in its many forms is the engine that fuels our modern civilization.
Modern computation, based on the von Neumann architecture, has until now
delivered continuous improvements, as predicted by Moore's law. However,
computation using current architectures and materials will inevitably reach a
limit within the next 10 years for fundamental scientific reasons. Understanding
how the brain manages billions of processing units connected via kilometers of
fibers and trillions of synapses, while consuming only a few tens of watts, could
provide the key to a completely new category of hardware: neuromorphic
computing systems. Achieving this requires a paradigm shift for computing as a
whole, moving away from current "bit-precise" computing models and towards
new techniques that exploit the stochastic behavior of simple, reliable, very fast,
low-power computing devices embedded in intensely recursive architectures.

Keywords: analog hardware, digital hardware, high-performance computing, neural networks, neural models, neuromorphic computing

[1]

TABLE OF CONTENTS
Abstract : 1
Table of Contents : 2
List of Figures : 3
List of Tables : 4

1. Introduction : 5
2. Von Neumann vs Neuromorphic : 6
3. System Level Understanding : 7
4. Device Level Understanding : 8
5. Performance : 9-10
6. Proposed Implementation : 11
7. Existing Systems : 12
8. Neuromorphic Concepts : 13
9. Open Issues : 14
10. Steps Involved in Building a Well-Defined Neuromorphic Computing System : 15
11. Conclusion : 16

References : 17

[2]

LIST OF FIGURES

Figure ID : Figure Title : Page
1 : Comparison between conventional and Neuromorphic Computers : 5
2 : Interconnection between Conventional and Neural Circuits : 6
3 : Delay time per transistor : 7
4 : Block diagram of Neuromorphic Computing : 9

[3]

LIST OF TABLES

Table ID : Table Title : Page
1 : Comparison between Neuromorphic Computing, Brain Computing and Von Neumann Architecture : 6

[4]

1. INTRODUCTION
Computers have become essential to all aspects of modern life—from process
controls, engineering, and science to entertainment and communications—and are
omnipresent all over the globe. Currently, about 5–15% of the world’s energy is spent
in some form of data manipulation, transmission, or processing.
While conventional computational devices have achieved notable feats, they fail at
some of the most basic tasks that biological systems have mastered, such as speech
and image recognition. Hence the idea arose that taking cues from biology might lead
to fundamental improvements in computational capabilities. Since then, we have
witnessed unprecedented progress in semiconductor technology, resulting in systems
far more capable and power efficient than once imagined: chips with over 5 billion
transistors per die are mass-produced, and feature sizes are now approaching 10 nm.
These advances made possible a revolution in parallel computing.
Today, parallel computing is commonplace with hundreds of millions of cell phones
and personal computers containing multiple processors, and the largest
supercomputers having CPU counts in the millions.

Moreover, most major research universities have machine learning groups in
computer science, mathematics, or statistics. Machine learning is such a rapidly
growing field that it was recently called the "infrastructure for everything." Over the
years, a number of groups have been working on direct hardware implementations of
deep neural networks. These designs vary from specialized but conventional
processors optimized for machine learning "kernels" to systems that attempt to
directly simulate an ensemble of "silicon" neurons, better known as neuromorphic
computing. While the former approaches can achieve dramatic results, e.g., 120 times
lower power compared with that of general-purpose processors, they are not
fundamentally different from existing CPUs. The latter neuromorphic systems are
more in line with what researchers began working on in the 1980s: analog CMOS
devices whose architecture is modeled after biological neurons. One of the more
recent accomplishments in neuromorphic computing has come from IBM Research,
namely, a biologically inspired chip ("TrueNorth") that implements one million
spiking neurons and 256 million synapses on 5.5 billion transistors, with a typical
power draw of 70 milliwatts. As impressive as this system is, if scaled up to the size
of the human brain, it is still about 10,000 times too power intensive.
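
As a rough back-of-the-envelope check of that figure, one can scale TrueNorth's reported power by synapse count up to the roughly 10^15 synapses commonly attributed to the human brain and compare against the brain's widely cited power budget of about 20 W. The short sketch below assumes those order-of-magnitude literature estimates; scaling by neuron count instead would give a smaller ratio.

```python
# Back-of-the-envelope check of the "about 10,000 times too power intensive" claim,
# scaling TrueNorth by synapse count (an assumption; scaling by neuron count differs).
truenorth_power_w  = 0.070   # typical power draw reported for TrueNorth
truenorth_synapses = 256e6
brain_synapses     = 1e15    # rough order-of-magnitude literature estimate
brain_power_w      = 20.0    # commonly cited human brain power budget

scaled_power_w = truenorth_power_w * (brain_synapses / truenorth_synapses)
print(f"scaled TrueNorth power: {scaled_power_w / 1e3:.0f} kW")           # ~270 kW
print(f"ratio to the brain:     {scaled_power_w / brain_power_w:,.0f}x")  # ~14,000x
```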

[5]

2. VON NEUMANN vs. NEUROMORPHIC


Well-supported predictions, based on solid scientific and engineering data, indicate
that conventional approaches to computation will hit a wall in the next 10 years.
Principally, this situation is due to three major factors: (1) fundamental (atomic) limits
exist beyond which devices cannot be miniaturized, (2) local energy dissipation limits
the device packing density, and (3) overall energy consumption keeps increasing with
no foreseeable limit and is becoming prohibitive.

Neuromorphic computing systems are aimed at addressing these needs. They will
have much lower power consumption than conventional processors and they are
explicitly designed to support dynamic learning in the context of complex and
unstructured data. Early signs of this need show up in the Office of Science portfolio
with the emergence of machine learning based methods applied to problems where
traditional approaches are inadequate.

Table 1: Comparison between Neuromorphic Computing, Brain Computing and Von Neumann Architecture

[6]

3. SYSTEM LEVEL UNDERSTANDING


Traditional computational architectures and their parallel derivatives are based
on a core concept known as the von Neumann architecture (see Figure 1). The
system is divided into several major, physically separated, rigid functional units
such as memory (MU), central processing (CPU), arithmetic/logic (ALU), and
data paths. This separation produces a temporal and energetic bottleneck
because information has to be shuttled repeatedly between the different parts of
the system. This "von Neumann" bottleneck limits the future development of
revolutionary computational systems. Traditional parallel computers connect
thousands or millions of conventional processors to one another. Aggregate
computing performance is increased, but the basic computing element is
fundamentally the same as that in a serial computer and is similarly limited by
this bottleneck.
In contrast, the brain is a working system that has major advantages in these
respects. Its energy efficiency is superior by many orders of magnitude. In
addition, memory and processing in the brain are co-located: the same
constituents can take on different roles depending on a learning process.
Moreover, the brain is a flexible, self-programming system able to adapt to
complex environments and capable of complex processing. While the design,
development, and implementation of a computational system similar to the
brain is beyond the scope of today's science and engineering, some important
steps in this direction can be taken by imitating nature. Clearly, a new disruptive
technology is needed, and it must be based on revolutionary scientific
developments. In this "neuromorphic" architecture (see Figure 1), the various
computational elements are mixed together and the system is dynamic, based on
a "learning" process by which the various elements of the system change and
readjust depending on the type of stimuli they receive.
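
To make the contrast concrete, the following is a minimal, purely illustrative sketch: a dense von Neumann-style update that shuttles an entire weight matrix through the processor every step, versus an event-driven organization in which each unit keeps its own weights and state and does a little local work only when a spike arrives. All names and sizes are invented for the example.

```python
# Toy contrast: central weight memory fetched every step vs. units that co-locate
# their own weights and state and update only on input events (spikes).
import numpy as np

rng = np.random.default_rng(0)
N = 8                                # number of units (tiny toy size)
W = rng.normal(size=(N, N))          # "central memory": full weight matrix
x = rng.random(N)                    # dense input vector

def von_neumann_step(W, x):
    # Every step shuttles all N*N weights between memory and the ALU.
    return W @ x

class EventDrivenUnit:
    """A unit that stores its own incoming weights (memory next to compute)."""
    def __init__(self, weights):
        self.weights = weights       # local copy: no trip to a central memory
        self.potential = 0.0         # local state ("membrane potential")

    def on_spike(self, source):
        self.potential += self.weights[source]   # touches one weight per event

units = [EventDrivenUnit(W[i]) for i in range(N)]
spikes = [2, 5]                      # sparse input events instead of a dense vector

for u in units:
    for s in spikes:
        u.on_spike(s)

print("dense result    :", von_neumann_step(W, x))
print("event potentials:", [round(u.potential, 3) for u in units])
```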

[7]

4. DEVICE LEVEL UNDERSTANDING


A major difference is also present at the device level (see Figure 2). Classical
von Neumann computing is based on transistors, resistors, capacitors, inductors,
and communication connections as the basic devices. While these conventional
devices have some unique characteristics (e.g., speed, size, operating range),
they are limited in other crucial aspects (e.g., energy consumption, rigid design
and functionality, inability to tolerate faults, and limited connectivity). In
contrast, the brain is based on large collections of neurons, each of which has a
body (soma), synapses, an axon, and dendrites, all of which are adaptable and
fault tolerant. Also, the connectivity between the various elements in the brain is
much more complex than in a conventional computational circuit (see Figure 2).

[8]

5. PERFORMANCE
The performance gap between neuromorphic systems and general-purpose silicon
machines comes from a variety of factors: incomplete understanding of the
neuronal model, connection topology, coding scheme, and learning algorithm; a
discrete state space with limited precision; and weak external support in the form
of benchmark datasets, computing platforms, and programming tools, none of
which are as mature as their machine learning counterparts. Currently, researchers
in the different sub-domains of neuromorphic computing usually pursue distinct
optimization objectives, such as reproducing cell-level or circuit-level neural
behaviors, emulating brain-like functionality at the macro level, or simply
reducing execution cost from a hardware perspective. Without a clear target, and
judged only on application accuracy, neuromorphic computing still lags behind
machine learning models.

Therefore, we need to rethink the true advantages of the human brain (e.g., strong
generalization, multi-modal processing and association, memory-based
computing) and the goal of neuromorphic computing, rather than forcing it into a
head-to-head contest with machine learning. We should make efforts to
understand and bridge the current "gap" between neuromorphic computing and
machine learning.

[9]

6. PROPOSED IMPLEMENTATION
Ultimately, an architecture that can scale neuromorphic systems to "brain scale"
and beyond is needed. A brain-scale system integrates approximately 10^11
neurons and 10^15 synapses into a single system. The high-level neuromorphic
architecture illustrated in Figure 1 consists of several large-scale synapse arrays
connected to soma arrays, such that flexible layering of neurons (including
recurrent networks) is possible and off-chip communication uses the address
event representation (AER) approach, which links spiking analog circuits via
digital communication. Currently, most neuromorphic designs implement
synapses and somata as discrete sub-circuits connected via wires that play the
role of dendrites and axons. In the future, new materials and new devices are
expected to enable integrated constructs as the basis for neuronal connections in
large-scale systems. For this, progress is needed in each of the discrete
components, with the primary focus on identifying materials and devices that
would dramatically improve the implementation of synapses and somata.
One might imagine a generic architectural framework that separates the
implementation of the synapses from the soma in order to enable alternative
materials and devices for synapses to be tested with common learning/spiking
circuits (see Figure 6). A reasonable progression for novel materials test devices
would be the following: (1) single synapse-dendrite-axon-soma feasibility test
devices, (2) chips with dozens of neurons and hundreds of synapses, followed
by (3) demonstration chips with hundreds of neurons and tens of thousands of
synapses. Once hundreds of neurons and tens of thousands of synapses have
been demonstrated in a novel system, it may be straightforward to scale these
building blocks to the scale of systems competitive with the largest CMOS
implementations.
State-of-the-art neural networks that support object and speech recognition can
have tens of millions of synapses, with thousands of inputs and thousands of
outputs. Simple street-scene recognition needed for autonomous vehicles
requires hundreds of thousands of synapses and tens of thousands of neurons.
The largest networks that have been published, using over a billion synapses and
a million neurons, have been used for face detection and object recognition in
large video databases.
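
To illustrate the AER idea mentioned above, here is a minimal, purely illustrative sketch: each spike is serialized as a small digital packet (source address, timestamp) on a shared link, and a routing table on the receiving side fans the event out to target synapses. The packet fields, routing-table format, and all numbers are assumptions for the example, not any particular chip's protocol.

```python
# Minimal sketch of address event representation (AER): spikes travel as digital
# (address, timestamp) events instead of over dedicated per-connection wires.
from collections import namedtuple

AEREvent = namedtuple("AEREvent", ["address", "timestamp_us"])

# Hypothetical routing table: source neuron address -> list of (target neuron, weight).
routing_table = {
    0: [(3, 0.8), (4, -0.2)],
    1: [(3, 0.5)],
    2: [(4, 1.0), (5, 0.1)],
}

def encode_spikes(spiking_addresses, timestamp_us):
    """Serialize a set of simultaneous spikes into AER packets for a shared bus."""
    return [AEREvent(addr, timestamp_us) for addr in sorted(spiking_addresses)]

def deliver(events, potentials):
    """Receiver side: look up each event's targets and update their potentials."""
    for ev in events:
        for target, weight in routing_table.get(ev.address, []):
            potentials[target] = potentials.get(target, 0.0) + weight
    return potentials

bus_traffic = encode_spikes({0, 2}, timestamp_us=1500)
print(bus_traffic)                  # packets actually sent over the shared link
print(deliver(bus_traffic, {}))     # resulting synaptic updates at the receiver
```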

[10]
Fig 4. Block diagram of Neuromorphic Computing

[11]

7. EXISTING SYSTEMS

Recently, Intel Corp. delivered fifty million artificial neurons to Sandia National
Laboratories, roughly equivalent to the brain of a small mammal. The shipment is the
first in a three-year series, by the end of which the number of experimental neurons in
the final model is expected to reach 1 billion or more. This collaboration aims to push
neuromorphic computing solutions to new heights while prototyping the software,
algorithms, and architectures. "With a neuromorphic computer of this scale, we have a
new tool to understand how brain-based computers can do impressive feats that we
cannot currently do with ordinary computers," said Craig Vineyard, project leader at
Sandia. Researchers believe that improved algorithms and computer circuitry can
create broader applications for neuromorphic computers. They also hope to determine
how brain-inspired processors can handle information with something approaching
the processing efficiency of the human brain. With these developments, let us further
explore neuromorphic computers and how they aim to revolutionise AI application
areas.

There are many companies and projects leading applications in this space. For
instance, as part of its Loihi project, Intel has created a chip with 130,000 neurons and
130 million synapses that excels at self-learning. Because the hardware is optimised
specifically for SNNs, it supports dramatically accelerated learning in unstructured
environments for systems that require autonomous operation and continuous learning,
with extremely low power consumption plus high performance and capacity.

IBM's TrueNorth chip, developed under the DARPA SyNAPSE program, likewise
aims to revolutionise brain-inspired computing: the one-million-neuron brain-inspired
processor consumes merely 70 milliwatts and can perform 46 billion synaptic
operations per second per watt. Other companies, such as HPE, Qualcomm, and
Samsung Electronics, are also exploring the area of neuromorphic computing. In fact,
according to one study, the global market for neuromorphic chips, estimated at $2.3
billion in 2020, is projected to reach a revised size of $10.4 billion by 2027. These
numbers suggest that neuromorphic computers are the way ahead for AI-based
research and development.

[12]

8. NEUROMORPHIC CONCEPTS
The following concepts play an important role in the operation of a system that
imitates the brain. It should be mentioned that sometimes the definitions listed
below are used in slightly different ways by different investigators.
Spiking - Signals are communicated between neurons through voltage or
current spikes. This communication differs both from current digital systems, in
which the signals are binary, and from analogue implementations, which rely on
the manipulation of continuous signals. Spiking signals are time-encoded and
transmitted via "action potentials".
Plasticity - A conventional device has a unique response to a particular stimulus
or input. In contrast, the typical neuromorphic architecture relies on changing
the properties of an element or device depending on the past history. Plasticity is
a key property that allows the complex neuromorphic circuits to be modified
(“learn”) as they are exposed to different signals.
Fan-in/fan-out - In conventional computational circuits, the different elements
generally are interconnected by only a few connections between individual
devices. In the brain, however, the number of connections per neuron is several
orders of magnitude larger (e.g., on the order of 10,000 dendritic inputs). Further
research is needed to determine how essential this is to the fundamental
computing model of neuromorphic systems.
Hebbian learning/dynamical resistance change - Long-term changes in the
synapse resistance after repeated spiking by the presynaptic neuron. This is also
sometimes referred to as spike-timing-dependent plasticity (STDP); a minimal
sketch of spiking and STDP appears at the end of this section. An alternative
characterization of Hebbian learning is "devices that fire together, wire
together".
Adaptability - Biological brains generally start with multiple connections out of
which, through a selection or learning process, some are chosen and others
abandoned. This process may be important for improving the fault tolerance of
individual devices as well as for selecting the most efficient computational path.
In contrast, in conventional computing the system architecture is rigid and fixed
from the beginning.
Criticality - The brain typically must operate close to a critical point at which
the system is plastic enough that it can be switched from one state to another,
neither extremely stable nor very volatile. At the same time, it may be important
for the system to be able to explore many closely lying states. In terms of
materials science, for example, the system may be close to some critical state
such as a phase transition.
Accelerators - The ultimate construction of a neuromorphic-based thinking machine
requires intermediate steps, working toward small-scale applications based on
neuromorphic ideas. Some of these types of applications require combining sensors
with some limited computation.
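
As referenced above, the following is a minimal sketch of the spiking and STDP concepts, assuming a leaky integrate-and-fire neuron model and a simple pair-based exponential STDP window; all constants and names are illustrative choices rather than parameters of any specific neuromorphic system.

```python
# Minimal sketch of two concepts above: a leaky integrate-and-fire (LIF) neuron
# for "spiking", and a pair-based exponential STDP rule for Hebbian learning.
import math

# --- Spiking: leaky integrate-and-fire neuron --------------------------------
class LIFNeuron:
    def __init__(self, tau_m=20.0, v_thresh=1.0, v_reset=0.0):
        self.tau_m, self.v_thresh, self.v_reset = tau_m, v_thresh, v_reset
        self.v = 0.0                              # membrane potential

    def step(self, input_current, dt=1.0):
        # dv/dt = (-v + I) / tau_m, integrated with a simple Euler step
        self.v += dt * (-self.v + input_current) / self.tau_m
        if self.v >= self.v_thresh:               # threshold crossing -> spike
            self.v = self.v_reset
            return True
        return False

# --- Plasticity: pair-based STDP ----------------------------------------------
def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by delta_t = t_post - t_pre."""
    if delta_t_ms > 0:                            # pre before post: potentiate
        return a_plus * math.exp(-delta_t_ms / tau)
    return -a_minus * math.exp(delta_t_ms / tau)  # post before pre: depress

# Drive one neuron with a constant current and record its spike times.
neuron, spike_times = LIFNeuron(), []
for t in range(100):
    if neuron.step(input_current=1.5):
        spike_times.append(t)
print("spike times (ms):", spike_times)

# Example STDP updates: causal pairing strengthens, anti-causal weakens.
print("dw (pre 5 ms before post):", round(stdp_dw(+5.0), 5))
print("dw (pre 5 ms after post): ", round(stdp_dw(-5.0), 5))
```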

[13]

9. OPEN ISSUES
As we consider the building of large-scale systems from neuron-like building
blocks, there is a large number of challenges that must be overcome. A number
of critical issues remain as we consider the physical implementation of an
artificial system that partially resembles a brain-like architecture, such as:
1. What are the minimal physical elements needed for a working artificial
structure: dendrite, soma, axon, and synapse?
2. What are the minimal characteristics of each one of these elements
needed in order to have a first proven system?
3. What are the essential conceptual ideas needed to implement a minimal
system: spike-dependent plasticity, learning, reconfigurability, criticality,
short- and long-term memory, fault tolerance, co-location of memory and
processing, distributed processing, large fan-in/fan-out, dimensionality?
Can we organize these in order of importance?
4. What are the advantages and disadvantages of a chemical vs. a solid-state
implementation?
5. What features must neuromorphic architecture have to support critical
testing of new materials and building block implementations?
6. What intermediate applications would best be used to prove the concept?
These and certainly additional questions should be part of a coherent approach
to investigating the development of neuromorphic computing systems. The field
could also use a comprehensive review of what has been achieved already in the
exploration of novel materials, as there are a number of excellent groups that are
pursuing new materials and new device architectures. Many of these activities
could benefit from a framework that can be evaluated on simple applications. At
the same time, there is a considerable gap in our understanding of what it will
take to implement state-of-the-art applications on neuromorphic hardware in
general. To date, most hardware implementations have been rather specialized
to specific problems and current practice largely uses conventional hardware for
the execution of deep learning applications and large-scale parallel clusters with
accelerators for the development and training of deep neural networks. Moving
neuromorphic hardware out of the research phase into applications and end use
would be helpful. This would require advances that support training of the
device itself and that demonstrate performance above that of artificial neural
networks already implemented in conventional hardware. These improvements
are necessary both for power efficiency and for ultimate performance.

[14]

10. STEPS INVOLVED IN BUILDING A WELL-DEFINED
NEUROMORPHIC COMPUTING SYSTEM
We envision the following stages in the development of such a project:
1. Identify conceptual design of neuromorphic architectures
2. Identify devices needed to implement neuromorphic computing
3. Define properties needed for prototype constituent devices
4. Define materials properties needed
5. Identify major materials classes that satisfy the needed properties
6. Develop a deep understanding of the quantum materials used in these
applications
7. Build and test devices (e.g., synapse, soma, axon, dendrite)
8. Define and implement small systems, and to the extent possible,
integrate/demonstrate with appropriate research and development results in
programming languages, development and programming environments,
compilers, libraries, runtime systems, networking, data repositories, von
Neumann neuromorphic computing interfaces, etc.
9. Identify possible “accelerator” needs for intermediate steps in neuromorphic
computing (e.g., vision, sensing, data mining, event detection)
10. Integrate small systems for intermediate accelerators.
11. Integrate promising devices into end-to-end system experimental chips
(order 10 neurons, 100 synapses)
12. Scale promising end-to-end experiments to demonstration scale chips (order
100 neurons and 10,000 synapses)
13. Scale successful demonstration chips to system-scale implementations (order
of millions of neurons and billions of synapses)

[15]

11. CONCLUSION
The conclusions we derived from our study of neuromorphic systems are as
follows:
1. Creating the architectural design for neuromorphic computing requires an
integrative, interdisciplinary approach between computer scientists,
engineers, physicists, and materials scientists.
2. Creating a new computational system will require developing new system
architectures to accommodate all needed functionalities.
3. One or more reference architectures should be used to enable comparisons of
alternative devices and materials.
4. The basis for the devices to be used in these new computational systems
requires the development of novel nano- and meso-structured materials; this will
be accomplished by unlocking the properties of quantum materials based on
new materials physics.
5. The most promising materials require a fundamental understanding of strongly
correlated materials, understanding the formation and migration of ions, defects,
and clusters, developing novel spin-based devices, and/or discovering new
quantum functional materials.
6. Fully realizing the open opportunities requires designing systems and materials
that exhibit self- and external healing, three-dimensional reconstruction,
distributed power delivery, fault tolerance, co-location of memory and
processors, and multistate behavior, i.e., systems in which the present response
depends on past history and on multiple interacting state variables that define the
present state.
7. The development of a new brain-like computational system will not evolve in
a single step; it is important to implement well-defined intermediate steps that
give useful scientific and technological information.

[16]

REFERENCES
1. Furber, S. (2016). Large-scale neuromorphic computing systems. Journal of
Neural Engineering, 13(5), 051001.

2. Schuman, C. D., Potok, T. E., Patton, R. M., Birdwell, J. D., Dean, M. E., Rose,
G. S., & Plank, J. S. (2017). A survey of neuromorphic computing and neural
networks in hardware. arXiv preprint arXiv:1705.06963.

3. van de Burgt, Y., Melianas, A., Keene, S. T., Malliaras, G., & Salleo, A. (2018).
Organic electronics for neuromorphic computing. Nature Electronics, 1(7),
386-397.

4. Monroe, D. (2014). Neuromorphic computing gets ready for the (really) big
time.

5. Burr, G. W., Shelby, R. M., Sebastian, A., Kim, S., Kim, S., Sidler, S., ... &
Leblebici, Y. (2017). Neuromorphic computing using non-volatile
memory. Advances in Physics: X, 2(1), 89-124.

6. Torrejon, J., Riou, M., Araujo, F. A., Tsunegi, S., Khalsa, G., Querlioz, D., ... &
Grollier, J. (2017). Neuromorphic computing with nanoscale spintronic
oscillators. Nature, 547(7664), 428-431.

7. Roy, K., Jaiswal, A., & Panda, P. (2019). Towards spike-based machine
intelligence with neuromorphic computing. Nature, 575(7784), 607-617.

8. Esser, S. K., Appuswamy, R., Merolla, P., Arthur, J. V., & Modha, D. S. (2015).
Backpropagation for energy-efficient neuromorphic computing. Advances in
Neural Information Processing Systems, 28, 1117-1125.

9. Pfeil, T., Grübl, A., Jeltsch, S., Müller, E., Müller, P., Petrovici, M. A., ... &
Meier, K. (2013). Six networks on a universal neuromorphic computing
substrate. Frontiers in Neuroscience, 7, 11.

10. Marković, D., Mizrahi, A., Querlioz, D., & Grollier, J. (2020). Physics for
neuromorphic computing. Nature Reviews Physics, 2(9), 499-510.

[17]
