Overview of IAS Computer Function - Organization of The Von Neumann Machine and Harvard Architecture


2 Types of Computers
• Fixed Program Computers
– Their function is fixed and they cannot be reprogrammed, e.g. calculators.

• Stored Program Computers
– These can be programmed to carry out many different tasks; applications are stored on them, hence the name.
– This novel idea meant that a computer built with this architecture would be much easier to reprogram.
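The stored-program idea can be illustrated with a minimal sketch: the program itself is just data held in memory, so the machine can be retargeted simply by loading different words. The instruction encoding and operation names below are hypothetical, not taken from any real machine.

```python
# A "stored program" is just data in memory: the machine interprets
# whatever words it finds there, so loading new words reprograms it.
memory = [
    ("LOAD", 5),   # put the constant 5 into the accumulator
    ("ADD", 7),    # add 7 to the accumulator
    ("HALT", 0),   # stop execution
]

acc = 0   # accumulator
pc = 0    # program counter
while True:
    op, arg = memory[pc]   # instructions are fetched from memory like data
    pc += 1
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print(acc)  # prints 12
```

Replacing the three words in `memory` with a different sequence changes what the machine computes, with no change to the interpreter loop itself.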
IAS
• The most famous first-generation computer is the IAS computer.

• A fundamental design approach first implemented in the IAS computer is known as the stored-program concept.

• This idea is usually attributed to the mathematician John von Neumann.

• In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Institute for Advanced Study in Princeton.

• The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
Von Neumann architecture
• Modern computers are based on the stored-program concept introduced by John von Neumann.

• It is an architecture in which data and programs share memory, i.e., they are stored in the same memory block.

• The von Neumann processor performs fetch and execute cycles in a serial manner.

• Because data and instructions both reside in a single memory unit, a single set of buses is used by the CPU to access the memory.
• A computer architecture that uses a single memory unit in which both data and instructions are stored is known as the von Neumann architecture.
• Along with this, there is a single bus for memory access, an arithmetic unit, and a program control unit.
• The memory of the IAS consists of 4,096 storage locations, called words, of
40 binary digits (bits) each.
• Both data and instructions are stored there. Numbers are represented in binary
form, and each instruction is a binary code.
• Each number is represented by a sign bit and a 39-bit value.
• A word may alternatively contain two 20-bit instructions:
– each instruction consisting of an 8-bit operation code (opcode) specifying the operation to be performed and
– a 12-bit address designating one of the words in memory (numbered from 0 to 4,095).
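The word format above (two 20-bit instructions, each an 8-bit opcode plus a 12-bit address, packed into one 40-bit word) can be sketched with a little bit manipulation. The bit layout here, with the left instruction in the high half, is illustrative rather than a claim about the exact IAS encoding.

```python
OPCODE_BITS, ADDR_BITS = 8, 12
INSTR_BITS = OPCODE_BITS + ADDR_BITS   # 20 bits per instruction
WORD_BITS = 2 * INSTR_BITS             # 40-bit IAS word

def pack_word(left, right):
    """Pack two (opcode, address) pairs into one 40-bit word.

    Layout is illustrative: left instruction in the high 20 bits.
    """
    def pack(opcode, address):
        assert 0 <= opcode < (1 << OPCODE_BITS)    # opcode fits in 8 bits
        assert 0 <= address < (1 << ADDR_BITS)     # address fits in 12 bits
        return (opcode << ADDR_BITS) | address
    return (pack(*left) << INSTR_BITS) | pack(*right)

def unpack_word(word):
    """Split a 40-bit word back into two (opcode, address) pairs."""
    def unpack(instr):
        return instr >> ADDR_BITS, instr & ((1 << ADDR_BITS) - 1)
    left = word >> INSTR_BITS
    right = word & ((1 << INSTR_BITS) - 1)
    return unpack(left), unpack(right)

w = pack_word((0x01, 100), (0x05, 4095))
assert w < (1 << WORD_BITS)                # the pair fits in 40 bits
assert unpack_word(w) == ((0x01, 100), (0x05, 4095))
```

Note that the largest representable address, `(1 << 12) - 1 = 4095`, matches the 4,096-word memory described above.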
• The IAS operates by repetitively performing an instruction cycle.
• Each instruction cycle consists of two subcycles: a fetch cycle and an execute cycle.

• During the fetch cycle, the next instruction may be taken from the IBR, or it can be obtained from memory by loading a word into the MBR and then into the IBR (MBR -> IBR). Then:
– the opcode of the instruction is loaded into the IR and
– the address portion is loaded into the MAR.
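The fetch subcycle can be sketched as follows, using the register names from the text (MBR, IBR, IR, MAR). This is a simplified model under the assumption that each memory word holds a left and a right 20-bit instruction, with the right half buffered in the IBR; the instruction names are hypothetical.

```python
# Toy model of the IAS fetch subcycle. Each word holds two instructions:
# word 0 contains a hypothetical ("LOAD", 10) and ("ADD", 11).
memory = {0: (("LOAD", 10), ("ADD", 11))}

IBR = None   # instruction buffer register (holds the buffered right half)
PC = 0       # program counter (word address)

def fetch():
    """Return (IR, MAR) for the next instruction.

    Take the instruction from the IBR if one is buffered there;
    otherwise load a word via the MBR, send its left half to IR/MAR,
    and buffer its right half in the IBR (MBR -> IBR).
    """
    global IBR, PC
    if IBR is not None:              # right instruction already buffered
        opcode, address = IBR
        IBR = None
    else:
        MBR = memory[PC]             # MBR <- memory word
        (opcode, address), IBR = MBR # left half consumed, right half to IBR
        PC += 1
    return opcode, address           # opcode -> IR, address -> MAR

assert fetch() == ("LOAD", 10)       # first fetch reads memory
assert fetch() == ("ADD", 11)        # second fetch comes from the IBR
```

The second fetch needs no memory access at all, which is exactly why the IBR exists: one 40-bit memory read supplies two instructions.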
• The IAS computer had a total of 21 instructions,
• These can be grouped as follows:
– Data transfer: Move data between memory and ALU registers or between two ALU
registers.
– Unconditional branch: Normally, the control unit executes instructions in sequence from memory. This sequence can be changed by a branch instruction, which facilitates repetitive operations.
– Conditional branch: The branch can be made dependent on a condition, thus allowing decision points.
– Arithmetic: Operations performed by the ALU.
– Address modify: Permits addresses to be computed in the ALU and then inserted into instructions stored in memory. This allows a program considerable addressing flexibility.
Basic operational concepts
• In most modern computers, an operation such as C = a + b is realized by a sequence of instructions:
– Load LOCA, R1
– Load LOCB, R2
– Add R1, R2, R3
• Execution procedure
– The first instruction transfers the contents of memory location LOCA into processor register R1, and the second transfers LOCB into R2.
– The third instruction adds the contents of R1 and R2 and places the sum into R3.
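The three-instruction sequence above can be sketched on a toy register machine; the `load` and `add` helpers and the memory contents are illustrative, not any real instruction set.

```python
# Toy register machine for the sequence: Load LOCA, R1 / Load LOCB, R2 /
# Add R1, R2, R3 — i.e., C = a + b with a and b held in memory.
memory = {"LOCA": 2, "LOCB": 3}   # hypothetical contents of the two locations
regs = {}                          # processor registers

def load(loc, reg):
    """Load LOC, Rn: transfer a memory location into a register."""
    regs[reg] = memory[loc]

def add(r1, r2, r3):
    """Add R1, R2, R3: place the sum of R1 and R2 into R3."""
    regs[r3] = regs[r1] + regs[r2]

load("LOCA", "R1")     # R1 <- contents of LOCA
load("LOCB", "R2")     # R2 <- contents of LOCB
add("R1", "R2", "R3")  # R3 <- R1 + R2

print(regs["R3"])  # prints 5
```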
Von Neumann bottleneck
• Whatever we do to enhance performance, we cannot get away from the fact that instructions are fetched and executed one at a time, sequentially.
• Both of these factors hold back the performance of the CPU. This is commonly referred to as the 'von Neumann bottleneck'.
• We can provide a von Neumann processor with more cache, more RAM, or faster components, but if real gains are to be made in CPU performance then a fundamental rethink of CPU organization needs to take place.
• This architecture is very important and is used in our PCs and even in supercomputers.
Harvard architecture 
• The Harvard architecture has two separate memory spaces
dedicated to program code and to data, respectively, two
corresponding address buses, and two data buses for accessing
two memory spaces.
• The Harvard processor offers fetching and executions in
parallel.
Features of Harvard Architecture

• Separate data and instruction paths are available.
• Fetching of data and instructions can be done simultaneously.
• The two memories can use different cell sizes, making effective use of resources.
• Memory bandwidth is greater and more predictable, since instructions and data have separate memories.
• There is less chance of corruption, since data and instructions are transferred via different buses.
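The separate memory spaces, with possibly different cell sizes, can be sketched as follows. The 16-bit instruction encoding (4-bit opcode, 12-bit address) and the operation numbering are hypothetical.

```python
# Harvard-style sketch: separate instruction and data memories that may
# use different word sizes — here 16-bit instructions and 8-bit data cells.
instr_mem = [0x1002, 0x2003]          # instruction memory (16-bit words)
data_mem = bytearray([7, 7, 5, 9])    # data memory (8-bit cells)

def decode(word):
    """Split a 16-bit instruction into a 4-bit opcode and 12-bit address."""
    return word >> 12, word & 0x0FFF

# Because the two memories have their own buses, a fetch from instr_mem
# and an access to data_mem could proceed in the same cycle; this serial
# loop only models the two separate address spaces.
acc = 0
for word in instr_mem:
    op, addr = decode(word)
    if op == 1:        # hypothetical LOAD: acc <- data_mem[addr]
        acc = data_mem[addr]
    elif op == 2:      # hypothetical ADD: acc <- acc + data_mem[addr]
        acc += data_mem[addr]

print(acc)  # prints 14 (data_mem[2] + data_mem[3] = 5 + 9)
```

A von Neumann machine would place `instr_mem` and `data_mem` in one address space behind one bus; keeping them separate is what allows the simultaneous fetch described above.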
Comparison of von Neumann and Harvard Architecture
• von Neumann: a single memory holds both instructions and data, accessed over a single set of buses; fetch and execute proceed serially.
• Harvard: separate instruction and data memories, each with its own address and data buses; fetch and execute can proceed in parallel.