Overview of IAS Computer Function - Organization of the Von Neumann Machine and Harvard Architecture
• In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Princeton Institute for Advanced Study.
• The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
Von Neumann architecture
• Modern computers are based on the stored-program concept introduced by John von Neumann: program instructions and data are held in the same memory.
• During the fetch cycle, the next instruction may be taken from the IBR, or it can be obtained from memory by loading a word into the MBR and then transferring it to the IBR (MBR -> IBR). Once the instruction is available, as sketched below:
– the opcode of the instruction is loaded into the IR, and
– the address portion is loaded into the MAR.
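The following C fragment is a minimal sketch of this fetch step, not IAS source material: the memory size, register widths, and names such as fetch and ibr_full are illustrative assumptions. It follows the IAS word layout of two 20-bit instructions (8-bit opcode, 12-bit address) per 40-bit word.
```c
#include <stdint.h>
#include <stdbool.h>

#define MEM_WORDS 1024                 /* size illustrative */

static uint64_t memory[MEM_WORDS];     /* 40-bit IAS words held in 64-bit slots */
static uint64_t MBR;                   /* memory buffer register                */
static uint32_t IBR;                   /* instruction buffer register           */
static uint16_t MAR, PC;               /* memory address register, prog counter */
static uint8_t  IR;                    /* instruction register (opcode)         */
static bool     ibr_full = false;      /* does IBR hold a pending instruction?  */

void fetch(void)
{
    uint32_t instr;
    if (ibr_full) {                    /* take the buffered right-hand instruction */
        instr    = IBR;
        ibr_full = false;
    } else {                           /* otherwise fetch a fresh word from memory */
        MAR   = PC;
        MBR   = memory[MAR];                        /* memory -> MBR             */
        instr = (uint32_t)((MBR >> 20) & 0xFFFFF);  /* left 20-bit instruction   */
        IBR   = (uint32_t)(MBR & 0xFFFFF);          /* buffer the right half     */
        ibr_full = true;                            /* MBR -> IBR                */
        PC++;
    }
    IR  = (uint8_t)((instr >> 12) & 0xFF);          /* opcode -> IR              */
    MAR = (uint16_t)(instr & 0xFFF);                /* address portion -> MAR    */
}
```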
• The IAS computer had a total of 21 instructions, which can be grouped as follows:
– Data transfer: Move data between memory and ALU registers or between two ALU
registers.
– Unconditional branch: Normally, the control unit executes instructions in sequence from memory. This sequence can be changed by a branch instruction, which facilitates repetitive operations.
– Conditional branch: The branch can be made dependent on a condition, thus allowing
decision points.
– Arithmetic: Operations performed by the ALU.
– Address modify: Permits addresses to be computed in the ALU and then inserted into instructions stored in memory. This allows a program considerable addressing flexibility (see the sketch after this list).
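To make the "address modify" idea concrete, here is an illustrative C helper (the function name and layout assumptions are mine, not IAS documentation) that rewrites the 12-bit address field of an instruction already stored in memory, using the same bit layout as the fetch sketch above.
```c
#include <stdint.h>

/* Rewrite the 12-bit address field of one half of a stored 40-bit word.
 * Left instruction occupies bits 39-20, right instruction bits 19-0. */
void modify_address(uint64_t *word, int left_half, uint16_t new_addr)
{
    if (left_half) {
        /* clear bits 31..20 (left address field), then insert the new address */
        *word = (*word & ~((uint64_t)0xFFF << 20)) |
                ((uint64_t)(new_addr & 0xFFF) << 20);
    } else {
        /* clear bits 11..0 (right address field), then insert the new address */
        *word = (*word & ~(uint64_t)0xFFF) | (uint64_t)(new_addr & 0xFFF);
    }
}
```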
Basic operational concepts
• In most modern computers, a high-level statement such as C = A + B can be realized by the following instruction sequence:
– Load LOCA, R1
– Load LOCB, R2
– Add R1, R2, R3
• Execution procedure (see the sketch below)
– The first two Load instructions transfer the contents of memory locations LOCA and LOCB into processor registers R1 and R2, respectively.
– The Add instruction then adds the contents of R1 and R2 and places the sum into R3.
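Written as register-transfer steps in C, the sequence looks as follows; the data memory, register file, and the LOCA/LOCB addresses are illustrative assumptions, not part of any real instruction set.
```c
#include <stdint.h>

static int32_t data_mem[256];    /* small data memory, illustrative size  */
static int32_t R[8];             /* register file R0..R7, illustrative    */

enum { LOCA = 10, LOCB = 11 };   /* assumed addresses of operands A and B */

void c_equals_a_plus_b(void)
{
    R[1] = data_mem[LOCA];       /* Load LOCA, R1                        */
    R[2] = data_mem[LOCB];       /* Load LOCB, R2                        */
    R[3] = R[1] + R[2];          /* Add  R1, R2, R3  (R3 <- R1 + R2)     */
    /* A further Store instruction would write R3 back to location C;
       the slide's three-instruction sequence stops at the register sum. */
}
```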
Von Neumann bottleneck
• Whatever we do to enhance performance, we cannot get away from the fact that instructions are executed one at a time, sequentially, and that instructions and data must share a single path to memory.
• Both of these factors limit the performance of the CPU. This is commonly referred to as the 'Von Neumann bottleneck'.
• We can give a von Neumann processor more cache, more RAM, or faster components, but if real gains are to be made in CPU performance, a fundamental rethinking of CPU organization is required.
• Despite this limitation, the von Neumann architecture remains dominant; it is used in our PCs and even in supercomputers.
Harvard architecture
• The Harvard architecture has two separate memory spaces, dedicated to program code and to data respectively, with two corresponding address buses and two data buses for accessing the two memory spaces.
• A Harvard processor can therefore fetch an instruction and access data in parallel, as sketched below.
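The following C sketch pictures this separation under assumed names and sizes (instr_mem, data_mem, cpu_t are hypothetical): the two arrays stand in for the two memory spaces with their own buses, so an instruction fetch and a data access can overlap in the same cycle.
```c
#include <stdint.h>

#define IMEM_WORDS 1024   /* program memory size, illustrative */
#define DMEM_WORDS 1024   /* data memory size, illustrative    */

/* Separate memories with separate access paths (their own buses). */
static uint16_t instr_mem[IMEM_WORDS];
static uint8_t  data_mem[DMEM_WORDS];

typedef struct {
    uint16_t pc;          /* program counter, addresses instr_mem only */
    uint16_t fetched;     /* instruction word fetched this cycle       */
} cpu_t;

/* In one cycle a Harvard machine can fetch the next instruction over the
 * instruction buses while the current instruction accesses data memory
 * over the data buses. */
void cycle(cpu_t *cpu, uint16_t data_addr, uint8_t value)
{
    cpu->fetched = instr_mem[cpu->pc++];  /* instruction fetch           */
    data_mem[data_addr] = value;          /* data access, same cycle     */
}
```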
Features of Harvard Architecture