CSC 343 Computer Architecture 2

COMPUTER ARCHITECTURE

INTRODUCTION
COMPUTER ARCHITECTURE: deals with the design of computers, data storage devices, and
networking components that store and run programs, transmit data, and drive interactions
between computers, across networks, and with users. Computer architects use parallelism and
various strategies for memory organization to design computing systems with very high
performance. Computer architecture requires strong communication between computer scientists
and computer engineers, since they both focus fundamentally on hardware design.
At its most fundamental level, a computer consists of a control unit, an arithmetic and logic
unit (ALU), a memory unit, and input/output (I/O) controllers. The ALU performs simple
addition, subtraction, multiplication, division, and logic operations, such as OR and AND. The
memory stores the program’s instructions and data. The control unit fetches data and instructions
from memory and uses operations of the ALU to carry out those instructions using that data.
(The control unit and ALU together are referred to as the central processing unit [CPU].) When
an input or output instruction is encountered, the control unit transfers the data between the
memory and the designated I/O controller. The operational speed of the CPU primarily
determines the speed of the computer as a whole. All of these components—the control unit, the
ALU, the memory, and the I/O controllers—are realized with transistor circuits.

Computers also have another level of memory called a cache, a small, extremely fast (compared
with the main memory, or random-access memory [RAM]) unit that can be used to store
information that is urgently or frequently needed. Current research includes cache design and
algorithms that can predict what data is likely to be needed next and preload it into the cache for
improved performance.
I/O controllers connect the computer to specific input devices (such as keyboards and touch
screen displays) for feeding information to the memory, and output devices (such as printers and
displays) for transmitting information from the memory to users. Additional I/O controllers
connect the computer to a network via ports that provide the conduit through which data flows
when the computer is connected to the Internet.
Linked to the I/O controllers are secondary storage devices, such as a disk drive, that are slower
and have a larger capacity than main or cache memory. Secondary storage devices are used for
maintaining permanent data. They can be either permanently or temporarily attached to the
computer, in the form of a compact disc (CD), a digital video disc (DVD), or a memory stick,
also called a flash drive.
FUNDAMENTALS of Computer Design
Four lines of evolution have emerged from the first computers (definitions are very loose, and in
many cases the borders between the different classes are blurring):
1. Mainframes: large computers that can support very many users while delivering great
computing power. It is mainly in mainframes where most of the innovations (both in architecture
and in organization) have been made.
2. Minicomputers: adopted many of the mainframe techniques while being designed to sell for
less, satisfying the computing needs of smaller groups of users. The minicomputer class
improved at the fastest pace (since 1965, when DEC introduced the first minicomputer, the
PDP-8), mainly due to the evolution of integrated-circuit technology (the first IC appeared in
1958).
3. Supercomputers: designed for scientific applications, they are the most expensive computers
(over one million dollars), processing is usually done in batch mode, for reasons of performance.
4. Microcomputers: have appeared in the microprocessor era (the first microprocessor, Intel
4004, was introduced in 1971). The term micro refers only to physical dimensions, not to
computing performance. A typical microcomputer (either a PC or a workstation) nicely fits on a
desk. Microcomputers are a direct product of technological advances: faster CPUs,
semiconductor memories, etc. Over time, many of the concepts previously used in mainframes
and minicomputers have become commonplace in microcomputers.
For many years the evolution of computers was concerned with the problem of object code
compatibility. A new architecture had to be, at least partly, compatible with older ones. Older
programs (“the dusty deck”) had to run without changes on the new machines. A dramatic
example is the IBM PC architecture: launched in 1981, it proved so successful that further
developments had to conform with the first release, despite the flaws that became apparent
within a couple of years.
Assembly language is no longer the language in which new applications are written, although
the most sensitive parts continue to be written in assembly language; this shift is due to advances
in languages and compiler technology.

WHAT DRIVES THE WORK OF A COMPUTER DESIGNER


Designing a computer is a challenging task. It involves software (at least at the level of designing
the instruction set), and hardware as well at all levels: functional organization, logic design,
implementation. Implementation itself deals with designing/specifying ICs, packaging, noise,
power, cooling etc.
It would be a terrible mistake to disregard one aspect or another of computer design; rather, the
computer designer has to design a machine that is optimal across all of the mentioned levels.
Such an optimum cannot be found without familiarity with a wide range of technologies, from
compiler and operating system design to logic design and packaging.
Architecture is the art and science of building. Vitruvius, in the 1st century AD, said that
architecture was a building that incorporated utilitas, firmitas and venustas, in English terms
commodity, firmness and delight. This definition recognizes that architecture embraces
functional, technological and aesthetic aspects.
Thus a computer architect has to specify the performance requirements of various parts of a
computer system, to define the interconnections between them, and to keep it harmoniously
balanced. The computer architect's job is more than designing the Instruction Set, as it has been
understood for many years. The more an architect is exposed to all aspects of computer design,
the more efficient she will be.
• The instruction set architecture refers to what the programmer sees as the machine's instruction
set. The instruction set is the boundary between the hardware and the software; most of the
decisions concerning the instruction set affect the hardware, and the converse is also true: many
hardware decisions may beneficially or adversely affect the instruction set.
• The implementation of a machine refers to the logical and physical design techniques used to
implement an instance of the architecture. It is possible to have different implementations of the
same architecture, in the same way that there are different ways to build a house from the same
plans using other materials and techniques. The implementation has two aspects:
• The organization refers to the logical aspects of an implementation; in other words, it refers to
the high-level aspects of the design: CPU design, memory system, bus structure(s), etc.
• The hardware refers to the specifics of an implementation. Detailed logic design and packaging
are included here.

REGISTERS
What is Register Transfer?
A Register is a group of flip-flops with each flip-flop capable of storing one bit of information.
An n-bit register has a group of n flip-flops and is capable of storing binary information of n bits.
A register consists of a group of flip-flops and gates. The flip-flops hold the binary information
and gates control when and how new information is transferred into a register. Various types of
registers are available commercially. The simplest register is one that consists of only flip-flops
with no external gates.
Registers also define the storage area that holds data and instructions. To move data and
instructions from one register to another, from memory to a register, or from memory to
memory, the register transfer approach is used. Register Transfer Language (RTL, otherwise
called register transfer notation) is a powerful high-level method of describing the architecture of
a circuit. VHDL code and schematics are often created from RTL. Registers are used in the
transmission of data and instructions between memory and processors to implement particular
tasks.

Register Transfer Language


The symbolic notation used to describe the micro-operation transfers amongst registers is called
Register transfer language.
The term register transfer means the availability of hardware logic circuits that can perform a
stated micro-operation and transfer the result of the operation to the same or another register.
The word language is borrowed from programmers, who apply this term to programming
languages. A programming language is a procedure for writing symbols to specify a given
computational process.
The data transfer from one register to another is designated in symbolic form by means of a
replacement operator.
The statement is
R2←R1

It indicates a transfer of the content of register R1 into register R2, that is, a replacement of the
content of R2 by the content of R1. The content of the source register R1 does not change after
the transfer.
A statement that specifies a register transfer implies that circuits are available from the outputs
of the source register to the inputs of the destination register and that the destination register has
a parallel load capability.
We may want the transfer to occur only under a predetermined control condition. This can be
shown by means of an if-then statement.

If (P = 1) then (R2 ← R1)


where P is a control signal generated in the control section. A control function is a Boolean variable
that is equal to 1 or 0. The control function is included in the statement as follows:

P: R2 ← R1
The control condition is terminated with a colon. It represents the specification that the transfer
operation is implemented by the hardware only if P = 1. Each statement written in a register
transfer notation indicates a hardware structure for executing the transfer.
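As a purely illustrative sketch, the conditional transfer P: R2 ← R1 can be mimicked in Python. The Register class, the 8-bit width, and the function names below are invented for the example; they are not part of RTL or of any real hardware description language.

```python
# A minimal sketch (not real hardware) of the conditional transfer  P: R2 <- R1.
# Register width and names are illustrative assumptions.

class Register:
    def __init__(self, n_bits, value=0):
        self.n_bits = n_bits
        self.value = value & ((1 << n_bits) - 1)   # keep only n bits

    def load(self, value):
        # Parallel load: all n bits are replaced at once.
        self.value = value & ((1 << self.n_bits) - 1)

def clock_edge(P, R1, R2):
    """On a positive clock transition, transfer R1 into R2 only if P = 1."""
    if P == 1:
        R2.load(R1.value)        # R2 <- R1; R1 itself is unchanged

R1, R2 = Register(8, 0b01010101), Register(8, 0)
clock_edge(P=1, R1=R1, R2=R2)
print(bin(R2.value))             # 0b1010101 -- the content of R1 was copied into R2
```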
A block diagram can illustrate the transfer from R1 to R2. The n outputs of register R1 are
linked to the n inputs of register R2. The letter n can denote any number of bits for the register;
it is replaced by an actual number when the width of the register is known.

Register R2 has a load input that is activated by the control variable P. It is assumed that the
control variable is synchronized with the same clock as the one applied to the register.
As a timing diagram would show, P is activated in the control section by the rising edge of a
clock pulse at time t. The next positive transition of the clock at time t + 1 finds the load
input active, and the data inputs of R2 are then loaded into the register in parallel. P may go back
to 0 at time t + 1; otherwise, the transfer will occur with every clock pulse transition while P
remains active.

The clock is not included as a variable in the register transfer statements. It is assumed that all
transfers occur during a clock edge transition. Although the control condition P becomes active
just after time t, the actual transfer does not occur until the register is triggered by the next
positive transition of the clock at time t + 1.

The Following are some commonly used registers:


1. Accumulator: This is the most common register, used to store data taken out from the
memory.
2. General Purpose Registers: These are used to store data and intermediate results during
program execution. They can be accessed via assembly programming.
3. Special Purpose Registers: Users do not access these registers. They are used by the
computer system itself:
o MAR: the Memory Address Register holds the address of the memory location to be
accessed.
o MBR: the Memory Buffer Register stores instructions and data received from, or to be
sent to, the memory.
o PC: Program Counter points to the next instruction to be executed.

o IR: Instruction Register holds the instruction to be executed.

Micro-Operations
The operations executed on data stored in registers are called micro-operations. A micro-
operation is an elementary operation performed on the information stored in one or more
registers.

Types of Micro-Operations
The micro-operations in digital computers are of 4 types:
1. Register transfer micro-operations transfer binary information from one register to
another.
2. Arithmetic micro-operations perform arithmetic operations on numeric data stored in
registers.
3. Logic micro-operations perform bit manipulation operation on non-numeric data stored in
registers.
4. Shift micro-operations perform shift operations on data stored in registers.

Arithmetic Micro-Operations
Some of the basic micro-operations are addition, subtraction, increment and decrement.
Add Micro-Operation
It is defined by the following statement:
R3 ← R1 + R2
The above statement instructs that the contents of register R1 be added to the contents of
register R2 and the sum transferred to register R3.
Subtract Micro-Operation
Let us again take an example:
R3 ← R1 + R2' + 1
In the subtract micro-operation, instead of using a minus operator we take the 1's complement of
the subtrahend and add 1 to it, i.e. R1 - R2 is equivalent to R3 ← R1 + R2' + 1
Increment/Decrement Micro-Operation
Increment and decrement micro-operations are generally performed by adding and subtracting 1
to and from the register respectively.
R1 ← R1 + 1
R1 ← R1 - 1
Symbolic Designation      Description
R3 ← R1 + R2              Contents of R1 + R2 transferred to R3.
R3 ← R1 - R2              Contents of R1 - R2 transferred to R3.
R2 ← (R2)'                Complement the contents of R2.
R2 ← (R2)' + 1            2's complement the contents of R2.
R3 ← R1 + (R2)' + 1       R1 + the 2's complement of R2 (subtraction).
R1 ← R1 + 1               Increment the contents of R1 by 1.
R1 ← R1 - 1               Decrement the contents of R1 by 1.
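The subtract entry in the table, R3 ← R1 + (R2)' + 1, can be illustrated with a short Python sketch; the 8-bit register width below is an assumption made only for the example.

```python
# Sketch: subtraction by adding the 2's complement, assuming 8-bit registers.
N = 8
MASK = (1 << N) - 1              # 0xFF for 8 bits

def ones_complement(x):
    return ~x & MASK             # R2'

def subtract(r1, r2):
    # R3 <- R1 + R2' + 1  (any carry out of the top bit is discarded)
    return (r1 + ones_complement(r2) + 1) & MASK

print(subtract(9, 5))            # 4
print(subtract(5, 9))            # 252, i.e. -4 in 8-bit 2's complement form
```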

Logic Micro-Operations
These are binary micro-operations performed on the bits stored in the registers. These operations
consider each bit separately and treat them as binary variables.
Let us consider the X-OR micro-operation with the contents of two registers R1 and R2.
P: R1 ← R1 X-OR R2

In the above statement we have also included a Control Function.

Assume that each register has 3 bits. Let the content of R1 be 010 and the content of R2 be 100.
After the X-OR micro-operation, R1 becomes 110.
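A minimal Python sketch of the same example follows; the 3-bit width matches the assumption above, and the variable names are purely illustrative.

```python
# Logic micro-operation: R1 <- R1 XOR R2 on 3-bit register contents.
N = 3
MASK = (1 << N) - 1

R1, R2 = 0b010, 0b100
P = 1                            # control function

if P == 1:
    R1 = (R1 ^ R2) & MASK        # R1 <- R1 X-OR R2

print(format(R1, '03b'))         # 110
```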

Shift Micro-Operations
These are used for serial transfer of data. That means we can shift the contents of the register to
the left or right. In the shift left operation the serial input transfers a bit to the right most position
and in shift right operation the serial input transfers a bit to the left most position.

There are three types of shifts as follows:

a) Logical Shift
It transfers 0 through the serial input. The symbol "shl" is used for logical shift left and "shr" is
used for logical shift right.
R1 ← shl R1
R1 ← shr R1

The register symbol must be the same on both sides of the arrow.


b) Circular Shift
This circulates or rotates the bits of register around the two ends without any loss of data or
contents. In this, the serial output of the shift register is connected to its serial input. "cil" and
"cir" is used for circular shift left and right respectively.
c) Arithmetic Shift
This shifts a signed binary number to the left or right. An arithmetic shift left multiplies a signed
binary number by 2, and an arithmetic shift right divides the number by 2. Arithmetic shift
micro-operations leave the sign bit unchanged because the sign of the number remains the same
when it is multiplied or divided by 2.
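The three kinds of shift can be sketched in Python on an assumed 8-bit register; the function names shl, shr, cil, cir and ashr simply mirror the symbols used above and are not part of any standard library.

```python
# Sketch of shift micro-operations on an 8-bit register (width is an assumption).
N = 8
MASK = (1 << N) - 1

def shl(x):  return (x << 1) & MASK                     # logical shift left: 0 enters on the right
def shr(x):  return x >> 1                              # logical shift right: 0 enters on the left
def cil(x):  return ((x << 1) | (x >> (N - 1))) & MASK  # circular shift left
def cir(x):  return ((x >> 1) | (x << (N - 1))) & MASK  # circular shift right
def ashr(x):                                            # arithmetic shift right keeps the sign bit
    sign = x & (1 << (N - 1))
    return (x >> 1) | sign

x = 0b10010110
print(format(shl(x), '08b'))     # 00101100
print(format(shr(x), '08b'))     # 01001011
print(format(cil(x), '08b'))     # 00101101
print(format(cir(x), '08b'))     # 01001011
print(format(ashr(x), '08b'))    # 11001011
```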

Arithmetic Logical Unit


Instead of having individual registers performing the micro-operations, a computer system
provides a number of registers connected to a common unit called the Arithmetic Logic Unit
(ALU). The ALU is one of the most important units inside the CPU of a computer; all the
logical and mathematical operations of the computer are performed here. The contents of
specific registers are placed at the inputs of the ALU, the ALU performs the given operation,
and the result is then transferred to the destination register.

Data Representation
• Data refers to the symbols that represent people, events, things, and ideas. Data can be a name,
a number, the colors in a photograph, or the notes in a musical composition.
• Data Representation refers to the form in which data is stored, processed, and transmitted.
• Devices such as smartphones, iPods, and computers store data in digital formats that can be
handled by electronic circuitry.
Data representation can also refer to the method used to represent data in a form that can be
processed by a computer. It involves encoding information into a format suitable for storage,
transmission, or manipulation. This can include various forms such as numeric, textual, graphic,
or multimedia formats, each with its own encoding schemes, structures, and rules that enable
computers to interpret and work with the data effectively.

Digitization is the process of converting information, such as text, numbers, photos, or music,
into digital data that can be manipulated by electronic devices.
• The Digital Revolution has evolved through four phases, beginning with big, expensive,
standalone computers and progressing to today’s digital world in which small, inexpensive
digital devices are everywhere.
The 0s and 1s used to represent digital data are referred to as binary digits; from this term we
get the word bit, which stands for binary digit.
• A bit is a 0 or 1 used in the digital representation of data.
• A digital file, usually referred to simply as a file, is a named collection of data that exists on a
storage medium, such as a hard disk, CD, DVD, or flash drive.
Numeric data consists of numbers that can be used in arithmetic operations.
• Digital devices represent numeric data using the binary number system, also called base 2.
• The binary number system has only two digits: 0 and 1. No numeral like 2 exists in the system,
so the number “two” is represented in binary as 10 (pronounced “one zero”).
Numeric representation also utilizes binary digits (0s and 1s) to represent numbers, employing
formats like integers (whole numbers) or floating-point numbers (decimal numbers). Different
data types allocate varying amounts of memory for numeric values based on precision and range
requirements.
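For instance, Python's built-in base conversions show the same idea (illustrative only):

```python
# Binary (base 2) representation of small numbers.
print(bin(2))           # '0b10'  -- the number "two" is written 10 in binary
print(bin(5))           # '0b101'
print(int('101', 2))    # 5       -- converting a binary string back to decimal
```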

Representing Text
• Character data is composed of letters, symbols, and numerals that are not used in calculations.
Examples of character data include your name, address, and hair color. • Character data is
commonly referred to as “text.”
Digital devices employ several types of codes to represent character data, including ASCII,
Unicode, and their variants. • ASCII (American Standard Code for Information Interchange,
pronounced “ASK ee”) requires seven bits for each character. • The ASCII code for an uppercase
A is 1000001.
Extended ASCII is a superset of ASCII that uses eight bits for each character. • For example,
Extended ASCII represents the uppercase letter A as 01000001. • Using eight bits instead of
seven bits allows Extended ASCII to provide codes for 256 characters.
Unicode (pronounced “YOU ni code”) uses sixteen bits and provides codes for 65,000 characters.
• This is a bonus for representing the alphabets of multiple languages.
• UTF-8 is a variable-length coding scheme that uses seven bits for common ASCII characters
but uses sixteen-bit Unicode as necessary.
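A small Python illustration of these character codes (the specific characters chosen are just examples):

```python
# 'A' has code 65: 1000001 in seven bits (ASCII), 01000001 in eight bits (Extended ASCII).
print(ord('A'))                     # 65
print(format(ord('A'), '07b'))      # 1000001
print(format(ord('A'), '08b'))      # 01000001

# UTF-8 is variable length: ASCII characters take 1 byte, other characters take more.
print(len('A'.encode('utf-8')))     # 1
print(len('é'.encode('utf-8')))     # 2
```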
Bits and Bytes
• All of the data stored and transmitted by digital devices is encoded as bits.
• Terminology related to bits and bytes is extensively used to describe storage capacity and
network access speed.
• The word bit, an abbreviation for binary digit, can be further abbreviated as a lowercase b.
• A group of eight bits is called a byte and is usually abbreviated as an uppercase B.
When reading about digital devices, you’ll frequently encounter references such as 90 kilobits
per second, 1.44 megabytes, 2.8 gigahertz, and 2 terabytes. Kilo, mega, giga, tera, and similar
terms are used to quantify digital data.
• Use bits for data rates, such as Internet connection speeds and movie download speeds.
• Use bytes for file sizes and storage capacities.
• 104 KB: Kilobyte (KB or Kbyte) is often used when referring to the size of small computer files.
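As a rough worked example using the figures quoted above (and assuming decimal prefixes, i.e. kilo = 1,000 and mega = 1,000,000), the time to move a 1.44-megabyte file over a 90-kilobit-per-second connection can be estimated as follows:

```python
# Rough transfer-time estimate: bytes for file sizes, bits for data rates.
file_size_bytes  = 1.44 * 1_000_000       # a 1.44-megabyte file
file_size_bits   = file_size_bytes * 8    # bytes -> bits (8 bits per byte)
speed_bits_per_s = 90 * 1_000             # a 90 kilobit-per-second connection

print(file_size_bits / speed_bits_per_s)  # 128.0 seconds to transfer
```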

Audio Representation: Audio data is represented through formats like MP3, WAV, or MIDI. Digital
audio involves encoding sound waves into digital signals, enabling storage, transmission, and
reproduction of audio information.
Image Representation: Images are stored using formats like JPEG, PNG, or GIF. Each pixel's color and
intensity are encoded to form a visual representation, allowing storage and display of images in various
sizes and resolutions.
Video Representation: Moving images are encoded into formats like MP4, AVI, or MKV, which store
sequences of frames as well as audio data. Video representation involves compressing and storing frames
to enable playback.
Boolean Representation: Utilizes binary digits (0s and 1s) to represent logical values such as true or false,
yes or no, or on or off. Boolean logic is essential in computer programming and circuit design. Each
method serves specific purposes and comes with its own encoding rules and formats, facilitating the
manipulation, storage, and interpretation of diverse types of data by computers and digital devices.
CHARACTER SETS. Character sets are collections of characters with corresponding numerical
representations used by computers to encode text. Here are a few common character sets:
ASCII (American Standard Code for Information Interchange): A standard character encoding that
represents English characters using 7 or 8 bits, covering letters, digits, punctuation, and control
characters.
Unicode: A more extensive character set that supports multiple languages and symbols worldwide. It
includes over a million characters and assigns each a unique numeric value, allowing representation of
various scripts, emojis, and symbols.
ISO-8859: A series of character encodings designed for different languages and regions, each covering a
specific subset of characters beyond ASCII.
UTF-8 (Unicode Transformation Format 8-bit): A variable-width character encoding capable of
representing all Unicode characters. It's backward compatible with ASCII and supports multilingual
text efficiently.
EBCDIC (Extended Binary Coded Decimal Interchange Code): Primarily used in IBM mainframe
computers, representing characters using 8 bits and covering various languages and symbols.
These character sets differ in the range of characters they support, the number of bits used for each
character, and their compatibility with different languages and systems. Unicode, especially UTF-8, has
become increasingly prevalent due to its broad support for diverse languages and symbols.

Data Compression
• To reduce file size and transmission times, digital data can be compressed. • Data compression
refers to any technique that recodes the data in a file so that it contains fewer bits. • Compression
is commonly referred to as “zipping.”
Compression techniques are divided into two categories: lossless and lossy.
• Lossless compression provides a way to compress data and reconstitute it into its original state;
the uncompressed data is exactly the same as the original data.
• Lossy compression throws away some of the original data during the compression process; the
uncompressed data is not exactly the same as the original.
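A small sketch of lossless compression using Python's standard zlib module; the sample data is invented and deliberately repetitive so that it compresses well.

```python
# Lossless compression round trip with the built-in zlib module.
import zlib

original   = b"AAAAABBBBBCCCCC" * 100     # highly repetitive data compresses well
compressed = zlib.compress(original)
restored   = zlib.decompress(compressed)

print(len(original), len(compressed))     # the compressed version uses far fewer bytes
print(restored == original)               # True: lossless means an exact reconstruction
```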

The operation of a computer, once a program and some data have been loaded into RAM, takes
place as follows. The first instruction is transferred from RAM into the control unit and
interpreted by the hardware circuitry. For instance, suppose that the instruction is a string of bits
that is the code for LOAD 10. This instruction loads the contents of memory location 10 into the
ALU. The next instruction, say ADD 15, is fetched. The control unit then loads the contents of
memory location 15 into the ALU and adds it to the number already there. Finally, the
instruction STORE 20 would store that sum into location 20. At this level, the operation of a
computer is not much different from that of a pocket calculator.
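The sequence above can be mimicked with a toy simulation; the memory contents and the instruction spellings (LOAD/ADD/STORE) are assumptions made only for illustration.

```python
# Toy simulation of the LOAD 10 / ADD 15 / STORE 20 sequence described above.
memory  = {10: 7, 15: 35, 20: 0}          # data locations (contents are invented)
program = [("LOAD", 10), ("ADD", 15), ("STORE", 20)]

alu = 0                                   # value currently held in the ALU
for opcode, address in program:           # the control unit fetches one instruction at a time
    if opcode == "LOAD":
        alu = memory[address]
    elif opcode == "ADD":
        alu = alu + memory[address]
    elif opcode == "STORE":
        memory[address] = alu

print(memory[20])                         # 42 -- the sum of locations 10 and 15
```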

In general, programs are not just lengthy sequences of load, store, and arithmetic operations.
Most importantly, computer languages include conditional instructions—essentially, rules that
say, “If memory location n satisfies condition a, do instruction number x next, otherwise do
instruction y.” This allows the course of a program to be determined by the results of previous
operations—a critically important ability.

Finally, programs typically contain sequences of instructions that are repeated a number of times
until a predetermined condition becomes true. Such a sequence is called a loop. For example, a
loop would be needed to compute the sum of the first n integers, where n is a value stored in a
separate memory location. Computer architectures that can execute sequences of instructions,
conditional instructions, and loops are called “Turing complete,” which means that they can
carry out the execution of any algorithm that can be defined. Turing completeness is a
fundamental and essential characteristic of any computer organization.
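For example, such a loop can be sketched in Python; the value of n below is a stand-in for the value stored in a separate memory location.

```python
# Summing the first n integers with a loop.
n = 10          # stand-in for a value held elsewhere in memory
total = 0
i = 1
while i <= n:   # repeat until the predetermined condition (i > n) becomes true
    total += i
    i += 1
print(total)    # 55
```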

Logic design is the area of computer science that deals with the design of electronic circuits
using the fundamental principles and properties of logic to carry out the operations of the control
unit, the ALU, the I/O controllers, and other hardware. Each logical function (AND, OR, and
NOT) is realized by a particular type of device called a gate. For example, the addition circuit of
the ALU has inputs corresponding to all the bits of the two numbers to be added and outputs
corresponding to the bits of the sum. The arrangement of wires and gates that link inputs to
outputs is determined by the mathematical definition of addition. The design of the control unit
provides the circuits that interpret instructions. Due to the need for efficiency, logic design must
also optimize the circuitry to function with maximum speed and a minimum number of gates
and circuits.
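As an illustration of building addition from logic gates, the sketch below chains a 1-bit full adder (made only of AND, OR and XOR operations) into a 4-bit ripple-carry adder; the 4-bit width is an assumption made for the example.

```python
# A 1-bit full adder built from AND, OR and XOR "gates", chained into a 4-bit adder.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                                   # sum bit
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)  # carry bit
    return s, carry_out

def add_4bit(x, y):
    carry, result = 0, 0
    for i in range(4):                                     # from bit 0 (LSB) to bit 3 (MSB)
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(add_4bit(0b0101, 0b0011))                            # (8, 0): 5 + 3 = 8 with no carry out
```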

An important area related to architecture is the design of microprocessors, which are complete
CPUs—control unit, ALU, and memory—on a single integrated circuit chip. Additional memory
and I/O control circuitry are linked to this chip to form a complete computer. These thumbnail-
sized devices contain millions of transistors that implement the processing and memory units of
modern computers.

Von Neumann Architecture


Von Neumann architecture was first published by John von Neumann in 1945.
His computer architecture design consists of a Control Unit, Arithmetic and Logic Unit (ALU),
Memory Unit, Registers and Inputs/Outputs.
Von Neumann architecture is based on the stored-program computer concept, where instruction
data and program data are stored in the same memory. This design is still used in
most computers produced today.

Central Processing Unit (CPU)


The Central Processing Unit (CPU) is the electronic circuit responsible for executing the
instructions of a computer program.
It is sometimes referred to as the microprocessor or processor.
The CPU contains the ALU, CU and a variety of registers.
Registers
Registers are high speed storage areas in the CPU. All data must be stored in a register before it
can be processed.

MAR (Memory Address Register): Holds the memory location of data that needs to be accessed.
MDR (Memory Data Register): Holds data that is being transferred to or from memory.
AC (Accumulator): Where intermediate arithmetic and logic results are stored.
PC (Program Counter): Contains the address of the next instruction to be executed.
CIR (Current Instruction Register): Contains the current instruction during processing.

Arithmetic and Logic Unit (ALU)


The ALU allows arithmetic (add, subtract etc) and logic (AND, OR, NOT etc) operations to be
carried out.

Control Unit (CU)


The control unit controls the operation of the computer’s ALU, memory and input/output
devices, telling them how to respond to the program instructions it has just read and interpreted
from the memory unit.
The control unit also provides the timing and control signals required by other computer
components.

Buses
Buses are the means by which data is transmitted from one part of a computer to another,
connecting all major internal components to the CPU and memory.
A standard CPU system bus is comprised of a control bus, data bus and address bus.
Address Bus: Carries the addresses of data (but not the data) between the processor and memory.
Data Bus: Carries data between the processor, the memory unit and the input/output devices.
Control Bus: Carries control signals/commands from the CPU (and status signals from other
devices) in order to control and coordinate all the activities within the computer.

Memory Unit
The memory unit consists of RAM, sometimes referred to as primary or main memory. Unlike a
hard drive (secondary memory), this memory is fast and also directly accessible by the CPU.
RAM is split into partitions. Each partition consists of an address and its contents (both
in binary form).
The address will uniquely identify every location in the memory.
Loading data from permanent memory (hard drive), into the faster and directly accessible
temporary memory (RAM), allows the CPU to operate much quicker.

Hierarchy in Computer Architecture


The design of a computer system uses a processor together with a large number of memory
devices. However, the main problem is that these parts are expensive, so the memory
organization of the system is done through a memory hierarchy. It has several levels of memory
with different performance rates, but all of them serve one purpose: to reduce the access time.
The memory hierarchy was developed based on the behavior of programs. This section gives an
overview of the memory hierarchy in computer architecture.

What is Memory Hierarchy?


The memory in a computer can be divided into five levels based on speed as well as use.
The processor can move from one level to another based on its requirements. The five levels
are registers, cache, main memory, magnetic disks, and magnetic tapes. The first three levels are
volatile memories, which means they automatically lose their stored data when there is no
power, whereas the last two levels are non-volatile, which means they store the data permanently.
A memory element is a set of storage devices that stores binary data in the form of bits. In
general, memory storage can be classified into two categories: volatile and non-volatile.

Memory Hierarchy in Computer Architecture


The memory hierarchy design in a computer system mainly includes different storage devices.
Most computers have extra storage built in to run more powerfully beyond the main memory
capacity. The memory hierarchy is usually drawn as a hierarchical pyramid of computer memory.
The memory hierarchy design is divided into two types: primary (internal) memory and
secondary (external) memory.

Primary Memory
The primary memory is also known as internal memory, and it is directly accessible by the
processor. This memory includes main memory, cache, and the CPU registers.
Secondary Memory
The secondary memory is also known as external memory, and it is accessible by the processor
through an input/output module. This memory includes optical disks, magnetic disks, and
magnetic tape.

Characteristics of Memory Hierarchy


The memory hierarchy characteristics mainly include the following.
Performance
Previously, computer systems were designed without a memory hierarchy, and the speed gap
between the main memory and the CPU registers grew because of the huge disparity in access
time, which lowered the performance of the system. An enhancement was therefore mandatory,
and the memory hierarchy model was designed to increase the system's performance.
Capacity
The capacity of the memory hierarchy is the total amount of data the memory can store. As we
move from top to bottom of the memory hierarchy, the capacity increases.
Access Time
The access time in the memory hierarchy is the time interval between a request to read or write
and the moment the data becomes available. As we move from top to bottom of the memory
hierarchy, the access time increases.

Cost per bit


As we move from bottom to top of the memory hierarchy, the cost per bit increases, which
means internal memory is expensive compared with external memory.
Memory Hierarchy Design
The memory hierarchy in computers mainly includes the following.
Cache Memory
Cache memory is usually found inside the processor, although occasionally it may be a separate
IC (integrated circuit); it is organized into levels. The cache holds the chunks of data that are
frequently used from main memory. A single-core processor will typically have two or more
cache levels. Present multi-core processors often have three levels: two levels for each core and
one level that is shared.
Main Memory
The main memory is the memory unit that communicates directly with the CPU. It is the main
storage unit of the computer. This memory is fast as well as large and is used for storing data
throughout the operations of the computer. It is made up of RAM as well as ROM.
Magnetic Disks
The magnetic disks in the computer are circular plates fabricated of plastic or metal and coated
with magnetized material. Frequently, both faces of the disk are used, and several disks may be
stacked on one spindle with a read/write head available for every surface. All the disks rotate
together at high speed. Bits are stored on the magnetized surface in spots along concentric
circles called tracks, and the tracks are usually divided into sections called sectors.
Magnetic Tape
This tape is a normal magnetic recording medium, designed with a slender magnetizable coating
on a long, thin strip of plastic film. It is mainly used to back up huge amounts of data. Whenever
the computer needs to access a tape, it first mounts the tape to access the data; once the data has
been accessed, the tape is unmounted. The access time of magnetic tape is slow, and it can take
a few minutes to access a tape.
Advantages of Memory Hierarchy
The need for a memory hierarchy includes the following.

 Memory distribution is simple and economical
 Removes external fragmentation
 Data can be spread all over
 Permits demand paging & pre-paging
 Swapping is more efficient

Thus, this is all about the memory hierarchy. From the above information, we can conclude
that it is mainly used to decrease the cost per bit and the access time, and to increase the
capacity. It is up to the designer to decide how much of each characteristic is needed to satisfy
the requirements of their users. Here is a question for you: what does the memory hierarchy
look like in an operating system?

Computer Bus
What is Computer Bus: The electrically conducting path along which data is transmitted inside
any digital electronic device. A Computer bus consists of a set of parallel conductors, which may
be conventional wires, copper tracks on a printed circuit board, or microscopic aluminum trails
on the surface of a silicon chip. Each wire carries just one bit, so the number of wires determines
the largest data word the bus can transmit: a bus with eight wires can carry only 8-bit data words,
and hence defines the device as an 8-bit device.
A computer bus normally has a single-word memory circuit called a latch attached to either end,
which briefly stores the word being transmitted and ensures that each bit has settled to its
intended state before its value is transmitted onward.
The Computer bus helps the various parts of the PC communicate. If there was no bus, you
would have an unwieldy number of wires connecting every part to every other part. It would be
like having separate wiring for every light bulb and socket in your house.
Types of Computer Bus
There are a variety of buses found inside the computer.
Data Bus: The data bus allows data to travel back and forth between the microprocessor (CPU)
and memory (RAM).
Address Bus: The address bus carries information about the location of data in memory.
Control Bus : The control bus carries the control signals that make sure everything is flowing
smoothly from place to place.
Expansion Bus: If your computer has expansion slots, there’s an expansion bus. Messages and
information pass between your computer and the add-in boards you plug in over the expansion
bus.
Although this is a bit confusing, these different buses are sometimes together called simply “the
bus.” A user can think of the computer’s “bus” as one unit made up of three parts: data, address,
and control, even though the three electrical pathways do not run along each other (and therefore
don’t really form a single “unit”) within the computer.
There are different sizes, or widths of data buses found in computers today. A data bus’ width is
measured by the number of bits that can travel on it at once. The speed at which its bus can
transmit words, that is, its bus bandwidth, crucially determines the speed of any digital device.
One way to make a bus faster is to increase its width;
for example a 16-bit bus can transmit two 8-bit words at once, ‘side-by-side’, and so carries 8-bit
data twice as fast as an 8-bit bus can. A computer’s CPU will typically contain several buses,
often of differing widths, that connect its various subunits. It is common for modern CPUs to use
on-chip buses that are wider than the bus they use to communicate with external devices such as
memory, and the speed difference between on- and off-chip operations must then be bridged by
keeping a reservoir of temporary data in a CACHE. For example many of the Pentium class of
processors use 256 bits for their fastest on-chip buses, but only 64 bits for external links.
An 8-bit bus carries data along 8 parallel lines. A 16-bit bus, also called ISA (Industry Standard
Architecture), carries data along 16 lines. A 32-bit bus, classified as EISA (Enhanced Industry
Standard Architecture) or MCA (Micro Channel Architecture), can carry data along 32 lines.

The speed at which buses conduct signals is measured in megahertz (MHz). Typical PCs today
run at speeds between 20 and 65 MHz. Also see CPU, Expansion Card, Memory, Motherboard,
RAM, ROM, and System Unit.
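As a rough, simplified estimate (assuming one word is transferred per clock cycle, which real buses do not always achieve), bus bandwidth can be approximated as width times clock rate; the figures below simply reuse numbers quoted above.

```python
# Rough bus-bandwidth estimate: width (bits) x clock rate (cycles per second).
def bandwidth_bits_per_s(width_bits, clock_hz):
    return width_bits * clock_hz

# A 64-bit external bus versus a 256-bit on-chip bus, both at 65 MHz.
print(bandwidth_bits_per_s(64, 65_000_000))    # 4,160,000,000 bits per second
print(bandwidth_bits_per_s(256, 65_000_000))   # 16,640,000,000 bits per second
```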
How Does Computer Bus Work?
A bus transfers electrical signals from one place to another. An actual bus appears as an endless
amount of etched copper circuits on the motherboard’s surface. The bus is connected to the CPU
through the Bus Interface Unit.
Data travels between the CPU and memory along the data bus. The location (address) of that
data is carried along the address bus. A clock signal which keeps everything in synch travels
along the control bus.
The clock acts like a traffic light for all the PC’s components; the “green light” goes on with
each clock tick. A PC’s clock can “tick” anywhere from 20 to 65 million times per second,
which makes it seem like a computer is really fast. But since each task (such as saving a file) is
made up of several programmed instructions, and each of those instructions takes several clock
cycles to carry out, a person sometimes has to sit and wait for the computer to catch up.
Computer Architecture VS Computer Organization

Computer Architecture is concerned with the way hardware components are connected together
to form a computer system. Computer Organization is concerned with the structure and
behaviour of a computer system as seen by the user.

Architecture acts as the interface between hardware and software. Organization deals with the
components of a connection in a system.

Computer Architecture helps us to understand the functionalities of a system. Computer
Organization tells us how exactly all the units in the system are arranged and interconnected.

A programmer can view architecture in terms of instructions, addressing modes and registers,
whereas Organization expresses the realization of the architecture.

While designing a computer system, the architecture is considered first; an organization is done
on the basis of the architecture.

Computer Architecture deals with high-level design issues. Computer Organization deals with
low-level design issues.

Architecture involves Logic (instruction sets, addressing modes, data types, cache optimization).
Organization involves Physical Components (circuit design, adders, signals, peripherals).