
ANURAG GROUP OF INSTITUTIONS

(An Autonomous Institution)


(Affiliated to JNTU-HYD, Approved by AICTE and NBA Accredited)
VENKATAPUR, GHATKESAR, RR Dist, AP-501301
(2016-17)

COMPUTER ORGANIZATION
II B.Tech II semester
Unit-III PPT Slides
Text Books: (1) Computer System Architecture by M. Morris Mano
INDEX
UNIT-III PPT SLIDES
Sl. No   Module as per Session planner             Lecture No
1.       Memory organization: Memory hierarchy     L7
2.       Main memory                               L8
3.       Auxiliary memory                          L9
4.       Associative memory                        L10
5.       Cache memory                              L11
6.       Virtual memory                            L12

CHARACTERISTICS
 Location
   - CPU, Internal, External
 Capacity
   - Word size, Number of words
 Unit of Transfer
   - Word, Block
 Access Method
   - Sequential, Random, Associative
 Performance
   - Access time, Cycle time, Transfer rate
 Physical Type
   - Semiconductor, Magnetic, Optical, Magneto-optical
 Physical Characteristics
   - Volatile, Non-volatile
 Organization
   - Erasable, Non-erasable
Memory Hierarchy

The overall goal of using a memory hierarchy is to obtain the highest possible average
access speed while minimizing the total cost of the entire memory system.

Multiprogramming refers to the existence of many programs in different parts of main
memory at the same time.
Memory Hierarchy

It is described by 3 characteristics:
-- Access time
-- Capacity
-- Cost

As we move down the hierarchy, the following occurs:
 Cost per bit decreases
 Capacity increases
 Access time increases
 Frequency of access by the CPU decreases
Main memory
ROM Chip
Memory Address Map
The designer of a computer system must calculate the amount of
memory required for the particular application and assign it to either
RAM or ROM.
The interconnection between memory and processor is then established
from knowledge of the size of memory needed and the type of RAM
and ROM chips available.
The addressing of memory can be established by means of a table that
specifies the memory address assigned to each chip.

The table, called a memory address map, is a pictorial representation


of assigned address space for each chip in the system.

Memory Configuration (case study):

Required: 512 bytes of RAM + 512 bytes of ROM
Available chips: 128×8 RAM and 512×8 ROM
(so four 128-byte RAM chips and one 512-byte ROM chip are needed)
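The address assignment for this case study can be sketched in a few lines of code (the helper function and chip names below are illustrative, not from the slides):

```python
# Sketch: build a memory address map by assigning each chip a
# contiguous address range, in the order the chips are decoded.
# Per the case study: four 128x8 RAM chips (512 bytes of RAM),
# then one 512x8 ROM chip.

def address_map(chips, base=0):
    """Return (name, first_address, last_address) for each chip."""
    rows, addr = [], base
    for name, size in chips:
        rows.append((name, addr, addr + size - 1))
        addr += size
    return rows

chips = [("RAM 1", 128), ("RAM 2", 128), ("RAM 3", 128),
         ("RAM 4", 128), ("ROM", 512)]

for name, lo, hi in address_map(chips):
    print(f"{name}: {lo:04X}-{hi:04X}")
```

This reproduces the map from the figure: the four RAM chips occupy hex addresses 0000-01FF and the ROM occupies 0200-03FF.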
Memory Connection to the CPU

[Figure: memory connection to the CPU.
 - Address lines 1-7 of the CPU's 16-bit address bus select a word
   within each 128×8 RAM chip.
 - Address lines 8-9 drive a decoder whose four outputs enable
   RAM 1 - RAM 4 through their chip-select (CS1) inputs.
 - Address line 10 selects between RAM (line 10 = 0) and ROM
   (line 10 = 1); the 512×8 ROM uses address lines 1-9.
 - The RD and WR control lines connect the CPU to the RAM chips; the
   ROM has no WR input. All chips share the data bus.]
SECONDARY STORAGE
MAGNETIC HARD DISKS

Disk

Disk drive

Disk controller
ORGANIZATION OF DATA ON A DISK

[Figure 5.30: Organization of one surface of a disk — concentric
tracks (track 0 through track n), each divided into sectors
(sector 0, sector 1, ...).]


ACCESS DATA ON A DISK
 Sector header
 Following the data, there is an error-correction code
(ECC).
 Formatting process
 Difference between inner tracks and outer tracks
 Access time – seek time / rotational delay (latency time)
 Data buffer/cache
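The access-time bullet above can be made concrete with a small sketch (the seek time and spindle speed used here are assumed example values, not from the slides):

```python
def disk_access_time_ms(seek_ms, rpm):
    """Access time = seek time + rotational delay, where the average
    rotational delay is the time for half a revolution."""
    rotational_delay_ms = 0.5 * 60_000 / rpm  # 60,000 ms per minute
    return seek_ms + rotational_delay_ms

# e.g. a 9 ms average seek on a 7200 RPM drive:
print(disk_access_time_ms(9, 7200))  # ~13.17 ms
```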
DISK CONTROLLER

The disk controller sits between the system bus and the disk drives
and handles the Seek, Read, Write, and Error-checking operations.
The processor and main memory are attached to the same system bus.

[Figure 5.31: Disks connected to the system bus.]


RAID DISK ARRAYS
 Redundant Array of Inexpensive Disks
 Using multiple inexpensive disks makes very large storage cheaper,
and also makes it possible to improve the reliability of the overall
system.
 RAID0 – data striping

 RAID1 – identical copies of data on two disks

 RAID2, 3, 4 – increased reliability

 RAID5 – parity-based error-recovery
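A minimal sketch of the RAID0 data-striping idea (round-robin block placement; the helper function is illustrative):

```python
def raid0_location(block, n_disks):
    """RAID0 stripes consecutive logical blocks across the disks
    round-robin; returns (disk index, block index on that disk)."""
    return block % n_disks, block // n_disks

# With 4 disks, logical blocks 0,1,2,3 land on disks 0,1,2,3, and
# block 4 wraps back to disk 0:
print(raid0_location(4, 4))  # (0, 1)
```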


OPTICAL DISKS

[Figure 5.32: Optical disk.
 (a) Cross-section: pits and lands in a polycarbonate plastic
     substrate, covered by a reflective aluminum layer, acrylic,
     and the label.
 (b) Transition from pit to land: a laser source and detector read
     the surface; light is reflected over a pit or a land, but not
     at a pit-land transition.
 (c) Stored binary pattern: each transition is read as a 1, and the
     spacing between transitions encodes runs of 0s, e.g.
     0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0.]


OPTICAL DISKS
 CD-ROM

 CD-Recordable (CD-R)
 CD-ReWritable (CD-RW)

 DVD

 DVD-RAM
MAGNETIC TAPE SYSTEMS

[Figure 5.33: Organization of data on magnetic tape — data is
recorded across the tape width in 7- or 9-bit frames; frames are
grouped into records separated by record gaps, and records are
grouped into files delimited by file marks and file gaps.]


Cache Memory
Consider the following memory organization to show the mapping
procedures of the cache memory.

• The main memory can store 32K words of 12 bits each.
• The cache is capable of storing 512 of these words at any given time.
• For every word stored in cache, there is a duplicate copy in main
memory.

The CPU communicates with both memories:
• It first sends a 15-bit address to the cache.
• If there is a hit, the CPU accepts the 12-bit data from the cache.
• If there is a miss, the CPU reads the word from main memory, and the
word is then transferred to the cache.
Cache Memory

If the active portions of the program and data are placed in a fast
small memory, the average memory access time can be reduced, thus
reducing the total execution time of the program. Such a fast small
memory is referred to as "Cache Memory".

The performance of the cache memory is measured in terms of a quantity
called the "Hit Ratio".

When the CPU refers to memory and finds the word in cache, it produces
a hit. If the word is not found in cache, it counts as a miss.

The hit ratio is the number of hits divided by the total number of CPU
references to memory (hits + misses). Hit ratios of 0.9 and higher
have been reported.
Cache Memory

The average memory access time of a computer system can be improved
considerably by use of a cache.
The cache is placed between the CPU and main memory. It is the fastest
component in the hierarchy and approaches the speed of the CPU
components.

When the CPU needs to access memory, the cache is examined first. If
the word is found in the cache, it is read very quickly.
If it is not found in the cache, the main memory is accessed.
A block of words containing the one just accessed is then transferred
from main memory to cache memory.

For example, a computer with a cache access time of 100 ns, a main
memory access time of 1000 ns and a hit ratio of 0.9 produces an
average access time of 200 ns. This is a considerable improvement over
a similar computer without a cache memory, whose access time is 1000 ns.
AVERAGE ACCESS TIME FORMULA
 To find the average memory access time we use the formula:
      Tavg = h*Tc + (1-h)*M
 where
      h = hit rate
      (1-h) = miss rate
      Tc = time to access information from the cache
      M = miss penalty (time to access main memory on a miss)
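Applying the formula to the earlier numbers (100 ns cache, 1000 ns main memory, h = 0.9): treating the miss penalty as the failed cache probe plus the main-memory access, M = 100 + 1000 = 1100 ns, reproduces the 200 ns figure quoted above. This modeling choice is an assumption made here to match the slide's example; a minimal sketch:

```python
def avg_access_time(h, t_cache, miss_penalty):
    """Tavg = h*Tc + (1-h)*M."""
    return h * t_cache + (1 - h) * miss_penalty

# Miss penalty modeled as the failed cache probe plus the main-memory
# access: M = 100 + 1000 = 1100 ns.
t = avg_access_time(0.9, 100, 1100)
print(round(t))  # 200
```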
Cache Memory

The basic characteristic of cache memory is its fast access time.
Therefore, very little or no time should be wasted when searching for
words in the cache.

The transformation of data from main memory to cache memory is
referred to as a "Mapping Process".

Three types of mapping procedures are available:
 Associative Mapping
 Direct Mapping
 Set – Associative Mapping
Associative Mapping
The associative mapping stores both the address and the content (data)
of the memory word.
[Figure: associative cache — the CPU address is held in an argument
register; addresses and data are shown in octal.]

A CPU address of 15 bits is placed in the argument register and the
associative memory is searched for a matching address.
If the address is found, the corresponding 12-bit data word is read
and sent to the CPU.
If no match occurs, the main memory is accessed for the word. The
address-data pair is then transferred to the associative cache memory.

If the cache is full, an existing address-data pair must be displaced
to make room, using a replacement algorithm.

Direct Mapping

The 15-bit CPU address is divided into two fields:
the 9 least significant bits constitute the index field and the
remaining 6 bits form the tag field.
The main memory address includes both the tag and the index bits.
The cache memory requires the index bits only, i.e., 9 bits.
If there are 2^k words in the cache memory and 2^n words in the main
memory, then k address bits are required to refer to the cache and
n address bits to refer to main memory.
Example:
512 words = 2^9 words in cache
32K words = 2^15 words in main memory
k = 9, n = 15 bits are required to access the cache and main memory
respectively.
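The field split described above, sketched in Python (octal literals match the notation used in the slides' figures):

```python
def split_address(addr):
    """Split a 15-bit CPU address into a 6-bit tag and a 9-bit index."""
    index = addr & 0o777       # 9 least significant bits
    tag = (addr >> 9) & 0o77   # remaining 6 bits
    return tag, index

# Address 02777 (octal): tag = 02, index = 777
tag, index = split_address(0o02777)
print(oct(tag), oct(index))  # 0o2 0o777
```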
Direct Mapping

Each word in cache consists of the data word and its associated tag.

When a new word is brought into the cache, the tag bits are stored
alongside the data.

When the CPU generates a memory request, the index field of the
address is used to access the cache.

If the tag field of the CPU address is equal to the tag in the word
read from the cache, there is a hit; otherwise, a miss.

How can we calculate the word size of the cache memory?
Direct Mapping
The direct-mapping organization using a block size of 8 words is shown
in Fig. 14.
Direct Mapping
 The index field is now divided into two parts: the block field and
the word field.
 In a 512-word cache there are 64 blocks of 8 words each, since 64
x 8 = 512.
 The block number is specified with a 6-bit field and the word
within the block is specified with a 3-bit field.
 The tag field stored within the cache is common to all eight words
of the same block.
 Every time a miss occurs, an entire block of eight words must be
transferred from main memory to cache memory.
 Although this takes extra time, the hit ratio will most likely
improve with a larger block size because of the sequential nature of
computer programs.
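The block/word split of the 9-bit index can be sketched the same way (illustrative helper):

```python
def split_index(index):
    """For 8-word blocks: split the 9-bit index into a 6-bit block
    number and a 3-bit word offset (64 blocks x 8 words = 512)."""
    word = index & 0o7    # 3 low bits: word within the block
    block = index >> 3    # 6 high bits: block number
    return block, word

print(split_index(0o777))  # (63, 7): last word of the last block
```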
Set – Associative Mapping
A third type of cache organization, called set-associative mapping, is an
improvement over the direct mapping organization in that each word of
cache can store two or more words of memory under the same index address.
Each data word is stored together with its tag, and the number of
tag-data items in one word of cache is said to form a set.

Each index address refers to two data words and their associated tags.
Set – Associative Mapping

• The words stored at addresses 01000 and 02000 of main memory are stored in
cache memory at index address 000.
• Similarly, the words at addresses 02777 and 00777 are stored in cache at index
address 777.
• When the CPU generates a memory request, the index value of the address is
used to access the cache. The tag field of the CPU address is then compared with
both tags in the cache to determine if a match occurs.
• The comparison logic is done by an associative search of the tags in
the set, similar to an associative memory search: thus the name
"set-associative".
• The hit ratio will improve as the set size increases because more words with the
same index but different tags can reside in cache.
• However, an increase in the set size increases the number of bits in words of
cache and requires more complex comparison logic.
• When a miss occurs in a set-associative cache and the set is full, it is necessary
to replace one of the tag-data items with a new value.
• The most common replacement algorithms used are: random replacement,
first-in, first-out (FIFO), and least recently used (LRU).
Set – Associative Mapping

Each tag requires 6 bits and each data word has 12 bits, so the word
length of the cache is 2 × (6 + 12) = 36 bits.

An index address of 9 bits can accommodate 512 cache words; since each
cache word holds two data words, the cache can accommodate 1024 words
of main memory.

When the CPU generates a memory request, the index value of the
address is used to access the cache.

The tag field of the CPU address is compared with both tags in the
cache.

The most common replacement algorithms are:
· Random replacement
· FIFO
· Least Recently Used (LRU)
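A minimal sketch of the two-way lookup (the index and tags follow the slides' octal example, where addresses 01000 and 02000 share index 000; the data values are made up for illustration):

```python
def sa_lookup(cache, tag, index):
    """Two-way set-associative read: the set at `index` holds up to
    two (tag, data) pairs; the CPU tag is compared with both."""
    for stored_tag, data in cache.get(index, []):
        if stored_tag == tag:
            return data      # tag match: hit
    return None              # no match in the set: miss

# Index 000 holds the words from addresses 01000 (tag 01) and
# 02000 (tag 02); data values are hypothetical.
cache = {0o000: [(0o01, 0o3450), (0o02, 0o5670)]}
print(oct(sa_lookup(cache, 0o02, 0o000)))  # 0o5670
```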
Writing into cache
An important aspect of cache organization is concerned with memory
write requests.
When the CPU finds a word in cache during a read operation, the main
memory is not involved in the transfer.
However, if the operation is a write, there are two ways the system
can proceed.

Write-through method (the simplest and most commonly used way)
Update main memory with every memory write operation, with cache
memory being updated in parallel if it contains the word at the
specified address.

This method has the advantage that main memory always contains the
same data as the cache.

Write-back method
In this method only the cache location is updated during a write
operation.
The location is then marked by a flag so that later, when the word is
removed from the cache, it is copied into main memory.
The reason for the write-back method is that during the time a word
resides in the cache, it may be updated several times.
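The two policies can be sketched side by side (the dict-based cache and the dirty flag are illustrative, not the slides' hardware):

```python
def write_through(cache, memory, addr, data):
    """Write-through: main memory is updated on every write; the
    cache is updated in parallel if it holds the word."""
    memory[addr] = data
    if addr in cache:
        cache[addr] = data

class WriteBackCache:
    """Write-back: only the cache is updated; a dirty flag marks
    lines that must be copied to memory when they are removed."""
    def __init__(self):
        self.lines = {}  # addr -> [data, dirty]

    def write(self, addr, data):
        self.lines[addr] = [data, True]  # no memory traffic here

    def evict(self, addr, memory):
        data, dirty = self.lines.pop(addr)
        if dirty:                        # copy back only if modified
            memory[addr] = data
```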
Cache Initialization
• One more aspect of cache organization is the problem of initialization.
• The cache is initialized when power is applied to the computer or when the
main memory is loaded with a complete set of programs from auxiliary
memory.
• After initialization the cache is considered to be empty, but in effect it
contains some non-valid data.
• It is customary to include with each word in cache a valid bit to indicate
whether or not the word contains valid data.
• The cache is initialized by clearing all the valid bits to 0.
• The valid bit of a particular cache word is set to 1 the first time this word is
loaded from main memory and stays set unless the cache has to be initialized
again.
• The introduction of the valid bit means that a word in cache is not replaced
by another word unless the valid bit is set to 1 and a mismatch of tags occurs.
• If the valid bit happens to be 0, the new word automatically replaces the
invalid data.
• Thus the initialization condition has the effect of forcing misses from the
cache until it fills with valid data.
REPLACEMENT ALGORITHMS (FIFO, 4-word cache)

CPU reference   A     B     C     A     D     E     A     D     C     F
Result          Miss  Miss  Miss  Hit   Miss  Miss  Miss  Hit   Hit   Miss

Cache contents  A     A     A     A     A     E     E     E     E     E
(FIFO order)          B     B     B     B     B     A     A     A     A
                            C     C     C     C     C     C     C     F
                                        D     D     D     D     D     D

Hit Ratio = 3 / 10 = 0.3
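The trace above can be replayed in software; this sketch reproduces the 0.3 hit ratio for a four-line FIFO cache:

```python
from collections import deque

def fifo_hit_ratio(refs, capacity=4):
    """Replay a reference string through a FIFO cache of `capacity`
    lines and return the hit ratio."""
    resident, order, hits = set(), deque(), 0
    for r in refs:
        if r in resident:
            hits += 1                        # already in cache: hit
        else:
            if len(resident) >= capacity:    # cache full: evict oldest
                resident.discard(order.popleft())
            resident.add(r)
            order.append(r)
    return hits / len(refs)

print(fifo_hit_ratio("ABCADEADCF"))  # 0.3
```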
