Associative and Set Associative Memory Mapping

The document discusses two cache mapping techniques: direct mapping and associative mapping. Direct mapping assigns each memory block to a specific cache location, which can lead to cache thrashing, while associative mapping allows any memory block to be stored in any cache line, reducing thrashing but increasing hardware complexity. It also outlines types of cache misses (compulsory, capacity, conflict, and coherence), and provides examples of numerical problems related to cache mapping and address formats.

Uploaded by

tgowdabj

Direct and Associative Mapping

• Direct mapping is a cache mapping technique where each block of main
memory is mapped to exactly one location in the cache using a simple
modulo operation.
• The address of a memory block is divided into three fields: the tag, the
index, and the block offset.
• The index field identifies the specific cache line, while the tag field
verifies whether the data in the indexed cache line corresponds to the
requested memory block.
• This mapping method is straightforward to implement, making it suitable
for hardware designs that prioritize speed. However, direct mapping may
lead to frequent cache misses when multiple memory blocks map to the same
cache line, a situation known as cache thrashing.
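The field split described above can be sketched in a few lines of Python. The parameters here (16-bit addresses, 64-byte blocks, 32 cache lines) are illustrative assumptions, not values fixed by the text:

```python
# Illustrative parameters (assumptions): 64-byte blocks, 32 cache lines.
OFFSET_BITS = 6   # log2(64)  -> byte offset within a block
INDEX_BITS = 5    # log2(32)  -> which cache line

def split_address(addr):
    """Split a memory address into (tag, index, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(128))   # (0, 2, 0)
print(split_address(2176))  # (1, 2, 0) -- same index as 128, different tag
```

Note that addresses 128 and 2176 land on the same index with different tags, which is exactly the collision scenario that causes thrashing in a direct-mapped cache.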
Associative Mapping
Associative mapping is a more flexible cache mapping technique where a memory
block can be stored in any cache line rather than being restricted to a specific one.
This is achieved by using a tag field that identifies which memory block is currently
stored in each cache line. During access, the cache searches all lines in parallel for
the tag that matches the requested memory address.

While this technique offers more flexibility and reduces the risk of cache thrashing,
it requires more complex hardware to compare tags across all cache lines
simultaneously. The increased hardware complexity can lead to higher costs and
power consumption, making associative mapping more suitable for smaller caches.
Types of Cache Miss
• Compulsory Miss
A compulsory miss, also known as a cold miss, occurs when data is accessed for the first
time. Since the data has not been requested before, it is not present in the cache,
leading to a miss. This type of miss is unavoidable as it is inherent in the first reference
to the data. The only way to eliminate compulsory misses would be to have an infinite
prefetch of data, which is not feasible in real-world systems.

• Capacity Miss
A capacity miss happens when the cache cannot contain all the data needed by the
system. This type of miss occurs when the working set (the set of data that a program
accesses frequently) is larger than the cache size. When the cache is filled to capacity
and a new data item is referenced, existing data must be evicted to accommodate the
new data, leading to a miss. Capacity misses can be reduced by increasing the cache
size or optimizing the program to decrease the size of the working set.
• Conflict Miss
Conflict misses, also known as collision misses, occur when multiple data items, which
are accessed in a sequence, map to the same cache location, known as a cache set. This
type of miss is a result of the cache’s organization. In a set-associative or direct-mapped
cache, different data items may be mapped to the same set, leading to conflicts. When
a new item is loaded into a filled set, another item must be evicted, leading to a miss if
the evicted item is accessed again. Conflict misses can be mitigated by improving the
cache’s mapping function or by increasing the cache’s associativity.

• Coherence Miss
Coherence misses are specific to multiprocessor systems. In such systems, several
processors have their own private caches and access shared data. A coherence miss
occurs when one processor updates a data item in its private cache, making the
corresponding data item in another processor’s cache stale. When the second
processor accesses the stale data, a cache miss occurs. Coherence misses are managed
by implementing cache coherence protocols that ensure consistency among the various
caches.
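The difference between conflict and compulsory misses can be made concrete with a small simulator. This is a minimal sketch (LRU replacement, block-number accesses) under assumed cache shapes, not a model of any particular machine:

```python
from collections import OrderedDict

def count_misses(accesses, num_sets, assoc):
    """Count misses for a cache with num_sets sets of assoc ways, LRU policy."""
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for block in accesses:
        s = sets[block % num_sets]
        if block in s:
            s.move_to_end(block)       # hit: refresh LRU order
        else:
            misses += 1
            if len(s) >= assoc:
                s.popitem(last=False)  # evict the least recently used block
            s[block] = True
    return misses

# Blocks 0 and 8 collide in a direct-mapped cache with 8 lines (8 % 8 == 0),
# but coexist in a 2-way set-associative cache with 4 sets.
trace = [0, 8, 0, 8, 0, 8]
print(count_misses(trace, num_sets=8, assoc=1))  # 6 misses: conflict thrashing
print(count_misses(trace, num_sets=4, assoc=2))  # 2 misses: compulsory only
```

The same trace goes from all-misses to only two compulsory misses once the associativity allows both blocks to reside in the set, illustrating why raising associativity mitigates conflict misses.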
Implementation of Associative Mapping
In fully associative mapping, any memory block can be placed in any cache line.
During data retrieval, the cache checks each line for a matching tag, which
indicates that the desired data is present. The implementation relies on a
process called *tag comparison*, where all cache lines are searched in parallel
for a match.

For example, if a memory block with address 25 is requested, the cache will
search through all lines to see if any contain a tag that matches the address. If a
match is found, the corresponding data is retrieved; otherwise, the memory
block is loaded into an available cache line, possibly replacing an existing one
based on the cache replacement policy.
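The parallel tag search described above can be sketched as a linear scan in software (hardware compares all tags simultaneously; the loop here only models the logic). The 4-line capacity and LRU replacement are illustrative assumptions:

```python
class FullyAssociativeCache:
    """Minimal sketch of a fully associative cache with LRU replacement."""

    def __init__(self, num_lines):
        self.lines = []            # (tag, data) pairs, oldest first
        self.num_lines = num_lines

    def access(self, block_tag, data=None):
        # Compare the requested tag against every line (parallel in hardware).
        for i, (tag, _) in enumerate(self.lines):
            if tag == block_tag:
                self.lines.append(self.lines.pop(i))  # hit: refresh LRU order
                return "hit"
        # Miss: load the block, evicting the least recently used line if full.
        if len(self.lines) >= self.num_lines:
            self.lines.pop(0)
        self.lines.append((block_tag, data))
        return "miss"

cache = FullyAssociativeCache(num_lines=4)
print(cache.access(25))  # miss -> block 25 loaded into a free line
print(cache.access(25))  # hit  -> tag 25 matched
```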
Set Associative Memory Mapping
Problem #1: A computer system uses 16-bit memory addresses. It has a 2K-byte cache organized in a direct-mapped manner
with 64 bytes per cache block. Assume that the size of each memory word is 1 byte.

(a) Calculate the number of bits in each of the Tag, Block, and Word fields of the memory address.

(b) When a program is executed, the processor reads data sequentially from the following word addresses: 128, 144, 2176,
2180, 128, 2176 All the above addresses are shown in decimal values.

Assume that the cache is initially empty. For each of the above addresses, indicate whether the cache access will result in a hit
or a miss.
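One way to work through both parts is a short script. This is a sketch of the standard solution method (field widths from the given sizes, then a sequential hit/miss trace), not an answer key supplied by the original text:

```python
# Given: 16-bit addresses, 2K-byte direct-mapped cache, 64-byte blocks, 1-byte words.
CACHE_BYTES, BLOCK_BYTES, ADDR_BITS = 2048, 64, 16
NUM_LINES = CACHE_BYTES // BLOCK_BYTES                 # 32 lines

WORD_BITS = (BLOCK_BYTES - 1).bit_length()             # 6 (word within block)
BLOCK_FIELD_BITS = (NUM_LINES - 1).bit_length()        # 5 (which line)
TAG_BITS = ADDR_BITS - BLOCK_FIELD_BITS - WORD_BITS    # 5

lines = {}   # line index -> tag currently stored
results = []
for addr in [128, 144, 2176, 2180, 128, 2176]:
    block = addr // BLOCK_BYTES
    line, tag = block % NUM_LINES, block // NUM_LINES
    results.append("hit" if lines.get(line) == tag else "miss")
    lines[line] = tag

print(TAG_BITS, BLOCK_FIELD_BITS, WORD_BITS)  # 5 5 6
print(results)  # ['miss', 'hit', 'miss', 'hit', 'miss', 'miss']
```

Addresses 128 and 2176 fall in different blocks that map to the same cache line, so each evicts the other: the final two accesses miss even though both blocks were loaded earlier.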
Numerical problem on Direct mapping
Q. Consider a direct-mapped cache of size 16 KB with a block size of 256 bytes. The
size of main memory is 128 KB.
1. Find the number of bits in the tag
2. Find the tag directory size

Ans. First, express each given size as a power of two:
Cache size = 16 KB = 2^14 bytes => 14 bits
Block size = 256 bytes = 2^8 bytes => 8 bits
Main memory size = 128 KB = 2^17 bytes => 17 bits

Number of cache lines = Cache size / Block size
= 2^14 bytes / 2^8 bytes = 2^6
so the line number field has 6 bits.

Bits in main memory address = tag bits + line bits + block bits, so:
Tag bits = 17 − Line bits − Block bits
= 17 − 6 − 8 = 3

Tag directory size = Number of lines × tag bits
= 2^6 × 3 bits
= 192 bits
= 192 / 8 bytes
= 24 bytes
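The same arithmetic can be checked with a short script; this simply re-derives the worked answer from the given sizes:

```python
import math

# Given: 16 KB direct-mapped cache, 256-byte blocks, 128 KB main memory.
cache_bytes, block_bytes, memory_bytes = 16 * 1024, 256, 128 * 1024

num_lines = cache_bytes // block_bytes                 # 64 lines -> 6 index bits
tag_bits = (int(math.log2(memory_bytes))
            - int(math.log2(num_lines))
            - int(math.log2(block_bytes)))             # 17 - 6 - 8 = 3
directory_bits = num_lines * tag_bits                  # one tag per cache line

print(tag_bits, directory_bits, directory_bits // 8)   # 3 192 24
```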
Question: A computer has a 4 GB memory with 32-bit word size. Each block of
memory stores 8 words. The computer has a direct-mapped cache of 64 blocks. The
computer uses word-level addressing. What is the address format? If we change the
cache to a 2-way set-associative cache, what is the new address format?

Question: Consider a main memory of size 8MB that needs to be mapped with a cache
memory of 128KB with a block size of 128 bytes. For the given hardware specifications,
design the memory mapping using an 8-way set associative method. Compute the tag
size, number of searches and main memory address format. Also, comment on the hit
ratio of the mapping technique.
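For the first question above, the address format can be derived mechanically; the script below is a sketch of that derivation (4 GB of memory holds 2^30 32-bit words, giving a 30-bit word address):

```python
import math

# 4 GB memory, 4-byte words, word-level addressing -> 2^30 words.
ADDR_BITS = int(math.log2((4 * 2**30) // 4))   # 30-bit word address
OFFSET_BITS = int(math.log2(8))                # 3 bits: word within an 8-word block

# Direct-mapped, 64 blocks: 6 index bits.
index_dm = int(math.log2(64))
tag_dm = ADDR_BITS - index_dm - OFFSET_BITS    # 30 - 6 - 3 = 21

# 2-way set associative: 64 blocks / 2 ways = 32 sets -> 5 set bits.
index_sa = int(math.log2(64 // 2))
tag_sa = ADDR_BITS - index_sa - OFFSET_BITS    # 30 - 5 - 3 = 22

print(tag_dm, index_dm, OFFSET_BITS)   # 21 6 3  (tag | block | word)
print(tag_sa, index_sa, OFFSET_BITS)   # 22 5 3  (tag | set | word)
```

Halving the number of indexed locations (lines to sets) moves one bit from the index field into the tag field, which is the general pattern when associativity doubles.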
