Slot05 CH04 CacheMemory 35 Slides
Contents
No. Question
1 List the characteristics of the components of a computer's memory system.
2 What are the differences among sequential access, direct access, and random access?
3 What is the general relationship among access time, memory cost, and capacity?
4 What is cache memory? – Refer to Figure 4.3.
5 How does the principle of locality relate to the use of multiple memory levels?
6 Explain the key elements of cache design.
7 Distinguish among direct mapping, associative mapping, and set-associative mapping.
8 A memory system has a single 20-line cache using direct mapping. Which cache line will be used if the 1024th main-memory block is accessed?
9 For a direct-mapped cache, a main memory address is viewed as consisting of three fields. List and define the three fields. – Refer to the textbook.
10 For an associative cache, a main memory address is viewed as consisting of two fields. List and define the two fields. – Refer to the textbook.
Location
Refers to whether memory is internal or external to the computer
Internal memory is often equated with main memory
Processor requires its own local memory, in the form of registers
Cache is another form of internal memory
External memory consists of peripheral storage devices that are accessible
to the processor via I/O controllers
Capacity
Memory capacity is typically expressed in terms of bytes
Unit of transfer
For internal memory the unit of transfer is equal to the number of electrical
lines into and out of the memory module
Method of Accessing Units of Data
[Figure: access methods – sequential access, direct access (disk), random access (main memory), associative (cache)]
Design constraints on a computer's memory can be summed up by three questions: how much (capacity), how fast (performance), and how expensive (cost)
[Figure: memory hierarchy – levels closer to the CPU are faster and more expensive]
What is cache?
Cache and Main Memory
What is a Cache?
Cache: a small, expensive, high-speed memory located between the CPU and RAM (which is larger, cheaper, and slower).
A program in main memory is divided into equal-size blocks; each cache line fits exactly one memory block.
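The block/line correspondence above can be sketched as a data structure; the 64-byte block size and field names are illustrative choices, not figures from the slides:

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 64   /* bytes per memory block (illustrative assumption) */

/* One cache line: control bits, a tag identifying which main-memory
 * block is resident, and a data area exactly one block in size. */
struct cache_line {
    bool     valid;             /* does the line hold a real block?   */
    uint32_t tag;               /* identifies the resident block      */
    uint8_t  data[BLOCK_SIZE];  /* same size as a main-memory block   */
};
```

The data array being exactly BLOCK_SIZE bytes is what the slide means by a line being a "tight fit" for a block.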
Overview of cache design parameters
An instruction <opcode, addr> initially resides in main memory (the addr field is a main-memory address). The CPU fetches the instruction from the cache, and the instruction's data is also in the cache, so the original addr field must be adjusted by the MMU into a cache address (which is why it is called a virtual address).
+ The MMU has ALREADY translated the memory address into the matching cache address, so the CPU accesses the cache directly
Logical and Physical Caches
The CPU accesses the cache, so the address must conform to the cache's addressing scheme
The MMU has NOT YET translated the memory address into a matching cache address, so when the CPU accesses the cache, it must ask the MMU to recompute a suitable address
Advantages: when a block is replaced, only one line is replaced – only one line is copied out to memory before a block is loaded into the cache, so the cost of swapping is low.
Disadvantages: the tag information requires more bits, which reduces the cache's storage efficiency.
Two situations:
Cache hit: Accessed address exists in cache
Cache miss: Accessed address does not exist in cache. The memory block containing it must be loaded into the cache
Once the cache has been filled, when a new block is brought into
the cache, one of the existing blocks must be replaced
For direct mapping there is only one possible line for any
particular block and no choice is possible
For the associative and set-associative techniques a replacement
algorithm is needed
To achieve high speed, the replacement algorithm must be implemented in hardware
Read the NOTE for further explanation
First-in-first-out (FIFO)
Replace that block in the set that has been in the cache longest
Easily implemented as a round-robin or circular buffer technique
If the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block. If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block.
More than one device may have access to main memory. A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache – if a word is altered in one cache, it could conceivably invalidate a word in other caches.
Multilevel Caches
As logic density has increased it has become possible to have a cache on the
same chip as the processor
The on-chip cache reduces the processor's external bus activity, thereby speeding up execution and increasing overall system performance
When the requested instruction or data is found in the on-chip cache, the bus access is
eliminated
On-chip cache accesses will complete appreciably faster than would even zero-wait
state bus cycles
During this period the bus is free to support other transfers
Two-level cache:
Internal cache designated as level 1 (L1)
External cache designated as level 2 (L2)
Potential savings due to the use of an L2 cache depends on the hit rates in both
the L1 and L2 caches
The use of multilevel caches complicates all of the design issues related to
caches, including size, replacement algorithm, and write policy
Hit Ratio (L1 & L2) for 8-Kbyte and 16-Kbyte L1 caches
The trend is toward split caches at the L1 level and unified caches at higher levels
Cache Memory – Chapter 4