
CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II

Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152

Last time in Lecture 6


- Dynamic RAM (DRAM) is the main form of main memory storage in use today
  - Holds values on small capacitors that need refreshing (hence "dynamic")
  - Slow multi-step access: precharge, read row, read column
- Static RAM (SRAM) is faster but more expensive
  - Used to build on-chip memory for caches
- Caches exploit two forms of predictability in memory reference streams
  - Temporal locality: the same location is likely to be accessed again soon
  - Spatial locality: neighboring locations are likely to be accessed soon
- A cache holds a small set of values in fast memory (SRAM) close to the processor
  - Needs a search scheme to find values in the cache, and a replacement policy to make space for newly accessed locations

Placement Policy
(Figure: a 32-block memory, blocks 0-31, mapped onto an 8-block cache with set numbers 0-7.)

Block 12 can be placed:
- Fully associative: anywhere in the cache
- (2-way) set associative: anywhere in set 0 (12 mod 4)
- Direct mapped: only into block 4 (12 mod 8)


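The mapping arithmetic in this example can be written down directly; the following is a minimal Python sketch using the figure's parameters (the slide itself only states the results).

```python
# A minimal sketch of the placement arithmetic above: a 32-block memory,
# an 8-block cache, and memory block 12.
CACHE_BLOCKS = 8
block = 12

# Direct mapped: exactly one legal slot, block number mod number of cache blocks.
direct_mapped = block % CACHE_BLOCKS                           # 12 mod 8 = 4

# (2-way) set associative: 8 blocks / 2 ways = 4 sets; any way within set 12 mod 4.
WAYS = 2
sets = CACHE_BLOCKS // WAYS                                    # 4 sets
set_index = block % sets                                       # 12 mod 4 = 0
set_associative = [set_index * WAYS + w for w in range(WAYS)]  # cache blocks 0 and 1

# Fully associative: any of the 8 cache blocks.
fully_associative = list(range(CACHE_BLOCKS))

print(direct_mapped, set_associative, fully_associative)
```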

Direct-Mapped Cache
(Figure: the address is split into a tag of t bits, an index of k bits, and a block offset. The index selects one of the 2^k cache lines; the stored tag is compared against the address tag, and a match with a set valid bit raises HIT. The block offset then selects the data word or byte from the data block.)
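A small sketch of the address split shown in the figure. The field widths chosen here (b = 4 offset bits, k = 10 index bits) are illustrative assumptions, not values from the lecture.

```python
# Splitting a 32-bit byte address into tag / index / block offset for a
# direct-mapped cache; b and k are illustrative choices.
B_OFFSET_BITS = 4       # 2^4 = 16-byte blocks
K_INDEX_BITS = 10       # 2^10 = 1024 cache lines

def split_address(addr):
    offset = addr & ((1 << B_OFFSET_BITS) - 1)
    index = (addr >> B_OFFSET_BITS) & ((1 << K_INDEX_BITS) - 1)
    tag = addr >> (B_OFFSET_BITS + K_INDEX_BITS)
    return tag, index, offset

tag, index, offset = split_address(0x12345678)
# A hit requires valid[index] == 1 and tag_array[index] == tag;
# the block offset then selects the data word or byte within the line.
```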

2-Way Set-Associative Cache


(Figure: the index selects one set holding two ways, each with its own valid bit, tag, and data block. Both stored tags are compared with the address tag in parallel; a match in either way raises HIT and steers that way's data word or byte to the output.)

Fully Associative Cache


(Figure: there is no index field; the address is split into only a tag and a block offset of b bits. Every line's stored tag is compared against the address tag in parallel, and any match raises HIT and selects that line's data word or byte.)

Replacement Policy
In an associative cache, which block from a set should be evicted when the set becomes full?
- Random
- Least Recently Used (LRU)
  - LRU cache state must be updated on every access
  - True implementation only feasible for small sets (2-way)
  - Pseudo-LRU binary tree often used for 4-8 way
- First In, First Out (FIFO), a.k.a. Round-Robin
  - Used in highly associative caches
- Not Least Recently Used (NLRU)
  - FIFO with an exception for the most recently used block or blocks

This is a second-order effect. Why?


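To make the pseudo-LRU binary tree mentioned above concrete, here is a minimal sketch for a single 4-way set (3 tree bits). The bit convention used is one common choice, not necessarily the one in any particular design.

```python
# Tree pseudo-LRU for one 4-way set, using 3 bits instead of full LRU state.
# tree[0] is the root; tree[1] covers ways 0-1; tree[2] covers ways 2-3.
# Convention: bit == 0 means "the left half was used less recently".
tree = [0, 0, 0]

def touch(way):
    """On an access, set the bits on the path to point away from this way."""
    tree[0] = 1 if way < 2 else 0          # the other half is now the older one
    if way < 2:
        tree[1] = 1 if way == 0 else 0
    else:
        tree[2] = 1 if way == 2 else 0

def victim():
    """Follow the bits toward the (approximately) least recently used way."""
    if tree[0] == 0:
        return 0 if tree[1] == 0 else 1
    return 2 if tree[2] == 0 else 3

for w in (0, 1, 2):
    touch(w)
print(victim())   # 0: pseudo-LRU approximates LRU (true LRU would evict way 3)
```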

Block Size and Spatial Locality


A block is the unit of transfer between the cache and memory.
(Figure: the CPU address is split into a block address of 32 - b bits and an offset of b bits; the example shows a 4-word block (b = 2 word-offset bits) holding Word0-Word3 under one tag.)
- 2^b = block size, a.k.a. line size (in bytes)
- Larger block size has distinct hardware advantages:
  - Less tag overhead
  - Exploits fast burst transfers from DRAM
  - Exploits fast burst transfers over wide busses

What are the disadvantages of increasing block size?


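As an illustration of the "less tag overhead" point, the sketch below compares total tag storage for a fixed-size direct-mapped cache as the block size grows. The 16 KB capacity and 32-bit addresses are assumptions for the example, not values from the lecture.

```python
# Total tag storage in a 16 KB direct-mapped cache with 32-bit byte addresses,
# for several block sizes (all values illustrative).
from math import log2

CACHE_BYTES = 16 * 1024
ADDR_BITS = 32

for block_bytes in (16, 32, 64, 128):
    lines = CACHE_BYTES // block_bytes
    tag_bits = ADDR_BITS - int(log2(CACHE_BYTES))   # index + offset = log2(cache size)
    print(block_bytes, lines, tag_bits * lines)
# Doubling the block size halves the number of lines, so the total number of
# tag bits roughly halves: larger blocks mean less tag overhead.
```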

CPU-Cache Interaction
(5-stage pipeline)

(Figure: the 5-stage pipeline datapath, with the primary instruction cache accessed in the fetch stage (PC -> addr, inst, hit?) and the primary data cache accessed in the memory stage (addr, wdata, rdata, hit?); on a miss, requests go to memory control and refill data returns from lower levels of the memory hierarchy.)
- Stall the entire CPU on a data cache miss


Improving Cache Performance


Average memory access time = Hit time + Miss rate x Miss penalty

To improve performance:
- Reduce the hit time
- Reduce the miss rate
- Reduce the miss penalty

What is the simplest design strategy?
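A quick worked instance of the formula, with illustrative numbers (not taken from the lecture):

```python
# Average memory access time = hit time + miss rate * miss penalty.
hit_time = 1        # cycles for a cache hit
miss_rate = 0.05    # fraction of accesses that miss
miss_penalty = 40   # extra cycles to service a miss from the next level

amat = hit_time + miss_rate * miss_penalty
print(amat)         # 1 + 0.05 * 40 = 3.0 cycles per access on average
```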


Serial versus Parallel Cache and Memory Access


α is the HIT RATIO: the fraction of references found in the cache; 1 - α is the MISS RATIO: the remaining references.

(Figure: serial organization - the processor presents the address to the cache, and only a miss is forwarded to main memory.)

Average access time for serial search: t_cache + (1 - α) x t_mm

(Figure: parallel organization - the processor presents the address to the cache and to main memory at the same time.)

Average access time for parallel search: α x t_cache + (1 - α) x t_mm

The savings are usually small, since t_mm >> t_cache and the hit ratio α is high. Moreover:
- High bandwidth is required on the memory path
- The complexity of handling parallel paths can slow t_cache
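Plugging illustrative numbers into the two formulas shows why the savings are usually small; the values below are assumptions, not lecture data.

```python
# Serial vs. parallel lookup, using the formulas above with made-up numbers.
alpha = 0.95        # hit ratio
t_cache = 1         # cache access time (cycles)
t_mm = 50           # main-memory access time (cycles)

serial = t_cache + (1 - alpha) * t_mm            # 1.0 + 2.5  = 3.5
parallel = alpha * t_cache + (1 - alpha) * t_mm  # 0.95 + 2.5 = 3.45
# The difference is (1 - alpha) * t_cache = 0.05 cycles: with a high hit ratio
# and t_mm >> t_cache, starting the memory access early saves almost nothing.
```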

Causes for Cache Misses


- Compulsory: first reference to a block, a.k.a. cold-start misses
  - Misses that would occur even with an infinite cache
- Capacity: the cache is too small to hold all the data needed by the program
  - Misses that would occur even under a perfect replacement policy
- Conflict: misses that occur because of collisions due to the block-placement strategy
  - Misses that would not occur with full associativity


Effect of Cache Parameters on Performance


- Larger cache size
  - Reduces capacity and conflict misses
  - Can increase hit time
- Higher associativity
  - Reduces conflict misses
  - May increase hit time and access energy
- Larger block size
  - Reduces compulsory misses by exploiting spatial locality
  - Increases the miss penalty and, for a fixed cache size, can increase conflict misses


Write Policy Choices


- Cache hit:
  - Write-through: write both cache & memory
    - Generally higher traffic, but simplifies cache coherence
  - Write-back: write cache only (memory is written only when the entry is evicted)
    - A dirty bit per block can further reduce the write-back traffic
- Cache miss:
  - No-write-allocate: only write to main memory
  - Write-allocate (a.k.a. fetch-on-write): fetch the block into the cache
- Common combinations:
  - Write-through and no-write-allocate
  - Write-back with write-allocate
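A behavioral sketch of the two common combinations, modeling the cache and memory as Python dicts at block granularity. This only illustrates when each level gets written; it is not a hardware design, and all names are illustrative.

```python
# Write-through + no-write-allocate vs. write-back + write-allocate.
memory = {}
cache = {}
dirty = set()

def write_through_no_allocate(addr, value):
    memory[addr] = value            # memory is always updated
    if addr in cache:               # the cache is updated only on a hit
        cache[addr] = value

def write_back_write_allocate(addr, value):
    if addr not in cache:           # miss: fetch (allocate) the block first
        cache[addr] = memory.get(addr)
    cache[addr] = value             # write the cache only
    dirty.add(addr)                 # memory is updated when the block is evicted

def evict(addr):
    if addr in dirty:               # the dirty bit avoids writing back clean blocks
        memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)
```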

Write Performance
(Figure: the direct-mapped cache datapath again, with the address split into tag, index, and block offset; the HIT signal from the tag comparison gates the write enable (WE) of the data array, so the data write can only happen after the tag check.)

Reducing Write Hit Time


Problem: writes take two cycles in the memory stage, one cycle for the tag check plus one cycle for the data write if hit.
Solutions:
- Design a data RAM that can perform the read and write in one cycle, restoring the old value after a tag miss
- Fully-associative (CAM tag) caches: the word line is only enabled on a hit
- Pipelined writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check

CS152 Administrivia


Pipelining Cache Writes


(Figure: the address and store data from the CPU are split into tag, index, and store data; a delayed-write address register and a delayed-write data register sit ahead of the data array. While the current access checks the tags, the previous store's buffered data is written into the data array; a load also compares against the delayed-write address and, on a match, takes its data from the buffer instead of the array.)

Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.

Write Buffer to Reduce Read Miss Penalty


(Figure: a write buffer sits between the data cache and the unified L2 cache; it holds evicted dirty lines for a write-back cache, or all writes for a write-through cache.)

- The processor is not stalled on writes, and read misses can go ahead of writes to main memory
- Problem: the write buffer may hold the updated value of a location needed by a read miss
  - Simple scheme: on a read miss, wait for the write buffer to go empty
  - Faster scheme: check the write buffer addresses against the read-miss address; if there is no match, allow the read miss to go ahead of the writes, else return the value in the write buffer (see the sketch below)
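A sketch of the "faster scheme" above: the read miss checks the write buffer before going to memory and, on a match, returns the buffered value. The data structure and function names are illustrative assumptions.

```python
# Read-miss check against a simple coalescing write buffer.
from collections import OrderedDict

write_buffer = OrderedDict()          # addr -> youngest pending value, in order

def buffer_write(addr, value):
    write_buffer[addr] = value        # coalesce with any pending write to addr
    write_buffer.move_to_end(addr)

def read_miss(addr, read_from_next_level):
    if addr in write_buffer:          # match: forward the buffered value
        return write_buffer[addr]
    return read_from_next_level(addr) # no match: the read may bypass pending writes
```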

Block-level Optimizations
- Tags are too large, i.e., too much overhead
  - Simple solution: larger blocks, but the miss penalty could be large
- Sub-block placement (a.k.a. sector cache)
  - A valid bit is added to units smaller than the full block, called sub-blocks
  - Only read a sub-block on a miss
  - If a tag matches, is the word in the cache? (see the sketch after the figure)

(Figure: example cache lines with tags 100, 300, and 204, each carrying one valid bit per sub-block.)

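A sketch of the hit check with per-sub-block valid bits, answering the question above: a matching tag is not enough, the requested sub-block must also be valid. The data structure is an illustrative assumption.

```python
# One cache line with a single tag but a valid bit per sub-block (sector cache).
line = {"tag": 0x100, "valid": [1, 1, 0, 1], "data": [None, None, None, None]}

def sub_block_hit(tag, sub_block):
    # A tag match alone does not mean the word is present: the valid bit of the
    # requested sub-block must also be set.
    return line["tag"] == tag and line["valid"][sub_block] == 1

# On a miss with a matching tag, only the missing sub-block is fetched and its
# valid bit set, keeping refill traffic small despite the large block.
```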

Set-Associative RAM-Tag Cache


(Figure: each way has its own tag/status/data RAM; the index selects one line from every way, the stored tags are compared with the address tag in parallel, and the matching way's data is selected.)

- Not energy-efficient: a tag and data word is read from every way
- Two-phase approach: first read the tags, then read data only from the selected way
  - More energy-efficient
  - Doubles the latency in L1
  - OK for L2 and above. Why?


Multilevel Caches
- A memory cannot be both large and fast
- Cache sizes increase at each level: CPU -> L1$ -> L2$ -> DRAM

- Local miss rate = misses in this cache / accesses to this cache
- Global miss rate = misses in this cache / CPU memory accesses
- Misses per instruction = misses in this cache / number of instructions
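Illustrative numbers for the three definitions; the counts below are assumed values, not from the lecture.

```python
# Local vs. global miss rates for a two-level hierarchy.
cpu_accesses = 1000     # CPU memory accesses
l1_misses = 40          # misses in L1 (these become L2 accesses)
l2_misses = 10          # misses in L2

l1_miss_rate = l1_misses / cpu_accesses          # 0.04
l2_local_miss_rate = l2_misses / l1_misses       # 0.25: looks poor in isolation
l2_global_miss_rate = l2_misses / cpu_accesses   # 0.01: what the CPU actually sees
```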


A Typical Memory Hierarchy c.2008


(Figure: the CPU, with its multiported register file (part of the CPU), connects to split L1 instruction and data primary caches (on-chip SRAM), which feed a large unified L2 cache (on-chip SRAM), which in turn connects to multiple interleaved memory banks (off-chip DRAM).)


Presence of L2 influences L1 design


- Use a smaller L1 if there is also an L2
  - Trade an increased L1 miss rate for a reduced L1 hit time and a reduced L1 miss penalty
  - Reduces average access energy
- Use a simpler write-through L1 with an on-chip L2
  - The write-back L2 cache absorbs the write traffic, so it doesn't go off-chip
  - At most one L1 miss request per L1 access (no dirty victim write-back) simplifies pipeline control
  - Simplifies coherence issues
  - Simplifies error recovery in L1 (can use just parity bits in L1 and reload from L2 when a parity error is detected on an L1 read)


Inclusion Policy
- Inclusive multilevel cache:
  - The inner cache holds copies of data in the outer cache
  - An external access need only check the outer cache
  - Most common case
- Exclusive multilevel cache:
  - The inner cache may hold data not in the outer cache
  - Lines are swapped between the inner and outer caches on a miss
  - Used in the AMD Athlon, with a 64KB primary and a 256KB secondary cache

Why choose one type or the other?


Itanium-2 On-Chip Caches


(Intel/HP, 2002)
- Level 1: 16KB, 4-way set-associative, 64B line, quad-port (2 load + 2 store), single-cycle latency
- Level 2: 256KB, 4-way set-associative, 128B line, quad-port (4 load or 4 store), five-cycle latency
- Level 3: 3MB, 12-way set-associative, 128B line, single 32B port, twelve-cycle latency

Acknowledgements
These slides contain material developed and copyright by:
- Arvind (MIT)
- Krste Asanovic (MIT/UCB)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
- David Patterson (UCB)

MIT material derived from course 6.823.
UCB material derived from course CS252.

