Lecture Notes on Code Optimization

This document outlines the principles and techniques of code optimization in compiler design, emphasizing the importance of improving performance, memory utilization, and scalability. It covers various optimization methods such as peephole optimization, loop optimization, and redundancy elimination, along with factors influencing optimization like CPU architecture and memory hierarchy. The document also discusses the trade-offs involved in optimization efforts, balancing compilation time with performance improvements.

Code Optimization

1
Learning Objectives
By the end of this lesson, students should be able to:
1. Understand the principles of code optimization and how they enhance the
efficiency of compiled programs.
2. Classify different optimization techniques, including peephole optimization,
local optimization, and loop optimization.
3. Apply common optimization transformations, such as redundancy
elimination, code motion, and dead code elimination.
4. Analyze factors influencing code optimization, including CPU architecture,
memory hierarchy, and cache optimization.
5. Implement optimization strategies in compiler design to improve execution
speed and resource efficiency.
6. Evaluate the trade-offs of code optimization, balancing compilation effort
with performance gains.

2
Why Study Code Optimization in
Compiler Construction and Design?
Studying code optimization is crucial in computer science and software engineering for
several reasons:
1. Performance Improvement: Optimized code runs faster, reducing execution time
and improving system efficiency.
2. Memory Utilization: Optimization techniques help reduce memory usage, enabling
better resource allocation.
3. Scalability and Portability: Efficiently optimized code performs better across different
architectures, ensuring scalability.
4. Real-world Applications: Modern applications, including embedded systems, high-
performance computing, and cloud computing, demand optimized code for efficiency.
5. Reduces Redundancy: Optimization eliminates unnecessary computations and
improves instruction scheduling.
6. Understanding Compiler Efficiency: Helps students comprehend how compilers
transform high-level code into machine-efficient instructions.

3
Outline
n Introduction
n Classifications of Optimization techniques
n Factors influencing Optimization
n Themes behind Optimization Techniques
n Optimizing Transformations

• Example
• Details of Optimization Techniques

4
Introduction
n Concerns machine-independent code optimization
§ 90-10 rule: execution spends 90% of its time in 10% of the code.
§ It is moderately easy to achieve 90% of the optimization; the remaining 10% is very difficult.
§ Identifying that 10% of the code is not possible for a compiler – it is the job of a profiler.
n In general, loops are the hot-spots

5
Introduction
n Criteria for code optimization
¨ Must preserve the semantic equivalence of the program
¨ The algorithm should not be modified
¨ A transformation should, on average, speed up the execution of the program
¨ Must be worth the effort: intellectual and compilation effort should not be spent on insignificant improvements; transformations should be simple enough to have a good effect

6
Introduction
n Optimization can be done in almost all
phases of compilation.
[Figure: Source code → Front end → Intermediate code → Code generator → Target code. Optimization opportunities: profiling and source-level optimization by the user; loop, procedure-call, and address-calculation improvements by the compiler on intermediate code; register usage, instruction choice, and peephole optimization by the compiler on target code.]
7
Introduction
n Organization of an optimizing compiler

[Figure: the code optimizer consists of control flow analysis, followed by data flow analysis, followed by transformation.]

8
Classifications of Optimization
techniques
§ Peephole optimization
§ Local optimizations
§ Global Optimizations
§ Inter-procedural
§ Intra-procedural

§ Loop optimization

9
Factors influencing Optimization
n The target machine: machine-dependent factors can be
parameterized for compiler fine-tuning
n Architecture of Target CPU:
¨ Number of CPU registers
¨ RISC vs CISC
¨ Pipeline Architecture
¨ Number of functional units
n Machine Architecture
¨ Cache Size and type
¨ Cache/Memory transfer rate

10
Themes behind Optimization
Techniques
n Avoid redundancy: something already computed need not be computed again
n Smaller code: less work for the CPU, cache, and memory!
n Fewer jumps: jumps interfere with code pre-fetch
n Code locality: code executed close together in time should be placed close together in memory – this increases locality of reference
n Extract more information about the code: more information enables better code generation

11
Redundancy elimination
n Redundancy elimination = determining that two
computations are equivalent and eliminating one.
n There are several types of redundancy elimination:
¨ Value numbering
n Associates symbolic values to computations and identifies
expressions that have the same value
¨ Common subexpression elimination
n Identifies expressions that have operands with the same name
¨ Constant/Copy propagation
n Identifies variables that have constant/copy values and uses the
constants/copies in place of the variables.
¨ Partial redundancy elimination
n Inserts computations in paths to convert partial redundancy to
full redundancy.
12
Optimizing Transformations
n Compile time evaluation
n Common sub-expression elimination
n Code motion
n Strength Reduction
n Dead code elimination
n Copy propagation
n Loop optimization
¨ Induction variables and strength reduction

13
Compile-Time Evaluation
n Expressions whose values can be pre-
computed at the compilation time
n Two ways:
¨ Constant folding
¨ Constant propagation

14
Compile-Time Evaluation
n Constant folding: Evaluation of an
expression with constant operands to
replace the expression with single value
n Example:
area := (22.0/7.0) * r ** 2

area := 3.14286 * r ** 2

15
Compile-Time Evaluation
n Constant Propagation: Replace a
variable with constant which has been
assigned to it earlier.
n Example:
pi := 3.14286
area = pi * r ** 2
area = 3.14286 * r ** 2

16
Constant Propagation
n What does it mean?
¨ Given an assignment x = c, where c is a constant,
replace later uses of x with uses of c, provided there
are no intervening assignments to x.
n Similar to copy propagation
n Extra feature: It can analyze constant-value conditionals to
determine whether a branch should be executed or not.

n When is it performed?
¨ Early in the optimization process.
n What is the result?
¨ Smaller code
¨ Fewer registers

17
Common Sub-expression Elimination
n Identify common sub-expressions present in different expressions, compute the value once, and use the result in all the places.
¨ The definitions of the variables involved should not change between the occurrences

Example:
Before:              After:
a := b * c           temp := b * c
…                    a := temp
x := b * c + 5       x := temp + 5

18
Common Subexpression Elimination
n Local common subexpression elimination
¨ Performed within basic blocks
¨ Algorithm sketch:
n Traverse BB from top to bottom
n Maintain table of expressions evaluated so far
¨ if any operand of the expression is redefined, remove it from
the table
n Modify applicable instructions as you go
¨ generate temporary variable, store the expression in it and use
the variable next time the expression is encountered.

Before:          After:
x = a + b        t = a + b
                 x = t
...              ...
y = a + b        y = t
19
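The table walk above can be made concrete with a small runnable C sketch (a sketch only: it assumes single-character names and instructions of the form x = y op z, and it reuses the destination of the first occurrence as the holder instead of generating a fresh temporary; the names Entry, tab and bb are illustrative, not from the slides):

    #include <stdio.h>

    /* Local CSE within one basic block: keep a table of expressions
       evaluated so far, reuse on a hit, and kill stale entries when an
       operand (or the holder) is redefined. */
    typedef struct { char y, op, z, holder; int live; } Entry;

    int main(void) {
        struct { char x, y, op, z; } bb[] = {   /* the basic block */
            {'x','a','+','b'},
            {'y','a','+','b'},   /* same expression: reuse x */
            {'a','c','-','d'},   /* redefines a: kills a+b */
            {'z','a','+','b'},   /* must be recomputed */
        };
        Entry tab[16]; int n = 0;

        for (int i = 0; i < 4; i++) {
            int hit = -1;
            for (int j = 0; j < n; j++)          /* table lookup */
                if (tab[j].live && tab[j].y == bb[i].y &&
                    tab[j].op == bb[i].op && tab[j].z == bb[i].z)
                    hit = j;
            if (hit >= 0)                        /* reuse held value */
                printf("%c = %c\n", bb[i].x, tab[hit].holder);
            else
                printf("%c = %c %c %c\n", bb[i].x, bb[i].y, bb[i].op, bb[i].z);
            for (int j = 0; j < n; j++)          /* x redefined: kill */
                if (tab[j].y == bb[i].x || tab[j].z == bb[i].x ||
                    tab[j].holder == bb[i].x)
                    tab[j].live = 0;
            if (hit < 0 && bb[i].x != bb[i].y && bb[i].x != bb[i].z)
                tab[n++] = (Entry){bb[i].y, bb[i].op, bb[i].z, bb[i].x, 1};
        }
        return 0;
    }

Running it prints y = x for the repeated expression, and recomputes a + b once a has been redefined.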
Common Subexpression Elimination
Before:                  After:
c = a + b                t1 = a + b
d = m * n                c = t1
e = b + d                t2 = m * n
f = a + b                d = t2
g = -b                   t3 = b + d
h = b + a                e = t3
a = j + a                f = t1
k = m * n                g = -b
j = b + d                h = t1   /* commutative */
a = -b                   a = j + a
if m * n goto L          k = t2
                         j = t3
                         a = -b
                         if t2 goto L

the table contains quintuples: (pos, opd1, opr, opd2, tmp)
20
Common Subexpression Elimination

n Global common subexpression elimination
¨ Performed on flow graph
¨ Requires available expression information
n In addition to finding what expressions are
available at the endpoints of basic blocks, we
need to know where each of those expressions
was most recently evaluated (which block and
which position within that block).

21
Common Sub-expression Elimination

[Flow graph: block 1 (x := a + b) branches to block 2 (a := b) and block 3; both reach block 4 (z := a + b + 10). "a + b" is not a common sub-expression in blocks 1 and 4, because a is redefined in block 2.]

None of the variables involved may be modified on any path between the two occurrences.

22
Code Motion
n Moving code from one part of the program to another without modifying the algorithm
¨ Reduces the size of the program
¨ Reduces the execution frequency of the moved code

23
Code Motion
1. Code space reduction: similar to common sub-expression elimination, but with the objective of reducing code size.
Example: code hoisting

Before:                  After:
if (a < b) then          temp := x ** 2
  z := x ** 2            if (a < b) then
else                       z := temp
  y := x ** 2 + 10       else
                           y := temp + 10

"x ** 2" is computed once in both versions, but the code size in the second is smaller.
24
Code Motion
2. Execution frequency reduction: reduce the execution frequency of partially available expressions (expressions available on at least one path)

Example:
Before:              After:
if (a < b) then      if (a < b) then
  z = x * 2            temp = x * 2
else                   z = temp
  y = 10             else
g = x * 2              y = 10
                       temp = x * 2
                     g = temp
25
Code Motion
n Move an expression out of a loop if its evaluation does not change inside the loop.
Example:
while ( i < (max-2) ) …
Equivalent to:
t := max - 2
while ( i < t ) …

26
Code Motion
n Safety of code movement
Movement of an expression e from a basic block bi to another block bj is safe if it does not introduce any new occurrence of e along any path.

Example: unsafe code movement

Before:              After (unsafe):
if (a < b) then      temp = x * 2
  z = x * 2          if (a < b) then
else                   z = temp
  y = 10             else
                       y = 10
27
Strength Reduction
n Replacement of an operator with a less costly one.
Example:
Before:                After:
for i=1 to 10 do       temp = 5;
  …                    for i=1 to 10 do
  x = i * 5              …
  …                      x = temp
end                      …
                         temp = temp + 5
                       end

• Typical cases of strength reduction occur in the address calculation of array references.
• Applies to integer expressions involving induction variables (loop optimization)
28
Dead Code Elimination
n Dead code is the portion of the program that will not be executed on any path of the program.
¨ It can be removed
n Examples:
¨ No control flow reaches a basic block
¨ A variable is dead at a point → its value is not used anywhere in the program afterwards
¨ An assignment is dead → it assigns a value to a dead variable

29
Dead Code Elimination
x = y - 5      (“x” is a dead variable, so the definition of “x” is dead)

• Beware of side effects in code during dead code elimination

30
Dead Code Elimination
• Example:
DEBUG := 0
if (DEBUG) print      ← can be eliminated

31
Copy Propagation
n What does it mean?
¨ Given an assignment x = y, replace later uses of x
with uses of y, provided there are no intervening
assignments to x or y.
n When is it performed?
¨ At any level, but usually early in the
optimization process.
n What is the result?
¨ Smaller code

32
Copy Propagation
n Statements of the form f := g are called copy statements, or copies
n Use g for f, wherever possible after the copy statement

Example:
Before:                After:
x[i] = a;              x[i] = a;
sum = x[i] + a;        sum = a + a;

n May not appear to be an improvement by itself, but it opens up scope for other optimizations.

33
Local Copy Propagation
n Local copy propagation
¨ Performed within basic blocks
¨ Algorithm sketch:
n traverse the BB from top to bottom
n maintain a table of copies encountered so far
n modify applicable instructions as you go

34
Loop Optimization
n Decrease the number of instructions in the inner loop,
n even at the cost of more instructions in the outer loop
n Techniques:
¨ Code motion
¨ Induction variable elimination
¨ Strength reduction

35
Peephole Optimization

n Pass over the generated code to examine a few instructions at a time, typically 2 to 4
¨ Redundant instruction elimination: use algebraic identities
¨ Flow-of-control optimization: removal of redundant jumps
¨ Use of machine idioms

36
Redundant instruction elimination
n Redundant load/store: see if an obvious replacement is possible
MOV R0, a
MOV a, R0
Can eliminate the second instruction without needing any global knowledge of a
n Unreachable code: identify code which will never be executed:

Before:                     After:
#define DEBUG 0             if (0 != 1) goto L2
if (DEBUG) {                print debugging info
  print debugging info      L2:
}

37
Algebraic identities
n Worth recognizing single instructions with a constant operand:
A * 1 = A
A * 0 = 0
A / 1 = A
A * 2 = A + A
More delicate with floating-point
n Strength reduction:
A ^ 2 = A * A

38
Objective
n Why would anyone write X * 1?
n Why bother to correct such obvious junk code?
n In fact one might write
#define MAX_TASKS 1
...
a = b * MAX_TASKS;
n Also, seemingly redundant code can be produced
by other optimizations. This is an important effect.

39
Replace Multiply by Shift
n A := A * 4;
¨ Can be replaced by a 2-bit left shift (signed/unsigned)
¨ But must worry about overflow if the language does
n A := A / 4;
¨ If unsigned, can replace with a shift right
¨ But arithmetic shift right for signed values is a well-known problem (next slide)
¨ The language may allow it anyway (traditional C)

40
The right shift problem
n Arithmetic right shift:
¨ shift right and use the sign bit to fill the most significant bits
-5    = 111111...1111111011
SAR →   111111...1111111101
which is -3, not -2
¨ in most languages -5/2 = -2

41
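A small C program, offered as a sketch, makes the discrepancy observable (the behavior of >> on negative ints is implementation-defined in C; the comments assume the common two's-complement arithmetic shift):

    #include <stdio.h>

    int main(void) {
        int a = -5;
        /* C99 and later truncate integer division toward zero. */
        printf("-5 / 2  = %d\n", a / 2);    /* prints -2 */
        /* Arithmetic right shift rounds toward negative infinity. */
        printf("-5 >> 1 = %d\n", a >> 1);   /* typically prints -3 */
        return 0;
    }

This is exactly why a compiler cannot blindly replace a signed division by a power of two with a shift.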
Addition chains for multiplication

n If multiply is very slow (or on a machine with no multiply instruction, like the original SPARC), decomposing a constant operand into a sum of powers of two can be effective:
X * 125 = x * 128 - x * 4 + x
¨ two shifts, one subtract and one add, which may be faster than one multiply
¨ Note the similarity with the efficient exponentiation method

42
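As a quick check of the decomposition, here is a hedged C sketch (restricted to nonnegative x, since left-shifting a negative int is undefined behavior in C; mul125 is an illustrative name):

    #include <stdio.h>
    #include <assert.h>

    /* 125 = 128 - 4 + 1, so x * 125 == (x << 7) - (x << 2) + x:
       two shifts, one subtract, one add. */
    static int mul125(int x) {
        return (x << 7) - (x << 2) + x;
    }

    int main(void) {
        for (int x = 0; x <= 1000; x++)   /* nonnegative operands only */
            assert(mul125(x) == x * 125);
        printf("ok\n");
        return 0;
    }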
Folding Jumps to Jumps

n A jump to an unconditional jump can copy the target address
JNE lab1
...
lab1: JMP lab2
Can be replaced by:
JNE lab2
As a result, lab1 may become dead (unreferenced)

43
Jump to Return
n A jump to a return can be replaced by a
return
JMP lab1
...
lab1: RET
¨ Can be replaced by
RET
lab1 may become dead code

44
Usage of Machine idioms
n Use machine-specific hardware instructions, which may be less costly.

i := i + 1
ADD i, #1   →   INC i

45
Local Optimization

46
Optimization of Basic Blocks
n Many structure preserving transformations
can be implemented by construction of
DAGs of basic blocks

47
DAG representation
of Basic Block (BB)
n Leaves are labeled with unique identifiers (variable names or constants)
n Interior nodes are labeled by an operator symbol
n Nodes optionally have a list of labels (identifiers)
n Edges relate operands to the operator (interior nodes are operators)
n An interior node represents a computed value
¨ The identifiers in its label are deemed to hold that value

48
Example: DAG for BB
t1 := 4 * i
[DAG: node * with children 4 and i, labeled t1]

t1 := 4 * i
t3 := 4 * i
t2 := t1 + t3
if (i <= 20) goto L1
[DAG: node * with children 4 and i, labeled t1, t3; node + with both children the * node, labeled t2; node <= with children i and 20, labeled (L1)]
49
Construction of DAGs for BB
n Input: a basic block, B
n Output: a DAG for B containing the following information:
1) A label for each node
2) For leaves, the labels are ids or constants
3) For interior nodes, the labels are operators
4) For each node, a list of attached ids (possibly empty; no constants)

50
Construction of DAGs for BB
n Data structures and functions:
¨ Node:
1) Label: label of the node
2) Left: pointer to the left child node
3) Right: pointer to the right child node
4) List: list of additional labels (empty for leaves)
¨ Node(id): returns the most recent node created for id, else undef
¨ Create(id,l,r): creates a node with label id, with l as left child and r as right child. l and r are optional params.

51
Construction of DAGs for BB
n Method:
For each 3AC instruction A in B, where A is of one of the following forms:
1. x := y op z
2. x := op y
3. x := y
Step 1:
if ((ny = node(y)) == undef) ny = Create(y);
if (A == type 1) and ((nz = node(z)) == undef) nz = Create(z);

52
Construction of DAGs for BB
Step 2:
If (A == type 1): find a node labelled ‘op’ with left and right children ny and nz respectively [determination of common sub-expressions]. If not found, n = Create(op, ny, nz);
If (A == type 2): find a node labelled ‘op’ with single child ny. If not found, n = Create(op, ny);
If (A == type 3): n = Node(y);
Step 3:
Remove x from Node(x).list
Add x to n.list
Node(x) = n;

53
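The following self-contained C sketch implements the core of this method for type-1 instructions (x := y op z) only, under simplifying assumptions: fixed-size pools, no attached-label lists, and illustrative helper names (node_of, op_node, tac) standing in for the slides' Node() and Create():

    #include <stdio.h>
    #include <string.h>

    typedef struct Node {
        char op;                  /* 0 for leaves */
        char label[8];            /* id or constant (leaves only) */
        struct Node *l, *r;
    } Node;

    static Node pool[64];
    static int npool = 0;
    static struct { char id[8]; Node *n; } curdef[64];  /* Node(id) map */
    static int ndef = 0;

    static Node *node_of(const char *id) {        /* the slides' Node(id) */
        for (int i = 0; i < ndef; i++)
            if (strcmp(curdef[i].id, id) == 0) return curdef[i].n;
        return NULL;                              /* undef */
    }
    static void set_node(const char *id, Node *n) {    /* Node(x) = n */
        for (int i = 0; i < ndef; i++)
            if (strcmp(curdef[i].id, id) == 0) { curdef[i].n = n; return; }
        strcpy(curdef[ndef].id, id);
        curdef[ndef++].n = n;
    }
    static Node *leaf(const char *id) {           /* create leaf if undef */
        Node *n = node_of(id);
        if (n) return n;
        n = &pool[npool++];
        n->op = 0; strcpy(n->label, id); n->l = n->r = NULL;
        set_node(id, n);
        return n;
    }
    static Node *op_node(char op, Node *l, Node *r) {
        for (int i = 0; i < npool; i++)           /* common sub-expression? */
            if (pool[i].op == op && pool[i].l == l && pool[i].r == r)
                return &pool[i];
        Node *n = &pool[npool++];
        n->op = op; n->label[0] = 0; n->l = l; n->r = r;
        return n;
    }
    static void tac(const char *x, const char *y, char op, const char *z) {
        set_node(x, op_node(op, leaf(y), leaf(z)));    /* x := y op z */
    }

    int main(void) {
        tac("t1", "4", '*', "i");
        tac("t3", "4", '*', "i");    /* finds the existing * node */
        tac("t2", "t1", '+', "t3");
        printf("%d DAG nodes for 3 statements\n", npool);   /* prints 4 */
        return 0;
    }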
Example: DAG construction from BB
t1 := 4 * i
[DAG: node * (children 4, i), labeled t1]

54
Example: DAG construction from BB
t1 := 4 * i
t2 := a [ t1 ]
[DAG: node [] (children a, the * node), labeled t2]

55
Example: DAG construction from BB
t1 := 4 * i
t2 := a [ t1 ]
t3 := 4 * i
[DAG: unchanged; the * node is now labeled t1, t3]

56
Example: DAG construction from BB
t1 := 4 * i
t2 := a [ t1 ]
t3 := 4 * i
t4 := b [ t3 ]
[DAG: a second [] node (children b, the * node), labeled t4]

57
Example: DAG construction from BB
t1 := 4 * i
t2 := a [ t1 ]
t3 := 4 * i
t4 := b [ t3 ]
t5 := t2 + t4
[DAG: node + (children the two [] nodes), labeled t5]

58
Example: DAG construction from BB
t1 := 4 * i
t2 := a [ t1 ]
t3 := 4 * i
t4 := b [ t3 ]
t5 := t2 + t4
i := t5
[DAG: the + node is now labeled t5, i]

59
DAG of a Basic Block
n Observations:
¨ A leaf node for the initial value of an id
¨ A node n for each statement s
¨ The children of node n are the nodes for the last definitions (prior to s) of the operands of s
60
Optimization of Basic Blocks
n Common sub-expression elimination: by construction of the DAG
¨ Note: for common sub-expression elimination, we are actually targeting expressions that compute the same value.

a := b + c
b := b – d     ← textually common expressions,
c := c + d       but they do not generate the
e := b + c       same result
61
Optimization of Basic Blocks
n The DAG representation identifies expressions that yield the same result:

a := b + c
b := b – d
c := c + d
e := b + c

[DAG over initial values b0, c0, d0: node + (b0, c0) labeled a; node − (b0, d0) labeled b; node + (c0, d0) labeled c; node + (the − node, the + node for c) labeled e]

62
Optimization of Basic Blocks
n Dead code elimination: code generation from the DAG eliminates dead code.

Before:           After (b is not live):
a := b + c        a := b + c
b := a – d        d := a - d
d := a – d        c := d + c
c := d + c

[DAG over b0, c0, d0: node + (b0, c0) labeled a; node − (the + node, d0) labeled b (crossed out), d; node + (the − node, c0) labeled c]

63
Loop Optimization

64
Loop Optimizations
n Most important set of optimizations
¨ Programs are likely to spend more time in
loops
n Presumption: Loop has been identified
n Optimizations:
¨ Loop invariant code removal
¨ Induction variable strength reduction
¨ Induction variable reduction

65
Loops in Flow Graph
n Dominators:
A node d of a flow graph G dominates a node n, if
every path in G from the initial node to n goes through
d.

Represented as: d dom n

Corollaries:
Every node dominates itself.
The initial node dominates all nodes in G.
The entry node of a loop dominates all nodes in the
loop.
66
Loops in Flow Graph
n Each node n has a unique immediate dominator
m, which is the last dominator of n on any path
in G from the initial node to n.
(d ≠ n) && (d dom n) → d dom m
n Dominator tree (T):
A representation of dominator information of
flow graph G.
n The root node of T is the initial node of G
n A node d in T dominates all nodes in its sub-tree

67
Example: Loops in Flow Graph
[Figure: a flow graph over nodes 1–9 (left) and its dominator tree (right).]


68
Loops in Flow Graph
n Natural loops:
1. A loop has a single entry point, called the "header". The header dominates all nodes in the loop.
2. There is at least one path back to the header from the loop nodes (i.e. there is at least one way to iterate the loop).

n Natural loops can be detected by back edges.
n Back edges: edges where the sink node (head) dominates the source node (tail) in G

69
Natural loop construction
n Construction of the natural loop for a back edge
Input: a flow graph G and a back edge n → d
Output: the set loop consisting of all nodes in the natural loop of n → d
Method:
  stack := ∅ ; loop := {d};
  insert(n);
  while (stack not empty)
    m := stack.pop();
    for each predecessor p of m do
      insert(p)

Function insert(m):
  if m ∉ loop
    loop := loop ∪ {m}
    stack.push(m)

70
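A runnable C sketch of this worklist method, assuming the flow graph is given as predecessor lists (the five-node example in main is illustrative):

    #include <stdio.h>

    #define MAXN 16
    static int npred[MAXN];
    static int pred[MAXN][MAXN];
    static int in_loop[MAXN];
    static int stack[MAXN], sp;

    static void insert(int m) {           /* insert(m) from the slide */
        if (!in_loop[m]) {
            in_loop[m] = 1;               /* loop := loop U {m} */
            stack[sp++] = m;
        }
    }

    static void natural_loop(int n, int d) {   /* back edge n -> d */
        in_loop[d] = 1;                   /* loop := {d} */
        insert(n);
        while (sp > 0) {
            int m = stack[--sp];
            for (int i = 0; i < npred[m]; i++)
                insert(pred[m][i]);
        }
    }

    static void add_edge(int from, int to) { pred[to][npred[to]++] = from; }

    int main(void) {
        /* Example: 0->1, 1->2, 2->3, 3->1 (back edge), 3->4 */
        add_edge(0, 1); add_edge(1, 2); add_edge(2, 3);
        add_edge(3, 1); add_edge(3, 4);
        natural_loop(3, 1);
        printf("natural loop of 3->1:");
        for (int i = 0; i < 5; i++)
            if (in_loop[i]) printf(" %d", i);
        printf("\n");                     /* prints: 1 2 3 */
        return 0;
    }

Note that the predecessors of the header d itself are never explored, which is what keeps nodes outside the loop out of the set.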
Inner loops
n Property of natural loops:
¨ If two loops l1 and l2 do not have the same header, then either
n l1 and l2 are disjoint, or
n one is an inner loop of the other.
n Inner loop: a loop that contains no other loop.

71
Inner loops
n Loops having the same header:
[Figure: two loops, {B1, B2, B3} and {B1, B2, B4}, sharing the header B1.]
It is difficult to conclude which one of {B1, B2, B3} and {B1, B2, B4} is the inner loop without detailed analysis of the code.

Assumption: when two loops have the same header, they are treated as a single loop.
72
Reducible Flow Graphs
n A flow graph G is reducible iff we can
partition the edges in two disjoint sets,
often referred as forward edges and back
edges, with the following two properties:
1. The forward edges form an acyclic graph in
which every node is reachable from the
initial node of G
2. The back edges consist only of edges
whose heads dominate their tails

73
Example: Reducible Flow Graphs
[Figure: left, a reducible flow graph over nodes 1–9; right, an irreducible flow graph over nodes 1, 2, 3.]

Irreducible flow graph: the cycle (2,3) can be entered at two different places, nodes 2 and 3.
• No back edges, and
• the graph is not acyclic
74
Reducible Flow Graphs
n Key property of a reducible flow graph for loop analysis:
¨ A set of nodes informally regarded as a loop must contain a back edge.

Optimization techniques like code motion, induction variable removal, etc., cannot be directly applied to irreducible graphs.

75
Loop Optimization
n Loop interchange: exchange inner loops with outer loops (see the sketch after this slide)
n Loop splitting: attempts to simplify a loop or
eliminate dependencies by breaking it into
multiple loops which have the same bodies but
iterate over different contiguous portions of the
index range.
¨ A useful special case is loop peeling - simplify a loop
with a problematic first iteration by performing that
iteration separately before entering the loop.

76
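As a concrete illustration of the loop interchange mentioned above, consider this hedged C sketch: for a row-major C array, an inner loop that walks along a row touches memory contiguously, which is the usual motivation for the transformation (the array size and function names are illustrative):

    #define N 512

    /* Before interchange: the inner loop strides down a column,
       touching a[0][j], a[1][j], ... which lie N doubles apart. */
    void zero_by_columns(double a[N][N]) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = 0.0;
    }

    /* After interchange: the inner loop walks one row contiguously,
       which is far friendlier to the cache. */
    void zero_by_rows(double a[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 0.0;
    }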
Loop Optimization
n Loop fusion: if two adjacent loops iterate the same number of times, their bodies can be combined as long as they make no reference to each other's data (see the sketch after this slide)
n Loop fission: break a loop into multiple loops
over the same index range but each taking only
a part of the loop's body.
n Loop unrolling: duplicates the body of the loop
multiple times

77
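A brief C sketch of loop fusion under the stated conditions (same trip count, no references to each other's data; all names are illustrative). Loop fission is exactly the reverse rewrite:

    /* Before fusion: two adjacent loops over the same index range. */
    void separate(float *a, float *c, const float *b, int n) {
        for (int i = 0; i < n; i++) a[i] = b[i] + 1.0f;
        for (int i = 0; i < n; i++) c[i] = b[i] * 2.0f;
    }

    /* After fusion: one loop, one pass over b, less loop overhead. */
    void fused(float *a, float *c, const float *b, int n) {
        for (int i = 0; i < n; i++) {
            a[i] = b[i] + 1.0f;
            c[i] = b[i] * 2.0f;
        }
    }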
Loop Optimization
n Pre-header:
¨ Targeted to hold statements that are moved out of the loop
¨ A basic block which has only the header as successor
¨ Control flow that used to enter the loop from outside, through the header, now enters the loop from the pre-header

[Figure: before – outside edges enter the header of loop L directly; after – they enter a new pre-header block whose only successor is the header.]
78
Characteristic of a loop
n Nesting depth:
¨ depth(outermost loop) = 1
¨ depth(inner loop) = depth(parent or containing loop) + 1

n Trip count
¨ How many times a loop is iterated.
¨ for(int i=0; i<100; i++) => trip count = 100

79
Loop Invariant Code Removal
n Move out to pre-header the statements
whose source operands do not change
within the loop.
¨ Be careful with the memory operations
¨ Be careful with statements which are
executed in some of the iterations

80
Loop Invariant Code Removal
n Rules: a statement S: x := y op z is loop invariant if:
¨ y and z are not modified in the loop body
¨ S is the only statement in the loop to modify x
¨ For all uses of x, the definition S is in the available-definition set
¨ For all exit edges from the loop, S is in the available-definition set of the edge
¨ If S is a load or store (memory op), there are no writes to address(x) in the loop

81
Loop Invariant Code Removal
n Loop invariant code removal can be done without available definition information.

Rules that need change:
n First rule: for all uses of x, the definition S is in the available-definition set.
  Approximation: d dominates all uses of x.
n Second rule: for all exit edges, if x is live on the exit edge, the definition is in the available-definition set on the edge.
  Approximation: d dominates all exit basic blocks where x is live.

82
Loop Induction Variable
n Induction variables are variables such that every
time they change value, they are incremented or
decremented.
¨ Basic induction variable: induction variable whose
only assignments within a loop are of the form:
i = i +/- C, where C is a constant.
¨ Primary induction variable: basic induction variable
that controls the loop execution
(for i=0; i<100; i++)
i (register holding i) is the primary induction variable.
¨ Derived induction variable: variable that is a linear
function of a basic induction variable.
83
Loop Induction Variable
n Basic: r4, r7, r1
n Primary: r1
n Derived: r2

      r1 = 0
      r7 = &A
Loop: r2 = r1 * 4
      r4 = r7 + 3
      r7 = r7 + 1
      r10 = *r2
      r3 = *r4
      r9 = r1 * r3
      r10 = r9 >> 4
      *r2 = r10
      r1 = r1 + 4
      if (r1 < 100) goto Loop

84
Induction Variable Strength
Reduction
n Create basic induction variables from derived
induction variables.
n Rules: (S: x := y op z)
¨ op is *, <<, +, or –
¨ y is an induction variable
¨ z is invariant
¨ No other statement modifies x
¨ x is not y or z
¨ x is a register

85
Induction Variable Strength
Reduction
n Transformation:
Insert the following at the bottom of the pre-header:
  new_reg = expression of target statement S
  if opcode(S) is not add/sub, also insert
    new_inc = inc(y, op, z)
  else
    new_inc = inc(x)
Insert the following at each update of y:
  new_reg = new_reg + new_inc
Change S to: x = new_reg

Function inc(): calculates the amount of increment for its first parameter.

86
Example: Induction Variable
Strength Reduction
Pre-header: new_reg = r4 * r9; new_inc = r9

Before:            After:
r5 = r4 - 3        r5 = r4 - 3
r4 = r4 + 1        r4 = r4 + 1
                   new_reg += new_inc
r7 = r4 * r9       r7 = new_reg
r6 = r4 << 2       r6 = r4 << 2

87
Induction Variable Elimination
n Remove unnecessary basic induction variables from the
loop by substituting uses with another basic induction
variable.
n Rules:
¨ Find two basic induction variables, x and y
¨ x and y in the same family
n Incremented at the same place
¨ Increments are equal
¨ Initial values are equal
¨ x is not live at exit of loop
¨ For each BB where x is defined, there is no use of x between the
first and the last definition of y

88
Example: Induction Variable
Elimination
Before:             After (r1 eliminated in favor of r2):
r1 = 0              r2 = 0
r2 = 0
r1 = r1 - 1         r2 = r2 - 1
r2 = r2 - 1
r9 = r2 + r4        r9 = r2 + r4
r7 = r1 * r9        r7 = r2 * r9
r4 = *(r1)          r4 = *(r2)
*r2 = r7            *r2 = r7

89
Induction Variable Elimination
n Variants (in increasing complexity of elimination):
1. Trivial: induction variables that are never used except to increment themselves, and are not live at the exit of the loop
2. Same increment, same initial value (discussed)
3. Same increment, initial values a known constant offset from one another
4. Same increment, nothing known about the relation of the initial values
5. Different increments, nothing known about the relation of the initial values

¨ 1 and 2 are basically free
¨ 3–5 require complex pre-header operations

90
Example: Induction Variable
Elimination
n Case 4: same increment, unknown initial value
For the induction variable being eliminated, look at each non-incremental use and generate the same sequence of values as before. If that can be done without adding any extra statements in the loop body, the transformation can be done.

Pre-header: rx := r2 – r1 + 8

Before:              After:
r4 := r2 + 8         r4 := r1 + rx
r3 := r1 + 4         r3 := r1 + 4
.                    .
.                    .
r1 := r1 + 4         r1 := r1 + 4
r2 := r2 + 4
91
Loop Unrolling
n Replicate the body of a loop (N-1) times,
resulting in total N copies.
¨ Enable overlap of operations from different iterations
¨ Increase potential of instruction level parallelism (ILP)

n Variants:
¨ Unroll multiple of known trip counts
¨ Unroll with remainder loop
¨ While loop unroll

92
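A C sketch of the "unroll with remainder loop" variant (the unroll factor of 4 and all names are illustrative):

    /* Body replicated 4 times; a remainder loop handles n % 4 iterations. */
    void scale(int *a, int n, int k) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {    /* unrolled main loop */
            a[i]     *= k;
            a[i + 1] *= k;
            a[i + 2] *= k;
            a[i + 3] *= k;
        }
        for (; i < n; i++)              /* remainder loop */
            a[i] *= k;
    }

The four statements in the unrolled body are independent, which gives the scheduler room to overlap them (the ILP benefit mentioned above).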
Global Data Flow
Analysis

93
Global Data Flow Analysis
n Collect information about the whole program.
n Distribute the information to each block in the
flow graph.

n Data flow information: Information collected by


data flow analysis.
n Data flow equations: A set of equations solved
by data flow analysis to gather data flow
information.

94
Data flow analysis
n IMPORTANT!
¨ Data flow analysis should never tell us that a
transformation is safe when in fact it is not.
¨ When doing data flow analysis we must be
n Conservative
¨ Do not consider information that may not preserve the
behavior of the program
n Aggressive
¨ Try to collect information that is as exact as possible, so
we can get the greatest benefit from our optimizations.

95
Global Iterative Data Flow Analysis
n Global:
¨ Performed on the flow graph
¨ Goal = to collect information at the beginning
and end of each basic block
n Iterative:
¨ Construct data flow equations that describe
how information flows through each basic
block and solve them by iteratively
converging on a solution.

96
Global Iterative Data Flow Analysis
n Components of data flow equations
¨ Sets containing collected information
n in set: information coming into the BB from outside
(following flow of data)
n gen set: information generated/collected within the BB
n kill set: information that, due to action within the BB, will
affect what has been collected outside the BB
n out set: information leaving the BB
¨ Functions (operations on these sets)
n Transfer functions describe how information changes as it
flows through a basic block
n Meet functions describe how information from multiple
paths is combined.

97
Global Iterative Data Flow Analysis
n Algorithm sketch
¨ Typically, a bit vector is used to store the information.
n For example, in reaching definitions, each bit position corresponds
to one definition.
¨ We use an iterative fixed-point algorithm.
¨ Depending on the nature of the problem we are solving, we
may need to traverse each basic block in a forward (top-down)
or backward direction.
n The order in which we "visit" each BB is not important in terms of
algorithm correctness, but is important in terms of efficiency.
¨ The in and out sets should be initialized appropriately – conservatively or aggressively, depending on the problem.

98
Global Iterative Data Flow Analysis

Initialize gen and kill sets
Initialize in or out sets (depending on "direction")
while there are changes in the in and out sets {
  for each BB {
    apply meet function
    apply transfer function
  }
}

99
Typical problems
n Reaching definitions
¨ For each use of a variable, find all definitions that
reach it.
n Upward exposed uses
¨ For each definition of a variable, find all uses that it
reaches.
n Live variables
¨ For a point p and a variable v, determine whether v is
live at p.
n Available expressions
¨ Find all expressions whose value is available at
some point p.
100
Global Data Flow Analysis
n A typical data flow equation:
out[S] = gen[S] ∪ (in[S] − kill[S])
S: statement
in[S]: information coming into S
kill[S]: information killed by S
gen[S]: new information generated by S
out[S]: information going out of S

101
Global Data Flow Analysis
n The notion of gen and kill depends on the
desired information.
n In some cases, in may be defined in terms of out
- equation is solved as analysis traverses in the
backward direction.
n Data flow analysis follows control flow graph.
¨ Equations are set at the level of basic blocks, or even
for a statement

102
Points and Paths
n Point within a basic block:
¨ A location between two consecutive statements.
¨ A location before the first statement of the basic block.
¨ A location after the last statement of the basic block.
n Path: a path from a point p1 to pn is a sequence of points p1, p2, …, pn such that for each i, 1 ≤ i ≤ n − 1, either
¨ pi is the point immediately preceding a statement and pi+1 is the point immediately following that statement in the same block, or
¨ pi is the last point of some block and pi+1 is the first point in a successor block.
103
Example: Paths and Points
[Figure: flow graph with blocks B1 (d1: i := m – 1; d2: j := n; d3: a := u1), B2 (d4: i := i + 1), B3 (d5: j := j - 1), B4, B5 (d6: a := u2), and B6. Points p1, p2, p3, p4, p5, p6, …, pn mark locations between statements along a path from B1 onwards.]
104
Reaching Definition
n A definition of a variable x is a statement that assigns, or may assign, a value to x.
¨ Unambiguous definition: a statement that certainly assigns a value to x
n An assignment to x
n Reading a value from an I/O device into x
¨ Ambiguous definition: a statement that may assign a value to x
n A call to a procedure with x as a parameter (call by reference)
n A call to a procedure which can access x (x being in the scope of the procedure)
n x is an alias for some other variable (aliasing)
n An assignment through a pointer that could refer to x

105
Reaching Definition
n A definition d reaches a point p
¨ if there is a path from the point immediately following d to p, and
¨ d is not killed along the path (i.e. there is no redefinition of the same variable on the path)
n A definition of a variable is killed between two points when there is another definition of that variable along the path.

106
Example: Reaching Definition
[Figure: the flow graph of slide 104. The definition of i (d1) reaches p1; it is killed by d4, so it does not reach p2. The definition of i (d1) does not reach B3, B4, B5 and B6.]
107
Reaching Definition
n Safe (conservative) view: a definition is assumed to reach a point even if it might not actually reach it.
¨ Only an unambiguous definition kills an earlier definition
¨ All edges of the flow graph are assumed to be traversable.

if (a == b) then a = 2
else if (a == b) then a = 4
The definition "a = 4" is not reachable, yet the analysis assumes it may be.

Whether each path in a flow graph can be taken is an undecidable problem.

108
Data Flow analysis of a
Structured Program
n Structured programs have well-defined loop constructs – the resultant flow graph is always reducible.
¨ Without loss of generality we consider only while-do and if-then-else control constructs:
S → id := E │ S ; S │ if E then S else S │ do S while E
E → id + id │ id
The non-terminals represent regions.
109
Data Flow analysis of a
Structured Program
n Region: a graph G’ = (N’, E’) which is a portion of the control flow graph G.
¨ The set of nodes N’ is such that
n N’ includes a header h
n h dominates all nodes in N’
¨ The set of edges E’ contains all edges a → b such that a and b are in N’
110
Data Flow analysis of a
Structured Program

n Region consisting of a statement S:
¨ Control can flow to only one block outside the region
n A loop is a special case of a region that is strongly connected and includes all its back edges.
n Dummy blocks with no statements are used as a technical convenience (indicated as open circles)

111
Data Flow analysis of a Structured Program:
Composition of Regions

S → S1 ; S2
[Figure: the region for S is S1 followed by S2.]

112
Data Flow analysis of a Structured Program:
Composition of Regions

S → if E then S1 else S2
[Figure: a test block (if E goto S1) branching to regions S1 and S2, which join afterwards.]

113
Data Flow analysis of a Structured Program:
Composition of Regions

S → do S1 while E
[Figure: region S1 followed by a test block (if E goto S1) with a back edge to S1.]

114
Data Flow Equations
n Each region (or non-terminal) has four attributes:
¨ gen[S]: the set of definitions generated by the block S.
If a definition d is in gen[S], then d reaches the end of block S.
¨ kill[S]: the set of definitions killed by block S.
If d is in kill[S], d never reaches the end of block S: every path from the beginning of S to the end of S must contain another definition of the variable that d defines.

115
Data Flow Equations
¨ in[S]: the set of definitions that reach the entry point of block S.
¨ out[S]: the set of definitions that reach the exit point of block S.
n The data flow equations are inductive, or syntax directed.
¨ gen and kill are synthesized attributes.
¨ in is an inherited attribute.

116
Data Flow Equations
n gen[S] concerns a single basic block: it is the set of definitions in S that reach the end of S.
n In contrast, out[S] is the set of definitions (possibly made in some other block) that reach the end of S, considering all paths through S.

117
Data Flow Equations
Single statement (S is d: a := b + c):

gen[S] = {d}
kill[S] = Da − {d}
out[S] = gen[S] ∪ (in[S] − kill[S])

Da: the set of all definitions of variable a in the program

118
Data Flow Equations
Composition (S → S1 ; S2):

gen[S] = gen[S2] ∪ (gen[S1] − kill[S2])
kill[S] = kill[S2] ∪ (kill[S1] − gen[S2])

in[S1] = in[S]
in[S2] = out[S1]
out[S] = out[S2]

119
Data Flow Equations
if-then-else (S → if E then S1 else S2):

gen[S] = gen[S1] ∪ gen[S2]
kill[S] = kill[S1] ∩ kill[S2]

in[S1] = in[S]
in[S2] = in[S]
out[S] = out[S1] ∪ out[S2]

120
Data Flow Equations
Loop (S → do S1 while E):

gen[S] = gen[S1]
kill[S] = kill[S1]

in[S1] = in[S] ∪ gen[S1]
out[S] = out[S1]

121
Data Flow Analysis
n The attributes are computed for each region. The equations can be solved in two phases:
¨ gen and kill can be computed in a single pass over a basic block.
¨ in and out are computed iteratively.
n The initial condition for in for the whole program is ∅.
n in can be computed top-down; finally, out is computed.

122
Dealing with loop
n Due to the back edge, in[S] cannot be used as in[S1]
n in[S1] and out[S1] are interdependent.
n The equation is solved iteratively.
n The general equations for in and out:
in[S] = ∪ { out[Y] : Y is a predecessor of S }
out[S] = gen[S] ∪ (in[S] − kill[S])
123
Reaching definitions
n What is safe?
¨ To assume that a definition reaches a
point even if it turns out not to.
¨ The computed set of definitions reaching a
point p will be a superset of the actual set
of definitions reaching p
¨ Goal : make the set of reaching definitions
as small as possible (i.e. as close to the
actual set as possible)
124
Reaching definitions
n How are the gen and kill sets defined?
¨ gen[B] = {definitions that appear in B and
reach the end of B}
¨ kill[B] = {all definitions that never reach
the end of B}
n What is the direction of the analysis?
¨ forward
¨ out[B] = gen[B]  (in[B] - kill[B])

125
Reaching definitions
n What is the confluence operator?
¨ union
¨ in[B] = ∪ out[P], over the predecessors P of B
n How do we initialize?
¨ start small
n Why? Because we want the resulting set to be as
small as possible
¨ for each block B initialize out[B] = gen[B]

126
Computation of gen and kill sets

for each basic block BB do
  gen(BB) = ∅ ; kill(BB) = ∅ ;
  for each statement (d: x := y op z) in sequential order in BB, do
    kill(BB) = kill(BB) ∪ G[x];
    G[x] = d;
  endfor
  gen(BB) = ∪ G[x] : for all ids x
endfor

127
Computation of in and out sets
for all basic blocks BB: in(BB) = ∅
for all basic blocks BB: out(BB) = gen(BB)
change = true
while (change) do
  change = false
  for each basic block BB, do
    old_out = out(BB)
    in(BB) = ∪ out(Y) : for all predecessors Y of BB
    out(BB) = gen(BB) ∪ (in(BB) – kill(BB))
    if (old_out != out(BB)) then change = true
  endfor
endwhile

128
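The two procedures above come together in the following self-contained C sketch, which solves reaching definitions on a small diamond CFG using one 64-bit word per set; the CFG and the gen/kill vectors are illustrative assumptions:

    #include <stdio.h>

    typedef unsigned long long Set;   /* bit i represents definition d_i */
    enum { NBB = 4 };

    int main(void) {
        /* Diamond CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 (predecessor lists) */
        int npred[NBB]   = {0, 1, 1, 2};
        int pred[NBB][2] = {{0, 0}, {0, 0}, {0, 0}, {1, 2}};
        Set gen[NBB]  = {0x1, 0x2, 0x4, 0x8};  /* one def d0..d3 per block */
        Set kill[NBB] = {0x0, 0x1, 0x1, 0x2};  /* d1, d2 redefine d0's var;
                                                  d3 redefines d1's var */
        Set in[NBB] = {0}, out[NBB];

        for (int b = 0; b < NBB; b++)
            out[b] = gen[b];                   /* start small */

        int change = 1;
        while (change) {
            change = 0;
            for (int b = 0; b < NBB; b++) {
                Set new_in = 0;
                for (int p = 0; p < npred[b]; p++)
                    new_in |= out[pred[b][p]];              /* meet: union */
                Set new_out = gen[b] | (new_in & ~kill[b]); /* transfer */
                if (new_in != in[b] || new_out != out[b])
                    change = 1;
                in[b] = new_in;
                out[b] = new_out;
            }
        }
        for (int b = 0; b < NBB; b++)
            printf("BB%d: in=0x%llx out=0x%llx\n", b, in[b], out[b]);
        return 0;
    }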
Live Variable (Liveness) Analysis
n Liveness: For each point p in a program and each
variable y, determine whether y can be used before
being redefined, starting at p.

n Attributes
¨ use = set of variables used in the BB prior to any definition of them
¨ def = set of variables defined in the BB prior to any use of them
¨ in = set of variables that are live at the entry point of the BB
¨ out = set of variables that are live at the exit point of the BB

129
Live Variable (Liveness) Analysis
n Data flow equations:
in[B] = use[B] ∪ (out[B] − def[B])
out[B] = ∪ { in[S] : S ∈ succ(B) }

¨ 1st equation: a variable is live coming into a block B if either
n it is used before redefinition in B, or
n it is live coming out of B and is not redefined in B
¨ 2nd equation: a variable is live coming out of B iff it is live coming into one of its successors.

130
Example: Liveness

r1 = r2 + r3     ← r2, r3, r4, r5 are all live here, as they are
r6 = r4 – r5       consumed later; r6 is dead, as it is redefined later

r4 = 4           ← r4 is dead here, as it is redefined; so is r6.
r6 = 8             r2, r3, r5 are live

r6 = r2 + r3
r7 = r4 – r5

What does this mean? r6 = r4 – r5 is useless: it produces a dead value!
Get rid of it!
131
Computation of use and def sets

for each basic block BB do
  def(BB) = ∅ ; use(BB) = ∅ ;
  for each statement (x := y op z) in sequential order, do
    for each operand y, do
      if (y not in def(BB))
        use(BB) = use(BB) ∪ {y};
    endfor
    def(BB) = def(BB) ∪ {x};
  endfor
endfor

def is the union of all the LHS's
use is the set of ids used before being defined
132
Computation of in and out sets
for all basic blocks BB: in(BB) = ∅;

change = true;
while (change) do
  change = false
  for each basic block BB do
    old_in = in(BB);
    out(BB) = ∪ { in(Y) : for all successors Y of BB }
    in(BB) = use(BB) ∪ (out(BB) – def(BB))
    if (old_in != in(BB)) then change = true
  endfor
endwhile

133
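For contrast with the forward reaching-definitions sketch earlier, here is a minimal C version of this backward iteration on an illustrative three-block CFG in which BB1 loops on itself; the variable encoding and all names are assumptions of the sketch:

    #include <stdio.h>

    typedef unsigned Set;               /* bit 0 = variable a, bit 1 = b */
    enum { NBB = 3, VA = 1u << 0, VB = 1u << 1 };

    int main(void) {
        /* CFG: BB0 -> BB1; BB1 -> BB1 (loop) and BB1 -> BB2 */
        int nsucc[NBB]   = {1, 2, 0};
        int succ[NBB][2] = {{1, 0}, {1, 2}, {0, 0}};
        Set use[NBB] = {0, VA, VB};     /* BB1 reads a, BB2 reads b */
        Set def[NBB] = {VA, VB, 0};     /* BB0 writes a, BB1 writes b */
        Set in[NBB] = {0, 0, 0}, out[NBB] = {0, 0, 0};

        int change = 1;
        while (change) {
            change = 0;
            for (int i = 0; i < NBB; i++) {
                Set new_out = 0;
                for (int s = 0; s < nsucc[i]; s++)
                    new_out |= in[succ[i][s]];     /* meet over successors */
                Set new_in = use[i] | (new_out & ~def[i]);  /* transfer */
                if (new_in != in[i] || new_out != out[i])
                    change = 1;
                in[i] = new_in;
                out[i] = new_out;
            }
        }
        for (int i = 0; i < NBB; i++)
            printf("BB%d: in=%u out=%u\n", i, in[i], out[i]);
        return 0;
    }

At the fixed point, a (bit 0) is live out of BB0 and around the BB1 loop, and b (bit 1) is live out of BB1 into BB2.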
DU/UD Chains
n Convenient way to access/use reaching
definition information.
n Def-Use chains (DU chains)
¨ Given a def, what are all the possible consumers of the definition produced?
n Use-Def chains (UD chains)
¨ Given a use, what are all the possible producers of the definition consumed?

134
Example: DU/UD Chains
1: r1 = MEM[r2+0]
2: r2 = r2 + 1
3: r3 = r1 * r4

4: r1 = r1 + 5        7: r7 = r6
5: r3 = r5 – r1       8: r2 = 0
6: r7 = r3 * 2        9: r7 = r7 + 1

10: r8 = r7 + 5
11: r1 = r3 – r8
12: r3 = r1 * 2

DU chain of r1: (1) → 3, 4; (4) → 5
DU chain of r3: (3) → 11; (5) → 11; (12) →
UD chain of r1: (12) → 11
UD chain of r7: (10) → 6, 9
135
Some-things to Think About
n Liveness and reaching definitions are basically the same thing!
¨ All dataflow analyses are basically the same, with a few parameters:
n meaning of gen/kill (use/def)
n backward / forward direction
n all paths / some paths (must/may)
¨ So far, we have looked at may analysis algorithms
¨ How do you adjust to do must algorithms?
n Dataflow can be slow
¨ How to implement it efficiently?
¨ How to represent the info?

136
Generalizing Dataflow Analysis
n Transfer function
¨ How information is changed by BB
out[BB] = gen[BB] + (in[BB] – kill[BB]) forward analysis
in[BB] = gen[BB] + (out[BB] – kill[BB]) backward analysis
n Meet/Confluence function
¨ How information from multiple paths is combined
in[BB] = U out[P] : P is pred of BB forward analysis
out[BB] = U in[P] : P is succ of BB backward analysis

137
Generalized Dataflow Algorithm
change = true;
while (change)
change = false;
for each BB
apply meet function
apply transfer function
if any changes  change = true;

138
Example: Liveness by upward
exposed uses
for each basic block BB, do
  gen[BB] = ∅
  kill[BB] = ∅
  for each operation (x := y op z) in reverse order in BB, do
    gen[BB] = gen[BB] − {x}
    kill[BB] = kill[BB] ∪ {x}
    for each source operand y of the op, do
      gen[BB] = gen[BB] ∪ {y}
      kill[BB] = kill[BB] − {y}
    endfor
  endfor
endfor
139
Beyond Upward Exposed Uses
n Upward exposed defs
¨ in = gen + (out – kill)
¨ out = ∪ (in(succ))
¨ Walk ops in reverse order
n gen += {dest}; kill += {dest}

n Downward exposed defs
¨ in = ∪ (out(pred))
¨ out = gen + (in – kill)
¨ Walk in forward order
n gen += {dest}; kill += {dest}

n Downward exposed uses
¨ in = ∪ (out(pred))
¨ out = gen + (in – kill)
¨ Walk in forward order
n gen += {src}; kill -= {src}
n gen -= {dest}; kill += {dest}

140
All Path Problem
n Up to this point:
¨ Any-path problems ("may" relations)
n A definition reaches along some path
n Some sequence of branches in which the def reaches
n Lots of defs of the same variable may reach a point
¨ Use of the union operator in the meet function
n All-path: definition guaranteed to reach
¨ Regardless of the sequence of branches taken, the def reaches
¨ Can always count on this
¨ Only 1 def of a given variable can be guaranteed to reach
¨ Availability (as opposed to reaching)
n Available definitions
n Available expressions (one could also define reaching expressions, but they are not that useful)

141
Reaching vs Available Definitions
[Flow graph: block A: 1: r1 = r2 + r3; 2: r6 = r4 – r5. Control flows from A both directly and through block B (3: r4 = 4; 4: r6 = 8) to block C (5: r6 = r2 + r3; 6: r7 = r4 – r5).
On the A → C edge: 1, 2 reach; 1, 2 available.
On the B → C edge: 1, 3, 4 reach; 1, 3, 4 available.
At the entry of C: 1, 2, 3, 4 reach; only 1 is available.]
142
Available Definition Analysis
(Adefs)
n A definition d is available at a point p if along all paths
from d to p, d is not killed
n Remember, a definition of a variable is killed between 2 points when
there is another definition of that variable along the path
¨ r1 = r2 + r3 kills previous definitions of r1
n Algorithm:
¨ Forward dataflow analysis as propagation occurs from defs
downwards
¨ Use the Intersect function as the meet operator to guarantee the
all-path requirement
¨ gen/kill/in/out similar to reaching defs
n Initialization of in/out is the tricky part

143
Compute Adef gen/kill Sets

for each basic block BB do
  gen(BB) = ∅ ; kill(BB) = ∅ ;
  for each statement (d: x := y op z) in sequential order in BB, do
    kill(BB) = kill(BB) ∪ G[x];
    G[x] = d;
  endfor
  gen(BB) = ∪ G[x] : for all ids x
endfor

Exactly the same as Reaching defs !!

144
Compute Adef in/out Sets
U = universal set of all definitions in the prog
in(0) = ∅; out(0) = gen(0)            /* block 0 is the entry */
for each basic block BB, (BB != 0), do
  in(BB) = ∅; out(BB) = U – kill(BB)

change = true
while (change) do
  change = false
  for each basic block BB, do
    old_out = out(BB)
    in(BB) = ∩ out(Y) : for all predecessors Y of BB
    out(BB) = gen(BB) ∪ (in(BB) – kill(BB))
    if (old_out != out(BB)) then change = true
  endfor
endwhile

145
Available Expression Analysis
(Aexprs)
n An expression is the RHS of an operation
¨ Ex: in "r2 = r3 + r4", "r3 + r4" is an expression
n An expression e is available at a point p if along all paths from e to p, e is not killed.
n An expression is killed between two points when one of its source operands is redefined
¨ Ex: "r1 = r2 + r3" kills all expressions involving r1
n Algorithm:
¨ Forward dataflow analysis
¨ Use the intersect function as the meet operator to guarantee the all-path requirement
¨ Looks exactly like adefs, except gen/kill/in/out are the RHS's of operations rather than the LHS's
146
Available Expression
n Input: a flow graph with e_kill[B] and e_gen[B]
n Output: in[B] and out[B]
n Method:
in[B1] := ∅ ; out[B1] := e_gen[B1];          /* B1 is the entry block */
for each basic block B ≠ B1: out[B] = U − e_kill[B];
change = true
while (change)
  change = false;
  for each basic block B,
    in[B] := ∩ out[P] : P is a pred of B
    old_out := out[B];
    out[B] := e_gen[B] ∪ (in[B] − e_kill[B])
    if (out[B] ≠ old_out) change := true;
147
Efficient Calculation of Dataflow
n Order in which the basic blocks are visited
is important (faster convergence)
n Forward analysis – DFS order
¨ Visit
a node only when all its predecessors
have been visited
n Backward analysis – PostDFS order
¨ Visit
a node only when all of its successors
have been visited

148
Representing Dataflow Information

n Requirements – Efficiency!
¨ Large amount of information to store
¨ Fast access/manipulation
n Bitvectors
¨ General strategy used by most compilers
¨ Bit positions represent defs (rdefs)
¨ Efficient set operations: union/intersect/isone
¨ Used for gen, kill, in, out for each BB

149
Optimization using Dataflow
n Classes of optimization
1. Classical (machine independent)
n Reducing operation count (redundancy elimination)
n Simplifying operations
2. Machine specific
n Peephole optimizations
n Take advantage of specialized hardware features
3. Instruction Level Parallelism (ILP) enhancing
n Increasing parallelism
n Possibly increase instructions

150
Types of Classical Optimizations

n Operation-level – one operation in isolation
¨ Constant folding, strength reduction
¨ Dead code elimination (global, but 1 op at a time)

n Local – pairs of operations in the same BB
¨ May or may not use dataflow analysis

n Global – again pairs of operations
¨ Pairs of operations in different BBs

n Loop – body of a loop

151
Constant Folding
n Simplify an operation based on the values of its operands
¨ Constant propagation creates opportunities for this
n All constant operands
¨ Evaluate the op, replace with a move
n r1 = 3 * 4 → r1 = 12
n r1 = 3 / 0 → ??? Don't evaluate exception-raising ops! What about FP?
¨ Evaluate a conditional branch, replace with BRU or noop
n if (1 < 2) goto BB2 → goto BB2
n if (1 > 2) goto BB2 → convert to a noop (dead code)
n Algebraic identities
¨ r1 = r2 + 0, r2 – 0, r2 | 0, r2 ^ 0, r2 << 0, r2 >> 0 → r1 = r2
¨ r1 = 0 * r2, 0 / r2, 0 & r2 → r1 = 0
¨ r1 = r2 * 1, r2 / 1 → r1 = r2

152
Strength Reduction
n Replace expensive ops with cheaper ones
¨ Constant propagation creates opportunities for this
n Power-of-2 constants
¨ Multiply by power of 2: r1 = r2 * 8 → r1 = r2 << 3
¨ Divide by power of 2: r1 = r2 / 4 → r1 = r2 >> 2
¨ Remainder by power of 2: r1 = r2 % 16 → r1 = r2 & 15
n More exotic
¨ Replace multiply by a constant with a sequence of shifts and adds/subs
n r1 = r2 * 6 → r100 = r2 << 2; r101 = r2 << 1; r1 = r100 + r101
n r1 = r2 * 7 → r100 = r2 << 3; r1 = r100 – r2

153
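A quick C check of the shift-and-add decompositions above, offered as a sketch (unsigned operands keep the shifts well-defined):

    #include <stdio.h>
    #include <assert.h>

    static unsigned mul6(unsigned r2) { return (r2 << 2) + (r2 << 1); }
    static unsigned mul7(unsigned r2) { return (r2 << 3) - r2; }

    int main(void) {
        for (unsigned x = 0; x < 100000; x++) {
            assert(mul6(x) == x * 6);        /* r2*6 = (r2<<2)+(r2<<1) */
            assert(mul7(x) == x * 7);        /* r2*7 = (r2<<3)-r2      */
            assert((x % 16) == (x & 15));    /* rem by power of 2      */
        }
        printf("ok\n");
        return 0;
    }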
Dead Code Elimination
n Remove statement d: x := y op z whose
result is never consumed.
n Rules:
¨ DU chain for d is empty
¨ y and z are not live at d

154
Constant Propagation
n Forward propagation of moves/assignments of the form
d: rx := L, where L is a literal
¨ Replacement of "rx" with "L" wherever possible.
¨ d must be available at the point of replacement.

155
Forward Copy Propagation
n Forward propagation of the RHS of assignments or movs.

r1 := r2           r1 := r2
.                  .
.                  .
r4 := r1 + 1       r4 := r2 + 1

¨ Reduces chains of dependency
¨ Possibly creates dead code

156
Forward Copy Propagation
n Rules:
Statement dS is the source of the copy propagation; statement dT is the target.
¨ dS is a mov statement
¨ src(dS) is a register
¨ dT uses dest(dS)
¨ dS is an available definition at dT
¨ src(dS) is an available expression at dT

157
Backward Copy Propagation
n Backward propagation of the LHS of an assignment.
dT: r1 := r2 + r3     →    r4 := r2 + r3
    r5 := r1 + r6     →    r5 := r4 + r6
dS: r4 := r1          →    (dead code)
n Rules:
¨ dT and dS are in the same basic block
¨ dest(dT) is a register
¨ dest(dT) is not live in out[B]
¨ dest(dS) is a register
¨ dS uses dest(dT)
¨ dest(dS) is not used between dT and dS
¨ dest(dS) is not defined between dT and dS
¨ There is no use of dest(dT) after the first definition of dest(dS)

158
Local Common Sub-Expression
Elimination

n Benefits:
¨ Reduced computation
¨ Generates mov statements, which can get copy propagated
n Rules:
¨ dS and dT have the same expression
¨ src(dS) == src(dT) for all sources
¨ For all sources x, x is not redefined between dS and dT

Before:                 After:
dS: r1 := r2 + r3       dS: r1 := r2 + r3
dT: r4 := r2 + r3           r100 := r1
                        dT: r4 := r100

159
Global Common Sub-Expression
Elimination

n Rules:
¨ dS and dT have the same expression
¨ src(dS) == src(dT) for all sources of dS and dT
¨ The expression of dS is available at dT

160
Unreachable Code Elimination

Mark the initial BB visited
to_visit = { initial BB }
while (to_visit not empty)
  current = to_visit.pop()
  for each successor block of current
    if successor not yet visited
      mark successor as visited
      to_visit += successor
  endfor
endwhile
Eliminate all unvisited blocks

[Figure: a flow graph with entry and blocks bb1–bb5.] Which BB(s) can be deleted?

161
Review Questions
Q1: Explain the 90-10 rule in code optimization. Why is it important?
Answer: The 90-10 rule states that 90% of a program’s execution time is
spent in only 10% of its code. This highlights the importance of focusing
optimization efforts on identifying and improving these performance-critical
sections, such as loops.

Q2: How does loop optimization improve program efficiency?


Answer: Loop optimization techniques such as loop unrolling, loop invariant
code motion, and strength reduction help by:
1. Reducing the number of instructions executed inside loops.
2. Improving data locality, thus enhancing CPU cache performance.
3. Minimizing control overhead and branch mispredictions.

162
Review Questions
Q3: What is peephole optimization, and how does it enhance
performance?

Answer: Peephole optimization is a local optimization technique that


examines small sequences of code (typically 2-4 instructions) and
replaces inefficient patterns with more efficient alternatives. It improves
performance by:
1. Removing redundant instructions.
2. Simplifying algebraic expressions (e.g., replacing x * 2 with x + x).
3. Eliminating unnecessary jumps or loads.

163
Review Questions
Q4: What are induction variables, and how does strength reduction optimize them?

Answer: Induction variables are loop control variables that change in a predictable
manner. Strength reduction optimizes them by replacing expensive operations (e.g.,
multiplication) with cheaper ones (e.g., addition). Example:
for (i = 0; i < n; i++) {
x = i * 5;
}
Optimized to:
temp = 0;
for (i = 0; i < n; i++) {
x = temp;
temp += 5;
}

164
Review Questions
Q5: Why is dead code elimination important in compiler
optimization?

Answer: Dead code elimination removes statements that do


not affect the program’s final output. This reduces:
1. Code size and memory usage.
2. Execution time by eliminating redundant computations.
3. Complexity, making debugging and maintenance easier.

165
Review Questions
Q6: How does the concept of "available expressions" help in common
subexpression elimination?

Answer: Available expressions analysis identifies computations that can be


reused instead of recomputed.
Example:
a = b + c;
x = b + c + d;
Optimized to:
temp = b + c;
a = temp;
x = temp + d;

166
CASE STUDIES

167
Case Study 1: Optimizing Redundant Computations

Problem: consider the following code:

int square(int x) {
  return x * x;
}
int main() {
  int a = square(5);
  int b = square(5);
}

Solution: using common subexpression elimination, we rewrite it as:

int main() {
  int temp = square(5);
  int a = temp;
  int b = temp;
}

This avoids redundant computations and improves execution speed.
168
Case Study 2: Loop Invariant Code Motion

Problem: identify optimizations in this loop:

for (int i = 0; i < n; i++) {
  int x = 10 * a;
  arr[i] = x + i;
}

Solution: the expression 10 * a does not change inside the loop. Moving it outside:

int x = 10 * a;
for (int i = 0; i < n; i++) {
  arr[i] = x + i;
}

This reduces unnecessary recomputation and speeds up execution.

169
Case Study 3: Strength Reduction in Induction Variables

Problem: convert the multiplication in a loop into an addition:

for (int i = 0; i < n; i++) {
  arr[i] = i * 4;
}

Solution: replace the multiplication with an addition:

int temp = 0;
for (int i = 0; i < n; i++) {
  arr[i] = temp;
  temp += 4;
}

This reduces costly multiplications to cheap additions.

170
Case Study 4: Removing Dead Code

Problem: identify the dead code in:

int main() {
  int x = 5;
  int y = 10;
  x = 20;  // Previous value of x is never used.
  printf("%d", y);
}

Solution: remove the redundant assignments:

int main() {
  int y = 10;
  printf("%d", y);
}

This eliminates unused code, reducing memory usage and improving performance.

171
Lecture Summary
1. Code optimization improves the efficiency, speed, and
scalability of compiled programs.
2. Techniques include peephole optimization, loop
optimizations, common subexpression elimination, and
dead code elimination.
3. Optimization must preserve program semantics while
improving execution time.
4. Factors such as CPU architecture, memory hierarchy,
and cache behavior influence optimization effectiveness.

172
Takeaways or Lessons Learnt
1. Optimizing code is crucial for high-performance
computing.
2. Loop transformations play a significant role in enhancing
program execution speed.
3. Redundancy elimination reduces computation and
memory overhead.
4. Compiler optimizations can dramatically affect execution
time and resource utilization.
5. Trade-offs exist—not all optimizations provide significant
benefits.

173
Assignments
Assignment 1: Optimization of a Code Snippet

Identify and apply at least four optimization techniques to the following code.
int sum_of_squares(int n) {
int sum = 0;
for (int i = 0; i < n; i++) {
sum += i * i;
}
return sum;
}
1. Explain the optimizations performed.
2. Implement the optimized version.
3. Compare execution times.

174
Assignments
Assignment 2: Case Study on Compiler Optimization
Techniques

Select any modern compiler (e.g., GCC, LLVM) and:


1. Investigate at least three optimization techniques it uses.
2. Provide real-world examples of how those techniques
improve execution speed.
3. Explain the trade-offs involved in those optimizations.

175
