Compiler Design Chapter-3
Syntax analysis
1
Outline
Introduction
Context free grammar (CFG)
Derivation
Parse tree
Ambiguity
Left recursion
Left factoring
Top-down parsing
• Recursive Descent Parsing (RDP)
• Non-recursive predictive parsing
– First and follow sets
– Construction of a predictive parsing table
2
Outline
LL(1) grammars
Syntax error handling
Error recovery in predictive parsing
Panic mode error recovery strategy
Bottom-up parsing (LR(k) parsing)
Stack implementation of shift/reduce parsing
Conflict during shift/reduce parsing
LR parsers
Constructing SLR parsing tables
Canonical LR parsing
LALR (Reading assignment)
Yacc
3
Introduction
Syntax: the way in which tokens are put together to
form expressions, statements, or blocks of statements.
The rules governing the formation of statements in a
programming language.
Syntax analysis: the task concerned with fitting a
sequence of tokens into a specified syntax.
Parsing: To break a sentence down into its component
parts with an explanation of the form, function, and
syntactical relationship of each part.
The syntax of a programming language is usually given
by the grammar rules of a context free grammar (CFG).
4
Parser
Parse tree
next char next token
lexical Syntax
analyzer analyzer
get next
char get next
token
Source
Program
symbol
table
Lexical Syntax
(Contains a record Error
Error
for each identifier)
5
Introduction…
The syntax analyzer (parser) checks whether a given
source program satisfies the rules implied by a CFG
or not.
If it does, the parser creates the parse tree of that
program.
Otherwise, the parser reports error messages.
A CFG:
gives a precise syntactic specification of a
programming language.
A grammar can be directly converted into a parser by
some tools (yacc).
6
Introduction…
The parser can be categorized into two groups:
Top-down parser
The parse tree is created top to bottom, starting from
the root to leaves.
Bottom-up parser
The parse tree is created bottom to top, starting from
the leaves to root.
Both top-down and bottom-up parsers scan the input
from left to right (one symbol at a time).
Efficient top-down and bottom-up parsers can be
implemented only for suitable subclasses of context-free grammars:
LL for top-down parsing
LR for bottom-up parsing
7
Context free grammar (CFG)
A context-free grammar is a specification for the
syntactic structure of a programming language.
A context-free grammar is a 4-tuple:
G = (T, N, P, S) where
T is a finite set of terminals (a set of tokens)
N is a finite set of non-terminals (syntactic variables)
P is a finite set of productions of the form A → α, where A ∈ N and α ∈ (N ∪ T)*
S ∈ N is the start symbol
8
Example: grammar for simple arithmetic expressions:
E → E + T | E - T | T
T → T * F | T / F | F
F → ( E ) | id
9
Notational Conventions Used
Terminals:
Lowercase letters early in the alphabet, such as a, b, c.
Operator symbols such as +, *, and so on.
Punctuation symbols such as parentheses, comma, and so
on.
The digits 0,1,. . . ,9.
Boldface strings such as id or if, each of which represents
a single terminal symbol.
Non-terminals:
Uppercase letters early in the alphabet, such as A, B, C.
The letter S is usually the start symbol.
Lowercase, italic names such as expr or stmt.
Uppercase letters may be used to represent non-terminals
for the constructs.
• expr, term, and factor are represented by E, T, F
10
Notational Conventions Used…
Grammar symbols
Uppercase letters late in the alphabet, such as X, Y, Z, that is, either non-
terminals or terminals.
Strings of terminals.
Lowercase letters late in the alphabet, mainly u,v,x,y ∈ T*
Strings of grammar symbols.
Lowercase Greek letters, α, β, γ ∈ (N∪T)*
A set of productions A → α1, A → α2, . . . , A → αk with a common head A (call
them A-productions) may be written
A → α1 | α2 | … | αk
α1, α2, . . . , αk are called the alternatives for A.
The head of the first production is the start symbol.
E → E + T | E - T | T
T → T * F | T / F | F
F → ( E ) | id
11
Derivation
A derivation is a sequence of replacements of structure names
by choices on the right hand sides of grammar rules.
Example: E → E + E | E – E | E * E | E / E | -E
E→(E)
E → id
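For example (an added illustration using this grammar), the sentence - ( id + id ) has the leftmost derivation
E => - E => - ( E ) => - ( E + E ) => - ( id + E ) => - ( id + id )
which is exactly the replacement sequence whose parse tree is built step by step on the following slides.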
12
Derivation…
In general, the one-step derivation is defined by
α A β ⇒ α γ β if there is a production rule A → γ in our
grammar,
where α and β are arbitrary strings of terminal and non-
terminal symbols.
α1 ⇒ α2 ⇒ … ⇒ αn (αn is derived from α1; equivalently, α1 derives αn)
13
Derivation…
We will see that a top-down parser tries to find the leftmost
derivation of the given source program.
We will see that a bottom-up parser tries to find a rightmost
derivation of the given source program in reverse order.
14
Parse tree
A parse tree is a graphical representation of a
derivation
It filters out the order in which productions are applied
to replace non-terminals.
[Figure: step-by-step construction of the parse tree for - ( id + id ): the root E is first expanded to - E, then the inner E to ( E ), then that E to E + E, and finally each remaining E to id.]
This is a top-down derivation because we start building the
parse tree at the top (the root).
16
Exercise
a) Using the grammar below, draw a parse tree for the
following string:
( ( id . id ) id ( id ) ( ( ) ) )
S→E
E → id
|(E.E)
|(L)
|()
L→LE
|E
b) Give a rightmost derivation for the string given in (a).
18
Ambiguity
A grammar that produces more than one parse tree for some
sentence is called an ambiguous grammar.
Equivalently, it produces more than one leftmost derivation, or
more than one rightmost derivation, for the same sentence.
19
Ambiguity: Example
Example: The arithmetic expression grammar
E → E + E | E * E | ( E ) | id
permits two distinct leftmost derivations for the
sentence id + id * id:
(a) E => E + E => id + E => id + E * E => id + id * E => id + id * id
(b) E => E * E => E + E * E => id + E * E => id + id * E => id + id * id
20
Ambiguity: example
E → E + E | E * E | ( E ) | - E | id
Construct parse trees for the expression: id + id * id
[Figure: two different parse trees for id + id * id — one in which + is the root and the * subexpression is its right child, and one in which * is the root and the + subexpression is its left child.]
Which parse tree is correct?
21
Ambiguity: example…
E → E + E | E * E | ( E ) | - E | id
A grammar that produces more than one parse tree for some
input sentence is said to be an ambiguous grammar.
[Figure: the two parse trees for id + id * id from the previous slide.]
22
Ambiguity: QUIZ (10%)
Consider the context-free grammar G:
S → ( L ) | a
L → L , S | S
with the input string ((a,a),a,(a)).
To add precedence
Create a non-terminal for each level of precedence
Isolate the corresponding part of the grammar
Force the parser to recognize high-precedence subexpressions first
For algebraic expressions
Multiplication and division first (level one)
Subtraction and addition next (level two)
To add associativity
Left-associative: the next-level (higher-precedence) non-terminal is placed at
the end of the production
24
Elimination of ambiguity
To disambiguate the grammar
E → E + E | E * E | ( E ) | id
rewrite it as
E → E + T | T
T → T * F | F
F → ( E ) | id
so that, for example, id + id * id has only one parse tree.
25
Left Recursion
Consider the grammar:
E → E + T | T
T → T * F | F
F → ( E ) | id
26
Elimination of Left recursion
A grammar is left recursive, if it has a non-terminal A
such that there is a derivation
A =>+ Aα for some string α (i.e., A derives Aα in one or more steps).
Top-down parsing methods cannot handle left-recursive
grammars.
so a transformation that eliminates left-recursion is
needed.
If there is a production A → Aα | β (where β does not begin
with A), it can be replaced with the two non-left-recursive
productions
A → βA′
A′ → αA′ | ε
without changing the set of strings derivable from A.
27
Elimination of Left recursion…
Example :
Consider the following grammar for arithmetic expressions:
E → E + T | T
T → T * F | F
F → ( E ) | id
First eliminate the left recursion for E as
E → TE’
E’ → +TE’ |ε
Then eliminate for T as
T → FT’
T’→ *FT’ | ε
Thus the obtained grammar after eliminating left recursion is
E → TE’
E’ → +TE’ |ε
T → FT’
T’ → *FT’ | ε
F → (E) | id
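A minimal C sketch of this transformation for a single non-terminal (my own illustration; the function name eliminate_immediate, the string representation of alternatives, and the use of A' as the new non-terminal are assumptions, not part of the slides):

/* Sketch: eliminate immediate left recursion for one non-terminal A whose
 * productions are A -> A a1 | ... | A am | b1 | ... | bn.
 * Alternatives are plain strings; an alternative is treated as left-recursive
 * if it begins with the name of A (a simplified prefix test). */
#include <stdio.h>
#include <string.h>

static void eliminate_immediate(const char *A, const char *alts[], int nalts) {
    size_t lenA = strlen(A);

    /* A-productions: every non-left-recursive alternative b becomes A -> b A' */
    printf("%s ->", A);
    int first = 1;
    for (int i = 0; i < nalts; i++)
        if (strncmp(alts[i], A, lenA) != 0) {
            printf("%s %s %s'", first ? "" : " |", alts[i], A);
            first = 0;
        }
    printf("\n");

    /* A'-productions: every left-recursive alternative A a becomes A' -> a A',
     * plus the epsilon production. */
    printf("%s' ->", A);
    for (int i = 0; i < nalts; i++)
        if (strncmp(alts[i], A, lenA) == 0)
            printf(" %s %s' |", alts[i] + lenA, A);
    printf(" epsilon\n");
}

int main(void) {
    /* E -> E + T | T   becomes   E -> T E'  and  E' -> + T E' | epsilon */
    const char *E_alts[] = { "E + T", "T" };
    eliminate_immediate("E", E_alts, 2);
    return 0;
}

Running it on the E-productions prints E -> T E' and E' -> + T E' | epsilon, matching the result above.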
28
Elimination of Left recursion…
Generally, we can eliminate immediate left recursion by the
following technique. First we group the A-productions as
A → Aα1 | Aα2 | … | Aαm | β1 | β2 | … | βn
where no βi begins with A. Then we replace the A-productions by
A → β1A′ | β2A′ | … | βnA′
A′ → α1A′ | α2A′ | … | αmA′ | ε
29
Eliminating left-recursion algorithm
Arrange the non-terminals in some order A1, A2, …, An.
for i := 1 to n do
  for j := 1 to i-1 do
    replace each production of the form Ai → Ajγ by
      Ai → δ1γ | δ2γ | … | δkγ, where Aj → δ1 | δ2 | … | δk are all the current Aj-productions;
  eliminate the immediate left recursion among the Ai-productions
30
Example Left Recursion Elim.
Grammar:
A → B C | a
B → C A | A b
C → A B | C C | a
Choose arrangement: A, B, C
i=1: nothing to do
i=2, j=1: B → C A | A b
  ⇒ B → C A | B C b | a b
  ⇒ (imm) B → C A B' | a b B'
          B' → C b B' | ε
i=3, j=1: C → A B | C C | a
  ⇒ C → B C B | a B | C C | a
i=3, j=2: C → B C B | a B | C C | a
  ⇒ C → C A B' C B | a b B' C B | a B | C C | a
  ⇒ (imm) C → a b B' C B C' | a B C' | a C'
          C' → A B' C B C' | C C' | ε
Resulting grammar:
A → B C | a
B → C A B' | a b B'
B' → C b B' | ε
C → a b B' C B C' | a B C' | a C'
C' → A B' C B C' | C C' | ε
31
Eliminating left-recursion (more)
Example: Given: S → Aa | b
A → Ac | Sd | ε
Substitute the S-productions in A → Sd to obtain the
following productions:
A → Ac | Aad | bd | ε
Eliminating the immediate left recursion among the A-
productions yields the following grammar:
S → Aa | b
A → bdA' | A'
A' → cA' | adA' | ε
32
Left factoring
When a non-terminal has two or more productions
whose right-hand sides start with the same grammar
symbols, the grammar is not LL(1) and cannot be used
for predictive parsing
A predictive parser (a top-down parser without
backtracking) insists that the grammar must be left-
factored.
33
Left factoring…
Given productions A → αβ1 | αβ2, when processing input derived
from α we do not know whether to expand A to αβ1 or to αβ2.
But if we re-write the grammar as follows:
A → αA'
A' → β1 | β2
we can immediately expand A to αA' and defer the choice between β1 and β2.
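For example (a standard illustration; the left-factored form reappears in the LL(1) exercise later in this chapter), the productions
S → iEtS | iEtSeS | a
share the common prefix iEtS and are left-factored into
S → iEtSS' | a
S' → eS | ε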
35
Syntax analysis
Every language has rules that prescribe the syntactic
structure of well formed programs.
The syntax can be described using Context Free
Grammars (CFG) notation.
38
RDP…
Example: G: S → cAd
A → ab | a
Draw the parse tree for the input string cad using
the above method.
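A minimal C sketch of recursive-descent parsing with backtracking for this grammar (my own illustration; the function names S, A, match and the globals input/pos are assumptions). On the input cad, A first tries the alternative ab, fails at d, backtracks, and succeeds with the alternative a:

#include <stdio.h>

static const char *input;   /* string being parsed             */
static int pos;             /* index of the current input char */

static int match(char c) {              /* consume c if it is the next symbol */
    if (input[pos] == c) { pos++; return 1; }
    return 0;
}

static int A(void) {                    /* A -> a b | a */
    int save = pos;
    if (match('a') && match('b')) return 1;   /* try A -> a b */
    pos = save;                               /* backtrack    */
    if (match('a')) return 1;                 /* try A -> a   */
    pos = save;
    return 0;
}

static int S(void) {                    /* S -> c A d */
    int save = pos;
    if (match('c') && A() && match('d')) return 1;
    pos = save;
    return 0;
}

int main(void) {
    input = "cad";
    pos = 0;
    printf("%s\n", (S() && input[pos] == '\0') ? "accepted" : "rejected");
    return 0;
}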
39
Exercise
Using the grammar below, draw a parse tree for the
following string using RDP algorithm:
( ( id . id ) id ( id ) ( ( ) ) )
S→E
E → id
|(E.E)
|(L)
|()
L→LE
|E
40
Non-recursive predictive parsing
It is possible to build a non-recursive parser by explicitly
maintaining a stack.
This method uses a parsing table that determines the
next production to be applied.
[Figure: model of a non-recursive predictive parser. INPUT: id + id * id $; STACK: initially E $ (E on top); the Predictive Parsing Program consults the PARSING TABLE M[X, a] and emits the parse tree as OUTPUT. The three cases x = a = $, x = a ≠ $, and X a non-terminal drive the parser, as described on the next slides.]
PARSING TABLE (rows: non-terminals, columns: input symbols):
       id        +           *           (         )        $
E   |  E → TE' |           |           | E → TE' |        |
E'  |          | E' → +TE' |           |         | E' → ε | E' → ε
T   |  T → FT' |           |           | T → FT' |        |
T'  |          | T' → ε    | T' → *FT' |         | T' → ε | T' → ε
F   |  F → id  |           |           | F → (E) |        |
41
Non-recursive predictive parsing…
The input buffer contains the string to be parsed
followed by $ (the right end marker)
The stack contains a sequence of grammar symbols
with $ at the bottom.
Initially, the stack contains the start symbol of the
grammar followed by $.
The parsing table is a two dimensional array M[A, a]
where A is a non-terminal of the grammar and a is a
terminal or $.
The parser program behaves as follows.
The program always considers
X, the symbol on top of the stack and
a, the current input symbol.
42
Predictive Parsing…
There are three possibilities:
1. X = a = $ : the parser halts and announces a successful
completion of parsing.
2. X = a ≠ $ : the parser pops X off the stack and advances
the input pointer to the next symbol.
3. X is a non-terminal : the program consults entry M[X, a],
which is either an X-production or an error entry.
If M[X, a] = {X → uvw}, X on top of the stack is replaced
by uvw (with u at the top of the stack).
As an output, any code associated with the X-production can
be executed.
If M[X, a] = error, the parser calls the error recovery method.
43
Predictive Parsing algorithm
set ip to point to the first symbol of w;
set X to the top stack symbol;
while ( X ≠ $ ) { /* stack is not empty */
    let a be the symbol pointed to by ip;
    if ( X = a ) { pop the stack; advance ip; }
    else if ( X is a terminal ) error();
    else if ( M[X, a] is an error entry ) error();
    else if ( M[X, a] = X → Y1 Y2 … Yk ) {
        output the production X → Y1 Y2 … Yk;
        pop the stack;
        push Yk, Yk-1, . . . , Y1 onto the stack, with Y1 on top;
    }
    set X to the top stack symbol;
}
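A compact C sketch of this loop for the expression grammar of the next slides (my own illustration, not the slides' code; single characters stand for grammar symbols — 'i' for id, 'R' for E', 'Q' for T', 'e' for ε — and the parsing table is hard-coded in the function M):

#include <stdio.h>
#include <string.h>

static char stack[100];
static int top = -1;
static void push(char c) { stack[++top] = c; }
static char pop(void)    { return stack[top--]; }

/* M[X][a]: right-hand side used to expand X on lookahead a, or NULL (error). */
static const char *M(char X, char a) {
    switch (X) {
    case 'E': return (a=='i'||a=='(') ? "TR"  : NULL;
    case 'R': return (a=='+') ? "+TR" : (a==')'||a=='$') ? "e" : NULL;
    case 'T': return (a=='i'||a=='(') ? "FQ"  : NULL;
    case 'Q': return (a=='*') ? "*FQ" : (a=='+'||a==')'||a=='$') ? "e" : NULL;
    case 'F': return (a=='i') ? "i"   : (a=='(') ? "(E)" : NULL;
    default:  return NULL;
    }
}

static int is_terminal(char c) { return strchr("i+*()$", c) != NULL; }

int parse(const char *w) {              /* w must end with '$' */
    int ip = 0;
    push('$'); push('E');               /* stack: E on top of $ */
    char X = stack[top];
    while (X != '$') {
        char a = w[ip];
        if (X == a)              { pop(); ip++; }      /* match a terminal   */
        else if (is_terminal(X)) return 0;             /* error              */
        else {
            const char *rhs = M(X, a);
            if (rhs == NULL)     return 0;             /* error entry        */
            printf("%c -> %s\n", X, rhs);              /* output production  */
            pop();
            if (strcmp(rhs, "e") != 0)                 /* do not push ε      */
                for (int k = (int)strlen(rhs) - 1; k >= 0; k--)
                    push(rhs[k]);                      /* Y1 ends up on top  */
        }
        X = stack[top];
    }
    return w[ip] == '$';                /* accept when both reach $ */
}

int main(void) {
    printf("%s\n", parse("i+i*i$") ? "accepted" : "rejected");
    return 0;
}

parse("i+i*i$") prints the productions of the leftmost derivation and accepts, mirroring the simulation on the following slides.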
44
A Predictive Parser table
Grammar:
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → ( E ) | id
45
Predictive Parsing Simulation
INPUT: id + id * id $
[Figure: successive snapshots of the stack, the remaining input, and the growing parse tree. Starting from stack E $, the parser applies E → TE', T → FT', F → id, matches id, applies T' → ε and E' → +TE', matches +, and continues in the same way for id * id, consulting the parsing table at every step.]
When Top(Stack) = input = $, the parser halts and accepts the
input string.
50
Non-recursive predictive parsing…
Example: G:
E → TR
R → +TR
R → -TR
R → ε
T → 0 | 1 | … | 9
Input: 1+2
Parsing table (rows X = non-terminals, columns a = 0 1 … 9 + - $):
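For reference, a sketch of the entries such a table would contain (derived here from the FIRST/FOLLOW rules, not taken from the slide): each digit column d gets E → TR and T → d, the + column gets R → +TR, the - column gets R → -TR, and the $ column gets R → ε.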
51
Non-recursive predictive parsing…
52
FIRST and FOLLOW
53
Construction of a predictive parsing table
FIRST
FIRST(α) = the set of terminals that begin the strings
derived from α.
If α derives ε in zero or more steps, ε is in FIRST(α).
FIRST(X), where X is a grammar symbol, can be found
using the following rules:
1- If X is a terminal, then FIRST(X) = {X}.
2- If X is a non-terminal, there are two cases:
   a) If X → ε is a production, add ε to FIRST(X).
   b) If X → Y1 Y2 … Yk is a production, add to FIRST(X) every non-ε symbol of FIRST(Y1); if ε ∈ FIRST(Y1), also add the non-ε symbols of FIRST(Y2), and so on; if ε ∈ FIRST(Yi) for all i, add ε to FIRST(X).
54
Construction of a predictive parsing table…
FOLLOW
FOLLOW(A) = the set of terminals that can appear
immediately to the right of A in some sentential
form.
1- Place $ in FOLLOW(A), where A is the start symbol.
2- If there is a production B → αAβ, then everything in FIRST(β) except ε is placed in FOLLOW(A).
3- If there is a production B → αA, or a production B → αAβ where FIRST(β) contains ε, then everything in FOLLOW(B) is in FOLLOW(A).
57
Exercise
Consider the following grammar over the alphabet
{ g,h,i,b}
A → BCD
B → bB | ε
C → Cg | g | Ch | i
D → AB | ε
Fill in the table below with the FIRST and FOLLOW sets for
the non-terminals in this grammar:
58
Construction of predictive parsing table
Input Grammar G
Output Parsing table M
Input: Grammar G
Output: Parsing table M
For each production A → α of the grammar do:
• For each terminal a in FIRST(α), add A → α to M[A, a].
• If ε ∈ FIRST(α), add A → α to M[A, b] for each b in FOLLOW(A).
• If ε ∈ FIRST(α) and $ ∈ FOLLOW(A), add A → α to M[A, $].
• Make each undefined entry of M an error.
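For instance (an illustration of these rules using the FIRST/FOLLOW sets listed in the exercise below): for E' → +TE', FIRST(+TE') = {+}, so E' → +TE' is added to M[E', +]; for E' → ε, ε ∈ FIRST(ε) and FOLLOW(E') = {$, )}, so E' → ε is added to M[E', $] and M[E', )].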
59
Example:
60
Non-recursive predictive parsing…
Exercise 1:
Consider the following grammars G, Construct the predictive parsing table
and parse the input symbols: id + id * id
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → ( E ) | id

FIRST(E) = FIRST(T) = FIRST(F) = { (, id }
FIRST(E') = { +, ε }
FIRST(T') = { *, ε }
FOLLOW(E) = FOLLOW(E') = { $, ) }
FOLLOW(T) = FOLLOW(T') = { +, $, ) }
FOLLOW(F) = { *, +, $, ) }
62
LL(k) Parser
This parser scans the input from left to right (the first L) and
produces a leftmost derivation (the second L). It looks ahead
1 symbol to choose its next action. Therefore, it is known as
an LL(1) parser.
64
Non- LL(1) Grammar: Examples
65
LL(1) Grammars…
Exercise: Consider the following grammar G:
A' → A
A → xA | yA | y
a) Find FIRST and FOLLOW sets for G:
b) Construct the LL(1) parse table for this grammar.
c) Explain why this grammar is not LL(1).
d) Transform the grammar into a grammar that is
LL(1).
e) Give the parse table for the grammar created in
(d).
66
G: A' → A
   A → xA | yA | y
FIRST(A) = FIRST(A') = {x, y}
FOLLOW(A) = FOLLOW(A') = {$}

LL(1) table for G:
        x          y                 $
A'  |  A' → A   |  A' → A          |
A   |  A → xA   |  A → yA, A → y   |
Not LL(1): multiply defined entry in M[A, y].

Left factor:
A' → A
A → xA | yA''
A'' → A | ε
FIRST(A') = FIRST(A) = {x, y}, FIRST(A'') = {x, y, ε}
FOLLOW(A) = FOLLOW(A') = FOLLOW(A'') = {$}

LL(1) table for the transformed grammar:
        x           y            $
A'  |  A' → A    |  A' → A     |
A   |  A → xA    |  A → yA''   |
A'' |  A'' → A   |  A'' → A    |  A'' → ε
Now G is LL(1).
67
LL(1) Grammar: Exercise
Given G:
S → iEtSS' | a
S' → eS | ε
E → b
FIRST(S) = {i, a}, FIRST(E) = {b}, FIRST(S') = {e, ε}
FOLLOW(S) = FOLLOW(S') = {$, e}, FOLLOW(E) = {t}

LL(1) table:
       a        b        e                   i              t   $
S   |  S → a  |        |                   |  S → iEtSS'  |   |
S'  |         |        |  S' → eS, S' → ε  |              |   |  S' → ε
E   |         |  E → b |                   |              |   |
Is G LL(1)? No: the entry M[S', e] is multiply defined.
Input string to parse: ibtaea
68
Exercises
69
Exercises
70
Exercises
3. Given the following grammar:
3. Given the following grammar:
program → procedure STMT-LIST
STMT-LIST → STMT STMT-LIST | STMT
STMT → do VAR = CONST to CONST begin STMT-LIST end
     | ASSN-STMT
Show the parse tree for the following code fragment:
procedure
do i=1 to 100 begin
ASSN-STMT
ASSN-STMT
end
ASSN-STMT
71
Exercises
72
Syntax error handling
Common programming errors can occur at many
different levels:
Lexical errors include misspellings of identifiers,
keywords, or operators: E.g., ebigin instead of begin
Syntactic errors include misplaced semicolons, extra or
missing braces { }, a case without an enclosing switch, …
Semantic errors include type mismatches between
operators and operands, e.g., a return statement in a Java
method with result type void, or an operator applied to an
incompatible operand.
Logical errors can be anything from incorrect reasoning.
E.g, assignment operator = instead of the comparison
operator ==
73
Syntax error handling…
The error handler should be written with the
following goals in mind:
Report the presence of errors clearly and accurately.
Recover from each error quickly enough to detect subsequent errors.
Add minimal overhead to the processing of correct programs.
74
Syntax error handling…
75
Error recovery in predictive parsing
An error can be detected in predictive parsing:
When the terminal on top of the stack does not
match the next input symbol or
When there is a non-terminal A on top of the stack
and a is the next input symbol and M[A, a] = error.
Panic mode error recovery method
Synchronization tokens and scan
76
Panic mode error recovery strategy
The primary error situation occurs with a non-terminal A
on the top of the stack and a current input token that
is not in FIRST(A) (or, if ε ∈ FIRST(A), not in FOLLOW(A)).
Solution
Build the set of synchronizing tokens directly into
the LL(1) parsing table.
Possible alternatives
1. Pop A from the stack
2. Successively pop tokens from the input until a token
is seen for which we can restart the parse.
77
Panic mode error recovery…
Choose alternative 1 if the current input token is $ or is in FOLLOW(A) (a synch entry).
Choose alternative 2 if the current input token is not $ and is not in FIRST(A) ∪ FOLLOW(A) (scan).
Example: Using FOLLOW and FIRST symbols as synchronizing tokens, build the
parse table for grammar G:
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → ( E ) | id
FIRST(E) = FIRST(T) = FIRST(F) = { (, id }
FIRST(E') = { +, ε }
FIRST(T') = { *, ε }
FOLLOW(E) = FOLLOW(E') = { $, ) }
FOLLOW(T) = FOLLOW(T') = { +, $, ) }
FOLLOW(F) = { *, +, $, ) }
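As an added note on how such a table is filled in: a synch entry is placed in M[A, a] for each token a in FOLLOW(A) whose entry would otherwise be empty. For example, FOLLOW(F) = {*, +, $, )}, so M[F, *], M[F, +], M[F, )] and M[F, $] become synch, which tells the parser to pop F (alternative 1) and continue from the synchronizing token.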
Bottom-up parsers:
• build the nodes on the bottom of the parse tree first.
• Suitable for automatic parser generation, handle a larger
class of grammars.
examples: shift-reduce parser (or LR(k) parsers)
79
Bottom-Up Parser
A bottom-up parser, or a shift-reduce parser, begins
at the leaves and works up to the top of the tree.
Consider the grammar:
S → aABe
A → Abc | b
B → d
80
Bottom-Up Parser: Simulation
INPUT: a b b c d e $ OUTPUT:
Production
S aABe
Bottom-Up Parsing
A Abc
Program
Ab
Bd
81
Bottom-Up Parser: Simulation
INPUT: a b b c d e $ OUTPUT:
Production
S aABe
Bottom-Up Parsing
A Abc Program A
Ab
Bd b
82
Bottom-Up Parser: Simulation
INPUT: a A b c d e $ OUTPUT:
Production
S aABe
Bottom-Up Parsing
A Abc Program A
Ab
Bd b
83
Bottom-Up Parser: Simulation
INPUT: a A b c d e $ OUTPUT:
Production
S aABe
Bottom-Up Parsing
A Abc Program A
Ab
Bd b
84
Bottom-Up Parser: Simulation
INPUT: a A b c d e $ OUTPUT:
Production
A
S aABe
Bottom-Up Parsing
A Abc Program A b c
Ab
Bd b
85
Bottom-Up Parser: Simulation
INPUT: a A d e $ OUTPUT:
Production
A
S aABe
Bottom-Up Parsing
A Abc Program A b c
Ab
Bd b
86
Bottom-Up Parser: Simulation
INPUT: a A d e $ OUTPUT:
Production
A B
S aABe
Bottom-Up Parsing
A Abc Program A b c d
Ab
Bd b
87
Bottom-Up Parser: Simulation
INPUT: a A B e $ OUTPUT:
Production
A B
S aABe
Bottom-Up Parsing
A Abc Program A b c d
Ab
Bd b
88
Bottom-Up Parser: Simulation
INPUT: a A B e $ OUTPUT:
S
Production e
a A B
S aABe
Bottom-Up Parsing
A Abc Program A b c d
Ab
Bd b
89
Bottom-Up Parser: Simulation
INPUT: S $ OUTPUT:
S
Production e
a A B
S aABe
Bottom-Up Parsing
A Abc Program A b c d
Ab
Bd b
91
Stack implementation of shift/reduce
parsing
In LR parsing the two major problems are:
locate the substring that is to be reduced
locate the production to use
92
Stack implementation of shift/reduce parsing…
93
Example: An example of the operations of a
shift/reduce parser
G: E → E + E | E * E | ( E ) | id
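As an illustration (the input string id * id here is my own choice, not taken from the slide), a shift/reduce parser for this grammar can proceed as follows:
STACK        INPUT        ACTION
$            id * id $    shift
$ id         * id $       reduce by E → id
$ E          * id $       shift
$ E *        id $         shift
$ E * id     $            reduce by E → id
$ E * E      $            reduce by E → E * E
$ E          $            accept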
94
Conflict during shift/reduce parsing
Grammars for which we can construct an LR(k) parsing
table are called LR(k) grammars.
Most of the grammars that are used in practice are
LR(1).
There are two types of conflicts in shift/reduce parsing:
shift/reduce conflict: the parser knows the entire stack
contents and the next k input symbols but cannot decide
whether it should shift or reduce (e.g., because of ambiguity).
reduce/reduce conflict: the parser cannot decide
which of several productions it should use for a
reduction. For example, with
E → T
E → id
T → id
and an id on top of the stack, the parser cannot tell whether
to reduce by E → id or by T → id.
95
LR parser
[Figure: model of an LR parser. The input buffer holds a1 … ai … an followed by $; the stack holds S0 X1 S1 … Xm Sm with Sm on top and $ at the bottom; the LR parsing program consults the ACTION and GOTO parts of the parsing table and produces the output.]
96
LR parser…
The LR parser stack stores strings of the form S0 X1 S1 X2 S2 …
Xm Sm where
• Si is a new symbol called a state that summarizes the
information contained in the stack below it
• Sm is the state on top of the stack
• Xi is a grammar symbol
The parser program decides the next step by using:
• the top of the stack (Sm),
• the input symbol (ai), and
• the parsing table which has two parts: action and goto.
• then consulting the entry ACTION[Sm , ai] in the parsing
action table
97
Structure of the LR Parsing Table
The parsing table consists of two parts:
• a parsing-action function ACTION and
• a goto function GOTO.
The ACTION function takes as arguments a state i and a
terminal a (or $, the input endmarker).
The value of ACTION[i, a] can have one of four forms:
Shift j, where j is a state. The action taken by the parser shifts input a
on the top of the stack, but uses state j to represent a.
Reduce A → β: the action of the parser reduces β on the top of the
stack to the head A.
Accept: the parser accepts the input and finishes parsing.
Error: the parser discovers an error.
The GOTO function, defined on sets of items, maps to states:
if GOTO(Ii, A) = Ij, then GOTO maps state i and non-terminal A to state
j.
98
LR parser configuration
Behavior of an LR parser describe the complete state of the parser.
A configuration of an LR parser is a pair:
(S0 X1 S1 X2 S2 … Xm Sm , ai ai+1 … an $)
where the first component is the stack contents and the second is the remaining input.
This configuration represents the right-sentential form
X1 X2 … Xm ai ai+1 … an
Xi is the grammar symbol represented by state Si.
Note: S0 is on the top of the stack at the beginning of parsing.
99
Behavior of LR parser
The parser program decides the next step by using:
• the top of the stack (Sm),
• the input symbol (ai), and
• the parsing table which has two parts: action and goto.
• then consulting the entry ACTION[Sm , ai] in the parsing
action table
100
Behavior of LR parser…
1. Action[Sm, ai] = shift s: the parser shifts the input symbol ai
and the state s onto the stack, entering the configuration
(S0 X1 S1 … Xm Sm ai s, ai+1 … an $)
2. Action[Sm, ai] = reduce A → β: the parser pops the top 2r
symbols off the stack, where r = |β| (at this point, Sm-r will
be the state on top of the stack), then pushes A and
s = GOTO[Sm-r, A], entering the configuration
(S0 X1 S1 X2 S2 … Xm-r Sm-r A s, ai ai+1 … an $)
101
LR-parsing algorithm.
let a be the first symbol of w$;
while(1) { /* repeat forever */
let s be the state on top of the stack;
if ( ACTION[S, a] = shift t ) {
push t onto the stack;
let a be the next input symbol;
} else if ( ACTION[S, a] = reduce A → β ) {
pop |β| symbols off the stack;
let state t now be on top of the stack;
push GOTO[t, A] onto the stack;
output the production A → β;
} else if ( ACTION[S, a] = accept ) break; /* parsing is done */
else call error-recovery routine;
}
102
LR parser…
103
State |              action                |    goto
      |  id    +    *    (    )    $      |  E   T   F
  0   |  s5              s4               |  1   2   3
  1   |        s6                  acc    |
  2   |        r2   s7        r2   r2     |
  3   |        r4   r4        r4   r4     |
  4   |  s5              s4               |  8   2   3
  5   |        r6   r6        r6   r6     |
  6   |  s5              s4               |      9   3
  7   |  s5              s4               |          10
  8   |        s6             s11         |
  9   |        r1   s7        r1   r1     |
 10   |        r3   r3        r3   r3     |
 11   |        r5   r5        r5   r5     |
Legend: si means shift to state i,
rj means reduce by production j
104
LR parser…
Example: The following example shows how a shift/reduce parser parses
an input string w = id * id + id using the parsing table shown above.
3-105
LR Parser: Simulation
GRAMMAR:
(1) E → E + T
(2) E → T
(3) T → T * F
(4) T → F
(5) F → ( E )
(6) F → id
INPUT: id * id + id $
[Figure: successive snapshots of the stack, the remaining input, and the growing parse tree as the LR parsing program consults the action/goto table of slide 104.]
Moves of the LR parser on id * id + id:
STACK            INPUT            ACTION
0                id * id + id $   shift 5
0 id 5           * id + id $      reduce by (6) F → id
0 F 3            * id + id $      reduce by (4) T → F
0 T 2            * id + id $      shift 7
0 T 2 * 7        id + id $        shift 5
0 T 2 * 7 id 5   + id $           reduce by (6) F → id
0 T 2 * 7 F 10   + id $           reduce by (3) T → T * F
0 T 2            + id $           reduce by (2) E → T
0 E 1            + id $           shift 6
0 E 1 + 6        id $             shift 5
0 E 1 + 6 id 5   $                reduce by (6) F → id
0 E 1 + 6 F 3    $                reduce by (4) T → F
0 E 1 + 6 T 9    $                reduce by (1) E → E + T
0 E 1            $                accept
129
Constructing SLR parsing tables
This method is the simplest of the three methods
used to construct an LR parsing table.
It is called SLR (simple LR) because it is the
easiest to implement.
However, it is also the weakest in terms of the
number of grammars for which it succeeds.
A parsing table constructed by this method is
called SLR table.
A grammar for which an SLR table can be
constructed is said to be an SLR grammar.
130
Constructing SLR parsing tables…
LR (0) item
An LR (0) item (item for short) is a production of a
grammar G with a dot at some position of the right
side.
For example, for the production A → XYZ we have
four items:
A → .XYZ
A → X.YZ
A → XY.Z
A → XYZ.
For the production A → ε we only have one item:
A → .
131
Constructing SLR parsing tables…
An item indicates how much of a production we have
seen so far and what we hope to see next.
The central idea in the SLR method is to construct,
from the grammar, a deterministic finite automaton
to recognize viable prefixes.
A viable prefix is a prefix of a right sentential form
that can appear on the stack of a shift/reduce parser.
• If you have a viable prefix in the stack it is possible
to have inputs that will reduce to the start symbol.
• If you don’t have a viable prefix on top of the stack
you can never reach the start symbol; therefore you
have to call the error recovery procedure.
132
Constructing SLR parsing tables…
The closure operation
If I is a set of items for a grammar G, then Closure(I) is the set of items constructed from I by two rules:
1. Initially, every item in I is added to Closure(I).
2. If A → α.Bβ is in Closure(I) and B → γ is a production, then add the item B → .γ to Closure(I). Apply this rule until no more new items can be added.
133
Constructing SLR parsing tables…
Example G1':
E' → E
E → E + T
E → T
T → T * F
T → F
F → ( E )
F → id
I = {[E' → .E]}
Closure(I) = {[E' → .E], [E → .E + T], [E → .T],
[T → .T * F], [T → .F], [F → .(E)], [F → .id]}
134
Constructing SLR parsing tables…
The Goto operation
The second useful function is Goto (I, X) where I is a
set of items and X is a grammar symbol.
Goto(I, X) is defined as the closure of the set of all items
[A → αX.β] such that [A → α.Xβ] is in I.
Example:
I = {[E' → E.], [E → E. + T]}
Then
goto(I, +) = {[E → E + .T], [T → .T * F], [T → .F],
[F → .(E)], [F → .id]}
135
Constructing SLR parsing tables…
The set of Items construction
Below is given an algorithm to construct C, the
canonical collection of sets of LR (0) items for
augmented grammar G’.
Procedure Items (G’);
Begin
C := {Closure({[S' → .S]})}
Repeat
For each set of items I in C and each grammar symbol X such
that Goto(I, X) is not empty and not in C do
Add Goto (I, X) to C;
Until no more sets of items can be added to C
End
136
Constructing SLR parsing tables…
Example: Construction of the set of Items for the
augmented grammar above G1’.
I0 = {[E' → .E], [E → .E + T], [E → .T], [T → .T * F],
      [T → .F], [F → .(E)], [F → .id]}
I1 = Goto(I0, E) = {[E' → E.], [E → E. + T]}
I2 = Goto(I0, T) = {[E → T.], [T → T. * F]}
I3 = Goto(I0, F) = {[T → F.]}
I4 = Goto(I0, () = {[F → (.E)], [E → .E + T], [E → .T],
      [T → .T * F], [T → .F], [F → .(E)], [F → .id]}
I5 = Goto(I0, id) = {[F → id.]}
I6 = Goto(I1, +) = {[E → E + .T], [T → .T * F], [T → .F],
      [F → .(E)], [F → .id]}
137
I7 = Goto(I2, *) = {[T → T * .F], [F → .(E)],
      [F → .id]}
I8 = Goto(I4, E) = {[F → (E.)], [E → E. + T]}
Goto(I4, T) = {[E → T.], [T → T. * F]} = I2
Goto(I4, F) = {[T → F.]} = I3
Goto(I4, () = I4
Goto(I4, id) = I5
I9 = Goto(I6, T) = {[E → E + T.], [T → T. * F]}
Goto(I6, F) = I3
Goto(I6, () = I4
Goto(I6, id) = I5
I10 = Goto(I7, F) = {[T → T * F.]}
Goto(I7, () = I4
Goto(I7, id) = I5
I11 = Goto(I8, )) = {[F → (E).]}
Goto(I8, +) = I6
Goto(I9, *) = I7
138
LR(0) automaton
[Figure: the LR(0) automaton built from the item sets I0–I11 above, together with the actions of a shift/reduce parser on input id*id.]
139
SLR table construction algorithm
1. Construct C = {I0, I1, …, In}, the collection of the
sets of LR(0) items for G'.
2. State i is constructed from Ii and:
a) If [A → α.aβ] is in Ii and Goto(Ii, a) = Ij (a is a
terminal), then action[i, a] = shift j.
b) If [A → α.] is in Ii, then action[i, a] = reduce A → α for
all a in Follow(A), for A ≠ S'.
c) If [S' → S.] is in Ii, then action[i, $] = accept.
3. If Goto(Ii, A) = Ij for a non-terminal A, then goto[i, A] = j.
4. All entries not defined by (2) and (3) are made error.
5. The initial state is the one constructed from the set of items containing [S' → .S].
141
SLR table construction method…
Example: Construct the SLR parsing table for the
grammar G1’
Follow(E) = {+, ), $}   Follow(T) = {+, ), $, *}
Follow(F) = {+, ), $, *}
    E' → E
(1) E → E + T
(2) E → T
(3) T → T * F
(4) T → F
(5) F → ( E )
(6) F → id
By following the method we find the Parsing table used
earlier.
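For instance (my own illustration of rules 2(a) and 2(b)): state 2 corresponds to I2 = {[E → T.], [T → T. * F]}. Since Goto(I2, *) = I7, action[2, *] = s7; and since [E → T.] is a complete item and Follow(E) = {+, ), $}, action[2, +] = action[2, )] = action[2, $] = r2, exactly as in the table below.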
142
State |              action                |    goto
      |  id    +    *    (    )    $      |  E   T   F
  0   |  s5              s4               |  1   2   3
  1   |        s6                  acc    |
  2   |        r2   s7        r2   r2     |
  3   |        r4   r4        r4   r4     |
  4   |  s5              s4               |  8   2   3
  5   |        r6   r6        r6   r6     |
  6   |  s5              s4               |      9   3
  7   |  s5              s4               |          10
  8   |        s6             s11         |
  9   |        r1   s7        r1   r1     |
 10   |        r3   r3        r3   r3     |
 11   |        r5   r5        r5   r5     |
Legend: si means shift to state i,
rj means reduce by production j
143
SLR parsing table
Exercise: Construct the SLR parsing table for
the following grammar:/* Grammar G2’ */
S' → S
S → L = R
S → R
L → *R
L → id
R → L
144
Answer
C = {I0, I1, I2, I3, I4, I5, I6, I7, I8, I9}
I0 = {[S' → .S], [S → .L = R], [S → .R], [L → .*R],
      [L → .id], [R → .L]}
I1 = goto(I0, S) = {[S' → S.]}
I2 = goto(I0, L) = {[S → L. = R], [R → L.]}
I3 = goto(I0, R) = {[S → R.]}
I4 = goto(I0, *) = {[L → *.R], [L → .*R], [L → .id],
      [R → .L]}
I5 = goto(I0, id) = {[L → id.]}
I6 = goto(I2, =) = {[S → L = .R], [R → .L], [L → .*R],
      [L → .id]}
I7 = goto(I4, R) = {[L → *R.]}
145
I8 = goto(I4, L) = {[R → L.]}
goto(I4, *) = I4
goto(I4, id) = I5
I9 = goto(I6, R) = {[S → L = R.]}
goto(I6, L) = I8
goto(I6, *) = I4
goto(I6, id) = I5
Follow(S) = {$}   Follow(R) = {$, =}   Follow(L) = {$, =}
We have a shift/reduce conflict since = is in Follow(R)
and R → L. is in I2, while Goto(I2, =) = I6 calls for a shift.
Every SLR(1) grammar is unambiguous, but there are many
unambiguous grammars that are not SLR(1).
G2’ is not an ambiguous grammar. However, it is not SLR. This is
because the SLR parser is not powerful enough to remember enough
left context to decide whether to shift or reduce when it sees an =.
146
LR parsing: Exercise
Given the following Grammar:
(1) S → A
(2) S → B
(3) A → a A b
(4) A → 0
(5) B → a B b b
(6) B → 1
Construct the SLR parsing table.
Show the actions of an LR parser on the following string:
aa1bbbb
147
Canonical LR parsing
It is possible to hold more information in the
state in order to rule out some invalid reductions.
By splitting states when necessary, we can indicate
exactly which symbols can follow a handle.
148
Canonical LR(1) parsing…
The closure operation
I is a set of LR (1) items
Closure (I) is found using the following algorithm:
SetOfItems CLOSURE(I) {
repeat
for ( each item [A → α.Bβ, a] in I )
for ( each production B → γ in G' )
for ( each terminal b in FIRST(βa) )
add [B → .γ, b] to set I;
until no more items are added to I;
return I;
}
149
Canonical LR(1) parsing…
The closure operation: Example
This example uses Grammar G2':
S' → S
S → L = R
S → R
L → *R
L → id
R → L
Closure({[S' → .S, $]}) = {[S' → .S, $], [S → .L = R, $],
[S → .R, $], [L → .*R, =], [L → .id, =],
[R → .L, $], [L → .*R, $], [L → .id, $]}
(Lookaheads: FIRST($) = {$}, FIRST(= R $) = {=}, FIRST(=) = {=}.)
150
Canonical LR(1) parsing…
The Goto operation
Goto(I, X) is defined as the closure of the set of all items
[A → αX.β, a] such that [A → α.Xβ, a] is in I.
SetOfItems GOTO(I, X) {
initialize J to be the empty set;
for ( each item [A → α.Xβ, a] in I )
add item [A → αX.β, a] to set J;
return CLOSURE(J);
}
Example:
Goto(I0, S) = {[S' → S., $]}
151
Canonical LR(1) parsing…
The set of items construction
Procedure Items(G'):
Begin
C := {Closure({[S' → .S, $]})}
Repeat
For each set of items I in C and each grammar symbol X such
that Goto(I, X) is not empty and not in C do
Add Goto(I, X) to C;
Until no more sets of items can be added to C
End
152
Canonical LR(1) set of items for G2’.
C = {I0, I1, I2, …, I13}
I0 = {[S' → .S, $], [S → .L = R, $], [S → .R, $],
      [L → .*R, =|$], [L → .id, =|$], [R → .L, $]}
I1 = goto(I0, S) = {[S' → S., $]}
I2 = goto(I0, L) = {[S → L. = R, $], [R → L., $]}
I3 = goto(I0, R) = {[S → R., $]}
I4 = goto(I0, *) = {[L → *.R, =|$], [L → .*R, =|$],
      [L → .id, =|$], [R → .L, =|$]}
I5 = goto(I0, id) = {[L → id., =|$]}
I6 = goto(I2, =) = {[S → L = .R, $], [R → .L, $],
      [L → .*R, $], [L → .id, $]}
I7 = goto(I4, R) = {[L → *R., =|$]}
153
Canonical LR(1) set of items for G2’
I8 = goto(I4, L) = {[R → L., =|$]}
goto(I4, *) = I4
goto(I4, id) = I5
I9 = goto(I6, R) = {[S → L = R., $]}
I10 = goto(I6, L) = {[R → L., $]}
I11 = goto(I6, *) = {[L → *.R, $], [L → .*R, $],
      [L → .id, $], [R → .L, $]}
I12 = goto(I6, id) = {[L → id., $]}
goto(I11, *) = I11
goto(I11, id) = I12
goto(I11, L) = I10
I13 = goto(I11, R) = {[L → *R., $]}
154
Canonical LR parsing…
Construction of LR parsing table
1. Construct C = {I0, I1, .... In} the collection of LR (1) items for G’
2. State i of the parser is constructed from state Ii. The parsing
actions for state i are determined as follows:
a. If [A → α.aβ, b] is in Ii and Goto(Ii, a) = Ij (a is a terminal),
then action[i, a] = shift j
b. If [A → α., a] is in Ii and A ≠ S', then action[i, a] = reduce A → α
c. If [S' → S., $] is in Ii, then action[i, $] = accept.
If there is a conflict, the grammar is not LR(1).
3. If goto(Ii, A) = Ij, then goto[i, A] = j
4. All entries not defined by (2) and (3) are made error
5. The initial state is the set constructed from the item
[S' → .S, $]
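As an added remark on why the extra lookahead helps: in the canonical collection for G2' above, I2 = {[S → L. = R, $], [R → L., $]}, so rule (b) places reduce R → L only in action[2, $]; the item [S → L. = R, $] still gives a shift in action[2, =]. The shift/reduce conflict that the SLR table had on = therefore does not arise.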
155
The Parser Generator: Yacc
Yacc stands for "yet another compiler-compiler".
Yacc: a tool for automatically generating a parser
given a grammar written in a yacc specification (.y
file)
Yacc parser – calls lexical analyzer to collect
tokens from input stream
Tokens are organized using grammar rules
When a rule is recognized, its action is executed
Note
lex tokenizes the input and yacc parses the tokens,
taking the right actions, in context.
156
Scanner, Parser, Lex and Yacc
157
Yacc
Yacc
[Figure: the Yacc build pipeline. A Yacc specification (Yacc.y) is run through the Yacc compiler to produce y.tab.c; y.tab.c is compiled by the C compiler into a.out; a.out then reads the input stream and produces the output stream.]
158
Yacc…
There are four steps involved in creating a compiler in
Yacc:
1. Generate a parser from Yacc by running Yacc over
the grammar file.
2. Specify the grammar:
– Write the grammar in a .y file (also specify the actions here
that are to be taken in C).
– Write a lexical analyzer to process input and pass tokens to
the parser. This can be done using Lex.
– Write a function that starts parsing by calling yyparse().
– Write error handling routines (like yyerror()).
3. Compile code produced by Yacc as well as any other
relevant source files.
4. Link the object files to appropriate libraries for the
executable parser.
159
Yacc Specification
As with Lex, a Yacc program is also divided into three
sections separated by double percent signs.
A yacc specification consists of three parts:
yacc declarations, and C declarations within %{ %}
%%
translation rules
%%
user-defined auxiliary procedures
The translation rules are productions with actions
production1 {semantic action1}
production2 {semantic action2}
…
productionn {semantic actionn} 160
Writing a Grammar in Yacc
Productions in Yacc are of the form:
nonterminal : body { semantic action }
            | body { semantic action }
            …
            ;
161
Synthesized Attributes
Semantic actions may refer to values of the synthesized
attributes of terminals and non-terminals in a
production:
X : Y1 Y2 Y3 … Yn { action }
$$ refers to the value of the attribute of X
$i refers to the value of the attribute of Yi
For example
factor : '(' expr ')'  { $$ = $2; }
(If expr.val = x, then factor.val = x: the value of the parenthesized expression is the value of the expression inside, which is what $$ = $2 expresses.)
162
Lex Yacc interaction
[Figure: Lex/Yacc build pipeline. The Yacc specification (Yacc.y) is run through the Yacc compiler to produce y.tab.c and y.tab.h; the Lex specification (lex.l), which uses the token definitions in y.tab.h, is run through the Lex compiler to produce lex.yy.c; the C compiler then compiles y.tab.c and lex.yy.c into a.out.]
163
Lex Yacc interaction…
[Figure: the same pipeline for the calculator example. yacc processes calc.y into y.tab.c and y.tab.h (providing yyparse()); lex processes calc.l into lex.yy.c (providing yylex()); gcc compiles both into a.out, which reads the input and produces the compiled calculator's output.]
164
Lex Yacc interaction…
If lex is to return tokens that yacc will process, they
have to agree on what tokens there are. This is
done as follows:
The yacc file will have token definitions
%token INTEGER
in the definitions section.
When the yacc file is translated with yacc -d, a header file
y.tab.h is created that has definitions like
#define INTEGER 258
This file can then be included in both the lex and yacc
program.
The lex file can then call return INTEGER, and the yacc
program can match on this token.
165
Example : Simple calculator: yacc file
%{
#include <stdio.h>
void yyerror(char *);
#define YYSTYPE int     /* int type for attributes and for yylval */
%}
%token INTEGER
%%
program:                                /* grammar rules and actions */
        program expr '\n'   { printf("%d\n", $2); }
        |
        ;
expr:
        INTEGER             { $$ = $1; }        /* $$ = value of the LHS (expr)      */
        | expr '+' expr     { $$ = $1 + $3; }   /* $1, $3 = values of the RHS symbols */
        | expr '-' expr     { $$ = $1 - $3; }   /* (for tokens, stored in yylval)     */
        ;
%%
void yyerror(char *s) {
        fprintf(stderr, "%s\n", s);
}
int main(void) {
        yyparse();          /* the parser; it invokes the lexical analyzer yylex() */
        return 0;
}
166
Example : Simple calculator: lex file
%{
/* The lex program matches numbers and operators and returns them as tokens. */
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"      /* generated by yacc -d; contains the token definitions */
void yyerror(char *);
extern int yylval;
%}
%%
[0-9]+      { yylval = atoi(yytext);    /* place the integer value in yylval */
              return INTEGER;           /* INTEGER is defined in y.tab.h     */
            }
[-+*/\n]    return *yytext;             /* operators (and newline) are returned as themselves */
[ \t]       ; /* skip white space */
.           yyerror("invalid character");
%%
int yywrap(void) {
        return 1;
}
167
Lex and Yacc: compile and run
[compiler@localhost yacc]$ vi calc.l
[compiler@localhost yacc]$ vi calc.y
[compiler@localhost yacc]$ yacc -d calc.y
yacc: 4 shift/reduce conflicts.
[compiler@localhost yacc]$ lex calc.l
[compiler@localhost yacc]$ ls
a.out calc.l calc.y lex.yy.c typescript y.tab.c y.tab.h
[compiler@localhost yacc]$ gcc y.tab.c lex.yy.c
[compiler@localhost yacc]$ ls
a.out calc.l calc.y lex.yy.c typescript y.tab.c y.tab.h
[compiler@localhost yacc]$ ./a.out
2+3
5
23+8+
Invalid charachter
syntax error
168
Example : Simple calculator: yacc file– option2
%{
#include<stdlib.h>
#include<stdio.h>
%}
%token INTEGER
%%
program :
program expr '\n' {printf("%d\n ", $2);}
|
;
expr : expr '+' mulexpr {$$=$1 + $3;}
|expr '-' mulexpr {$$=$1 - $3;}
|mulexpr {$$=$1;}
;
mulexpr : mulexpr '*' term {$$=$1 * $3;}
| mulexpr '/' term {$$=$1 / $3;}
|term {$$=$1;}
;
term :
'(' expr ')' {$$=$2;}
| INTEGER {$$=$1;}
;
%%
169
Example : Simple calculator: yacc file– option2
void yyerror(char *s) {
        fprintf(stderr, "%s\n", s);
}
int main(void) {
        yyparse();
        return 0;
}
170
Calculator 2: Example– yacc file
%{
#include <stdio.h>
int sym[26];            /* sym holds the value of the associated variable (a..z) */
%}
%token INTEGER VARIABLE
%left '+' '-'           /* associativity and precedence rules */
%left '*' '/'
%%
program :
        program statement '\n'
        |
        ;
statement :
        expression                  {printf("%d\n", $1);}
        |VARIABLE '=' expression    {sym[$1] = $3;}
        ;
expression :
        INTEGER                     {$$ = $1;}
        |VARIABLE                   {$$ = sym[$1];}
        |expression '+' expression  {$$ = $1 + $3;}
        |expression '-' expression  {$$ = $1 - $3;}
        |expression '*' expression  {$$ = $1 * $3;}
        |expression '/' expression  {$$ = $1 / $3;}
        |'(' expression ')'         {$$ = $2;}
        ;
%%
Sample session:
user: 3 * (4 + 5)        calc: 27
user: x = 3 * (4 + 5)
user: y = 5
user: x                  calc: 27
user: y                  calc: 5
user: x + 2*y            calc: 37
171
Calculator 2: Example– yacc file
172
Calculator 2: Example– lex file
%{
/* The lexical analyzer returns variables and integers. */
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"
void yyerror(char *);
extern int yylval;
%}
%%
[a-z]       { yylval = *yytext - 'a';   /* for variables, yylval is an index
                                           into the symbol table sym[]      */
              return VARIABLE;
            }
[0-9]+      { yylval = atoi(yytext);
              return INTEGER;
            }
[-+*/()=\n] return *yytext;
[ \t]       ; /* skip white space */
.           yyerror("invalid character");
%%
int yywrap(void)
{
        return 1;
}
173
Conclusions
Yacc and Lex are very helpful for building
the compiler front-end
A lot of time is saved when compared to
hand-implementation of parser and scanner
They both work as a mixture of “rules” and
“C code”
C code is generated and is merged with the
rest of the compiler code
Assignment on Syntax analyzer
175
Calculator program
Expand the calculator program so that the new
calculator program is capable of processing:
user: 3 * (4 + 5)
user: x = 3 * (4 + 5)
user: y = 5
user: x + 2*y
2^3/6
sin(1) + cos(PI)
tan
log
factorial
176
CFG for MINI Language and LR(1)
parser
Write a CFG for the MINI language specifications.
Transform your CFG into:
Predictive parser (LL(1)).
- Compute FIRST, FOLLOW sets for the grammar and
create the Parsing table (manually).
Bottom up parser (LR(1)).
177