
UNIT III
CONTEXT-FREE GRAMMAR AND LANGUAGES
Context-Free Grammar (CFG) – Parse Trees – Ambiguity in grammars
and languages – Definition of the Pushdown automata – Languages of a
Pushdown Automata – Equivalence of Pushdown automata and CFG,
Deterministic Pushdown Automata.

3.1. CONTEXT-FREE GRAMMAR (CFG)


3.1.1. Definition:
A context-free grammar is a finite set of variables, each of which
represents a language. The languages represented by the variables are described
recursively in terms of each other and of primitive symbols called terminals. The
rules relating the variables are called productions. A Context-Free Grammar (CFG)
is denoted by,
G = (V, T, P, S)
Where, V – A finite set of variables
T – A finite set of terminals
P – A finite set of productions, each of the form A → α, where A is a
variable and α is a string of symbols from (V ∪ T)*
S – Start symbol.
A CFG is often written in Backus-Naur Form (BNF). For example,
consider the grammar,
<expression> → <expression> + <expression>
<expression> → <expression> * <expression>
<expression> → (<expression>)
<expression> → id
Here <expression> is the variable and the terminals are +, *, (, ), id. The
first two productions say that an expression can be composed of two expressions
connected by an addition or multiplication sign. The third production says that
an expression may be another expression surrounded by parentheses. The last
says a single operand is an expression.
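
To make the four components concrete, the expression grammar above can be encoded as plain data. This is only an illustrative sketch (the name E abbreviates <expression>, and all identifiers are choices made here, not standard notation):

# One possible encoding of G = (V, T, P, S) for the expression grammar.
# P maps each variable to the list of its production bodies.
V = {"E"}                                   # variables; E stands for <expression>
T = {"+", "*", "(", ")", "id"}              # terminals
P = {"E": [["E", "+", "E"],                 # <expression> -> <expression> + <expression>
           ["E", "*", "E"],                 # <expression> -> <expression> * <expression>
           ["(", "E", ")"],                 # <expression> -> ( <expression> )
           ["id"]]}                         # <expression> -> id
S = "E"                                     # start symbol

# Every symbol in a production body must be a variable or a terminal.
assert all(sym in V or sym in T for body in P["E"] for sym in body)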

Ex.1:
A CFG, G = (V, T, P, S) with start symbol A, whose productions are given by,
A → Ba
B→b
Soln:
A → Ba
→ ba ∴ The grammar produces the string ba.

3.1.2. Derivations Using a Grammar:


While inferring whether a given input string belongs to the language of a
given CFG, we can take two approaches,
• Using rules from body to head.
• Using rules from head to body.
The first approach is called recursive inference. Here we take strings
known to be in the languages of the variables in the body, concatenate them in
the proper order, and infer that the resulting string is in the language of the
variable at the head.
The other approach is called derivation. Here we use the productions
from head to body: we expand the start symbol using one of its productions, and
keep expanding until the given string is reached.

PROBLEMS:
Ex.1:
Obtain a derivation of the string 01c10 using the productions,
E→c
E → 0E0
E → 1E1
Soln:
E → 0E0
→ 01E10 [ E → 1E1 ]
→ 01c10 [E→c]

Ex.2:
Obtain derivations of the strings aaababbb and abbaab using the
productions,
S → aSb | aAb
A → bAa | ba
61

Soln:
(i) S → aSb
→ aaSbb [ S → aSb ]
→ aaaAbbb [ S → aAb ]
→ aaababbb [ A → ba ]

(ii) S → aAb
→ abAab [ A → bAa ]
→ abbaab [ A → ba ]

3.1.3. Leftmost and Rightmost Derivations:


Leftmost Derivation:
If at each step in a derivation, a production is applied to the leftmost
variable, then it is called a leftmost derivation.

Ex.1:
Let G = ( V, T, P, S ), V = {E}, T = {+, *, id}, S = E, P is given by,
E → E + E | E * E | id
Construct leftmost derivation for id+id*id.
Soln:
E ⇒lm E+E
⇒ id+E [E → id]
⇒ id+E*E [E → E*E]
⇒ id+id*E [E → id]
⇒ id+id*id [E → id]

Rightmost Derivation:
If at each step in a derivation a production is applied to the rightmost
variable, then it is called a rightmost derivation.

Ex.1:
Let G = ( V, T, P, S ), V = {E}, T = {+, *, id}, S = E, P is given by,
E → E + E | E * E | id
Construct rightmost derivation for id+id*id.
Soln:

E ⇒rm E*E
⇒ E*id [E → id]
⇒ E+E*id [E → E+E]
⇒ E+id*id [E → id]
⇒ id+id*id [E → id]
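
The difference between the two derivations is only in which occurrence of a variable is expanded at each step. A small sketch of this choice, assuming the list-of-symbols grammar encoding shown earlier (the function name and the step indices are illustrative):

# Grammar E -> E + E | E * E | id, with production bodies indexed 0, 1, 2.
GRAMMAR = {"E": [["E", "+", "E"], ["E", "*", "E"], ["id"]]}

def expand(form, choice, leftmost=True):
    """Apply production number `choice` to the leftmost (or rightmost) variable."""
    positions = [i for i, s in enumerate(form) if s in GRAMMAR]
    i = positions[0] if leftmost else positions[-1]
    return form[:i] + GRAMMAR[form[i]][choice] + form[i + 1:]

# Leftmost derivation of id+id*id:
# E => E+E => id+E => id+E*E => id+id*E => id+id*id
form = ["E"]
for choice in [0, 2, 1, 2, 2]:        # production used at each step
    form = expand(form, choice, leftmost=True)
print(form)                           # ['id', '+', 'id', '*', 'id']
# Passing leftmost=False at every step would build a rightmost derivation instead.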

3.2. PARSE TREES (or Derivation Trees)


Derivations can be represented by trees called “parse trees”.
3.2.1. Constructing Parse Trees:
Let G = (V, T, P, S) be a grammar. A parse tree for G is a tree satisfying the
following conditions.
1. Each interior node is labeled by a variable in V.
2. Each leaf is labeled by either a variable, a terminal or ε.
3. If an interior node is labeled A, and its children are labeled X1, X2,…….
Xk respectively from the left, then A → X1X2 ………. Xk is a production
in P.
4. If a leaf is labeled ε, then it must be the only child of its parent, and the
parent's label A has the production A → ε in P.

Ex.1:
Construct a parse tree for the grammar, E → E + E | E * E | (E) | id, for
the string id*id+id
Soln:
Derivation: Parse Tree:

Ex.2:
Construct a parse tree for the grammar, P → ε | 0 | 1 | 0P0 | 1P1, showing
the derivation of the string 0110.

Soln:
Derivation: Parse Tree:

3.2.2. Yield of a Parse Tree:


The string obtained by concatenating the labels of the leaves of a parse tree
from left to right is called the yield of the parse tree. The yield is always a string
derived from the root, and the root is the start symbol.
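
A minimal sketch of a parse tree and its yield, using the grammar of Ex.2 above (the class and function names are illustrative assumptions):

# Each node carries a label and a list of children; leaves have no children.
class Node:
    def __init__(self, label, children=None):
        self.label = label              # a variable, a terminal, or "ε"
        self.children = children or []

def yield_of(node):
    """Concatenate the leaf labels from the left; ε contributes nothing."""
    if not node.children:
        return "" if node.label == "ε" else node.label
    return "".join(yield_of(c) for c in node.children)

# Parse tree for 0110 in the grammar P -> ε | 0 | 1 | 0P0 | 1P1.
tree = Node("P", [Node("0"),
                  Node("P", [Node("1"), Node("P", [Node("ε")]), Node("1")]),
                  Node("0")])
print(yield_of(tree))                   # 0110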

Ex.1:
For the grammar G is defined by the production,
S→A|B
A → 0A | ε
B → 0B | 1B | ε
Find parse trees with the yields (i) 1001 (ii) 00101
Soln:
(i) 1001 (ii) 00101
Derivation: Derivation:
S⇒ B S⇒ B
⇒ 1B [B→1B] ⇒ 0B [B→0B]
⇒ 10B [B→0B] ⇒ 00B [B→0B]
⇒ 100B [B→0B] ⇒ 001B [B→1B]
⇒ 1001B [B→1B] ⇒ 0010B [B→0B]
⇒ 1001 [B→ε] ⇒ 00101B [B→1B]
⇒ 00101 [B→ε]

Parse Tree: Parse Tree:

3.2.3. Relationship Between the Derivation Trees and Derivation:


Theorem:
Let G = (V, T, P, S) be a CFG. Then S ⇒* α if and only if there is a
derivation tree in grammar G with yield α.

Proof:
Suppose there is a parse tree with root S and yield α. We show that there is then
a leftmost derivation S ⇒*lm α in G. The proof is by induction on the height of the
tree.
Basis:
If the height of the parse tree is 1, then the tree must be of the form given in
the figure below, with root S and yield α.

This means that the children of S are all leaves, so the tree has no proper
subtrees. This is possible only if S → α is a production in G, and then
S ⇒lm α
is a one-step leftmost derivation.

Induction:
If the height of the parse tree is n > 1, the tree must look like the figure
below: the root S has children X1, X2, ….. Xk, and the yield is α = α1α2 ……… αk,
where αi is the yield of the subtree rooted at Xi.
Assume that for every parse tree of height less than n there is a leftmost
derivation of its yield from its root. Consider a parse tree of height n. The first
step of the leftmost derivation is,
S ⇒lm X1X2 ….. Xk
The Xi's may be either terminals or variables.

(i) If Xi is a terminal, then Xi = αi.
(ii) If Xi is a variable, then it must be the root of some subtree with yield
αi of height less than n. By applying the inductive hypothesis, there is a leftmost
derivation,
Xi ⇒*lm αi
Starting from S ⇒lm X1X2 ….. Xk, we now replace X1, X2, ….. Xk by
α1, α2, ….. αk from left to right. If Xi is a terminal then no change is needed,
S ⇒*lm α1α2 ……… αi Xi+1 ….. Xk
If Xi is a variable, then we derive the string αi from Xi as,
Xi ⇒lm w1 ⇒lm w2 ⇒lm …… ⇒lm αi
Therefore,
S ⇒*lm α1α2 ……… αi-1 Xi Xi+1 ….. Xk
⇒lm α1α2 ……… αi-1 w1 Xi+1 ….. Xk
⇒lm α1α2 ……… αi-1 w2 Xi+1 ….. Xk
⇒*lm α1α2 ……… αi-1 αi Xi+1 ….. Xk
By repeating the process for each i, we get,
S ⇒*lm α1α2 ……… αk = α

Thus proved.

3.3. AMBIGUITY IN GRAMMARS AND LANGUAGES


Just as a natural language can contain ambiguous sentences, a CFG may
allow two different derivations, and hence two different parse trees, for the same
string.
3.3.1. Ambiguous Grammars:
A CFG, G = (V, T, P, S) is said to be ambiguous if there is at least one
string 'w' that has two different parse trees.

PROBLEMS:

Ex.1:
Show that the grammar, E → E + E | E * E | (E) | id, is ambiguous by
generating the string id+id*id in two different ways.

Soln:

Derivation1: Derivation2:
E ⇒ E+E E ⇒ E*E
⇒ id+E [E → id] ⇒ E+E*E [E → E+E]
⇒ id+E*E [E → E*E] ⇒ id+E*E [E → id]
⇒ id+id*E [E → id] ⇒ id+id*E [E → id]
⇒ id+id*id [E → id] ⇒ id+id*id [E → id]

Parse Tree1: Parse Tree2:

Ex.2:
Show that the grammar,
E → I | E+E | E*E | (E)
I → a | b | Ia | Ib | I0 | I1
is ambiguous by generating the string a+a*a in two different ways.

Soln:

Derivation1: Derivation2:

E ⇒ E+E E ⇒ E*E
⇒ I+E [E → I] ⇒ E*I [E → I]
⇒ a+E [I → a] ⇒ E*a [I → a]
⇒ a+E*E [E → E*E] ⇒ E+E*a [E → E+E]
⇒ a+I*E [E → I] ⇒ E+I*a [E → I]
⇒ a+a*E [I → a] ⇒ E+a*a [I → a]
⇒ a+a*I [E → I] ⇒ I+a*a [E → I]
⇒ a+a*a [I → a] ⇒ a+a*a [I → a]

Parse Tree1: Parse Tree2:

3.3.2. Unambiguous:
If each string has at most one parse tree in the grammar, then the
grammar is unambiguous.

3.3.4. Leftmost Derivations as a way to Express Ambiguity:


A grammar is ambiguous if and only if some string in its language has more
than one leftmost derivation from the start symbol.
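
This characterization suggests a brute-force check for small grammars: enumerate the leftmost derivations whose sentential forms stay within a length bound and count how many of them produce each terminal string. The sketch below is only an illustration under that assumption (ambiguity of CFGs is undecidable in general), and it relies on the fact that no production of this particular grammar shortens a sentential form:

from collections import Counter

# Grammar of the example: E -> I | E+E | E*E | (E),  I -> a | b | Ia | Ib | I0 | I1.
GRAMMAR = {
    "E": [["I"], ["E", "+", "E"], ["E", "*", "E"], ["(", "E", ")"]],
    "I": [["a"], ["b"], ["I", "a"], ["I", "b"], ["I", "0"], ["I", "1"]],
}

def leftmost_yields(start, max_len):
    """Count distinct leftmost derivations for each terminal string of length <= max_len."""
    counts = Counter()
    stack = [(start,)]
    while stack:
        form = stack.pop()
        variables = [i for i, s in enumerate(form) if s in GRAMMAR]
        if not variables:
            counts["".join(form)] += 1        # one finished leftmost derivation
            continue
        i = variables[0]                      # always expand the leftmost variable
        for body in GRAMMAR[form[i]]:
            new_form = form[:i] + tuple(body) + form[i + 1:]
            if len(new_form) <= max_len:      # safe bound: no production shrinks a form
                stack.append(new_form)
    return counts

print(leftmost_yields("E", max_len=5)["a+a*a"])   # 2, so the grammar is ambiguous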

Ex:
For the ambiguous grammar,
E → I | E+E | E*E | (E)
I → a | b | Ia | Ib | I0 | I1
give two leftmost derivations for the string a+a*a.
Soln:
Derivation1: Derivation2:
E ⇒lm E+E E ⇒lm E*E
⇒ I+E [E → I] ⇒ E+E*E [E → E+E]
⇒ a+E [I → a] ⇒ I+E*E [E → I]
⇒ a+E*E [E → E*E] ⇒ a+E*E [I → a]
⇒ a+I*E [E → I] ⇒ a+I*E [E → I]
⇒ a+a*E [I → a] ⇒ a+a*E [I → a]
⇒ a+a*I [E → I] ⇒ a+a*I [E → I]
⇒ a+a*a [I → a] ⇒ a+a*a [I → a]

Parse Tree1: Parse Tree2:

3.3.5. Inherent Ambiguity:


A CFL L is said to be inherently ambiguous if every grammar for the
language is ambiguous.

Ex:
Show that the language L = {a^n b^n c^m d^m | n≥1, m≥1} ∪ {a^n b^m c^m d^n |
n≥1, m≥1} is inherently ambiguous. A grammar for L has the productions P given by,
S → AB | C
A → aAb | ab C → aCd | aDd
B → cBd | cd D → bDc | bc
Soln:
L is a context-free language. The grammar uses separate sets of productions to
generate the two kinds of strings in L. This grammar is ambiguous; for example,
the string aabbccdd has two leftmost derivations.
Derivation1: Derivation2:
S ⇒lm AB S ⇒lm C
⇒ aAbB ⇒ aCd
⇒ aabbB ⇒ aaDdd
⇒ aabbcBd ⇒ aabDcdd
⇒ aabbccdd ⇒ aabbccdd

Parse Tree1: Parse Tree2:



3.4. DEFINITION OF THE PUSH DOWN AUTOMATA


A Pushdown Automaton (PDA) is essentially a finite automaton with
control of both an input tape and a stack on which it can store a string of stack
symbols. With the help of the stack, a pushdown automaton can remember an
infinite amount of information.

3.4.1. Model of PDA:


• The PDA consists of a finite set of states, a finite set of input symbols
and a finite set of pushdown symbols.
• The finite control has control of both the input tape and pushdown store.
• In one transition, the PDA,
o Reads the current input symbol (or makes an ε-move) and goes to a new state.
o Replaces the symbol at the top of the stack by some string (possibly ε).
3.4.2. Definition of PDA:
A PDA is specified by a seven-tuple,
P = (Q, Σ, Γ, δ, q0, Z0, F)

where, Q – A finite set of states.


Σ – A finite set of input symbols.
Γ – A finite set of stack symbols.
δ – The transition function. Formally, δ takes as argument a triple (q, a, X),
where,
- 'q' is a state in Q.
- 'a' is either an input symbol in Σ or a = ε.
- 'X' is a stack symbol, that is, a member of Γ.
The value of δ(q, a, X) is a finite set of pairs (p, γ), where 'p' is the new
state and γ is the string of stack symbols that replaces X on top of the stack.
q0 – The start state.
Z0 – The start symbol of the stack.
F – The set of accepting states or final states.

Ex:
Mathematical model of a PDA for the language L = {ww^R | w is in (0+1)*}.
The PDA for L can be described as, P = ({q0, q1, q2}, {0, 1}, {0, 1, Z0},
δ, q0, Z0, {q2}), where δ is defined by the following rules (an encoding of
these rules is sketched after the list);
1. δ(q0, 0, Z0) = {(q0, 0Z0)} and δ(q0, 1, Z0) = {(q0, 1Z0)}. One of these
rules applies initially, when we are in state q0 and we see the start symbol
Z0 at the top of the stack. We read the first input, and push it onto the
stack, leaving Z0 below to mark the bottom.
2. δ(q0, 0, 0) = {(q0, 00)}, δ(q0, 0, 1) = {(q0, 01)}, δ(q0, 1, 0) = {(q0, 10)}
and δ(q0, 1, 1) = {(q0, 11)}. These four similar rules allow us to stay in
state q0 and read inputs, pushing each onto the top of the stack and
leaving the previous top stack symbol alone.
3. δ(q0, ε, Z0) = {(q1, Z0)}, δ(q0, ε, 0) = {(q1, 0)}, and δ(q0, ε, 1) = {(q1, 1)}.
These three rules allow P to go from state q0 to state q1 spontaneously
(on ε input), leaving intact whatever symbol is at the top of the stack.
4. δ(q1, 0, 0) = {(q1, ε)}, and δ(q1, 1, 1) = {(q1, ε)}. Now in state q1we can
match input symbols against the top symbols on the stack, and pop when
the symbols match.
5. δ(q1, ε, Z0) = {(q2, Z0)}. Finally, if we expose the bottom-of-stack
marker Z0 and we are in state q1, then we have found an input of the form
wwR. We go to state q2 and accept.
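
One way to record this δ as data, purely as an illustration (the dictionary layout and names are assumptions of this sketch, not standard notation): each key is (state, input symbol or "" for ε, top-of-stack symbol), and each value is the set of pairs (new state, tuple of symbols pushed in place of the top, topmost first).

DELTA = {
    ("q0", "0", "Z0"): {("q0", ("0", "Z0"))},  ("q0", "1", "Z0"): {("q0", ("1", "Z0"))},
    ("q0", "0", "0"):  {("q0", ("0", "0"))},   ("q0", "0", "1"):  {("q0", ("0", "1"))},
    ("q0", "1", "0"):  {("q0", ("1", "0"))},   ("q0", "1", "1"):  {("q0", ("1", "1"))},
    ("q0", "", "Z0"):  {("q1", ("Z0",))},      ("q0", "", "0"):   {("q1", ("0",))},
    ("q0", "", "1"):   {("q1", ("1",))},
    ("q1", "0", "0"):  {("q1", ())},           ("q1", "1", "1"):  {("q1", ())},
    ("q1", "", "Z0"):  {("q2", ("Z0",))},
}
START_STATE, START_STACK, FINAL_STATES = "q0", "Z0", {"q2"}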

3.4.3. A Graphical Notation for PDA's:


Sometimes a diagram, generalizing the transition diagram of a finite automaton,
will make aspects of the behavior of a given PDA clearer. A transition diagram
for a PDA indicates,
(a) The nodes correspond to the states of the PDA.
(b) An arrow labeled Start indicates the start state, and doubly circled states are
accepting, as for finite automata.
(c) An arc labeled a, X/α from state q to state ‘p’ means that δ(q, a, X)
contains the pair (p,α).

3.4.4. Instantaneous Descriptions(ID) of a PDA:


The ID is defined as a triple (q, w, γ), where,
q – Current state
w – String of input symbols
γ – String of stack symbols
Let P = (Q, Σ, Γ, δ, q0, Z0, F) be a PDA. Suppose δ(q, a, X) contains
(p,α). Then for all strings 'w' in Σ* and β in Γ*, (q, aw, Xβ) ├ (p, w, αβ).

PROBLEMS:
Ex.1:
Consider the PDA, P = ({q1, q2}, {0, 1, c}, {R, B, G}, δ, q1, R, {q2}),
where δ is,
δ(q1, 0, R) = (q1, BR) δ(q1, c, R) = (q2, R)
δ(q1, 1, R) = (q1, GR) δ(q1, c, B) = (q2, B)
δ(q1, 0, B) = (q1, BB) δ(q1, c, G) = (q2, G)
δ(q1, 1, B) = (q1, GB) δ(q2, 0, B) = (q2, ε)
δ(q1, 0, G) = (q1, BG) δ(q2, 1, G) = (q2, ε)

δ(q1, 1, G) = (q1, GG) δ(q2, ε, R) = (q2, ε)


Check whether the input strings 001010c010100 and 001010c011100 are accepted or not.
Soln:
• w1 = 001010c010100
(q1, 001010c010100, R) ├ (q1, 01010c010100, BR)
├ (q1, 1010c010100, BBR)
├ (q1, 010c010100, GBBR)
├ (q1, 10c010100, BGBBR)
├ (q1, 0c010100, GBGBBR)
├ (q1, c010100, BGBGBBR)
├ (q2, 010100, BGBGBBR)
├ (q2, 10100, GBGBBR)
├ (q2, 0100, BGBBR)
├ (q2, 100, GBBR)
├ (q2, 00, BBR)
├ (q2, 0, BR)
├ (q2, ε, R)
├ (q2, ε, ε)
∴ The string is accepted.

• w2 = 001010c011100
(q1, 001010c011100, R) ├ (q1, 01010c011100, BR)
├ (q1, 1010c011100, BBR)
├ (q1, 010c011100, GBBR)
├ (q1, 10c011100, BGBBR)
├ (q1, 0c011100, GBGBBR)
├ (q1, c011100, BGBGBBR)
├ (q2, 011100, BGBGBBR)
├ (q2, 11100, GBGBBR)
├ (q2, 1100, BGBBR)
∴ There is no transition for (q2, 1, B). So the string is not accepted.
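
The move relation ├ used in this trace can be simulated mechanically by a breadth-first search over instantaneous descriptions. The sketch below is an illustration only: the dictionary layout of δ, the function name and the acceptance test (by final state, since F = {q2}) are assumptions made here. The two print statements reproduce the results worked out above; the search terminates for this PDA because no ε-move grows the stack.

from collections import deque

# δ of Ex.1: key (state, input symbol or "" for ε, stack top), value set of (state, pushed symbols).
DELTA = {
    ("q1", "0", "R"): {("q1", ("B", "R"))},  ("q1", "c", "R"): {("q2", ("R",))},
    ("q1", "1", "R"): {("q1", ("G", "R"))},  ("q1", "c", "B"): {("q2", ("B",))},
    ("q1", "0", "B"): {("q1", ("B", "B"))},  ("q1", "c", "G"): {("q2", ("G",))},
    ("q1", "1", "B"): {("q1", ("G", "B"))},  ("q2", "0", "B"): {("q2", ())},
    ("q1", "0", "G"): {("q1", ("B", "G"))},  ("q2", "1", "G"): {("q2", ())},
    ("q1", "1", "G"): {("q1", ("G", "G"))},  ("q2", "", "R"):  {("q2", ())},
}

def accepts(word, start="q1", start_stack=("R",), finals=frozenset({"q2"})):
    """Explore the reachable IDs (q, w, γ) and accept by final state."""
    seen, queue = set(), deque([(start, word, start_stack)])
    while queue:
        q, w, stack = queue.popleft()
        if (q, w, stack) in seen:
            continue
        seen.add((q, w, stack))
        if not w and q in finals:              # input consumed in an accepting state
            return True
        if not stack:
            continue
        moves = []
        if w:                                  # moves that consume one input symbol
            moves += [(w[0], m) for m in DELTA.get((q, w[0], stack[0]), ())]
        moves += [("", m) for m in DELTA.get((q, "", stack[0]), ())]   # ε-moves
        for read, (p, push) in moves:
            queue.append((p, w[len(read):], push + stack[1:]))
    return False

print(accepts("001010c010100"))   # True: the string has the form wcw^R
print(accepts("001010c011100"))   # False: the search gets stuck, as in the trace above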

3.5. LANGUAGES OF A PUSH DOWN AUTOMATA


There are two ways in which a PDA can accept a string,
(a) Acceptance by final state, that is, the PDA reaches a final state after reading
the entire input.
(b) Acceptance by empty stack, that is, after consuming the input the stack is
empty; the current state may be a final or a non-final state.

Both methods are equivalent; one can be converted to the other and vice versa.

3.5.1. Acceptance by final state:


Let M = (Q, Σ, Γ, δ, q0, Z0, F) be a PDA. The language accepted by
final state is defined as,
L(M) = {w | (q0, w, Z0) ├* (q, ε, α) for some q ∈ F and α ∈ Γ*}.
This means that, starting from the start state q0, after scanning the entire input
string 'w' the PDA enters a final state 'q'. The contents of the stack are
irrelevant.

3.5.2. Acceptance by Empty Stack:


For each PDA P = (Q, Σ, Γ, δ, q0, Z0, F), the language accepted by empty
stack is defined as,
N(P) = {w | (q0, w, Z0) ├* (q, ε, ε) for some q ∈ Q}.
This means that when the string 'w' is accepted by empty stack, the final
state is irrelevant; the entire input must be consumed and the stack must be empty.
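
Given any procedure that enumerates the instantaneous descriptions reachable from (q0, w, Z0), for instance the breadth-first search sketched after Ex.1 of Section 3.4.4, the two acceptance modes differ only in the test applied to a reachable ID. A hedged sketch, where `ids` is assumed to be that collection of triples (q, w, γ):

def accepted_by_final_state(ids, finals):
    # L(M): some reachable ID has consumed the whole input and sits in a final state.
    return any(not w and q in finals for (q, w, stack) in ids)

def accepted_by_empty_stack(ids):
    # N(P): some reachable ID has consumed the whole input and emptied the stack.
    return any(not w and not stack for (q, w, stack) in ids)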

3.5.3. From Empty Stack to Final State:


Theorem:
If L = N(PN) for some PDA PN = (Q, Σ, Γ, δ, q0, Z0), then there is a PDA
PF such that L = L(PF).
Proof:
Initially, change the stack content from Z0 to Z0X0; that is, introduce a new
stack start symbol X0 for the PDA PF, which sits below Z0 and is exposed exactly
when PN would have emptied its stack. We also need a new start state P0, the
initial state of PF, whose job is to push Z0, the start symbol of PN, onto the stack
and enter state q0. Finally, we need another new state Pf, which is the accepting
state of PF.
The specification of PF is as follows:
PF = (Q ∪ {P0, Pf}, Σ, Γ ∪ {X0}, δF, P0, X0, {Pf})
where δF is defined by,
1. δF(P0, ε, X0) = {(q0, Z0X0)}. In its start state, PF makes a spontaneous
transition to the start state of PN, pushing its start symbol Z0 onto the
stack.
2. For all states ‘q’ in Q, inputs ‘a’ in Σ or a = ε, and stack symbols Y in Γ,
δF(q, a, Y) = δN (q, a, Y).
3. δF(q, ε, X0) = (Pf, ε) for every state ‘q’ in Q.

We must show that 'w' is in L(PF) if and only if 'w' is in N(PN). If 'w' is in
N(PN), the moves of the PDA PF to accept 'w' can be written as,
(P0, w, X0) ├ (q0, w, Z0X0) ├* (q, ε, X0) ├ (Pf, ε, ε),
where all moves are moves of PF and 'q' is the state in which PN empties its stack.
Thus PF accepts 'w' by final state.
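
Under the dictionary encoding of δ used in the earlier sketches, this construction can be written down almost literally. The names "p0", "pf" and "X0" stand for the new states and the new stack symbol introduced by the proof; everything here is an illustrative sketch, not a fixed recipe.

def empty_stack_to_final_state(delta_n, q0, z0):
    """Build δF of PF from δN of PN (acceptance by empty stack -> by final state)."""
    states = ({q for (q, _, _) in delta_n} |
              {p for moves in delta_n.values() for (p, _) in moves})
    delta_f = {k: set(v) for k, v in delta_n.items()}            # rule 2: copy δN
    delta_f[("p0", "", "X0")] = {(q0, (z0, "X0"))}               # rule 1: push Z0 above X0
    for q in states:                                             # rule 3: X0 exposed -> accept
        delta_f.setdefault((q, "", "X0"), set()).add(("pf", ()))
    return delta_f, "p0", "X0", {"pf"}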

3.5.4. From Final State to Empty Stack:


Theorem:
Let L be L(PF) for some PDA, PF = (Q, Σ, Γ, δ, q0, Z0, F). Then there is a
PDA PN such that L = N(PN).
Proof:
Again change the stack content from Z0 to Z0X0 by introducing a new stack
start symbol X0. We also need a new start state P0, which is the start state of PN,
and a new state P whose job is to empty the stack.
The specification of PN is as follows:
PN = (Q ∪ {P0, P}, Σ, Γ ∪ {X0}, δN, P0, X0)
where δN is defined by,
1. δN(P0, ε, X0) = {(q0, Z0X0)} to change the stack content initially.
2. δN(q, a, Y) = δF(q, a, Y), for all states ‘q’ in Q, inputs ‘a’ in Σ or a = ε,
and stack symbols Y in Γ.
3. δN(q, ε, Y) = (P, ε), for all accepting states 'q' in F and stack symbols Y
in Γ ∪ {X0}.
4. δN(P, ε, Y) = (P, ε), for all stack symbols Y in Γ ∪ {X0}, to pop the
remaining stack contents.

Suppose (q0, w, Z0) ├* (q, ε, α) in PF for some accepting state 'q' and stack
string α. Then PN can do the following:
(P0, w, X0) ├ (q0, w, Z0X0) ├* (q, ε, αX0) ├* (P, ε, ε),
where the middle moves simulate PF and the final moves pop the remaining stack
contents. Thus PN accepts 'w' by empty stack.
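
The reverse construction is just as direct under the same encoding; again "p0", "p" and "X0" are the fresh names introduced by the proof, and the sketch is illustrative only.

def final_state_to_empty_stack(delta_f, q0, z0, finals):
    """Build δN of PN from δF of PF (acceptance by final state -> by empty stack)."""
    stack_symbols = {x for (_, _, x) in delta_f} | {"X0"}
    delta_n = {k: set(v) for k, v in delta_f.items()}            # rule 2: copy δF
    delta_n[("p0", "", "X0")] = {(q0, (z0, "X0"))}               # rule 1: push Z0 above X0
    for x in stack_symbols:
        for q in finals:                                         # rule 3: jump to the popping state
            delta_n.setdefault((q, "", x), set()).add(("p", ()))
        delta_n.setdefault(("p", "", x), set()).add(("p", ()))   # rule 4: pop everything
    return delta_n, "p0", "X0"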

PROBLEMS:
Ex.1:
Construct a PDA that accepts the given language, L = {x^m y^n | n < m}.
Soln:
The language L contains strings such as, {xxy, xxxy, xxxyy,
xxxxyy,……}
First find the grammar for that language. The grammar for the language
can be,

S → xSy | xS | x
The corresponding PDA for the above grammar is,
P = (Q, Σ, Γ, δ, q0, Z0, F)
where, Q = {q}
Σ = {x, y}
Γ = {S, x, y}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ(q, ε, S) = {(q, xSy), (q, xS), (q, x)}
δ(q, x, x) = (q, ε)
δ(q, y, y) = (q, ε)
To show that the string xxxyy is accepted by the PDA,
(q, xxxyy, S) ├ (q, xxxyy, xSy) ├ (q, xxyy, Sy) ├ (q, xxyy, xSyy) ├ (q, xyy, Syy)
├ (q, xyy, xyy) ├ (q, yy, yy) ├ (q, y, y) ├ (q, ε, ε)
Hence the string is accepted.

Ex.2:
Construct a PDA that accepts the given language, L = {0^n 1^n | n ≥ 1}.
Soln:
The language L contains the strings, {01, 0011, 000111,
00001111,……}
First find the grammar for that language. The grammar for the language
can be,
S → 0S1 | 0A1
A → 01 | ε
The corresponding PDA for the above grammar is,
P = (Q, Σ, Γ, δ, q0, Z0, F)
where, Q = {q}
Σ = {0, 1}
Γ = {S, A, 0, 1}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,

δ(q, ε, S) = {(q, 0S1), (q, 0A1)}


δ(q, ε, A) = {(q, 01), (q, ε)}
δ(q, 0, 0) = (q, ε)
δ(q, 1, 1) = (q, ε)
To show that the string 000111 is accepted by the PDA,
(q, 000111, S) ├ (q, 000111, 0S1) ├ (q, 00111, S1) ├ (q, 00111, 0S11) ├ (q, 0111, S11)
├ (q, 0111, 0A111) ├ (q, 111, A111) ├ (q, 111, 111) ├ (q, 11, 11) ├ (q, 1, 1) ├ (q, ε, ε)

Hence the string is accepted.

3.6. EQUIVALENCE OF PUSHDOWN AUTOMATA AND CFG


3.6.1 From Grammars to PushDown Automata:
It is possible to convert a CFG to a PDA and vice versa.
Input:
Context Free Grammar ‘G’.
Output:
A PDA P that simulates the leftmost derivations of G. Its stack may contain
any of the symbols (variables as well as terminals) of the CFG; a sketch of the
construction follows the two rules below.
Let G = (V, T, P, S) be a CFG. The PDA which accepts L(G) is given
by,
P = ({q}, T, V ∪ T, δ, q, S, ф) where δ is defined by,
1. For each variable ‘A’ include a transition δ(q, ε, A) = (q, b) such that A
→ b is a
production of P.
2. For each terminal ‘a’ include a transition δ(q, a, a) = (q, ε).
PROBLEMS:
Ex.1:
Construct a PDA that accepts the language generated by the grammar,
S → aSbb | abb
Soln:
PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b}

Γ = {S, a, b}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ(q, ε, S) = {(q, aSbb), (q, abb)}
δ(q, a, a) = (q, ε)
δ(q, b, b) = (q, ε)

Ex.2:
Construct a PDA equivalent to the CFG,
S → aABB | aAA
A → aBB | a
B → bBB | A
Soln:
PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b}
Γ = {S, A, B, a, b}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ(q, ε, S) = {(q, aABB), (q, aAA)}
δ(q, ε, A) = {(q, aBB), (q, a)}
δ(q, ε, B) = {(q, bBB), (q, A)}
δ(q, a, a) = (q, ε)
δ(q, b, b) = (q, ε)

Ex.3:
Construct a PDA equivalent to the CFG,
E → I | E+E | E*E | (E)
I → a | b | Ia | Ib | I0 | I1
Soln:
PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b, 0, 1, +, *, (, )}
Γ = {E, I, a, b, 0, 1, +, *, (, )}
q0 = {q}
Z0 = {E}
F =ф
and δ is defined as,
δ(q, ε, E) = {(q, I), (q, E+E), (q, E*E), (q, (E))}
δ(q, ε, I) = {(q, a), (q, b), (q, Ia), (q, Ib), (q, I0), (q, I1)}
δ(q, a, a) = (q, ε)
δ(q, b, b) = (q, ε)
δ(q, 0, 0) = (q, ε)
δ(q, 1, 1) = (q, ε)
δ(q, +, +) = (q, ε)
δ(q, *, *) = (q, ε)
δ(q, (, ( ) = (q, ε)
δ(q, ), ) ) = (q, ε)

3.6.2. From PDA’s to Grammars:


Theorem:
If L is N(M) for some PDA M, then L is a context-free language.
Construction:

Let M = (Q, Σ, Γ, δ, q0, Z0, F) be the PDA accepting L by empty stack.
Construct the CFG G = (V, T, P, S), where,
- V is the set of objects of the form [q, A, p], with 'q' and 'p' in Q and A in Γ,
together with a new start symbol S.
- T = Σ.
- P is the set of productions,
(1) S → [q0, Z0, q], for each q in Q.
(2) If δ(q, a, A) contains (q1, B1B2 ……… Bm), then
[q, A, qm+1] → a[q1, B1, q2][q2, B2, q3] ………… [qm, Bm, qm+1]
is a production for each 'a' in Σ ∪ {ε}, each A, B1, B2, ……. Bm in Γ, and every
choice of states q2, ……, qm+1 in Q.
Proof:
If m = 0, then δ(q, a, A) contains (q1, ε) and the production is simply [q, A, q1] → a.
Let 'x' be an input string. We must show that [q, A, p] ⇒* x if and only if
(q, x, A) ├* (p, ε, ε).
First, we show by induction on 'i' that if (q, x, A) ├^i (p, ε, ε) then [q, A, p] ⇒* x.
Basis: when i = 1,
δ(q, x, A) contains (p, ε), where 'x' is ε or a single input symbol. Thus
[q, A, p] → x is a production of G.

Induction: when i > 1, let x = ay.


The first move is,
(q, ay, A) ├ (q1, y, B1B2 ……..Bn)
for some pair (q1, B1B2……Bn) in δ(q, a, A).
The string 'y' can be written as y = y1y2 ….. yn, where yj has the effect of
popping Bj from the stack, possibly after a long sequence of moves.
Let y1 be the prefix of 'y' at the end of which the stack first becomes as
short as n-1 symbols; let y2 be the symbols of 'y' following y1 such that at the
end of y2 the stack is as short as n-2 symbols, and so on. That is,
(q1, y1y2…….yn, B1B2…….Bn) ├* (q2, y2y3…….yn, B2B3…….Bn) ├* (q3,
y3y4…….yn, B3B4…….Bn) ├* ……
so there exist states q2, q3, …….. qn+1, with qn+1 = p, such that
(q1, y1, B1) ├* (q2, ε, ε)
(q2, y2, B2) ├* (q3, ε, ε)
……
(qj, yj, Bj) ├* (qj+1, ε, ε), each in fewer than i moves.


By the inductive hypothesis, in the CFG, [qj, Bj, qj+1] ⇒* yj for each j.
The original move,
(q, ay, A) ├ (q1, y, B1B2……Bn), that is,
(q, ay1y2 ……. yn, A) ├ (q1, y1y2……yn, B1B2……..Bn)
corresponds in the CFG to,
[q, A, p] ⇒ a[q1, B1, q2] [q2, B2, q3] …….. [qn, Bn, qn+1]
⇒* ay1y2 ……. yn
= ay = x
Therefore [q, A, p] ⇒* x iff (q, x, A) ├* (p, ε, ε), where qn+1 = p.

3.6.2.1. Algorithm for getting production rules of CFG:


1. The start symbol productions are, S → [q0, Z0, q],
where q0 is the start state, Z0 is the start stack symbol, and q ranges over
all the states in Q.
2. If there exists a move of the PDA, δ(q, a, Z) = {(q', ε)}, then the production
rule can be written as, [q, Z, q'] → a.
3. If there exists a move of the PDA, δ(q, a, Z) = {(q1, Z1Z2……. Zn)}, then the
production rules can be written as,
[q, Z, qn+1] → a[q1, Z1, q2] [q2, Z2, q3] …… [qn, Zn, qn+1],
for every choice of states q2, ……, qn+1 in Q. (A sketch of these two rules is
given below.)
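
A sketch of these rules under the dictionary encoding of δ used earlier; the variables [q, A, p] are written here as plain strings, and "" stands for ε. Applied to a single move it reproduces the productions worked out in Ex.1 below.

from itertools import product

def pda_to_cfg(delta, states, start_state, start_stack):
    """Generate the productions S -> [q0, Z0, q] and [q, Z, .] -> a[.][.]…[.]."""
    prods = [("S", (f"[{start_state},{start_stack},{q}]",)) for q in states]
    for (q, a, Z), moves in delta.items():
        for (q1, push) in moves:
            n = len(push)
            if n == 0:                                   # rule 2: [q, Z, q1] -> a
                prods.append((f"[{q},{Z},{q1}]", (a,) if a else ()))
                continue
            for qs in product(states, repeat=n):         # rule 3: all choices of q2,...,qn+1
                chain = [q1] + list(qs)
                body = ((a,) if a else ()) + tuple(
                    f"[{chain[i]},{push[i]},{chain[i + 1]}]" for i in range(n))
                prods.append((f"[{q},{Z},{qs[-1]}]", body))
    return prods

# One move of Ex.1 below, δ(q0, 1, S) = {(q0, AS)} with Q = {q0, q1}, yields
# [q0,S,q0] -> 1[q0,A,q0][q0,S,q0], [q0,S,q0] -> 1[q0,A,q1][q1,S,q0], and so on.
print(pda_to_cfg({("q0", "1", "S"): {("q0", ("A", "S"))}}, ["q0", "q1"], "q0", "S"))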

PROBLEMS:
Ex.1:
Construct a CFG for the PDA, P = ({q0, q1}, {0, 1}, {S, A}, δ, q0, S,
{q1}), where δ is,
δ(q0, 1, S) = {(q0, AS)} δ(q0, 0, A) = {(q1, A)}
δ(q0, ε, S) = {(q0, ε)} δ(q1, 1, A) = {(q1, ε)}
δ(q0, 1, A) = {(q0, AA)} δ(q1, 0, S) = {(q0, S)}
Soln:

CFG, G is defined as, G = (V, T, P, S)


Where, V = {S, [q0, S, q0], [q0, S, q1], [q1, S, q0], [q1, S, q1], [q0, A, q0],
[q0, A, q1], [q1, A, q0], [q1, A, q1]}
T = {0, 1}
S – the new start symbol of the grammar
To find production, P;
(1) Production for S,
S → [q0, S, q0]
S → [q0, S, q1] [q0 – start state, S – initial stack symbol]
(2) δ(q0, 1, S) = {(q0, AS)} we get,
For q0, [q0, S, q0] → 1[q0, A, q0] [q0, S, q0]
[q0, S, q0] → 1[q0, A, q1] [q1, S, q0]

For q1, [q0, S, q1] → 1[q0, A, q0] [q0, S, q1]


[q0, S, q1] → 1[q0, A, q1] [q1, S, q1]
(3) δ(q0, ε, S) = {(q0, ε)}
[q0, S, q0] → ε
(4) δ(q0, 1, A) = {(q0, AA)}
For q0, [q0, A, q0] → 1[q0, A, q0] [q0, A, q0]
[q0, A, q0] → 1[q0, A, q1] [q1, A, q0]

For q1, [q0, A, q1] → 1[q0, A, q0] [q0, A, q1]


[q0, A, q1] → 1[q0, A, q1] [q1, A, q1]
(5) δ(q0, 0, A) = {(q1, A)}
For q0, [q0, A, q0] → 0[q1, A, q0]
For q1, [q0, A, q1] → 0[q1, A, q1]
(6) δ(q1, 1, A) = {(q1, ε)}
[q1, A, q1] → 1
(7) δ(q1, 0, S) = {(q0, S)}
For q0, [q1, S, q0] → 0[q0, S, q0]
For q1, [q1, S, q1] → 0[q0, S, q1]

Since [q1, A, q0] has no productions, and [q0, A, q0], [q0, S, q1] and [q1, S, q1]
cannot derive any terminal string, all productions involving these variables can be
removed. After eliminating the unwanted productions,
S → [q0, S, q0]

[q0, S, q0] → 1[q0, A, q1] [q1, S, q0]


[q0, S, q0] → ε
[q0, A, q1] → 1[q0, A, q1] [q1, A, q1]
[q0, A, q1] → 0[q1, A, q1]
[q1, A, q1] → 1
[q1, S, q0] → 0[q0, S, q0]

Finally P is given by,


S → [q0, S, q0]
[q0, S, q0] → 1[q0, A, q1] [q1, S, q0] | ε
[q0, A, q1] → 1[q0, A, q1] [q1, A, q1] | 0[q1, A, q1]
[q1, A, q1] → 1
[q1, S, q0] → 0[q0, S, q0]

Ex.2:
Construct a CFG for the PDA, P = ({q0, q1}, {0, 1}, {X, Z0}, δ, q0, Z0,
{q1}), where δ is,
δ(q0, 0, Z0) = {(q0, XZ0)} δ(q1, 1, X) = {(q1, ε)}
δ(q0, 0, X) = {(q0, XX)} δ(q1, ε, X) = {(q1, ε)}
δ(q0, 1, X) = {(q1, ε)} δ(q1, ε, Z0) = {(q1, ε)}
Soln:
CFG, G is defined as, G = (V, T, P, S)
Where, V = {S, [q0, X, q0], [q0, X, q1], [q1, X, q0], [q1, X, q1], [q0, Z0, q0],
[q0, Z0, q1], [q1, Z0, q0], [q1, Z0, q1]}
T = {0, 1}
S – the new start symbol of the grammar
To find production, P;
(1) Production for S,
S → [q0, Z0, q0]
S → [q0, Z0, q1] [q0 – start state, Z0 – initial stack symbol]

(2) δ(q0, 0, Z0) = {(q0, XZ0)} we get,


For q0, [q0, Z0, q0] → 0[q0, X, q0] [q0, Z0, q0]
[q0, Z0, q0] → 0[q0, X, q1] [q1, Z0, q0]
For q1, [q0, Z0, q1] → 0[q0, X, q0] [q0, Z0, q1]

[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]

(3) δ(q0, 0, X) = {(q0, XX)}


For q0, [q0, X, q0] → 0[q0, X, q0] [q0, X, q0]
[q0, X, q0] → 0[q0, X, q1] [q1, X, q0]
For q1, [q0, X, q1] → 0[q0, X, q0] [q0, X, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1]

(4) δ(q0, 1, X) = {(q1, ε)}


[q0, X, q1] → 1

(5) δ(q1, 1, X) = {(q1, ε)}


[q1, X, q1] → 1

(6) δ(q1, ε, X) = {(q1, ε)}


[q1, X, q1] → ε

(7) δ(q1, ε, Z0) = {(q1, ε)}


[q1, Z0, q1] → ε

After eliminating the unwanted productions, we get;


S → [q0, Z0, q1]
[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1]
[q0, X, q1] → 1
[q1, X, q1] → 1
[q1, X, q1] → ε
[q1, Z0, q1] → ε

Finally P is given by,


S → [q0, Z0, q1]
[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1] | 1
[q1, X, q1] → 1 | ε
[q1, Z0, q1] → ε

Ex.3:
Construct a CFG for the PDA, P = ({q0, q1}, {a, b}, {Z, Z0}, δ, q0, Z0,
{q1}), where δ is,
δ(q0, b, Z0) = {(q0, ZZ0)} δ(q0, ε, Z0) = {(q0, ε)}
δ(q0, b, Z) = {(q0, ZZ)} δ(q0, a, Z) = {(q1, Z)}
δ(q1, b, Z) = {(q1, ε)} δ(q1, a, Z0) = {(q0, Z0)}

Soln:
CFG, G is defined as, G = (V, T, P, S)
Where, V = {S, [q0, Z0, q0], [q0, Z0, q1], [q1, Z0, q0], [q1, Z0, q1], [q0, Z, q0],
[q0, Z, q1], [q1, Z, q0], [q1, Z, q1]}
T = {a, b}
S – the new start symbol of the grammar
To find production, P;
(1) Production for S,
S → [q0, Z0, q0]
S → [q0, Z0, q1] [q0 – start state, Z0 – initial stack symbol]

(2) δ(q0, b, Z0) = {(q0, ZZ0)} we get,


For q0, [q0, Z0, q0] → b[q0, Z, q0] [q0, Z0, q0]
[q0, Z0, q0] → b[q0, Z, q1] [q1, Z0, q0]

For q1, [q0, Z0, q1] → b[q0, Z, q0] [q0, Z0, q1]
[q0, Z0, q1] → b[q0, Z, q1] [q1, Z0, q1]

(3) δ(q0, b, Z) = {(q0, ZZ)}


For q0, [q0, Z, q0] → b[q0, Z, q0] [q0, Z, q0]
[q0, Z, q0] → b[q0, Z, q1] [q1, Z, q0]

For q1, [q0, Z, q1] → b[q0, Z, q0] [q0, Z, q1]


[q0, Z, q1] → b[q0, Z, q1] [q1, Z, q1]

(4) δ(q1, b, Z) = {(q1, ε)}


[q1, Z, q1] → b

(5) δ(q0, ε, Z0) = {(q0, ε)}


[q0, Z0, q0] → ε

(6) δ(q0, a, Z) = {(q1, Z)}


For q0, [q0, Z, q0] → a[q1, Z, q0]
For q1, [q0, Z, q1] → a[q1, Z, q1]
(7) δ(q1, a, Z0) = {(q0, Z0)}
For q0, [q1, Z0, q0] → a[q0, Z0, q0]
For q1, [q1, Z0, q1] → a[q0, Z0, q1]

After eliminating the unwanted productions, we get;


S → [q0, Z0, q0]
[q0, Z0, q0] → b[q0, Z, q1] [q1, Z0, q0]
[q0, Z0, q0] → ε
[q0, Z, q1] → b[q0, Z, q1] [q1, Z, q1]
[q0, Z, q1] → a[q1, Z, q1]
[q1, Z0, q0] → a[q0, Z0, q0]
[q1, Z, q1] → b

Finally P is given by,


S → [q0, Z0, q0]
[q0, Z0, q0] → b[q0, Z, q1] [q1, Z0, q0] | ε
[q0, Z, q1] → b[q0, Z, q1] [q1, Z, q1] | a[q1, Z, q1]
[q1, Z0, q0] → a[q0, Z0, q0]
[q1, Z, q1] → b

3.7. DETERMINISTIC PUSHDOWN AUTOMATA (DPDA)

3.7.1. Definition of a Deterministic PDA:


A PDA P = (Q, Σ, Γ, δ, q0, Z0, F) is said to be deterministic if and only if
the following conditions are met;
(1) δ(q, a, X) has at most one member for any ‘q’ in Q, ‘a’ in Σ or a=ε and X in Γ.
(2) If δ(q, a, X) is nonempty for some 'a' in Σ, then δ(q, ε, X) must be empty.
(These conditions can be checked mechanically, as sketched below.)
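
Both conditions can be checked on a δ given in the dictionary encoding used in the earlier sketches (an illustrative check, with "" standing for ε):

def is_deterministic(delta):
    """Conditions (1) and (2) on δ encoded as (state, input or "", stack top) -> set of moves."""
    for (q, a, X), moves in delta.items():
        if len(moves) > 1:                               # condition (1): at most one choice
            return False
        if a and moves and delta.get((q, "", X)):        # condition (2): no mixing with ε-moves
            return False
    return True

# The ww^R PDA of Section 3.4.2 fails condition (2): for example,
# δ(q0, 0, 0) and δ(q0, ε, 0) are both nonempty, so it is not deterministic.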

3.7.2. Regular Languages and DPDA’s:



The DPDA’s accept a class of language that is between the regular


languages and the CFL’s. We shall first prove that the DPDA languages include
all the regular languages.
3.7.2.1. Theorem:

If L is a regular language, then L = L(P) for some DPDA P.


Proof:
A DPDA can simulate a deterministic finite automaton. The PDA keeps
some stack symbol Z0 on its stack because a PDA has to have a stack, but really
the PDA ignores its stack and just uses its state. Formally, let A = (Q, Σ, δA, q0,
F) be a DFA and construct the DPDA,
P = (Q, Σ, {Z0}, δP, q0, Z0, F)
by defining δP(q, a, Z0) = {(p, Z0)} for all states 'p' and 'q' in Q such that
δA(q, a) = p.
∴ (q0, w, Z0) ├* (p, ε, Z0) in P if and only if δA(q0, w) = p, so P accepts by
final state exactly the strings that A accepts.
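
A sketch of this construction, assuming the DFA's transition function is given as a dictionary (q, a) -> p and reusing the PDA encoding of the earlier sketches:

def dfa_to_dpda(delta_a, q0, finals, z0="Z0"):
    """Build δP of a DPDA that keeps Z0 on its stack and just mimics the DFA."""
    delta_p = {(q, a, z0): {(p, (z0,))}          # one move per DFA transition
               for (q, a), p in delta_a.items()}
    return delta_p, q0, z0, finals

# Example: a DFA over {0, 1} that accepts strings ending in 1.
delta_a = {("s0", "0"): "s0", ("s0", "1"): "s1",
           ("s1", "0"): "s0", ("s1", "1"): "s1"}
delta_p, start, z0, finals = dfa_to_dpda(delta_a, "s0", {"s1"})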

3.7.3. DPDA’s and Context Free Languages:

The languages accepted by DPDA’s by final state properly include the


regular languages, but are properly included in the CFL’s.
