Pushdown Automata

Hierarchy of languages

Regular Languages ↔ Finite State Machines, Regular Expressions

Context-Free Languages ↔ Context-Free Grammars, Pushdown Automata

[Nested-circles diagram: Regular Languages ⊂ Context-Free Languages ⊂ Recursive Languages ⊂ Recursively Enumerable Languages; Non-Recursively Enumerable Languages lie outside.]

1
Pushdown Automata (PDA)

• Informally:
– A PDA is an NFA-ε with a stack.
– Transitions are modified to accommodate stack operations.

• Questions:
– What is a stack?
– How does a stack help?

• A DFA can “remember” only a finite amount of information, whereas a PDA can
“remember” an unbounded amount of (certain types of) information, using its one memory stack

2
• Example:

{0^n1^n | n >= 0} is not regular, but

{0^n1^n | 0 <= n <= k} is regular, for any fixed k.

• For k=3:
L = {ε, 01, 0011, 000111}
[DFA diagram for k = 3: states q0 through q7; q0–q3 count up to three 0's, q4–q7 match the corresponding 1's, and any out-of-place symbol leads to a dead state.]
3
• In a DFA, each state remembers a finite amount of information.

• To get {0^n1^n | n >= 0} with a DFA would require an infinite number of states
using the preceding technique.

• An unbounded stack solves the problem for {0^n1^n | n >= 0} as follows (see the sketch after this list):
– Read all 0's and place them on the stack
– Read all 1's and match each with the corresponding 0 on the stack

• Only need two states to do this in a PDA

• Similarly for {0^n 1^m 0^(n+m) | n, m >= 0}
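As an illustration only (not part of the original slides), here is a minimal Python sketch of that stack idea for {0^n1^n | n >= 0}; the function name match_0n1n is made up:

def match_0n1n(s: str) -> bool:
    """Informal stack idea: push each leading 0, then pop one 0 per 1."""
    stack = []
    i = 0
    while i < len(s) and s[i] == '0':      # phase 1: read all 0's, push them
        stack.append('0')
        i += 1
    while i < len(s) and s[i] == '1':      # phase 2: match 1's against the stack
        if not stack:
            return False                   # more 1's than 0's
        stack.pop()
        i += 1
    # accept iff all input was consumed and the stack is empty
    return i == len(s) and len(stack) == 0

# match_0n1n("0011") -> True, match_0n1n("011") -> False, match_0n1n("") -> True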

4
Formal Definition of a PDA

• A pushdown automaton (PDA) is a seven-tuple:

M = (Q, Σ, Г, δ, q0, z0, F)

Q A finite set of states


Σ A finite input alphabet
Г A finite stack alphabet
q0 The initial/starting state, q0 is in Q
z0 A starting stack symbol, is in Г // need not always remain at the bottom of stack
F A set of final/accepting states, which is a subset of Q
δ A transition function, where

δ: Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

5
• Consider the various parts of δ:

Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

– Q on the LHS means that at each step in a computation, a PDA must consider its
current state.
– Г on the LHS means that at each step in a computation, a PDA must consider the
symbol on top of its stack.
– Σ U {ε} on the LHS means that at each step in a computation, a PDA may or may
not consider the current input symbol, i.e., it may have epsilon transitions.

– “Finite subsets” on the RHS means that at each step in a computation, a PDA may
have several options.
– Q on the RHS means that each option specifies a new state.
– Г* on the RHS means that each option specifies zero or more stack symbols that
will replace the top stack symbol, but in a specific sequence.
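As an aside (not on the original slide), one illustrative way to encode a single move in Python: represent the stack as a string whose first character is the top symbol, and an option as a (new state, push string) pair; the names below are made up for this sketch.

def apply_option(stack: str, option: tuple) -> tuple:
    """One PDA move: the chosen option (p, gamma) replaces the top stack symbol;
    gamma's leftmost symbol ends up on top."""
    p, gamma = option
    return p, gamma + stack[1:]

# A transition function could then be a dict, e.g.
#   delta = {("q", "a", "z"): {("p1", "yz"), ("p2", "")}}
# and apply_option("z#", ("p1", "yz")) == ("p1", "yz#").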

6
• Two types of PDA transitions:

δ(q, a, z) = {(p1,γ1), (p2,γ2),…, (pm,γm)}

– Current state is q
– Current input symbol is a
– Symbol currently on top of the stack is z
– Move to state pi from q
– Replace z with γi on the stack (leftmost symbol on top)
– Move the input head to the next input symbol

[Diagram: from state q, an arc labeled a, z/γi leads to state pi, for each i = 1, …, m.]
7
• Two types of PDA transitions:

δ(q, ε, z) = {(p1,γ1), (p2,γ2),…, (pm,γm)}

– Current state is q
– Current input symbol is not considered
– Symbol currently on top of the stack is z
– Move to state pi from q
– Replace z with γi on the stack (leftmost symbol on top)
– No input symbol is read

[Diagram: from state q, an arc labeled ε, z/γi leads to state pi, for each i = 1, …, m.]
8
• Example: {0^n1^n | n >= 0}
M = ({q1, q2}, {0, 1}, {L, #}, δ, q1, #, Ø)
δ:
(1) δ(q1, 0, #) = {(q1, L#)} // stack order: L on top, then # below
(2) δ(q1, 1, #) = Ø // illegal, string rejected, When will it happen?
(3) δ(q1, 0, L) = {(q1, LL)}
(4) δ(q1, 1, L) = {(q2, ε)}
(5) δ(q2, 1, L) = {(q2, ε)}
(6) δ(q2, ε, #) = {(q2, ε)} //if ε read & stack hits bottom, accept
(7) δ(q2, ε, L) = Ø // illegal, string rejected
(8) δ(q1, ε, #) = {(q2, ε)} // n=0, accept

• Goal: (acceptance)
– Read the entire input string
– Terminate with an empty stack

• Informally, a string is accepted if there exists a computation that uses up all the
input and leaves the stack empty.
• How many rules should there be in delta?
9
• Language: {0^n1^n | n >= 0}
δ:
(1) δ(q1, 0, #) = {(q1, L#)} // stack order: L on top, then # below
(2) δ(q1, 1, #) = Ø // illegal, string rejected, When will it happen?
(3) δ(q1, 0, L) = {(q1, LL)}
(4) δ(q1, 1, L) = {(q2, ε)}
(5) δ(q2, 1, L) = {(q2, ε)}
(6) δ(q2, ε, #) = {(q2, ε)} //if ε read & stack hits bottom, accept
(7) δ(q2, ε, L) = Ø // illegal, string rejected

(8) δ(q1, ε, #) = {(q2, ε)} // n=0, accept

• 0011
• (q1, 0011, #) |-
(q1, 011, L#) |-
(q1, 11, LL#) |-
(q2, 1, L#) |-
(q2, ε, #) |-
(q2, ε, ε): accept
• 011
• (q1, 011, #) |-
(q1, 11, L#) |-
(q2, 1, #) |-
Ø : reject
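These traces can also be checked mechanically. The following is a rough Python sketch (not from the slides; the names accepts_by_empty_stack and DELTA are invented here) of a backtracking simulator for acceptance by empty stack, with the stack written as a string whose first character is the top symbol:

# Hypothetical encoding of the delta above: keys are (state, input symbol or ""
# for an epsilon move, stack top); values are sets of (next state, push string).
DELTA = {
    ("q1", "0", "#"): {("q1", "L#")},   # rule (1)
    ("q1", "0", "L"): {("q1", "LL")},   # rule (3)
    ("q1", "1", "L"): {("q2", "")},     # rule (4)
    ("q2", "1", "L"): {("q2", "")},     # rule (5)
    ("q2", "",  "#"): {("q2", "")},     # rule (6)
    ("q1", "",  "#"): {("q2", "")},     # rule (8)
}

def accepts_by_empty_stack(delta, start, z0, w):
    """True iff some computation reads all of w and empties the stack."""
    def search(q, rest, stack):
        if rest == "" and stack == "":
            return True                              # all input read, empty stack
        if stack == "":
            return False                             # empty stack but input remains
        top, below = stack[0], stack[1:]             # stack grows leftwards
        moves = []
        if rest:                                     # moves that read one input symbol
            moves += [(m, rest[1:]) for m in delta.get((q, rest[0], top), ())]
        moves += [(m, rest) for m in delta.get((q, "", top), ())]   # epsilon moves
        return any(search(p, r, gamma + below) for (p, gamma), r in moves)
    return search(start, w, z0)

# accepts_by_empty_stack(DELTA, "q1", "#", "0011") -> True
# accepts_by_empty_stack(DELTA, "q1", "#", "011")  -> False
# accepts_by_empty_stack(DELTA, "q1", "#", "001")  -> False

This sketch does not guard against cycles of ε-moves, which none of the machines in these slides have.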
• Try 001
10
• Example: balanced parentheses,
• e.g. in-language: ((())()), or (())(), but not-in-language: ((())

M = ({q1}, {“(“, “)”}, {L, #}, δ, q1, #, Ø)


δ:
(1) δ(q1, (, #) = {(q1, L#)} // stack order: L-on top-then- # lower
(2) δ(q1, ), #) = Ø // illegal, string rejected
(3) δ(q1, (, L) = {(q1, LL)}
(4) δ(q1, ), L) = {(q1, ε)}
(5) δ(q1, ε, #) = {(q1, ε)} //if ε read & stack hits bottom, accept
(6) δ(q1, ε, L) = Ø // illegal, string rejected
// What does it mean? When will it happen?
• Goal: (acceptance)
– Read the entire input string
– Terminate with an empty stack

• Informally, a string is accepted if there exists a computation that uses up all the
input and leaves the stack empty.
• How many rules should be in delta?
11
• Transition Diagram:

[Diagram: self-loops on the single state q1, labeled (, # | L#; (, L | LL; ), L | ε; and ε, # | ε.]

• Example Computation:

Current Input    Stack    Transition

(())             #        -- initial status
())              L#       (1)  - could have applied rule (5), but it would have done no good
))               LL#      (3)
)                L#       (4)
ε                #        (4)
ε                -        (5)
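Assuming the accepts_by_empty_stack sketch shown earlier, the parentheses machine's δ could be encoded and tested the same way (illustrative only):

PAREN_DELTA = {
    ("q1", "(", "#"): {("q1", "L#")},   # rule (1)
    ("q1", "(", "L"): {("q1", "LL")},   # rule (3)
    ("q1", ")", "L"): {("q1", "")},     # rule (4)
    ("q1", "",  "#"): {("q1", "")},     # rule (5)
}

for s in ["((())())", "(())()", "((())", ""]:
    print(repr(s), accepts_by_empty_stack(PAREN_DELTA, "q1", "#", s))
# expected: True, True, False, True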

12
• Example PDA #1: For the language {x | x = wcw^r and w in {0,1}*}; here Σ = {0, 1, c}
• Is this a regular language?
• Note: length |x| is odd
M = ({q1, q2}, {0, 1, c}, {#, B, G}, δ, q1, #, Ø)
δ:
(1) δ(q1, 0, #) = {(q1, B#)} (9) δ(q1, 1, #) = {(q1, G#)}
(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, #) = {(q2, #)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, #) = {(q2, ε)}

• Notes:
– Stack grows leftwards
– Only rule #8 is non-deterministic.
– Rule #8 is used to pop the final stack symbol off at the end of a computation.
13
• Example Computation:

(1) δ(q1, 0, #) = {(q1, B#)} (9) δ(q1, 1, #) = {(q1, G#)}


(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, #) = {(q2, #)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, #) = {(q2, ε)}

State Input Stack Rule Applied Rules Applicable


q1 01c10 # (1)
q1 1c10 B# (1) (10)
q1 c10 GB# (10) (6)
q2 10 GB# (6) (12)
q2 0 B# (12) (7)
q2 ε # (7) (8)
q2 ε ε (8) -
14
• Example Computation:

(1) δ(q1, 0, #) = {(q1, B#)} (9) δ(q1, 1, #) = {(q1, G#)}


(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, #) = {(q2, #)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, #) = {(q2, ε)}

State Input Stack Rule Applied


q1 1c1 #
q1 c1 G# (9)
q2 1 G# (6)
q2 ε # (12)
q2 ε ε (8)

• Questions:
– Why isn’t δ(q2, 0, G) defined?
– Why isn’t δ(q2, 1, B) defined?
• TRY: 11c1
15
• Example PDA #2: For the language {x | x = ww^r and w in {0,1}*}
• Note: length |x| is even
M = ({q1, q2}, {0, 1}, {#, B, G}, δ, q1, #, Ø)

δ:
(1) δ(q1, 0, #) = {(q1, B#)}
(2) δ(q1, 1, #) = {(q1, G#)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (7) δ(q2, 0, B) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)} (8) δ(q2, 1, G) = {(q2, ε)}
(9) δ(q1, ε, #) = {(q2, ε)}
(10) δ(q2, ε, #) = {(q2, ε)}

• Notes:
– Rules #3 and #6 are non-deterministic: two options each
– Rules #9 and #10 are used to pop the final stack symbol off at the end of a computation.

16
• Example Computation:

(1) δ(q1, 0, #) = {(q1, B#)} (6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}
(2) δ(q1, 1, #) = {(q1, G#)} (7) δ(q2, 0, B) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (8) δ(q2, 1, G) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (9) δ(q1, ε, #) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)} (10) δ(q2, ε, #) = {(q2, ε)}

State Input Stack Rule Applied Rules Applicable


q1 000000 # (1), (9)
q1 00000 B# (1) (3), both options
q1 0000 BB# (3) option #1 (3), both options
q1 000 BBB# (3) option #1 (3), both options
q2 00 BB# (3)option #2 (7)
q2 0 B# (7) (7)
q2 ε # (7) (10)
q2 ε ε (10)

• Questions:
– What is rule #10 used for?
– What is rule #9 used for?
– Why do rules #3 and #6 have options?
– Why don’t rules #4 and #5 have similar options? [transition not possible if the previous input
symbol was different]
17
• Negative Example Computation:

(1) δ(q1, 0, #) = {(q1, B#)} (6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}
(2) δ(q1, 1, #) = {(q1, G#)} (7) δ(q2, 0, B) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (8) δ(q2, 1, G) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (9) δ(q1, ε, #) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)} (10) δ(q2, ε, #) = {(q2, ε)}

State Input Stack Rule Applied


q1 000 #
q1 00 B# (1)
q1 0 BB# (3) option #1 [(q2, 0, #) by option #2]
q1 ε BBB# (3) option #1 - crashes, no rule left to apply
[(q2, ε, B#) by option #2 - rejects: end of string but the stack is not empty]

18
• Example Computation:

(1) δ(q1, 0, #) = {(q1, B#)} (6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}
(2) δ(q1, 1, #) = {(q1, G#)} (7) δ(q2, 0, B) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (8) δ(q2, 1, G) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (9) δ(q1, ε, #) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)} (10) δ(q2, ε, #) = {(q2, ε)}

State Input Stack Rule Applied


q1 010010 #
q1 10010 B# (1) From (1) and (9)
q1 0010 GB# (5)
q1 010 BGB# (4)
q2 10 GB# (3) option #2
q2 0 B# (8)
q2 ε # (7)
q2 ε ε (10)

• Exercises:
– 0011001100 // how many options might the machine (or you!) have to try before reaching a decision?
– 011110
– 0111
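With the same accepts_by_empty_stack sketch from before (illustrative only, not part of the slides), PDA #2 can be encoded directly; the two options in rules #3 and #6 simply become two-element sets, and the backtracking search tries them all:

WWR_DELTA = {
    ("q1", "0", "#"): {("q1", "B#")},               # rule (1)
    ("q1", "1", "#"): {("q1", "G#")},               # rule (2)
    ("q1", "0", "B"): {("q1", "BB"), ("q2", "")},   # rule (3), two options
    ("q1", "0", "G"): {("q1", "BG")},               # rule (4)
    ("q1", "1", "B"): {("q1", "GB")},               # rule (5)
    ("q1", "1", "G"): {("q1", "GG"), ("q2", "")},   # rule (6), two options
    ("q2", "0", "B"): {("q2", "")},                 # rule (7)
    ("q2", "1", "G"): {("q2", "")},                 # rule (8)
    ("q1", "",  "#"): {("q2", "")},                 # rule (9)
    ("q2", "",  "#"): {("q2", "")},                 # rule (10)
}

for s in ["011110", "0111", "010010"]:
    print(s, accepts_by_empty_stack(WWR_DELTA, "q1", "#", s))
# expected: True, False, True; counting the branches explored for the first
# exercise string is left to the reader.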

19
Formal Definitions for PDAs
• Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA.

• Definition: An instantaneous description (ID) is a triple (q, w, γ), where q is in Q, w is
in Σ* and γ is in Г*.
– q is the current state
– w is the unused input
– γ is the current stack contents

• Example: (for PDA #2)

(q1, 111, GB#) (q1, 11, GGB#)

(q1, 111, GB#) (q2, 11, B#)

(q1, 000, G#) (q2, 00, #)


20
• Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA.

• Definition: Let a be in Σ U {ε}, w be in Σ*, z be in Г, and α and β both be in Г*. Then:

(q, aw, zα) |—M (p, w, βα)

if δ(q, a, z) contains (p, β).

• Intuitively, if I and J are instantaneous descriptions, then I |— J means that J follows
from I by one transition.

21
• Examples: (PDA #2)

(q1, 111, GB#) |— (q1, 11, GGB#)    (6) option #1, with a=1, z=G, β=GG, w=11, and α=B#

(q1, 111, GB#) |— (q2, 11, B#)      (6) option #2, with a=1, z=G, β=ε, w=11, and α=B#

(q1, 000, G#) |— (q2, 00, #)        is not true, for any a, z, β, w and α

• Examples: (the balanced-parentheses PDA)

(q1, (())), L#) |— (q1, ())), LL#)  (3)

22
• Definition: |—* is the reflexive and transitive closure of |—.
– I |—* I for each instantaneous description I
– If I |— J and J |—* K then I |—* K

• Intuitively, if I and J are instantaneous descriptions, then I |—* J means that J follows
from I by zero or more transitions.

23
• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by empty
stack, denoted LE(M), is the set

{w | (q0, w, z0) |—* (p, ε, ε) for some p in Q}

• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by final
state, denoted LF(M), is the set

{w | (q0, w, z0) |—* (p, ε, γ) for some p in F and γ in Г*}

• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by empty
stack and final state, denoted L(M), is the set

{w | (q0, w, z0) |—* (p, ε, ε) for some p in F}
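For comparison, acceptance by final state only changes the termination test in the earlier simulator sketch (again illustrative, not from the slides; cycles of ε-moves are not handled):

def accepts_by_final_state(delta, start, z0, finals, w):
    """Sketch of LF(M): some computation reads all of w and ends in a state of finals;
    the remaining stack contents are ignored."""
    def search(q, rest, stack):
        if rest == "" and q in finals:
            return True                              # input consumed, final state reached
        if stack == "":
            return False                             # no move possible without a stack top
        top, below = stack[0], stack[1:]
        moves = []
        if rest:
            moves += [(m, rest[1:]) for m in delta.get((q, rest[0], top), ())]
        moves += [(m, rest) for m in delta.get((q, "", top), ())]
        return any(search(p, r, gamma + below) for (p, gamma), r in moves)
    return search(start, w, z0)

# For L(M) (final state and empty stack together), the test would additionally require stack == "".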

24
• Lemma 1: Let L = LE(M1) for some PDA M1. Then there exists a PDA M2 such that L =
LF(M2).

• Lemma 2: Let L = LF(M1) for some PDA M1. Then there exists a PDA M2 such that L =
LE(M2).

• Theorem: Let L be a language. Then there exists a PDA M1 such that L = LF(M1) if and
only if there exists a PDA M2 such that L = LE(M2).

• Corollary: The PDAs that accept by empty stack and the PDAs that accept by final state
define the same class of languages.

• Note: Similar lemmas and theorems could be stated for PDAs that accept by both final
state and empty stack.

25
Back to CFG again:
PDA equivalent to CFG

26
• Definition: Let G = (V, T, P, S) be a CFG. If every production in P is of the form

A –> aα

where A is in V, a is in T, and α is in V*, then G is said to be in Greibach Normal Form
(GNF).
Exactly one terminal in front, followed by non-terminals only.

• Example:

S –> aAB | bB
A –> aA | a
B –> bB | c Language: (aa+ + b)b*c

• Theorem: Let L be a CFL. Then L – {ε} is a CFL.

• Theorem: Let L be a CFL not containing {ε}. Then there exists a GNF grammar G such
that L = L(G).

27
• Lemma 1: Let L be a CFL. Then there exists a PDA M such that L = LE(M).

• Proof: Assume without loss of generality that ε is not in L. The construction can be
modified to include ε later.

Let G = (V, T, P, S) be a CFG, and assume without loss of generality that G is in GNF.
Construct M = (Q, Σ, Г, δ, q, z, Ø) where:
Q = {q}
Σ=T
Г=V
z=S
δ: for all a in Σ and A in Г, δ(q, a, A) contains (q, γ)
if A –> aγ is in P or rather:
δ(q, a, A) = {(q, γ) | A –> aγ is in P and γ is in Г*},
for all a in Σ and A in Г

• For a given string x in Σ* , M will attempt to simulate a leftmost derivation of x with G.
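A sketch of this construction in Python (illustrative; the helper name gnf_to_pda_delta and the triple encoding of productions are assumptions, not from the slides):

def gnf_to_pda_delta(productions):
    """productions: iterable of (A, a, gamma) triples encoding GNF rules A -> a gamma,
    where gamma is a (possibly empty) string of non-terminals.
    Returns delta for the single-state PDA with state "q"; the start stack symbol is S."""
    delta = {}
    for A, a, gamma in productions:
        # each production A -> a gamma contributes the option (q, gamma) to delta(q, a, A)
        delta.setdefault(("q", a, A), set()).add(("q", gamma))
    return delta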

28
• Example #1: Consider the following CFG in GNF.

S –> aS G is in GNF
S –> a L(G) = a+

Construct M as:

Q = {q}
Σ = T = {a}
Г = V = {S}
z=S

δ(q, a, S) = {(q, S), (q, ε)}


δ(q, ε, S) = Ø

• Is δ complete?

29
• Example #2: Consider the following CFG in GNF.

(1) S –> aA
(2) S –> aB
(3) A –> aA G is in GNF
(4) A –> aB L(G) = a+ b+ // This looks ok to me, one, two or more a’s in the start
(5) B –> bB
(6) B –> b [Can you write a simpler equivalent CFG? Will it be GNF?]

Construct M as:
Q = {q}
Σ = T = {a, b}
Г = V = {S, A, B}
z=S

(1) δ(q, a, S) = {(q, A), (q, B)} From productions #1 and 2, S->aA, S->aB
(2) δ(q, a, A) = {(q, A), (q, B)} From productions #3 and 4, A->aA, A->aB
(3) δ(q, a, B) = Ø
(4) δ(q, b, S) = Ø
(5) δ(q, b, A) = Ø
(6) δ(q, b, B) = {(q, B), (q, ε)} From productions #5 and 6, B->bB, B->b
(7) δ(q, ε, S) = Ø
(8) δ(q, ε, A) = Ø
(9) δ(q, ε, B) = Ø Is δ complete?
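Using the gnf_to_pda_delta and accepts_by_empty_stack sketches from earlier (illustrative only), this δ can be generated and exercised as follows:

# Example #2 grammar in GNF: S -> aA | aB, A -> aA | aB, B -> bB | b
G2 = [("S", "a", "A"), ("S", "a", "B"),
      ("A", "a", "A"), ("A", "a", "B"),
      ("B", "b", "B"), ("B", "b", "")]

delta2 = gnf_to_pda_delta(G2)
for s in ["aabb", "ab", "aa", "ba"]:
    print(s, accepts_by_empty_stack(delta2, "q", "S", s))
# expected: True, True, False, False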

30
• For a string w in L(G) the PDA M will simulate a leftmost derivation of w.

– If w is in L(G) then (q, w, z0) |—* (q, ε, ε)

– If (q, w, z0) |—* (q, ε, ε) then w is in L(G)

• Consider generating a string using G. Since G is in GNF, each sentential form in a leftmost derivation
has form:

=> t1t2…ti A1A2…Am

terminals non-terminals

• And each step in the derivation (i.e., each application of a production) adds a terminal and some non-
terminals.

A1 –> ti+1α

=> t1t2…ti ti+1 αA2…Am

• Each transition of the PDA simulates one derivation step. Thus, the ith step of the PDA's computation
corresponds to the ith step in a corresponding leftmost derivation with the grammar.

• After the (i+1)st step of the computation of the PDA, t1t2…ti+1 are the symbols that have already been read
by the PDA and αA2…Am are the stack contents.
31
• For each leftmost derivation of a string generated by the grammar, there is an equivalent
accepting computation of that string by the PDA.

• Each sentential form in the leftmost derivation corresponds to an instantaneous
description in the PDA’s corresponding computation.

• For example, the PDA instantaneous description corresponding to the sentential form:

=> t1t2…ti A1A2…Am

would be:

(q, ti+1ti+2…tn , A1A2…Am)

32
• Example: Using the grammar from example #2:

Grammar:                        Derivation:
(1) S –> aA                     S => aA        (1)
(2) S –> aB                       => aaA       (3)
(3) A –> aA                       => aaaA      (3)
(4) A –> aB                       => aaaaB     (4)
(5) B –> bB                       => aaaabB    (5)
(6) B –> b                        => aaaabb    (6)
G is in GNF; L(G) = a+b+

• The corresponding computation of the PDA, annotated as (rule#)/right-side#:

(q, aaaabb, S) |— (q, aaabb, A)   (1)/1
               |— (q, aabb, A)    (2)/1
               |— (q, abb, A)     (2)/1
               |— (q, bb, B)      (2)/2
               |— (q, b, B)       (6)/1
               |— (q, ε, ε)       (6)/2

δ (repeated from example #2):
(1) δ(q, a, S) = {(q, A), (q, B)}      (6) δ(q, b, B) = {(q, B), (q, ε)}
(2) δ(q, a, A) = {(q, A), (q, B)}      (7) δ(q, ε, S) = Ø
(3) δ(q, a, B) = Ø                     (8) δ(q, ε, A) = Ø
(4) δ(q, b, S) = Ø                     (9) δ(q, ε, B) = Ø
(5) δ(q, b, A) = Ø

– String is read
– Stack is emptied
– Therefore the string is accepted by the PDA
33
• Another Example: Using the PDA from example #2:

(q, aabb, S) |— (q, abb, A) (1)/1


|— (q, bb, B) (2)/2
|— (q, b, B) (6)/1
|— (q, ε, ε) (6)/2

• The corresponding derivation using the grammar:

S => aA (1)
=> aaB (4)
=> aabB (5)
=> aabb (6)

34
• Example #3: Consider the following CFG in GNF.

(1) S –> aABC


(2) A –> a G is in GNF
(3) B –> b
(4) C –> cAB
(5) C –> cC Language?
aab cc* ab
Construct M as:

Q = {q}
Σ = T = {a, b, c}
Г = V = {S, A, B, C}
z=S

(1) δ(q, a, S) = {(q, ABC)} S->aABC (9) δ(q, c, S) = Ø


(2) δ(q, a, A) = {(q, ε)} A->a (10) δ(q, c, A) = Ø
(3) δ(q, a, B) = Ø (11) δ(q, c, B) = Ø
(4) δ(q, a, C) = Ø (12) δ(q, c, C) = {(q, AB), (q, C)} // C->cAB|cC
(5) δ(q, b, S) = Ø (13) δ(q, ε, S) = Ø
(6) δ(q, b, A) = Ø (14) δ(q, ε, A) = Ø
(7) δ(q, b, B) = {(q, ε)} B->b (15) δ(q, ε, B) = Ø
(8) δ(q, b, C) = Ø (16) δ(q, ε, C) = Ø
35
• Notes:
– Recall that the grammar G was required to be in GNF before the construction could be applied.
– As a result, it was assumed at the start that ε was not in the context-free language L.
– What if ε is in L? You need to add ε back.

• Suppose ε is in L:

1) First, let L’ = L – {ε}

Fact: If L is a CFL, then L’ = L – {ε} is a CFL.

By an earlier theorem, there is GNF grammar G such that L’ = L(G).

2) Construct a PDA M such that L’ = LE(M)

How do we modify M to accept ε?

Add δ(q, ε, S) = {(q, ε)}? NO!!

36
• Counter Example:

Consider L = {ε, b, ab, aab, aaab, …} = ε + a*b. Then L' = {b, ab, aab, aaab, …} = a*b

• The GNF CFG for L’:


P:
(1) S –> aS
(2) S –> b

• The PDA M Accepting L’:


Q = {q}
Σ = T = {a, b}
Г = V = {S}
z=S

δ(q, a, S) = {(q, S)}


δ(q, b, S) = {(q, ε)}
δ(q, ε, S) = Ø

How to add ε to L’ now?

37
δ(q, a, S) = {(q, S)}
δ(q, b, S) = {(q, ε)}
δ(q, ε, S) = Ø

• If δ(q, ε, S) = {(q, ε)} is added then:


L(M) = {ε, a, aa, aaa, …, b, ab, aab, aaab, …}, wrong!

It is like, S -> aS | b | ε
which is wrong!
Correct grammar should be:
(0) S1 -> ε | S, with new starting non-terminal S1
(1) S –> aS
(2) S –> b

For PDA, add a new Stack-bottom symbol S1, with new transitions:
δ(q, ε, S1) = {(q, ε), (q, S)}, where S was the previous stack-bottom of M

Alternatively, add a new start state q’ with transitions:


δ(q’, ε, S) = {(q’, ε), (q, S)}
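A quick sketch of the first fix (the new bottom symbol), reusing the accepts_by_empty_stack function from earlier; everything below is illustrative only:

# PDA for L' = a*b extended with a new bottom stack symbol (the slide's S1, written
# here as the single character "Z" because the sketch stores the stack as a string
# of one-character symbols), so that ε is also accepted.
delta_prime = {
    ("q", "a", "S"): {("q", "S")},
    ("q", "b", "S"): {("q", "")},
    ("q", "",  "Z"): {("q", ""), ("q", "S")},   # new rule: accept ε, or start the old machine
}

for s in ["", "b", "aab", "a"]:
    print(repr(s), accepts_by_empty_stack(delta_prime, "q", "Z", s))
# expected: True, True, True, False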
38
• Lemma 1: Let L be a CFL. Then there exists a PDA M such that L = LE(M).

• Lemma 2: Let M be a PDA. Then there exists a CFG grammar G such that LE(M) =
L(G).
– Can you prove it?
– First step would be to transform an arbitrary PDA to a single state PDA!

• Theorem: Let L be a language. Then there exists a CFG G such that L = L(G) iff there
exists a PDA M such that L = LE(M).

• Corollary: The PDAs define the CFLs.

39
Sample CFG to GNF transformation:
• {0^n1^n | n >= 1}
• S -> 0S1 | 01

• GNF (with a new non-terminal S1):
• S -> 0 S S1 | 0 S1
• S1 -> 1

• Note: in the PDA the symbol S will float on top, rather than stay at the bottom!
• Acceptance of the string happens by removing the last S1 at the stack bottom
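To close the loop, the earlier sketches can check this grammar (illustrative only; the slide's non-terminal S1 is renamed T below because the sketch stores one-character stack symbols):

G3 = [("S", "0", "ST"),   # S -> 0 S T   (the slide's S -> 0 S S1)
      ("S", "0", "T"),    # S -> 0 T     (the slide's S -> 0 S1)
      ("T", "1", "")]     # T -> 1       (the slide's S1 -> 1)

delta3 = gnf_to_pda_delta(G3)
for s in ["01", "0011", "000111", "", "001"]:
    print(repr(s), accepts_by_empty_stack(delta3, "q", "S", s))
# expected: True, True, True, False, False

In these runs S keeps floating to the top of the stack while the T's accumulate beneath it, matching the note above.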
40
Ignore this slide
• How about a language like ((())())(), i.e., nested and concatenated balanced parentheses?
M = ({q1, q2}, {“(“, “)”}, {L, #}, δ, q1, #, Ø)

δ:
(1) δ(q1, (, #) = {(q1, L#)}
(2) δ(q1, ), #) = Ø // illegal, string rejected
(3) δ(q1, (, L) = {(q1, LL)}

(4) δ(q1, ), L) = {(q2, ε)}


(5) δ(q2, ), L) = {(q2, ε)}
(6) δ(q2, (, L) = {(q1, LL)} // not balanced yet, but start back anyway
(7) δ(q2, (, #) = {(q1, L#)} // start afresh again

(8) δ(q2, ε, #) = {(q2, ε)} // end of string & stack hits bottom, accept
(9) δ(q1, ε, #) = {(q1, ε)} // special rule for empty string
(10) δ(q1, ε, L) = Ø // illegal, end of string but more L in stack
Total number of transitions? Verify all carefully.

41
