
MODULE-4

[1] Algorithms and Decision Procedures for Context-Free Languages


1.1 The Decidable Questions
Fortunately, the most important questions (i.e., the ones that must be answerable if context-free
grammars are to be of any practical use) are decidable.

1.1.1 Membership
"Given a language L and a string w, is w in L?'
This question can be answered for every context-free language and for every context-free language
L there exists a PDA M such that M accepts L. But existence of a PDA that accepts L does not
guarantee the existence of a procedure that decides it.
It turns out that there are two alternative approaches to solving this problem, both
of which work:

1.1.2 Using a Grammar to Decide


Algorithm for deciding whether a string w is in a language L:
decideCFLusingGrammar(L: CFL, w: string) =
1. If L is specified as a PDA M, use PDAtoCFG to construct a grammar G such that L(G) = L(M).
2. If L is specified as a grammar G, simply use G.
3. If w = ε then if SG (the start symbol of G) is nullable then accept, otherwise reject.
4. If w ≠ ε then:
4.1. From G, construct G' such that L(G') = L(G) − {ε} and G' is in Chomsky normal form.
4.2. If G' derives w, it does so in 2·|w| − 1 steps. Try all derivations in G' of that number of steps. If one of them derives w, accept. Otherwise reject.

Worst-case running time of decideCFLusingGrammar: O(n·2ⁿ), where n = |w|.
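
The derivation-checking step (4.2) can be made concrete. Below is a minimal Python sketch, assuming the Chomsky-normal-form grammar is represented as a dictionary mapping each nonterminal to a list of right-hand sides (tuples of symbols); the names cnf_rules and derives are illustrative, not part of the algorithm text above.

def derives(cnf_rules, start, w):
    # Return True iff the CNF grammar derives the nonempty string w.
    # In CNF a derivation of a string of length n takes exactly 2n - 1 steps,
    # and sentential forms never shrink, so forms longer than |w| can be pruned.
    target = tuple(w)
    frontier = {(start,)}                      # current set of sentential forms
    for _ in range(2 * len(w) - 1):            # at most 2|w| - 1 derivation steps
        next_frontier = set()
        for form in frontier:
            for i, sym in enumerate(form):     # expand the leftmost nonterminal
                if sym in cnf_rules:
                    for rhs in cnf_rules[sym]:
                        new = form[:i] + rhs + form[i + 1:]
                        if len(new) <= len(w):
                            next_frontier.add(new)
                    break
            else:                              # form is all terminals already
                if form == target:
                    return True
        frontier = next_frontier
    return target in frontier

# Example: S -> AB, A -> a, B -> b
rules = {'S': [('A', 'B')], 'A': [('a',)], 'B': [('b',)]}
print(derives(rules, 'S', 'ab'))   # True
print(derives(rules, 'S', 'aa'))   # False

The exponential bound comes from this same search: the number of sentential forms can grow by a constant factor at each of the 2|w| − 1 steps.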


1.1.3 Using a PDA to Decide

1.1.4 Elimination of ε-Transitions


Theorem: Given any context-free grammar G = (V, Σ, R, S), there exists a PDA M such that L(M) = L(G) − {ε} and M contains no transitions of the form ((q1, ε, s1), (q2, s2)). In other words, every transition reads exactly one input character.

Proof: The proof is by a construction that begins by converting G to Greibach normal form. Now consider again the algorithm cfgtoPDAtopdown, which builds, from any context-free grammar G, a PDA M that, on input w, simulates G deriving w, starting from S. M = ({p, q}, Σ, V, Δ, p, {q}), where Δ contains:

1. The start-up transition ((p, ε, ε), (q, S)), which pushes the start symbol onto the stack and goes to state q.
2. For each rule X → s1s2...sn in R, the transition ((q, ε, X), (q, s1s2...sn)), which replaces X by s1s2...sn. If n = 0 (i.e., the right-hand side of the rule is ε), then the transition ((q, ε, X), (q, ε)).
3. For each character c ∈ Σ, the transition ((q, c, c), (q, ε)), which compares an expected character from the stack against the next input character.

If G contains the rule X → cs2...sn (where c ∈ Σ and s2 through sn are elements of V − Σ), it is not necessary to push c onto the stack only to pop it with a rule from step 3.

Instead, we collapse the push and the pop into a single transition: we create a transition that can be taken only if the next input character is c, and in that case the string s2...sn is pushed onto the stack.

Since terminal symbols are no longer pushed onto the stack, we no longer need the transitions created in step 3 of the original algorithm.
So M = ({p, q}, Σ, V, Δ, p, {q}), where Δ contains:

1. The start-up transitions: for each rule S → cs2...sn, the transition ((p, c, ε), (q, s2...sn)).
2. For each rule X → cs2...sn (where c ∈ Σ and s2 through sn are elements of V − Σ), the transition ((q, c, X), (q, s2...sn)).

cfgtoPDAnoeps(G: context-free grammar) =
1. Convert G to Greibach normal form, producing G'.
2. From G', build the PDA M described above.
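
As a concrete illustration, here is a minimal Python sketch of the construction, assuming the Greibach-normal-form grammar is given as a dictionary mapping each nonterminal to a list of right-hand sides whose first symbol is a terminal; the names gnf_rules and build_pda are illustrative, not from the text.

def build_pda(gnf_rules, start):
    # Build the transitions of the two-state PDA M = ({p, q}, Sigma, V, Delta, p, {q}).
    startup = []   # ((p, c, eps), (q, s2...sn)) for each rule S -> c s2...sn
    main = []      # ((q, c, X),  (q, s2...sn)) for each rule X -> c s2...sn
    for lhs, rhss in gnf_rules.items():
        for rhs in rhss:
            c, rest = rhs[0], tuple(rhs[1:])
            main.append((('q', c, lhs), ('q', rest)))
            if lhs == start:
                startup.append((('p', c, ''), ('q', rest)))
    return startup, main

# Example: GNF grammar for {a^n b^n : n >= 1}:  S -> aSB | aB,  B -> b
gnf = {'S': [('a', 'S', 'B'), ('a', 'B')], 'B': [('b',)]}
startup, main = build_pda(gnf, 'S')
for t in startup + main:
    print(t)

Every transition built this way reads exactly one input character, which is what the theorem requires.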

Halting Behavior of PDAs Without ε-Transitions

Theorem: Let M be a PDA that contains no transitions of the form ((q1, ε, s1), (q2, s2)), i.e., no ε-transitions. Consider the operation of M on input w ∈ Σ*. M must halt and either accept or reject w. Let n = |w|.
We make three additional claims:
a) Each individual computation of M must halt within n steps.
b) The total number of computations pursued by M must be less than or equal to bⁿ, where b is the maximum number of competing transitions from any state in M.
c) The total number of steps that will be executed by all computations of M is bounded by n·bⁿ.

Proof:
a) Since each computation of M must consume one character of w at each step and M will halt when it runs out of input, each computation must halt within n steps.
b) M may split into at most b branches at each step in a computation. The number of steps in a computation is less than or equal to n. So the total number of computations must be less than or equal to bⁿ.
c) Since the maximum number of computations is bⁿ and the maximum length of each is n, the maximum number of steps that can be executed before all computations of M halt is n·bⁿ.
So a second way to answer the question, "Given a context-free language L and a
string w, is w in L?" is to execute the following algorithm:

decideCFLusingPDA(L: CFL, w: string) =
1. If L is specified as a PDA M, use PDAtoCFG to construct a grammar G such that L(G) = L(M).
2. If L is specified as a grammar G, simply use G.
3. If w = ε then if SG (the start symbol of G) is nullable then accept, otherwise reject.
4. If w ≠ ε then:
4.1. From G, construct G' such that L(G') = L(G) − {ε} and G' is in Greibach normal form.
4.2. From G', construct, using cfgtoPDAnoeps, a PDA M' such that L(M') = L(G') and M' has no ε-transitions.
4.3. We have proved above that all paths of M' are guaranteed to halt within a finite number of steps. So run M' on w; accept if M' accepts and reject otherwise.
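
Step 4.3 amounts to running the ε-free PDA on w while tracking every competing branch. The following minimal Python sketch does this; it reuses the transition format of the build_pda sketch in section 1.1.4, and the configuration representation (state, input position, stack) is an assumption of this sketch, with acceptance taken to be "in state q with an empty stack after reading all of w".

def accepts(startup, main, accept_state, w):
    # A configuration is (state, next input position, stack as a tuple, top first).
    configs = {('p', 0, ())}
    for _ in range(len(w) + 1):
        next_configs = set()
        for state, i, stack in configs:
            if i == len(w):                         # input consumed: check acceptance
                if state == accept_state and not stack:
                    return True
                continue
            c = w[i]
            for (st, ch, pop), (st2, push) in startup + main:
                if st != state or ch != c:
                    continue
                if pop == '':                       # start-up move: pop nothing
                    next_configs.add((st2, i + 1, tuple(push) + stack))
                elif stack and stack[0] == pop:     # pop the matching top symbol
                    next_configs.add((st2, i + 1, tuple(push) + stack[1:]))
        configs = next_configs
    return False

# Using the {a^n b^n : n >= 1} PDA from the build_pda example:
startup = [(('p', 'a', ''), ('q', ('S', 'B'))), (('p', 'a', ''), ('q', ('B',)))]
main = [(('q', 'a', 'S'), ('q', ('S', 'B'))), (('q', 'a', 'S'), ('q', ('B',))),
        (('q', 'b', 'B'), ('q', ()))]
print(accepts(startup, main, 'q', 'aabb'))   # True
print(accepts(startup, main, 'q', 'abb'))    # False

Because every transition consumes one input character, the outer loop runs at most |w| + 1 times, which is exactly the halting guarantee proved above.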

1.1.5 Emptiness and Finiteness


Decidability of Emptiness and Finiteness
Theorem: Given a context-free language L, there exists a decision procedure that answers each of the following questions:

1. Given a context-free language L, is L = ∅?

2. Given a context-free language L, is L infinite?
Since we have proven that there exists a grammar that generates L iff there exists a PDA that accepts it, these questions have the same answers whether we ask them about grammars or about PDAs.

Proof :
decideCFLempty(G: context-free grammar) =
1. Let G' = removeunproductive(G).
2. If S is not present in G' then return True else return False.
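
A minimal Python sketch of this procedure, assuming the grammar is a dictionary mapping each nonterminal to a list of right-hand sides (tuples of symbols); removeunproductive is realized here as a fixed-point computation of the productive nonterminals, and the names are illustrative.

def decide_cfl_empty(rules, start):
    # A nonterminal is productive iff some rule rewrites it into a string of
    # terminals and already-productive nonterminals (this is removeunproductive).
    productive = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhss in rules.items():
            if lhs in productive:
                continue
            for rhs in rhss:
                if all(sym not in rules or sym in productive for sym in rhs):
                    productive.add(lhs)
                    changed = True
                    break
    return start not in productive          # L(G) is empty iff S is unproductive

# Example: S -> AB, A -> a, and B has no rules, so S is unproductive and L(G) is empty.
print(decide_cfl_empty({'S': [('A', 'B')], 'A': [('a',)], 'B': []}, 'S'))   # True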

decideCFLinfinite(G: context-free grammar) =
1. Lexicographically enumerate all strings in Σ* of length greater than bⁿ and less than or equal to bⁿ⁺¹ + bⁿ, where n is the number of nonterminals in G and b is its branching factor (the length of the longest right-hand side of any rule).
2. If, for any such string w, decideCFL(L, w) returns True, then return True: L is infinite.
3. If, for all such strings w, decideCFL(L, w) returns False, then return False: L is not infinite.
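
A minimal Python sketch of this enumeration, assuming the same dictionary representation of the grammar and any membership decider with the signature decide_membership(rules, start, w) (for instance the derives sketch in section 1.1.2, if the grammar is in CNF); the way b and n are computed here follows the definitions above and is an assumption of the sketch.

from itertools import product

def decide_cfl_infinite(rules, start, sigma, decide_membership):
    n = len(rules)                                                   # number of nonterminals
    b = max(len(rhs) for rhss in rules.values() for rhs in rhss)     # branching factor
    lo, hi = b ** n, b ** (n + 1) + b ** n
    # If L(G) contains any string in this length window, that string can be
    # pumped, so L(G) is infinite; if it contains none, L(G) is finite.
    for length in range(lo + 1, hi + 1):
        for chars in product(sigma, repeat=length):
            if decide_membership(rules, start, ''.join(chars)):
                return True
    return False

Like the deciders above, this is a decision procedure rather than an efficient algorithm: the number of candidate strings is exponential in the size of the grammar.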

1.2 The Undecidable Questions

[2] TURING MACHINE

The Turing machine provides an ideal theoretical model of a computer. Turing machines are useful in several ways:
• Turing machines are used for determining the undecidability of certain languages.
• As an automaton, the Turing machine is the most general model: it accepts type-0 languages.
• It can also be used for computing functions; it turns out to be a mathematical model of partial recursive functions.
• It is used for measuring the space and time complexity of problems.

2.1 Turing machine model

Turing assumed that, while computing, a person writes symbols on a one-dimensional paper (instead of a two-dimensional paper, as is usually done), which can be viewed as a tape divided into cells.

In a Turing machine, one scans the cells one at a time and usually performs one of three simple operations, namely:

(i) Writing a new symbol in the cell being currently scanned,


(ii) Moving to the cell left of the present cell, and
(iii) Moving to the cell right of the present cell.

• Each cell can store only one symbol.

• The input to and the output from the finite state automaton are effected by the R/W head, which can examine one cell at a time.

In one move, the machine examines the present symbol under the R/W head on the tape and the present state of the automaton to determine:
(i) a new symbol to be written on the tape in the cell under the R/W head,
(ii) a motion of the R/W head along the tape: either the head moves one cell left (L) or one cell right (R),
(iii) the next state of the automaton, and
(iv) whether to halt or not.
Definition:
A Turing machine M is a 7-tuple, namely (Q, Σ, Γ, δ, q0, b, F), where
1. Q is a finite nonempty set of states,
2. Γ is a finite nonempty set of tape symbols,
3. b ∈ Γ is the blank,
4. Σ is a nonempty set of input symbols; it is a subset of Γ and b ∉ Σ,
5. δ is the transition function, mapping (q, x) in Q × Γ onto (q', y, D), where D denotes the direction of movement of the R/W head: D = L or R according as the movement is to the left or right,
6. q0 ∈ Q is the initial state, and
7. F ⊆ Q is the set of final states.
Notes:
(1) The acceptability of a string is decided by the reachability from the initial state to some final state.
(2) δ may not be defined for some elements of Q × Γ.
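
To make the 7-tuple concrete, here is a minimal Python sketch of a deterministic Turing machine simulator; δ is represented as a dictionary from (state, symbol) to (new state, written symbol, direction). The step limit is only a guard for the sketch, since a Turing machine need not halt in general, and the names and the toy example machine are illustrative, not taken from the text.

def run_tm(delta, q0, blank, finals, w, max_steps=10_000):
    tape = dict(enumerate(w))                  # sparse tape; unwritten cells hold the blank
    state, head = q0, 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:       # delta undefined here: M halts
            break
        state, written, direction = delta[(state, symbol)]
        tape[head] = written
        head += 1 if direction == 'R' else -1  # D = R moves right, D = L moves left
    return state in finals                     # accepted if M halted in a final state

# Toy example: scan right over 1's and accept on reaching the blank.
delta = {('q0', '1'): ('q0', '1', 'R'), ('q0', 'b'): ('qf', 'b', 'R')}
print(run_tm(delta, 'q0', 'b', {'qf'}, '111'))   # True
print(run_tm(delta, 'q0', 'b', {'qf'}, '101'))   # False: delta is undefined on 0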

2.2 REPRESENTATION OF TURING MACHINES

We can describe a Turing machine employing


(i) Instantaneous descriptions using move-relations.
(ii) Transition table, and
(iii) Transition diagram (Transition graph).

2.2.1 REPRESENTATION BY INSTANTANEOUS DESCRIPTIONS

Definition: An ID of a Turing machine M is a string αβγ, where β is the present state of M, the entire input string is split as αγ, the first symbol of γ is the current symbol a under the R/W head, γ has all the subsequent symbols of the input string, and the string α is the substring of the input string formed by all the symbols to the left of a.

EXAMPLE: A snapshot of a Turing machine is shown in the figure below. Obtain the instantaneous description.

Notes: (1) For constructing the ID, we simply insert the current state in the
input string to the left of the symbol under the R/W head.
(2) We observe that the blank symbol may occur as part of the left or right
substring.
2.2.2 REPRESENTATION BY TRANSITION TABLE
We give the definition of δ in the form of a table called the transition table.
If δ(q, a) = (γ, α, β), we write αβγ under the a-column and in the q-row. So if we get αβγ in the table, it means that α is written in the current cell, β gives the movement of the head (L or R) and γ denotes the new state into which the Turing machine enters.
EXAMPLE:
Consider, for example, a Turing machine with five states q1, ..., q5, where q1 is the initial state and q5 is the (only) final state. The tape symbols are 0, 1 and b. The transition table given below describes δ:

2.2.3 REPRESENTATION BY TRANSITION DIAGRAM (TD)


The states are represented by vertices. Directed edges are used to represent transitions between states. The labels are triples of the form (α, β, γ), where α, β ∈ Γ and γ ∈ {L, R}. When there is a directed edge from qi to qj with label (α, β, γ), it means that δ(qi, α) = (qj, β, γ).
EXAMPLE: TRANSITION DIAGRAM
2.3 LANGUAGE ACCEPTABILITY BY TURING MACHINES
Let us consider the Turing machine M = (Q, Σ, Γ, δ, q0, b, F). A string w in Σ* is said to be accepted by M if q0w ⊢* α1 p α2 for some p ∈ F and α1, α2 ∈ Γ*.
EXAMPLE: Consider the Turing machine M described by the table below

IDs for the strings (a) 011, (b) 0011, (c) 001:

(a)
q1011 ⊢ xq211 ⊢ q3xy1 ⊢ xq5y1 ⊢ xyq51
As δ(q5, 1) is not defined, M halts; so the input string 011 is not accepted.

(b)
q10011 ⊢ xq2011 ⊢ x0q211 ⊢ xq30y1 ⊢ q4x0y1 ⊢ xq10y1 ⊢ xxq2y1 ⊢ xxyq21 ⊢ xxq3yy ⊢ xq3xyy ⊢ xxq5yy ⊢ xxyq5y ⊢ xxyyq5b ⊢ xxyybq6
M halts. As q6 is an accepting state, the input string 0011 is accepted by M.

(c)
M halts. As q2 is not an accepting state, 001 is not accepted by M.

2.4 DESIGN OF TURING MACHINES

Basic guidelines for designing a Turing machine:


1. The fundamental objective in scanning a symbol by the R/W head is to know
what to do in the future. The machine must remember the past symbols scanned. The
Turing machine can remember this by going to the next unique state.

2. The number of states must be minimized. This can be achieved by changing the
states only when there is a change in the written symbol or when there is a change
in the movement of the R/W head.

REFER EXAMPLES SOLVED IN CLASS


2.5 TECHNIQUES FOR TM CONSTRUCTION

1. TURING MACHINE WITH STATIONARY HEAD

Suppose we want to include the option that the head can continue to be in the same cell for some input symbol. Then we define δ(q, a) as (q', y, S). This means that the TM, on reading the input symbol a, changes the state to q', writes y in the current cell in place of a, and continues to remain in the same cell. In this model δ(q, a) = (q', y, D), where D = L, R or S.

2. STORAGE IN THE STATE

We can use a state to store a symbol as well. The state then becomes a pair (q, a), where q is the state and a is the tape symbol stored in it. So the new set of states becomes Q × Γ.
EXAMPLE: Construct a TM that accepts the language 0 1* + 1 0*.
We have to construct a TM that remembers the first symbol and checks that it does not appear afterwards in the input string.
So we require two states, q0 and q1. The tape symbols are 0, 1 and b. So the TM, having the 'storage facility in state', is M = ({q0, q1} × {0, 1, b}, {0, 1}, {0, 1, b}, δ, [q0, b], [q1, b]). A concrete transition function is sketched below.
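
The text does not spell out δ for this machine, so the following Python sketch gives one plausible transition function, with the stored symbol packed into the state as a pair; it reuses the run_tm sketch from section 2.1, and the concrete moves are an assumption made for illustration.

# States are pairs (q, a): q in {q0, q1}, a the remembered symbol (b = nothing stored yet).
delta = {
    # read and remember the first symbol, then move right
    (('q0', 'b'), '0'): (('q1', '0'), '0', 'R'),
    (('q0', 'b'), '1'): (('q1', '1'), '1', 'R'),
    # keep moving right only over the *other* symbol
    (('q1', '0'), '1'): (('q1', '0'), '1', 'R'),
    (('q1', '1'), '0'): (('q1', '1'), '0', 'R'),
    # reaching the blank means the remembered symbol never reappeared: accept
    (('q1', '0'), 'b'): (('q1', 'b'), 'b', 'R'),
    (('q1', '1'), 'b'): (('q1', 'b'), 'b', 'R'),
}
finals = {('q1', 'b')}
print(run_tm(delta, ('q0', 'b'), 'b', finals, '011111'))   # True: string is in 0 1*
print(run_tm(delta, ('q0', 'b'), 'b', finals, '0110'))     # False: the 0 reappears

If the remembered symbol is read again, δ is undefined, so the machine halts in a non-final state and rejects.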

3. MULTIPLE TRACK TURING MACHINE

In a multiple track TM, a single tape is assumed to be divided into several tracks. Now the tape alphabet is required to consist of k-tuples of tape symbols, k being the number of tracks. In the case of the standard Turing machine, tape symbols are elements of Γ; in the case of a TM with multiple tracks, they are elements of Γᵏ.

4. SUBROUTINES

First a TM program for the subroutine is written. This will have an initial state and a 'return' state. After reaching the return state, there is a temporary halt. For using a subroutine, new states are introduced. When there is a need for calling the subroutine, moves are effected to enter the initial state of the subroutine. When the return state of the subroutine is reached, control returns to the main program of the TM.
2.6 VARIANTS OF TURING MACHINES
The Turing machine we have introduced has a single tape; δ(q, a) is either a single triple (p, y, D), where D = R or L, or is not defined.

We introduce two new models of TM:

(i) TM with more than one tape, called a multitape TM, and
(ii) TM where δ(q, a) = {(p1, y1, D1), (p2, y2, D2), ..., (pr, yr, Dr)}, called a nondeterministic TM.

i. Multitape TM:
A multitape TM has: a finite set Q of states, an initial state q0, a subset F of Q called the set of final states, a set Γ of tape symbols, and a new symbol b, not in Γ, called the blank symbol.

There are k tapes, each divided into cells. The first tape holds the input string w. Initially, all the other tapes hold the blank symbol.
Initially the head of the first tape (input tape) is at the left end of the input w. All the other heads can be placed at any cell initially.
δ is a partial function from Q × Γᵏ into Q × Γᵏ × {L, R, S}ᵏ.
A move depends on the current state and the k tape symbols under the k tape heads. In a typical move:
(i) M enters a new state.
(ii) On each tape, a new symbol is written in the cell under the head.
(iii) Each tape head moves to the left or right or remains stationary.

The heads move independently: some move to the left, some to the right, and the remaining heads do not move.
The initial ID has the initial state q0, the input string w in the first tape (input tape), and strings of b's (blanks) in the remaining k − 1 tapes. An accepting ID has a final state and some strings in each of the k tapes.

Theorem 9.1: Every language accepted by a multitape TM is acceptable by some single-tape TM (that is, the standard TM).
Proof: Suppose a language L is accepted by a k-tape TM M. We simulate M with a single-tape TM with 2k tracks. The second, fourth, ..., (2k)th tracks hold the contents of the k tapes. The first, third, ..., (2k − 1)th tracks hold a head marker (a symbol, say X) to indicate the position of the respective tape head. We give an 'implementation description' of the simulation of M with a single-tape TM M1. We give it for the case k = 2; the construction can be extended to the general case. Figure 9.9 can be used to visualize the simulation. The symbols A2 and B5 are the current symbols to be scanned, and so the head marker X is above these two symbols.

Initially the contents of tapes 1 and 2 of M are stored in the second and fourth tracks of M1. The head markers of the first and third tracks are at the cells containing the first symbol.

To simulate a move of M, the 2k-track TM M1 has to visit the two head markers and store the scanned symbols in its control. Keeping track of the head markers visited and those to be visited is achieved by keeping a count and storing it in the finite control of M1. Note that the finite control of M1 also has the information about the states of M and its moves. After visiting both head markers, M1 knows the tape symbols being scanned by the two heads of M.

M1 then revisits each of the head markers:

(i) It changes the tape symbol in the corresponding track of M1, based on the information regarding the move of M corresponding to the state (of M) and the tape symbol in the corresponding tape of M.
(ii) It moves the head markers to the left or right.
(iii) M1 changes the state of M in its control.
This is the simulation of a single move of M. M1 is then ready to simulate the next move.

M1 accepts a string w if the new state of M, as recorded in its control at the end of the processing of w, is a final state of M.

Theorem 9.2: If M1 is the single-tape TM simulating the multitape TM M, then the time taken by M1 to simulate n moves of M is O(n²).
ii. Nondeterministic TM:

Theorem: If M is a nondeterministic TM, there is a deterministic TM M1 such that T(M) = T(M1).
Proof: We construct M1 as a multitape TM. Each symbol in the input string leads to a change in ID. M1 should be able to reach all IDs and stop when an ID containing a final state is reached. So the first tape is used to store IDs of M as a sequence, together with the state of M. These IDs are separated by the symbol * (included as a tape symbol).
The current ID is known by marking an x along with the ID separator * (the symbol * marked with x is a new tape symbol).
All IDs to the left of the current one have been explored already and so can be ignored subsequently. Note that the current ID is decided by the current input symbol of w.

Figure 9.10 illustrates the deterministic TM M1.

1. M1 examines the state and the scanned symbol of the current ID. Using the knowledge of the moves of M stored in the finite control of M1, it checks whether the state in the current ID is an accepting state of M. In this case M1 accepts and stops simulating M.
2. If the state q, say, in the current ID xqay is not an accepting state of M and δ(q, a) has k triples, M1 copies the ID xqay to the second tape and makes k copies of this ID at the end of the sequence of IDs in tape 2.
3. M1 modifies these k IDs in tape 2 according to the k choices given by δ(q, a).
4. M1 returns to the marked current ID, erases the mark x and marks the next ID separator * with x. Then M1 goes back to step 1.
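
The construction above stores and expands IDs on a second tape; the same idea, carried out deterministically, is a breadth-first search over the IDs of M. Below is a minimal Python sketch of that search (of the idea, not of the tape layout itself); ndelta maps (state, symbol) to a list of (new state, written symbol, direction) triples, the ID bound is only a guard for the sketch, and the names and toy machine are illustrative.

from collections import deque

def ntm_accepts(ndelta, q0, blank, finals, w, max_ids=100_000):
    start = (q0, 0, w or blank)                # an ID: (state, head position, tape contents)
    queue, seen = deque([start]), {start}
    while queue and len(seen) < max_ids:       # guard: a nondeterministic TM need not halt
        state, head, tape = queue.popleft()
        if state in finals:                    # an ID with a final state: accept
            return True
        if head == len(tape):
            tape += blank                      # grow the tape with a blank on demand
        symbol = tape[head]
        for new_state, written, direction in ndelta.get((state, symbol), []):
            new_tape = tape[:head] + written + tape[head + 1:]
            new_head = head + (1 if direction == 'R' else -1)
            if new_head < 0:                   # fell off the left end: this branch dies
                continue
            new_id = (new_state, new_head, new_tape)
            if new_id not in seen:
                seen.add(new_id)
                queue.append(new_id)
    return False

# Toy example: on reading 1, nondeterministically either keep scanning or accept.
ndelta = {('q0', '1'): [('q0', '1', 'R'), ('qf', '1', 'R')]}
print(ntm_accepts(ndelta, 'q0', 'b', {'qf'}, '111'))   # True
print(ntm_accepts(ndelta, 'q0', 'b', {'qf'}, '0'))     # False

Breadth-first order matters here: it guarantees that if any sequence of choices leads to acceptance, it will eventually be found, which is exactly the property the ID-copying construction provides.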
2.7 LINEAR BOUNDED AUTOMATON
This model is important because:
(a) the set of context-sensitive languages is accepted by the model.
(b) the infinite storage is restricted in size but not in accessibility to the storage in comparison with the
Turing machine model. It is called the linear bounded automaton (LBA) because a linear function
is used to restrict (to bound) the length of the tape.
A linear bounded automaton is a nondeterministic Turing machine which has a single tape
whose length is not infinite but bounded by a linear function of the length of the input string.
The model can be described formally by the following set format:
M = (Q, Σ, Γ, δ, q0, b, ¢, $, F)

All the symbols have the same meaning as in the basic model of Turing machines, with the difference that the input alphabet Σ contains two special symbols ¢ and $. ¢ is called the left-end marker, which is entered in the leftmost cell of the input tape and prevents the R/W head from getting off the left end of the tape. $ is called the right-end marker, which is entered in the rightmost cell of the input tape and prevents the R/W head from getting off the right end of the tape. Neither end marker should appear in any other cell of the input tape, and the R/W head should not print any other symbol over either end marker.

Let us consider the input string w with |w|= n-2. The input string w can be recognized by an
LBA if it can also be recognized by a Turing machine using no more than kn cells of input
tape, where k is a constant specified in the description of LBA. The value of k does not
depend on the input string but is purely a property of the machine. Whenever we process
any string in LBA, we shall assume that the input string is enclosed within the end markers
¢ and $.

The above model of LBA can be represented by the block diagram of Fig. 9.11.
There are two tapes: one is called the input tape, and the other, working tape. On the input
tape the head never prints and never moves to the left. On the working tape the head can
modify the contents in any way, without any restriction.

In the case of the LBA, an ID is denoted by (q, w, k), where q ∈ Q, w ∈ Γ* and k is some integer between 1 and n. The transition of IDs is similar, except that k changes to k − 1 if the R/W head moves to the left and to k + 1 if the head moves to the right.

Relation between LBA and context-sensitive languages:


The set of strings accepted by a nondeterministic LBA is the set of strings generated by the context-sensitive grammars, excluding the null string. We now state an important result:
If L is a context-sensitive language, then L is accepted by a linear bounded automaton. The converse is also true.

QUESTIONS

1. Explain multitape and non-deterministic Turing machine.


2. Define Turing Machine. Explain working of Turing machine.
3. Explain any two techniques for TM construction.
4. Demonstrate the model of Linear Bounded Automata (LBA) with a neat diagram.
5. Explain Language acceptability and design of Turing machine.
Problems on Turing Machine
