FLAT
A finite state machine has a set of states and two functions called the next-state
function and the output function.
The set of states corresponds to all the possible combinations of the internal
storage. If there are n bits of storage, there are 2^n possible states.
The next state function is a combinational logic function that, given the
inputs and the current state, determines the next state of the system.
The diagram given below explains the functioning of a finite state machine in
TOC.
The output function generates a set of outputs from the current state and the inputs.
Types
There are two types of finite state machines, the Mealy machine and the Moore
machine; we mostly deal with the Moore machine. These two types are equivalent in
capabilities.
The components which exist in a finite state machine are explained below −
State − The states are usually drawn with circles and only one state can be active
at a time.
It is represented as follows −
Initial State − It is the starting point of our system. Initial states are usually drawn
with an arrow pointing to the state, as shown below −
Final state − It is a subset of known states that indicates whether the input we
processed is valid or not. Accepting states are usually drawn as a double circle as
shown below −
Transitions − The machine moves from one state to another; this movement is
indicated as a transition. Transitions are drawn as two states connected with a line, as
shown below −
Finite Automaton can be classified into two types −
In DFA, for each input symbol, one can determine the state to which the machine
will move. Hence, it is called Deterministic Automaton. As it has a finite number
of states, the machine is called Deterministic Finite Machine or Deterministic
Finite Automaton.
Example
Q = {a, b, c},
∑ = {0, 1},
q0 = {a},
F = {c}, and
Present State   Next State for Input 0   Next State for Input 1
a   a   b
b   c   a
c   b   c
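To make the example concrete, here is a minimal Python sketch (not part of the
original notes) that encodes the transition table above and checks whether an input
string is accepted; the state and symbol names mirror the table.

# Transition table of the example DFA: delta[state][symbol] -> next state.
delta = {
    'a': {'0': 'a', '1': 'b'},
    'b': {'0': 'c', '1': 'a'},
    'c': {'0': 'b', '1': 'c'},
}
start, finals = 'a', {'c'}

def dfa_accepts(w):
    """Run the DFA on string w and report whether it ends in a final state."""
    state = start
    for symbol in w:
        state = delta[state][symbol]   # deterministic: exactly one next state
    return state in finals

print(dfa_accepts("10"))   # a -1-> b -0-> c : accepted (True)
print(dfa_accepts("11"))   # a -1-> b -1-> a : rejected (False)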
In NDFA, for a particular input symbol, the machine can move to any combination
of the states in the machine. In other words, the exact state to which the machine
moves cannot be determined. Hence, it is called Non-deterministic Automaton.
As it has a finite number of states, the machine is called Non-deterministic Finite
Machine or Non-deterministic Finite Automaton.
(Here the power set of Q (2^Q) has been taken because, in the case of an NDFA, a
transition from a state can go to any combination of Q states.)
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
Example
Q = {a, b, c}
∑ = {0, 1}
q0 = {a}
F = {c}
Present State Next State for Input 0 Next State for Input 1
a a, b b
b c a, c
c b, c c
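As a sketch of how this non-determinism is handled in practice, the following Python
snippet (illustrative only, not part of the original notes) tracks the set of all states the
NDFA above could be in and accepts if any final state is reachable.

# NDFA transition table: delta[state][symbol] -> set of possible next states.
delta = {
    'a': {'0': {'a', 'b'}, '1': {'b'}},
    'b': {'0': {'c'},      '1': {'a', 'c'}},
    'c': {'0': {'b', 'c'}, '1': {'c'}},
}
start, finals = 'a', {'c'}

def ndfa_accepts(w):
    """Track every state the NDFA could occupy after reading each symbol."""
    current = {start}
    for symbol in w:
        nxt = set()
        for q in current:
            nxt |= delta[q].get(symbol, set())
        current = nxt
    return bool(current & finals)

print(ndfa_accepts("00"))  # {a} -0-> {a,b} -0-> {a,b,c} : accepted (c is reachable)
print(ndfa_accepts("1"))   # {a} -1-> {b} : rejected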
The following table lists the differences between DFA and NDFA.
DFA   NDFA
For each input symbol, the machine moves to exactly one determined next state.   For a particular input symbol, the machine can move to any combination of states, so the next state is not determined uniquely.
The transition function maps a state and an input symbol to a single state.   The transition function maps a state and an input symbol to a subset of states (an element of the power set 2^Q).
The automaton may be allowed to change its state without reading the input symbol.
In diagrams, such transitions are depicted by labeling the appropriate arcs with ε.
Note that this does not mean that ε has become an input symbol. On the contrary,
we assume that the symbol ε does not belong to any alphabet.
ε-NFAs add a convenient feature but (in a sense) they bring us nothing new.
They do not extend the class of languages that can be represented.
Both NFAs and ε-NFAs recognize exactly the same languages.
Epsilon (ε) - closure
The epsilon closure of a given state X is the set of states which can be reached from
state X with only ε (null) moves, including the state X itself.
In other words, the ε-closure of a state can be obtained by taking the union of the
ε-closures of the states which can be reached from X with a single ε move, in a
recursive manner.
Example
State 0 1 epsilon
A B,C A B
B - B C
C C C -
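A small illustrative Python sketch of the recursive idea above, using only the ε column
of the example table; it computes ε-closure(A) = {A, B, C}, ε-closure(B) = {B, C} and
ε-closure(C) = {C}.

# Epsilon transitions taken from the table above.
eps = {'A': {'B'}, 'B': {'C'}, 'C': set()}

def epsilon_closure(state):
    """All states reachable from `state` using only epsilon moves, including itself."""
    closure, stack = {state}, [state]
    while stack:
        for nxt in eps[stack.pop()]:
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

print(epsilon_closure('A'))  # {'A', 'B', 'C'}
print(epsilon_closure('B'))  # {'B', 'C'}
print(epsilon_closure('C'))  # {'C'}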
Some RE Examples
Regular Expression   Regular Set
(a+b)*   Set of strings of a's and b's of any length including the null string. So L = { ε, a, b, aa, ab, bb, ba, aaa, ....... }
(a+b)*abb   Set of strings of a's and b's ending with the string abb. So L = { abb, aabb, babb, aaabb, ababb, ........ }
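As a quick illustration (not part of the notes), the second regular set can be checked
with Python's re module; note that the union written '+' in the notes is written '|' in
Python regular expressions.

import re

# (a+b)*abb from the table above, in Python's re syntax.
ends_with_abb = re.compile(r"(a|b)*abb")

for s in ["abb", "aabb", "babb", "ab", "ba"]:
    print(s, bool(ends_with_abb.fullmatch(s)))
# abb, aabb and babb match; ab and ba do not.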
Finite automata may have outputs corresponding to each transition. There are two
types of finite state machines that generate output −
Mealy Machine
Moore machine
Mealy Machine
A Mealy Machine is an FSM whose output depends on the present state as well as
the present input.
Present State   Next State for Input 0 (State, Output)   Next State for Input 1 (State, Output)
b   b, x2   d, x3
c   d, x3   c, x1
d   d, x3   d, x2
Moore Machine
Moore machine is an FSM whose outputs depend on only the present state.
Present State   Next State for Input 0   Next State for Input 1   Output
→a   b   c   x2
b   b   d   x1
c   c   d   x2
d   d   d   x3
The following table highlights the points that differentiate a Mealy Machine from a
Moore Machine.
Mealy Machine   Moore Machine
Output depends both upon the present state and the present input.   Output depends only upon the present state.
Generally, it has fewer states than a Moore Machine.   Generally, it has more states than a Mealy Machine.
The value of the output function is a function of the transitions and the changes, when the input logic on the present state is done.   The value of the output function is a function of the current state and the changes at the clock edges, whenever state changes occur.
Moore Machine to Mealy Machine
Algorithm 4
Step 1 − Take a blank Mealy Machine transition table format.
Step 2 − Copy all the Moore Machine transition states into this table format.
Step 3 − Check the present states and their corresponding outputs in the Moore
Machine state table; if for a state Qi output is m, copy it into the output columns of
the Mealy Machine state table wherever Qi appears in the next state.
Example
Present State   Next State for a=0   Next State for a=1   Output
→a   d   b   1
b   a   d   0
c   c   c   0
d   b   a   1
Step 1 & 2 −
Present State   Next State for a=0   Next State for a=1
→a   d   b
b   a   d
c   c   c
d   b   a
Step 3 −
Present State   Next State for a=0 (State, Output)   Next State for a=1 (State, Output)
→a   d, 1   b, 0
b   a, 1   d, 1
c   c, 0   c, 0
d   b, 0   a, 1
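The conversion performed in Step 3 can be sketched in Python as follows (an
illustrative sketch, not part of the original notes): the output attached to a Mealy
transition is simply the Moore output of the state that the transition enters.

# Moore machine of the example: state -> (next state on a=0, next state on a=1, output).
moore = {
    'a': ('d', 'b', 1),
    'b': ('a', 'd', 0),
    'c': ('c', 'c', 0),
    'd': ('b', 'a', 1),
}

def moore_to_mealy(moore):
    """Step 3: copy each next state and attach to it the Moore output of that state."""
    mealy = {}
    for state, (next0, next1, _) in moore.items():
        mealy[state] = {
            0: (next0, moore[next0][2]),
            1: (next1, moore[next1][2]),
        }
    return mealy

for state, row in moore_to_mealy(moore).items():
    print(state, row)
# a {0: ('d', 1), 1: ('b', 0)} ... which matches the Step 3 table above.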
Mealy Machine to Moore Machine
Algorithm 5
Step 1 − Calculate the number of different outputs for each state (Qi) that are
available in the state table of the Mealy machine.
Step 2 − If all the outputs of Qi are the same, copy state Qi. If it has n distinct outputs,
break Qi into n states Qi0, Qi1, ..., one for each distinct output.
Step 3 − If the output of the initial state is 1, insert a new initial state at the
beginning which gives 0 output.
Example
Present State   Next State for a=0 (State, Output)   Next State for a=1 (State, Output)
→a   d, 0   b, 1
b   a, 1   d, 0
c   c, 1   c, 0
d   b, 0   a, 1
Here, states ‘a’ and ‘d’ give only 1 and 0 outputs respectively, so we retain states
‘a’ and ‘d’. But states ‘b’ and ‘c’ produce different outputs (1 and 0). So, we divide
b into b0, b1 and c into c0, c1.
Present State   Next State for a=0   Next State for a=1   Output
→a   d   b1   1
b0   a   d   0
b1   a   d   1
c0   c1   c0   0
c1   c1   c0   1
d   b0   a   0
2DFAs were introduced in a seminal 1959 paper by Rabin and Scott,[1] who proved
them to have equivalent power to one-way DFAs. That is, any formal language
which can be recognized by a 2DFA can be recognized by a DFA which only
examines and consumes each character in order. Since DFAs are obviously a
special case of 2DFAs, this implies that both kinds of machines recognize precisely
the class of regular languages. However, the equivalent DFA for a 2DFA may
require exponentially many states, making 2DFAs a much more practical
representation for algorithms for some common problems.
2DFAs are also equivalent to read-only Turing machines that use only a constant
amount of space on their work tape, since any constant amount of information can
be incorporated into the finite control state via a product construction (a state for
each combination of work tape state and control state).
Formal description
Automaton is nothing but a machine which accepts the strings of a language L over
an input alphabet Σ.
There are four different types of automata that are mostly used in the theory of
computation (TOC). These are as follows −
Finite-state machine (FSM)
Pushdown automaton (PDA)
Linear-bounded automaton (LBA)
Turing machine (TM)
When comparing these four types of automata, finite-state machines are the least
powerful whereas Turing machines are the most powerful.
So far, we are familiar with the types of automata. Now, let us discuss the
expressive power of automata and further understand its applications.
Equivalence
UNIT-2
The theory of formal languages finds its applicability extensively in the fields of
Computer Science. Noam Chomsky gave a mathematical model of grammar in
1956 which is effective for writing computer languages.
Grammar
Grammar G1 −
Here,
Example
Grammar G2 −
Here,
Strings may be derived from other strings using the productions in a grammar. If a
grammar G has a production α → β, we can say that x α y derives x β y in G. This
derivation is written as −
x α y ⇒G x β y
Example
The set of all strings that can be derived from a grammar is said to be the language
generated from that grammar. A language generated by a grammar G is a subset of ∑*,
formally defined by
L(G) = { W | W ∈ ∑*, S ⇒G* W }
Example
If there is a grammar with the productions S → AB, A → a and B → b −
Here S produces AB, and we can replace A by a, and B by b. Here, the only
accepted string is ab, i.e.,
L(G) = {ab}
Example
L(G) = { a^m b^n | m ≥ 1 and n ≥ 1 }
Construction of a Grammar Generating a Language
We’ll consider some languages and convert each of them into a grammar G which
produces that language.
Example
Problem − Suppose, L(G) = { a^m b^n | m ≥ 0 and n > 0 }. We have to find out the
grammar G which produces L(G).
Solution
Here, the start symbol has to derive at least one ‘b’, preceded by any number of ‘a’s
(possibly none).
To accept the string set {b, ab, bb, aab, abb, …….}, we have taken the productions
−
S → aS , S → B, B → b and B → bB
S → B → b (Accepted)
S → B → bB → bb (Accepted)
S → aS → aB → ab (Accepted)
Thus, we can show that every single string in L(G) is generated by this production
set.
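A small illustrative Python sketch (not part of the notes) that enumerates the strings
derivable from these productions and compares them with the regular expression a*b+,
which describes L(G).

import itertools, re

# Productions of the grammar: S -> aS | B, B -> b | bB (terminals are lower case).
rules = {'S': ['aS', 'B'], 'B': ['b', 'bB']}

def generated(max_len):
    """Collect every terminal string of length <= max_len derivable from S."""
    done, frontier = set(), {'S'}
    while frontier:
        nxt = set()
        for form in frontier:
            i = next((k for k, c in enumerate(form) if c.isupper()), None)
            if i is None:
                done.add(form)                 # no non-terminals left
                continue
            for rhs in rules[form[i]]:         # expand the leftmost non-terminal
                new = form[:i] + rhs + form[i + 1:]
                if sum(c.islower() for c in new) <= max_len:
                    nxt.add(new)
        frontier = nxt
    return done

lang = generated(4)
for n in range(1, 5):
    for t in itertools.product('ab', repeat=n):
        s = ''.join(t)
        assert (s in lang) == bool(re.fullmatch('a*b+', s))
print(sorted(lang))   # ['aaab', 'aab', 'aabb', 'ab', ...] -- all of the form a*b+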
Any set that represents the value of the Regular Expression is called a Regular
Set.
Proof −
Hence, proved.
Proof −
So, L1 = { a,aa, aaa, aaaa, ....} (Strings of all possible lengths excluding Null)
Hence, proved.
Proof −
RE = (aa)*
So, L = {ε, aa, aaaa, aaaaaa, .......} (Strings of even length including Null)
So, L’ = {a, aaa, aaaaa, .....} (Strings of odd length excluding Null)
Hence, proved.
Proof −
So, L1 = {a, aa, aaa, aaaa, ....} (Strings of all possible lengths excluding Null)
Hence, proved.
Property 5. The reversal of a regular set is regular.
Proof −
RE (L) = 01 + 10 + 11 + 10
Hence, proved.
Proof −
L* = {a, aa, aaa, aaaa , aaaaa,……………} (Strings of all lengths excluding Null)
RE (L*) = a (a)*
Hence, proved.
Proof −
Here, L1 = {0, 00, 10, 000, 010, ......} (Set of strings ending in 0)
Then, L1 L2 = {001,0010,0011,0001,00010,00011,1001,10010,.............}
Set of strings containing 001 as a substring which can be represented by an RE −
(0 + 1)*001(0 + 1)*
Hence, proved.
∅* = ε
ε* = ε
RR* = R*R
R*R* = R*
(R*)* = R*
(PQ)*P =P(QP)*
(a+b)* = (a*b*)* = (a*+b*)* = (a+b*)* = a*(ba*)*
R + ∅ = ∅ + R = R (The identity for union)
R ε = ε R = R (The identity for concatenation)
∅ L = L ∅ = ∅ (The annihilator for concatenation)
R + R = R (Idempotent law)
L (M + N) = LM + LN (Left distributive law)
(M + N) L = ML + NL (Right distributive law)
ε + RR* = ε + R*R = R*
In automata theory, there are different closure properties for regular languages.
They are as follows −
Union
Intersection
Concatenation
Kleene Closure
Complement
If L1 and L2 are two regular languages, their union L1 ∪ L2 will also be regular.
Example
Intersection
If L1 and L2 are two regular languages, their intersection L1 ∩ L2 will also be regular.
Example
Concatenation
If L1 and L2 are two regular languages, their concatenation L1.L2 will also be
regular.
Example
Kleene Closure
If L1 is a regular language, its Kleene closure L1* will also be regular.
Example
L1 = (a U b )
L1* = (a U b)*
Complement
If L(G) is a regular language, its complement L’(G) will also be regular.
Example
Note − Two regular expressions are equivalent, if languages generated by them are
the same. For example, (a+b*)* and (a+b)* generate the same language. Every
string which is generated by (a+b*)* is also generated by (a+b)* and vice versa.
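A quick brute-force check of this note in Python (illustrative only; the '+' union of
the notes is written '|' in Python's re syntax):

import re
from itertools import product

r1 = re.compile(r"(a|b*)*")   # (a+b*)*
r2 = re.compile(r"(a|b)*")    # (a+b)*

# Both patterns accept exactly the same strings over {a, b} up to length 6.
for n in range(7):
    for t in product("ab", repeat=n):
        s = "".join(t)
        assert bool(r1.fullmatch(s)) == bool(r2.fullmatch(s))
print("(a+b*)* and (a+b)* agree on all strings over {a, b} up to length 6")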
Theorem
Let L be a regular language. Then there exists a constant ‘c’ such that for every
string w in L with |w| ≥ c, we can break w into three strings, w = xyz, such that −
|y| > 0
|xy| ≤ c
For all k ≥ 0, the string x y^k z is also in L.
Pumping Lemma is to be applied to show that certain languages are not regular. It
should never be used to show a language is regular.
Problem
Solution −
Solution
Complete.
Mechanistic.
Deterministic.
Since computations of deterministic multi track and multi tape machines are
simulated on a standard Turing machine, a solution using these machines also
establishes the decidability of a problem.
1. Create the pairs of all the states involved in the given DFA.
2. Mark all the pairs (Qa, Qb) such that Qa is a final state and Qb is a non-final
state.
3. If there is any unmarked pair (Qa, Qb) such that δ(Qa, x) and δ(Qb, x) is
marked, then mark (Qa, Qb). Here x is an input symbol. Repeat this step until
no more markings can be made.
4. Combine all the unmarked pairs and make them a single state in the
minimized DFA.
Example
Step-2: Mark all the pairs (Qa, Qb) such that Qa is a final state and Qb is a non-final
state.
Step-3: If there is any unmarked pair (Qa, Qb) such that δ(Qa, x) and δ(Qb, x) is
marked, then mark (Qa, Qb). Here x is an input symbol. Repeat this step until no
more markings can be made.
Step-4: Combine all the unmarked pairs and make them a single state in the
minimized DFA.
The unmarked pairs are (Q2, Q1) and (Q4, Q3); hence we combine them.
Following is the Minimized DFA with Q1Q2 and Q3Q4 as the combined states.
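The marking procedure can be sketched in Python as follows. The four-state DFA used
here is hypothetical (the original transition table of this example was not reproduced in
the notes); it is chosen so that, as in the example, the pairs (Q1, Q2) and (Q3, Q4)
remain unmarked and are combined.

from itertools import combinations

# Hypothetical DFA: Q3 and Q4 are the final states.
delta = {
    'Q1': {'0': 'Q3', '1': 'Q1'},
    'Q2': {'0': 'Q4', '1': 'Q2'},
    'Q3': {'0': 'Q3', '1': 'Q1'},
    'Q4': {'0': 'Q4', '1': 'Q2'},
}
finals = {'Q3', 'Q4'}

# Steps 1-2: create all pairs and mark (Qa, Qb) when exactly one of them is final.
pairs = {frozenset(p): (len(set(p) & finals) == 1) for p in combinations(delta, 2)}

# Step 3: mark an unmarked pair if some input symbol leads it to a marked pair.
changed = True
while changed:
    changed = False
    for pair in pairs:
        if pairs[pair]:
            continue
        qa, qb = tuple(pair)
        for x in '01':
            succ = frozenset({delta[qa][x], delta[qb][x]})
            if len(succ) == 2 and pairs[succ]:
                pairs[pair] = changed = True
                break

# Step 4: the unmarked pairs are equivalent and are combined in the minimized DFA.
print([tuple(p) for p, marked in pairs.items() if not marked])
# e.g. [('Q1', 'Q2'), ('Q3', 'Q4')] -> combined states Q1Q2 and Q3Q4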
UNIT-3
Context-Free Grammar
Example
The grammar ({A}, {a, b, c}, P, A), P : A → aA, A → abc.
The grammar ({S, a, b}, {a, b}, P, S), P: S → aSa, S → bSb, S → ε
The grammar ({S, F}, {0, 1}, P, S), P: S → 00S | 11F, F → 00F | ε
A derivation tree or parse tree is an ordered rooted tree that graphically represents
how a string is derived from a context-free grammar.
Representation Technique
Top-down Approach − Starts with the starting symbol S and goes down to the tree
leaves using productions.
Bottom-up Approach − Starts from the tree leaves and proceeds upward to the root,
which is the starting symbol S.
The derivation or the yield of a parse tree is the final string obtained by
concatenating the labels of the leaves of the tree from left to right, ignoring the
Nulls. However, if all the leaves are Null, derivation is Null.
Example
A partial derivation tree is a sub-tree of a derivation tree/parse tree such that, for
every node it contains, either all of that node's children are in the sub-tree or none of
them are.
Example
If a partial derivation tree contains the root S, it is called a sentential form. The
above sub-tree is also in sentential form.
Example
CFG Simplification
In a CFG, it may happen that all the production rules and symbols are not needed
for the derivation of strings. Besides, there may be some null productions and unit
productions. Elimination of these productions and symbols is called simplification
of CFGs. Simplification essentially comprises the following steps −
Reduction of CFG
Removal of Unit Productions
Removal of Null Productions
Reduction of CFG
Phase 1 − Derivation of an equivalent grammar, G’, from the CFG, G, such that
each variable derives some terminal string.
Derivation Procedure −
Step 1 − Include all symbols, W1, that derive some terminal and initialize i = 1.
Step 2 − Include all symbols, Wi+1, that derive Wi.
Step 3 − Increment i and repeat Step 2, until Wi+1 = Wi.
Step 4 − Include all production rules that have Wi in them.
Phase 2 − Derivation of an equivalent grammar, G”, from the CFG, G’, such that
each symbol appears in a sentential form.
Derivation Procedure −
Step 1 − Include the start symbol in Y1 and initialize i = 1.
Step 2 − Include all symbols, Yi+1, that can be derived from Yi and include all
production rules that have been applied.
Step 3 − Increment i and repeat Step 2, until Yi+1 = Yi.
Solution
Phase 1 −
T = { a, c, e }
W1 = { A, C, E } from rules A → a, C → c and E → e
W2 = { A, C, E } U { S } from rule S → AC
W3 = { A, C, E, S } U ∅
G’ = { { A, C, E, S }, { a, c, e }, P, {S}}
where P: S → AC, A → a, C → c , E → aA | e
Phase 2 −
Y1 = { S }
Y2 = { S, A, C } from rule S → AC
Y3 = { S, A, C, a, c } from rules A → a and C → c
G” = { { A, C, S }, { a, c }, P, {S}}
where P: S → AC, A → a, C → c
Removal of Unit Productions
Removal Procedure −
Step 1 − To remove the unit production A → B, add the production A → x to the
grammar whenever B → x occurs in the grammar.
Step 2 − Delete A → B from the grammar.
Step 3 − Repeat from step 1 until all unit productions are removed.
Problem
S → XY, X → a, Y → Z | b, Z → M, M → N, N → a
Solution −
The unit productions are −
Y → Z, Z → M, and M → N
First we remove M → N (since N → a, we add M → a); the grammar becomes −
S → XY, X → a, Y → Z | b, Z → M, M → a, N → a
Next we remove Z → M −
S → XY, X → a, Y → Z | b, Z → a, M → a, N → a
Now we will remove Y → Z −
S → XY, X → a, Y → a | b, Z → a, M → a, N → a
Finally, Z, M and N are not reachable from S, so the reduced grammar is −
S → XY, X → a, Y → a | b
In a CFG, a non-terminal symbol A is a nullable variable if there is a production
A → ε or there is a derivation that starts at A and finally ends up with
ε: A → .......… → ε
Removal Procedure
Step 1 − Find out the nullable non-terminal variables which derive ε.
Step 2 − For each production A → a, construct all productions A → x where x is
obtained from a by removing one or more non-terminals from Step 1.
Step 3 − Combine the original productions with the result of step 2 and remove
ε-productions.
Problem
S → ASA | aB | b, A → B, B → b | ε
Solution −
There are two nullable variables − B (because of B → ε) and A (because A → B and
B → ε).
After removing B → ε, the production set becomes −
S → ASA | aB | b | a, A → B | ε, B → b
After removing A → ε, the production set becomes −
S → ASA | aB | b | a | SA | AS | S, A → B, B → b
Normal Forms
A CFG is in Chomsky Normal Form if the Productions are in the following forms
−
A→a
A → BC
S→ε
Step 1 − If the start symbol S occurs on some right side, create a new start symbol
S’ and a new production S’→ S.
Step 2 − Remove Null productions. (Using the Null production removal algorithm
discussed earlier)
Step 3 − Remove unit productions. (Using the Unit production removal algorithm
discussed earlier)
Step 4 − Replace each production A → B1…Bn where n > 2 with A → B1C where
C → B2…Bn. Repeat this step for all productions having two or more symbols on the
right side.
Step 5 − If the right side of any production is in the form A → aB where a is a terminal
and A, B are non-terminals, then the production is replaced by A → XB and X → a.
Repeat this step for every production which is in the form A → aB.
Problem
S → ASA | aB, A → B | S, B → b | ε
Solution
(1) Since S appears on the R.H.S., we add a new start symbol S0 and the production
S0 → S; the production set becomes −
S0 → S, S → ASA | aB, A → B | S, B → b | ε
(2) Now we will remove the null productions −
B → ε and A → ε
After removing B → ε −
S0 → S, S → ASA | aB | a, A → B | S | ε, B → b
After removing A → ε −
S0 → S, S → ASA | aB | a | AS | SA | S, A → B | S, B → b
(3) Now we will remove the unit productions S → S, S0 → S, A → B and A → S.
After removing S → S and S0 → S −
S0 → ASA | aB | a | AS | SA, S → ASA | aB | a | AS | SA,
A → B | S, B → b
After removing A → B −
S0 → ASA | aB | a | AS | SA, S → ASA | aB | a | AS | SA,
A → S | b, B → b
After removing A → S −
S0 → ASA | aB | a | AS | SA, S → ASA | aB | a | AS | SA,
A → b | ASA | aB | a | AS | SA, B → b
(4) Now we will find out the productions that have more than two symbols or a
terminal together with a non-terminal on the R.H.S., and apply step 4 and step 5 to
get the final production set, which is in CNF.
Applying step 4, we replace ASA with AX, where X → SA −
S0 → AX | aB | a | AS | SA
S → AX | aB | a | AS | SA
A → b | AX | aB | a | AS | SA
B → b
X → SA
Applying step 5, we replace the terminal a in aB with Y, where Y → a, and obtain the
final CNF −
S0 → AX | YB | a | AS | SA
S → AX | YB | a | AS | SA
A → b | AX | YB | a | AS | SA
B → b
X → SA
Y → a
A CFG is in Greibach Normal Form if the Productions are in the following forms −
A→b
A → bD1…Dn
S→ε
Step 1 − If the start symbol S occurs on some right side, create a new start symbol
S’ and a new production S’ → S.
Step 2 − Remove Null productions. (Using the Null production removal algorithm
discussed earlier)
Step 3 − Remove unit productions. (Using the Unit production removal algorithm
discussed earlier)
S → XY | Xo | p
X → mX | m
Y → Xn | o
Solution
Here, S does not appear on the right side of any production and there are no unit or
null productions in the production rule set. So, we can skip Step 1 to Step 3.
Step 4
Now we replace
X in S → XY | Xo | p
with
mX | m
we obtain
S → mXY | mY | mXo | mo | p.
And after replacing
X in Y → Xn | o
with the right side of
X → mX | m
we obtain
Y → mXn | mn | o.
Two new productions O → o and N → n are added to the production set (so that the
terminals o and n appearing after non-terminals are replaced), and then we come to
the final GNF as the following −
S → mXY | mY | mXO | mO | p
X → mX | m
Y → mXN | mN | o
O → o
N → n
Lemma
If L is a context-free language, there is a pumping length p such that any string z ∈ L
with |z| ≥ p can be written as z = uvwxy, where vx ≠ ε, |vwx| ≤ p, and
u v^k w x^k y ∈ L for every k ≥ 0.
The pumping lemma for context-free languages is used to show that certain languages
are not context free. Let us take an example and show how it is checked.
Problem
Find out whether the language L = {0^n 1^n 2^n | n ≥ 1} is context free or not.
Solution
Assume L is context free. Take the string z = 0^n 1^n 2^n, where n is the pumping
length, and write z = uvwxy with
|vwx| ≤ n and vx ≠ ε.
Hence vwx cannot involve both 0s and 2s, since the last 0 and the first 2 are at least
(n+1) positions apart. There are two cases −
Case 1 − vwx has no 2s. Then vx has only 0s and 1s. Then uwy, which would have
to be in L, has n 2s, but fewer than n 0s or 1s.
Case 2 − vwx has no 0s. Then, similarly, uwy has n 0s, but fewer than n 1s or 2s.
In either case uwy is not in L, a contradiction. Hence, L is not context free.
The closure properties for context free languages (CFLs) are as follows −
Union
To show that context free languages are closed under union, take the two start
variables S1 and S2 of the two languages L1 and L2 and add a new start rule −
S → S1 | S2
If both languages are context free, then the union of the two languages is also context
free: by the above rule, from S the user can generate a string of L1 or a string of L2,
so exactly the union of both languages is generated.
Hence, L1 ∪ L2 ∈ CFL
Concatenation
In order to show that context free languages are closed under concatenation, the
construction uses the two starting variables S1 and S2 of the two different languages
L1 and L2 and adds a new start rule −
S → S1S2
If both languages are context free, then the concatenation of the two languages also
belongs to the context free languages −
∀ L1, L2 ∈ CFL: { w1w2 : w1 ∈ L1 ∧ w2 ∈ L2 } ∈ CFL
Kleene Closure
In order to show that context free languages are closed under the star operation,
consider the start variable S1 of the language L1 and add a new start rule −
S → S1S | ε
If the language is context free, then the star of the language also belongs to the
context free languages −
∀ L1 ∈ CFL: L1* ∈ CFL
By the above rule, the user can generate zero or more strings of L1, which is exactly
the definition of the star. So, context free languages are closed under the star
operation.
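A minimal illustrative sketch in Python of the union construction S → S1 | S2 (the
grammars, given as production dictionaries, are assumed to use disjoint non-terminal
names; the two sample grammars below are made up for the illustration).

# G1 generates { a^n b^n : n >= 0 }, G2 generates { c^n d^n : n >= 0 }.
g1 = {'S1': ['aS1b', '']}
g2 = {'S2': ['cS2d', '']}

def union_grammar(g1, s1, g2, s2, new_start='S'):
    """Closure under union: keep both grammars and add the rule S -> S1 | S2."""
    combined = {new_start: [s1, s2]}
    combined.update(g1)
    combined.update(g2)
    return combined

print(union_grammar(g1, 'S1', g2, 'S2'))
# {'S': ['S1', 'S2'], 'S1': ['aS1b', ''], 'S2': ['cS2d', '']}
# The same idea gives S -> S1S2 for concatenation and S -> S1S | '' for the star.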
Pushdown Automata
Basic Structure of PDA
A pushdown automaton has three components −
an input tape,
a control unit, and
a stack with infinite size.
A PDA may or may not read an input symbol, but it has to read the top of the stack
in every transition.
A PDA can be formally described as a 7-tuple (Q, ∑, S, δ, q0, I, F) −
The following diagram shows a transition in a PDA from a state q1 to state q2,
labeled as a,b → c −
This means at state q1, if we encounter an input symbol ‘a’ and the top symbol of the
stack is ‘b’, then we pop ‘b’, push ‘c’ on top of the stack and move to state q2.
Instantaneous Description
The instantaneous description (ID) of a PDA is represented by a triplet (q, w, s) where
q is the state
w is the unconsumed input
s is the stack contents
Turnstile Notation
The "turnstile" notation is used for connecting pairs of ID's that represent one or
many moves of a PDA. The process of transition is denoted by the turnstile symbol
"⊢".
For example, (p, aw, Tβ) ⊢ (q, w, αβ).
This implies that while taking a transition from state p to state q, the input symbol
‘a’ is consumed, and the top of the stack ‘T’ is replaced by a new string ‘α’.
Note − If we want zero or more moves of a PDA, we have to use the symbol (⊢*)
for it.
Generally, in the stack contents γ, the leftmost symbol represents the top of the stack
and the rightmost symbol represents the bottom. This type of triple notation is called
an instantaneous description or ID of the pushdown automaton.
Therefore,
(q0, aw, z0) ⊢ (q1, w, yz0)
Show the IDs or moves for input string w = “aabb” of PDA where,
M = ({q0, q1, q2}, {a, b}, {a, b, Z0}, δ, q0, Z0, {q2}),
Solution
Therefore, the PDA reaches the configuration (q2, λ, λ), i.e., the PDA stack is empty
and it has reached a final state. So the string ‘w’ is accepted.
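The moves can be reproduced with a small Python simulation. The transition function
δ was not reproduced in these notes, so the δ used below is an assumed, standard
choice that accepts { a^n b^n | n ≥ 1 } and reaches (q2, λ, λ) on the input "aabb".

# delta[(state, input symbol or '', stack top)] = (next state, symbols pushed in place of the top).
delta = {
    ('q0', 'a', 'Z0'): ('q0', ['a', 'Z0']),   # push the first 'a'
    ('q0', 'a', 'a'):  ('q0', ['a', 'a']),    # keep pushing 'a's
    ('q0', 'b', 'a'):  ('q1', []),            # first 'b': start popping
    ('q1', 'b', 'a'):  ('q1', []),            # pop one 'a' per 'b'
    ('q1', '',  'Z0'): ('q2', []),            # input exhausted: pop Z0 and accept
}

def run(w):
    state, stack = 'q0', ['Z0']               # the leftmost element is the stack top
    moves = [(state, w, ''.join(stack))]
    while stack:
        if w and (state, w[0], stack[0]) in delta:
            state, pushed = delta[(state, w[0], stack[0])]
            w = w[1:]                          # consume one input symbol
        elif (state, '', stack[0]) in delta:
            state, pushed = delta[(state, '', stack[0])]   # epsilon move
        else:
            break                              # the PDA is stuck
        stack = pushed + stack[1:]
        moves.append((state, w, ''.join(stack)))
    return moves

for step in run("aabb"):
    print(step)
# ends with ('q2', '', '') : final state reached with an empty stack, so "aabb" is accepted.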
If a grammar G is context free, we can build an equivalent PDA P which accepts the
language that is produced by G, i.e.,
L(G) = L(P)
In the next two topics, we will discuss how to convert from PDA to CFG and vice
versa.
Step 1 − Convert the productions of the CFG into GNF.
Step 2 − The PDA will have only one state {q}.
Step 3 − The start symbol of CFG will be the start symbol in the PDA.
Step 4 − All non-terminals of the CFG will be the stack symbols of the PDA and
all the terminals of the CFG will be the input symbols of the PDA.
Step 5 − For each production in the form A → aX where a is terminal and A, X are
combination of terminal and non-terminals, make a transition δ (q, a, A).
Problem
S → XS | ε, X → aXb | Xb | ab
Solution
where δ −
δ(q, a, a) = {(q, ε)}
δ(q, b, b) = {(q, ε)}
Input − A PDA, P = (Q, ∑, S, δ, q0, I, F)
Output − Equivalent CFG, G = (V, T, P, S), such that the non-terminals of the
grammar G will be {Xwx | w, x ∈ Q} and the start symbol will be Xq0,F.
Parsing is used to derive a string using the production rules of a grammar. It is used
to check the acceptability of a string. A compiler uses a parser to check whether or
not a string is syntactically correct. A parser takes the input and builds a parse tree.
For top-down parsing, a PDA has the following four types of transitions −
Pop the non-terminal on the left hand side of the production at the top of the
stack and push its right-hand side string.
If the top symbol of the stack matches with the input symbol being read, pop
it.
Push the start symbol ‘S’ into the stack.
If the input string is fully read and the stack is empty, go to the final state
‘F’.
Example
Design a top-down parser for the expression "x+y*z" for the grammar G with the
following production rules −
Solution
⊢(y*z, X*YI) ⊢(y*z, y*YI) ⊢(*z,*YI) ⊢(z, YI) ⊢(z, zI) ⊢(ε, I)
For bottom-up parsing, a PDA has the following four types of transitions −
Example
Design a bottom-up parser for the expression "x+y*z" for the grammar G with the
following production rules −
Solution
⊢(y*z, +SI) ⊢ (*z, y+SI) ⊢ (*z, Y+SI) ⊢ (*z, X+SI) ⊢ (z, *X+SI)
UNIT-5
Turing Machine
Definition
δ(Qi, [a1, a2, a3,....]) = (Qj, [b1, b2, b3,....], Left_shift or Right_shift)
In a Non-Deterministic Turing Machine, for every state and symbol, there are a
group of actions the TM can have. So, here the transitions are not deterministic.
The computation of a non-deterministic Turing Machine is a tree of configurations
that can be reached from the start configuration.
An input is accepted if there is at least one node of the tree which is an accept
configuration, otherwise it is not accepted. If all branches of the computational tree
halt on all inputs, the non-deterministic Turing Machine is called a Decider and if
for some input, all branches are rejected, the input is also rejected.
A Turing Machine with a semi-infinite tape has a left end but no right end. The left
end is limited with an end marker.
It is a two-track tape −
Upper track − It represents the cells to the right of the initial head position.
Lower track − It represents the cells to the left of the initial head position in
reverse order.
The finite-length input string is initially written on the tape in contiguous tape
cells.
The machine starts from the initial state q0 and the head scans from the left end
marker ‘End’. In each step, it reads the symbol on the tape under its head. It writes
a new symbol on that tape cell and then it moves the head one tape cell either to the
left or to the right. A transition function determines the actions to be taken.
It has two special states called the accept state and the reject state. If at any point of
time it enters the accept state, the input is accepted, and if it enters the reject state,
the input is rejected by the TM. In some cases, it continues to run infinitely without
accepting or rejecting certain inputs.
Note − Turing machines with semi-infinite tape are equivalent to standard Turing
machines.
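As an illustration of the basic machinery (states, tape, head and transition function),
here is a tiny deterministic Turing machine simulator in Python; the machine itself is
a made-up example that accepts exactly the strings over {0, 1} containing an even
number of 1s.

BLANK = '_'
# delta[(state, read symbol)] = (next state, symbol to write, head move).
delta = {
    ('even', '0'): ('even', '0', 'R'),
    ('even', '1'): ('odd',  '1', 'R'),
    ('odd',  '0'): ('odd',  '0', 'R'),
    ('odd',  '1'): ('even', '1', 'R'),
    ('even', BLANK): ('accept', BLANK, 'R'),
    ('odd',  BLANK): ('reject', BLANK, 'R'),
}

def run_tm(w, max_steps=1000):
    tape = dict(enumerate(w))                  # sparse tape: position -> symbol
    state, head = 'even', 0
    for _ in range(max_steps):                 # guard against machines that never halt
        if state in ('accept', 'reject'):
            return state
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return 'did not halt within the step limit'

print(run_tm("1011"))   # three 1s -> reject
print(run_tm("1001"))   # two 1s   -> accept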
Combining Turing Machines (Linear Bounded Automata)
Here,
The computation is restricted to the constant bounded area. The input alphabet
contains two special symbols which serve as left end markers and right end
markers which mean the transitions neither move to the left of the left end marker
nor to the right of the right end marker of the tape.
A linear bounded automaton can be defined as an 8-tuple (Q, X, ∑, q0, ML, MR, δ,
F) where −
The Turing Machine (TM) is the machine level equivalent to a digital computer.
It was proposed by the mathematician Alan Turing in 1936 and has become the
most widely used model of computation in computability and complexity theory.
The model consists of an input and an output. The input is given in binary format on
the machine’s tape, and the output consists of the contents of the tape when the
machine halts.
The problem with the Turing machine is that a different machine must be
constructed for every new computation to be performed, i.e., for every input-output
relation.
This is the reason the Universal Turing machine was introduced which along with
input on the tape takes the description of a machine M.
The Universal Turing machine can go on then to simulate M on the rest of the
content of the input tape.
This machine would keep three pieces of information about the machine it is
simulating − the basic description of the machine M, the contents of M’s tape, and
the current internal state of M.
The Universal machine would simulate the machine by looking at the input on the
tape and the state of the machine.
It would control the machine by changing its state based on the input. This leads to
the idea of a “computer running another computer”.
Halting Problem
The total work done by the program completely depends on the input given to the
program.
The program may consist of any number of loops, which may be linear or nested.
The Halting Problem says that it is not possible to write a computer program that,
running in a limited time, is capable of deciding for every program and every input
whether that program halts on that input.
Note that the Halting Problem does not say that it is impracticable to determine
whether some particular given program is going to halt (stop); it only rules out a
general procedure that works for all programs and all inputs.
Generally, it asks a question of the form: “Given an arbitrary program and an input,
decide whether the given program is going to halt when run on that input.”
Program P
Input S.
Example
We can build a universal Turing machine which can simulate any Turing machine
on any input.
A decider for this problem would call a halt to simulations that loop forever.
Because of this, both versions of this question are generally called the halting
problem.
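The classical diagonal argument can be sketched in Python. The function halts below
is hypothetical (no such total decider can actually be implemented); the sketch only
shows why any claimed decider is contradicted when paradox is applied to itself.

def halts(program, arg):
    # Hypothetical: supposed to return True iff program(arg) eventually halts.
    raise NotImplementedError("no such total decider can exist")

def paradox(program):
    # Do the opposite of whatever the supposed decider predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:          # loop forever if the decider says "halts"
            pass
    return "halted"          # halt immediately if the decider says "loops"

# Feeding `paradox` to itself makes either answer from `halts` wrong:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Hence no such `halts` exists.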
Turing machines (TM) can also be deterministic or non-deterministic, but this does
not make them any more or less powerful.
However, if the tape is restricted so that only the portion of the tape carrying the
input can be used, the TM becomes less powerful (a linear bounded automaton) and
can only recognise context-sensitive languages.
Many other TM variations are equivalent to the original TM. This includes the
following −
Multi-track
Multi-tape
Multi-head
Multi-dimensional tape
The off-line Turing machine
A Turing machine with several tapes is called a multi-tape Turing machine.
We define
Example
It is similar to a DTM except that for any input and current state it has a number of
choices. The transition function is
δ: Q × X → 2^(Q × X × {L, R})
A NDTM is allowed to have more than one transition for a given tape symbol.
Multi-head Turing machine
Each head independently reads/writes symbols and moves left or right, or stays
stationary.
Decidable Language
Undecidable Language
Problem
“Let the given input be some Turing Machine M and some string w. The problem
is to determine whether the machine M, executed on the input string w, ever moves
its read head to the left for three delta rules in a row.”
Solution
Define M' as a Turing machine that takes a pair (M, w) as input, where M is a
Turing machine simulated by M' and w is the input to M.
Whenever the head of the simulated machine M moves to the left while processing
input w, M' stops and accepts (M, w).
Let us assume that this problem is decidable by some Turing machine R; then we
will show that ATM (the acceptance problem for Turing machines, which is known
to be undecidable) would also be decidable −
Now, the idea is to construct a Turing machine S which decides ATM in such a
way that it uses R.
On input ⟨M, w⟩, S first modifies machine M to M´, so that M´ moves its head to the
left from the left-most cell only when M accepts its input.
To ensure that during its computation M´ does not move the head left from the left-
most position, machine M´ first shifts the input w one position to the right and places
a special symbol on the left-most tape cell. The computation of M´ then starts with
the head on the second tape cell.
If during its computation M´ ever attempts to move its head onto the left-most tape
cell, M´ detects this by reading the special symbol, puts the head back on the second
cell, and continues its execution. If M enters an accept state, then M´ enters a loop
that forces the head to always move to the left.
After S has constructed M´, it runs the decider R on input ⟨M´, w⟩.
UNIT-7
Chomsky hierarchy:
Chomsky Hierarchy represents the class of languages that are accepted by the
different machines.
Chomsky hierarchy
T = set of terminals
N = set of non-terminals
Type 0 (unrestricted) grammars have productions of the general form v → w, where v
and w are arbitrary strings of symbols (with v non-empty).
Type 3 (regular) grammars have productions of the form A → xB or A → x, where
A, B ∈ N and x ∈ T*.
Grammar Type   Grammar Accepted   Language Accepted   Automaton
Type 0   Unrestricted grammar   Recursively enumerable language   Turing machine
Type 1   Context-sensitive grammar   Context-sensitive language   Linear-bounded automaton
Type 2   Context-free grammar   Context-free language   Pushdown automaton
Type 3   Regular grammar   Regular language   Finite state automaton
Context-Sensitive Grammar
Productions are of the form αAβ → αγβ, where A is a non-terminal, α and β are
strings of terminals and non-terminals, and γ is a non-empty string of terminals and
non-terminals.
Example
Unrestricted Grammar
In automata theory, the class of unrestricted grammars (also called semi-Thue,
type-0 or phrase structure grammars) is the most general class of grammars in
the Chomsky hierarchy. No restrictions are made on the productions of an
unrestricted grammar, other than each of their left-hand sides being non-empty.
*******************************