Q. What is Input Buffering? What are Sentinels?



Lexical analysis would otherwise have to access secondary memory each time to identify a token, which is time-consuming and costly. So, the input string is stored in a buffer and then scanned by the lexical analyzer.

The lexical analyzer scans the input string from left to right, one character at a time, to identify tokens. It uses two pointers to scan tokens −

 Begin Pointer (bptr) − It points to the beginning of the string to be read.
 Look Ahead Pointer (lptr) − It moves ahead to search for the end of the token.

Example − For the statement int a, b;

 Both pointers start at the beginning of the string, which is stored in the buffer.
 The Look Ahead Pointer scans the buffer until the token is found.
 The character ("blank space") beyond the token ("int") has to be examined before the token ("int") can be determined.
 After processing the token ("int"), both pointers are set to the next token ('a'), and this process is repeated for the whole program.

A buffer can be divided into two halves. If the Look Ahead pointer moves past the halfway point of the first half, the second half is filled with new characters to be read. If the Look Ahead pointer moves towards the right end of the second half, the first half is filled with new characters, and so on.
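The two-pointer scan can be sketched in Python. This is a minimal illustration for a toy input, not a real lexer; the function name scan_tokens and the simplified token rules (identifiers/keywords are alphanumeric runs, everything else is a one-character token) are assumptions for the sketch:

```python
# Minimal sketch of two-pointer token scanning over a buffered string.
def scan_tokens(buffer: str):
    tokens = []
    bptr = 0                       # begin pointer: start of current token
    while bptr < len(buffer):
        if buffer[bptr] == ' ':    # skip blanks between tokens
            bptr += 1
            continue
        lptr = bptr                # look-ahead pointer searches for token end
        if buffer[lptr].isalnum():
            while lptr < len(buffer) and buffer[lptr].isalnum():
                lptr += 1
        else:
            lptr += 1              # single-character token (',', ';', ...)
        tokens.append(buffer[bptr:lptr])
        bptr = lptr                # both pointers move on to the next token
    return tokens

print(scan_tokens("int a, b;"))    # ['int', 'a', ',', 'b', ';']
```

Note how the look-ahead pointer must read the blank after "int" before the token "int" can be emitted, exactly as described above.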

Buffer Pairs − A specialized buffering technique can decrease the overhead needed to process an input character while transferring characters. It uses two buffers, each of N-character size, which are reloaded alternately.

Sentinels − In the buffer-pairs scheme, every time the forward pointer is moved, a check must be made to ensure that one half of the buffer has not been moved off; if it has, the other half must be reloaded. Hence, the ends of the buffer halves require two tests for every advance of the forward pointer:

 Test 1: For the end of the buffer.
 Test 2: To determine what character is to be read.
 The use of a sentinel reduces these two tests to one, by extending each buffer half to hold a sentinel character at the end.
 The sentinel is a special character that cannot be part of the source program.
 An eof character is used as the sentinel.

Q. Explain Error Recovery in Predictive Parsing?

We know that the predictive parser performs a leftmost derivation while parsing the given sentence. The given sentence may be valid or invalid with respect to the specified grammar. An error is detected during predictive parsing when the terminal on top of the stack does not match the next input symbol, or when a nonterminal A is on top of the stack, the present input symbol is a, and the parsing table entry M[A, a] is empty.

The parser should provide an error message (one that conveys as much information as possible), recover from the error, and continue parsing the rest of the input.

Error Recovery Techniques:
 Panic-Mode Error Recovery: input symbols are skipped until a synchronizing token is found.
 Phrase-Level Error Recovery: each empty entry in the parsing table is filled with a pointer to a specific error routine that takes care of that error case.

Panic-Mode Error Recovery in LL(1) Parsing:
In panic-mode error recovery, all the input symbols are skipped until a synchronizing token is found in the string. In this recovery method, we use FOLLOW symbols as synchronizing tokens, and the entry "synch" in the predictive parsing table indicates synchronizing tokens obtained from the nonterminal's FOLLOW set.

What is the synchronizing token?
The terminal symbols that lie in the FOLLOW set of a non-terminal can be used as the synchronizing token set for that non-terminal. In simple panic-mode error recovery for LL(1) parsing, all the empty entries are marked as synch, indicating that the parser will skip input symbols until a symbol in the FOLLOW set of the non-terminal A on top of the stack appears. The non-terminal A is then popped from the stack, and parsing continues from that state. To handle an unmatched terminal symbol, the parser pops the unmatched terminal from the stack and issues an error message saying that the unmatched terminal was inserted.

Example for Panic-Mode Error Recovery: two separate examples can be worked out by taking two different strings:
1. aab$
2. ceadb$

Phrase-Level Recovery in LL(1) Parsing:
Phrase-level recovery is implemented by filling the blank entries in the predictive parsing table with pointers to error routines. Each unfilled entry in the parsing table is filled with a pointer to a special error routine that takes care of that error case specifically. These error routines can:
 change, insert, or delete input symbols;
 issue appropriate error messages;
 pop items from the stack.

Q. What is handle pruning with example in compiler design?

The handle is the substring that matches the body of a production whose reduction represents one step along the reverse of a rightmost derivation. The handle of a right-sentential form Y is a production A → S together with a position in Y where the string S may be found and replaced by A to produce the previous right-sentential form in the rightmost derivation (RMD) of Y.

Sentential form: if S ⇒ a, then 'a' is called a sentential form; 'a' can be a mix of terminals and nonterminals.

Left-Sentential and Right-Sentential Form:
 A left-sentential form is a sentential form that occurs in the leftmost derivation of some sentence.
 A right-sentential form is a sentential form that occurs in the rightmost derivation of some sentence.

A handle contains two things:
 Production
 Position

Handle Pruning:
Removing the children of the left-hand-side non-terminal from the parse tree is called handle pruning. A rightmost derivation in reverse can be obtained by handle pruning.

Steps to Follow:
 Start with a string of terminals 'w' that is to be parsed.
 Let w = γn, where γn is the nth right-sentential form of an unknown RMD.
 To reconstruct the RMD in reverse, locate the handle βn in γn. Replace βn with the LHS of some production An ⇢ βn to get the (n−1)th right-sentential form γn−1. Repeat.
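The steps above can be sketched in Python, assuming the small example grammar S -> aABe, A -> Abc | b, B -> d. Note this is only a sketch: leftmost, longest-body-first matching stands in for true handle detection, which in general requires an LR automaton.

```python
# Sketch of handle pruning for the grammar S -> aABe, A -> Abc | b, B -> d.
# Productions are tried in an order that prefers the longer body 'Abc',
# which happens to pick the correct handle for this particular grammar.
PRODUCTIONS = [("A", "Abc"), ("A", "b"), ("B", "d"), ("S", "aABe")]

def prune(form: str):
    """Reduce a right-sentential form step by step back to S."""
    steps = [form]
    while form != "S":
        for lhs, rhs in PRODUCTIONS:
            pos = form.find(rhs)          # leftmost occurrence of a body
            if pos != -1:
                form = form[:pos] + lhs + form[pos + len(rhs):]
                steps.append(form)
                break
        else:
            raise ValueError("no handle found: input is not derivable")
    return steps

print(prune("abbcde"))  # ['abbcde', 'aAbcde', 'aAde', 'aABe', 'S']
```

Each element of the returned list is one right-sentential form, reproducing the worked example that follows.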
Example:
 S -> aABe
 A -> Abc | b
 B -> d

Steps:
 abbcde : γ = abbcde, A -> b; Handle = b
 aAbcde : γ = aAbcde, A -> Abc; Handle = Abc
 aAde : γ = aAde, B -> d; Handle = d
 aABe : γ = aABe, S -> aABe; Handle = aABe
Q. What is Dependency Graph?

A dependency graph is used to represent the flow of information among the attributes in a parse tree. In a parse tree, a dependency graph helps to determine the evaluation order for the attributes.

The main aim of a dependency graph is to help the compiler check for various kinds of dependencies between statements, in order to prevent them from being executed in an incorrect sequence, i.e. in a way that affects the program's meaning. It also assists in determining the impact of a change and the objects that are affected by it.

A dependency graph can be created by drawing edges that connect dependent operations. These edges impose a partial ordering among operations, and can also prevent a program from running in parallel.

Although use-definition chaining is a type of dependency analysis, it results in unduly cautious estimates of data dependence. On a shared control path, there may be four types of dependencies between statements i and j.

Dependency graphs, like other directed graphs, have nodes or vertices depicted as boxes or circles with names, as well as arrows linking them in their obligatory traversal direction. Dependency graphs are commonly used in scientific literature to describe semantic links, temporal and causal dependencies between events, and the flow of electric current in electronic circuits.

Example of Dependency Graph:
Design a dependency graph for the following grammar:

E -> E1 + E2
E -> E1 * E2

Types of dependencies: Dependencies are broadly classified into the following categories:

1. Data Dependencies
2. Control Dependencies
3. Flow Dependency
4. Antidependence
5. Output Dependency

Q. What is Syntax Directed Definition? Explain any one type?

Syntax Directed Definition (SDD) is a kind of abstract specification. It is a generalization of context-free grammar in which each grammar production X –> a is associated with a set of semantic rules of the form s = f(b1, b2, ……bk), where s is the attribute obtained from function f. An attribute can be a string, number, type, or memory location. Semantic rules are fragments of code which are usually embedded at the end of a production and enclosed in curly braces ({ }).
Example:
E --> E1 + T { E.val = E1.val + T.val }

Annotated Parse Tree – The parse tree containing the values of the attributes at each node for a given input string is called an annotated or decorated parse tree.
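Annotating a parse tree with the rule E.val = E1.val + T.val can be sketched as a bottom-up traversal. The Node class and the hand-built tree for the input 3 + 4 * 2 are assumptions made purely for illustration:

```python
# Minimal sketch: evaluating the synthesized attribute 'val' bottom-up
# over a hand-built parse tree for the input 3 + 4 * 2.
class Node:
    def __init__(self, label, children=(), val=None):
        self.label = label              # grammar symbol at this node
        self.children = list(children)
        self.val = val                  # synthesized attribute

def annotate(node):
    """Compute 'val' at each node from its children (bottom-up)."""
    for child in node.children:
        annotate(child)
    if node.label == "E" and len(node.children) == 3:    # E -> E1 + T
        node.val = node.children[0].val + node.children[2].val
    elif node.label == "T" and len(node.children) == 3:  # T -> T1 * F
        node.val = node.children[0].val * node.children[2].val
    elif node.children:                                  # unit productions
        node.val = node.children[0].val
    return node.val

# Parse tree for 3 + 4 * 2 (digit leaves carry their lexical value).
tree = Node("E", [
    Node("E", [Node("T", [Node("F", [Node("digit", val=3)])])]),
    Node("+"),
    Node("T", [
        Node("T", [Node("F", [Node("digit", val=4)])]),
        Node("*"),
        Node("F", [Node("digit", val=2)]),
    ]),
])
print(annotate(tree))  # 11
```

After the call, every node holds its attribute value, which is exactly the annotated (decorated) parse tree described above.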

Features –
 High level specification
 Hides implementation details
 Explicit order of evaluation is not
specified
Types of attributes – There are two types of attributes:

1. Inherited Attributes.
2. Synthesized Attributes – These attributes derive their values from their children nodes, i.e. the value of a synthesized attribute at a node is computed from the values of the attributes at its children in the parse tree.

Example:
E --> E1 + T { E.val = E1.val + T.val }
Here, E.val derives its value from E1.val and T.val.

Computation of Synthesized Attributes –
 Write the SDD using appropriate semantic rules for each production in the given grammar.
 The annotated parse tree is generated and attribute values are computed in a bottom-up manner.
 The value obtained at the root node is the final output.

Example: Consider the following grammar
S --> E
E --> E1 + T
E --> T
T --> T1 * F
T --> F
F --> digit

Q. Explain principal sources of code optimization.

Code optimization is used to improve the intermediate code so that the output of the program runs faster and takes less space. It removes unnecessary lines of code and rearranges the sequence of statements in order to speed up program execution without wasting resources. It tries to make the code consume fewer resources (i.e. CPU, memory) and deliver higher speed.

A transformation of a program is called local if it can be performed by looking only at the statements in a basic block; otherwise, it is called global. Many transformations can be performed at both the local and global levels. Local transformations are usually performed first.

There are a number of ways in which a compiler can improve a program without changing the function it computes. The principal sources of code optimization are:

1. Common-Subexpression Elimination: A common subexpression need not be computed over and over again. Instead, it can be computed once, kept in store, and referenced from there when it is encountered again. For example, in

a = b * c + g;
d = b * c * e;

the subexpression b * c can be computed once into a temporary and reused in both statements.
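A local common-subexpression-elimination pass can be sketched over three-address statements. This toy pass (the function name eliminate_cse and the tuple encoding are assumptions, not a real compiler API) caches the first computation of each right-hand side; it deliberately ignores reassignment of operands, since the straight-line example uses distinct destinations:

```python
# Toy local CSE pass over three-address statements (dest, op, arg1, arg2).
# The first occurrence of each (op, arg1, arg2) is kept; later identical
# ones are replaced by a copy from the variable already holding the value.
def eliminate_cse(stmts):
    seen = {}      # (op, arg1, arg2) -> variable holding the result
    out = []
    for dest, op, a1, a2 in stmts:
        key = (op, a1, a2)
        if key in seen:
            out.append((dest, "copy", seen[key], None))  # reuse earlier result
        else:
            seen[key] = dest
            out.append((dest, op, a1, a2))
    return out

code = [
    ("t1", "*", "b", "c"),
    ("a",  "+", "t1", "g"),
    ("t2", "*", "b", "c"),   # common subexpression: b * c computed again
    ("d",  "*", "t2", "e"),
]
print(eliminate_cse(code))   # t2 becomes a copy of t1
```

A production pass would also invalidate cached expressions whenever one of their operands is redefined.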

2. Copy Propagation: Assignments of the form f := g are called copy statements, or copies for short. The idea behind the copy-propagation transformation is to use g for f wherever possible after the copy statement f := g. Copy propagation means using one variable instead of another. This may not appear to be an improvement by itself, but it gives us an opportunity to eliminate the copied variable. For example:

x = Pi;
A = x * r * r;

The optimization using copy propagation can be done as follows:

A = Pi * r * r;

Here the variable x is eliminated.

3. Dead Code Elimination: Dead code may be a variable or the result of some expression computed by the programmer that has no further use. By eliminating these useless pieces from the code, the code gets optimized.

4. Constant Folding: Deducing at compile time that the value of an expression is a constant, and using the constant instead, is known as constant folding. Code that could be simplified by the user is simplified by the compiler. For example:

Initial code:
x = 2 * 3;

Optimized code:
x = 6;

5. Loop Optimizations: Programs tend to spend the bulk of their time in loops, especially inner loops. The running time of a program may be improved if the number of instructions in an inner loop is decreased, even if we increase the amount of code outside that loop. Some loop optimization techniques are:

i) Frequency Reduction (Code Motion): In frequency reduction, the amount of code inside the loop is decreased. A statement or expression that can be moved outside the loop body without affecting the semantics of the program is moved outside the loop.

ii) Induction-Variable Elimination: applied to replace induction variables in inner loops.

iii) Reduction in Strength: The strength of certain operators is higher than that of others. For example, the strength of * is higher than that of +. Usually, the compiler takes more time for higher-strength operators, and execution speed is lower. Replacing a higher-strength operator with a lower-strength operator is called strength reduction.
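Strength reduction can be illustrated with a short before/after sketch. The function names and the array-address computation are made up for illustration; the point is that the multiplication in the loop is replaced by a running addition that produces the same values:

```python
# Strength reduction: replace a per-iteration multiplication with addition.
def addresses_mul(n, width):
    # before: one multiplication per iteration (i * width)
    return [i * width for i in range(n)]

def addresses_add(n, width):
    # after: the product is maintained incrementally with '+' only
    out, acc = [], 0
    for _ in range(n):
        out.append(acc)
        acc += width
    return out

print(addresses_mul(5, 4))  # [0, 4, 8, 12, 16]
print(addresses_add(5, 4))  # same sequence, computed with additions
```

Compilers apply this transformation to induction variables, e.g. replacing i * 4 inside a loop with a pointer or offset that is incremented by 4 each iteration.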
