Q-1 Describe Data Structures For Symbol Table. Ans
During compilation, names are added to the symbol table in the order in which they are encountered in the program, together with information about each name. If new information about an existing name is discovered, it is added to that name's entry. In designing a symbol table mechanism, we therefore want a scheme that allows us to add new entries and find existing entries in the table efficiently. Three common organizations are:
1. Linear list
2. Binary tree
3. Hash table
i) Linear list
New names are added to the list in the order in which they are encountered. To insert a new name, we must first scan down the list to make sure it is not already there; if it is absent we add it, otherwise we report an error such as "multiply declared name". When a name is located, its associated information is found in the words that follow it. To retrieve information about a name, we search from the beginning of the array up to the position marked by the AVAILABLE pointer, which indicates the beginning of the empty portion of the array.
NAME 1      Attribute 1
NAME 2      Attribute 2
  :             :
NAME n      Attribute n
AVAILABLE (pointer to the first free position)
To find data about a NAME, we search N/2 names on average, so the cost of an enquiry is proportional to N. One advantage of the list organization is that it takes the minimum possible space in a simple compiler.
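The list organization above can be sketched as follows (a minimal sketch; the class and method names, and the single string attribute field, are illustrative, not from the text):

```cpp
#include <string>
#include <vector>

// One record per name: the lexeme plus its attribute(s).
struct Entry {
    std::string name;
    std::string attribute;   // e.g. type information
};

// Linear-list symbol table: the AVAILABLE position is simply table.size().
class LinearSymbolTable {
    std::vector<Entry> table;
public:
    // Scan the whole list first; reject a multiply declared name.
    bool insert(const std::string& name, const std::string& attr) {
        for (const Entry& e : table)
            if (e.name == name) return false;   // multiply declared
        table.push_back({name, attr});
        return true;
    }
    // Search from the beginning up to the AVAILABLE position.
    const Entry* lookup(const std::string& name) const {
        for (const Entry& e : table)
            if (e.name == name) return &e;
        return nullptr;                         // not found
    }
};
```

Both operations scan the list, which is why the cost of an enquiry grows linearly with the number of names.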
ii) Trees
This is a more efficient approach to symbol table organization. Here we add two link fields, LEFT and RIGHT, to each record.
The following algorithm looks up NAME in a binary search tree, where p is initially a pointer to the root:
1. while p ≠ null do
2.     if NAME = NAME(p) then return p /* found */
3.     else if NAME < NAME(p) then p := LEFT(p)
4.     else p := RIGHT(p)
5. return null /* NAME is not in the tree */
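The same lookup, together with ordered insertion, can be sketched in C++ (an illustrative sketch; the record fields mirror the LEFT and RIGHT links described above, and the function names are ours):

```cpp
#include <memory>
#include <string>

// One tree record with the LEFT and RIGHT link fields.
struct Node {
    std::string name;
    std::string attribute;
    std::unique_ptr<Node> left, right;
};

// Insert NAME into the tree rooted at p, keeping names in order.
void insertName(std::unique_ptr<Node>& p, const std::string& name,
                const std::string& attr) {
    if (!p) {
        p = std::make_unique<Node>();
        p->name = name;
        p->attribute = attr;
    } else if (name < p->name) {
        insertName(p->left, name, attr);
    } else if (name > p->name) {
        insertName(p->right, name, attr);
    }
    // name == p->name: already declared; keep the existing entry
}

// Look for NAME, starting with p pointing to the root.
const Node* lookupName(const Node* p, const std::string& name) {
    while (p != nullptr) {
        if (name == p->name) return p;                    // found
        p = (name < p->name) ? p->left.get() : p->right.get();
    }
    return nullptr;                                       // not in the tree
}
```

If the tree stays balanced, each comparison discards half of the remaining names, so lookup takes time proportional to log N rather than N.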
iii) Hash table
Hashing makes searching fast, near constant time on average, and hence it is the organization most often implemented in compilers.
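A hashed symbol table can be sketched with the standard library's `std::unordered_map` (an illustrative sketch, not from the text):

```cpp
#include <string>
#include <unordered_map>

// Hash-table symbol table: the bucket is computed from the name
// itself, so insert and lookup take near-constant time on average.
class HashSymbolTable {
    std::unordered_map<std::string, std::string> table;  // name -> attribute
public:
    bool insert(const std::string& name, const std::string& attr) {
        // insert() reports failure for a multiply declared name
        return table.insert({name, attr}).second;
    }
    const std::string* lookup(const std::string& name) const {
        auto it = table.find(name);
        return it == table.end() ? nullptr : &it->second;
    }
};
```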
In postfix notation, the operator comes after its operands, i.e., the operator follows the operands.
Example: the infix expression a + b * c is written a b c * + in postfix.
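Because each operator follows its operands, postfix expressions can be evaluated with a single stack: push operands, and on an operator pop two, apply it, and push the result. A sketch for single-digit operands (the function name is illustrative):

```cpp
#include <cctype>
#include <stack>
#include <string>

// Evaluate a postfix string of single-digit operands, e.g. "23*4+".
int evalPostfix(const std::string& expr) {
    std::stack<int> st;
    for (char c : expr) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            st.push(c - '0');            // operand: push its value
        } else {
            int rhs = st.top(); st.pop();  // operator: pop two operands
            int lhs = st.top(); st.pop();
            switch (c) {
                case '+': st.push(lhs + rhs); break;
                case '-': st.push(lhs - rhs); break;
                case '*': st.push(lhs * rhs); break;
                case '/': st.push(lhs / rhs); break;
            }
        }
    }
    return st.top();                     // final result
}
```

For example, `evalPostfix("23*4+")` evaluates 2 * 3 and then adds 4.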
Syntax Tree
A tree in which each leaf node represents an operand and each interior node an operator. The syntax tree is a condensed form of the parse tree.
Three-address code can be implemented in three ways, which are as follows: quadruples, triples and indirect triples.
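As an illustration of the quadruple form (the struct and field names here are ours), the expression a + b * c breaks down into the statements t1 = b * c and t2 = a + t1, each holding an operator, two arguments and a result:

```cpp
#include <string>
#include <vector>

// A quadruple holds an operator, up to two argument fields,
// and an explicit result field.
struct Quad {
    std::string op, arg1, arg2, result;
};

// Three-address code for the expression a + b * c:
//   t1 = b * c
//   t2 = a + t1
std::vector<Quad> codeForExpression() {
    return {
        {"*", "b", "c",  "t1"},
        {"+", "a", "t1", "t2"},
    };
}
```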
Q-3 Explain lexical, syntax and semantic phase errors and their recovery in detail.
Lexical phase errors
These errors are detected during the lexical analysis phase. Typical
lexical errors are:
1. Exceeding the maximum length of an identifier or numeric constant.
2. The appearance of illegal characters.
3. Unterminated strings (e.g. a missing closing quote).
Error recovery for lexical errors:
Panic Mode Recovery
In this method, successive characters are removed from the input one at a time until a token from a designated set of synchronizing tokens is found. Synchronizing tokens are delimiters such as ; or }.
The advantage is that it is easy to implement and is guaranteed not to go into an infinite loop.
The disadvantage is that a considerable amount of input is skipped without being checked for additional errors.
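Panic-mode skipping can be sketched as follows (an illustrative sketch; the synchronizing set here is just ; and }, and the function name is ours):

```cpp
#include <cstddef>
#include <string>

// Skip characters starting at pos until a synchronizing token
// (';' or '}') is found; return the index of that token, or the
// string's length if none remains. The position always advances,
// so the loop cannot run forever.
std::size_t skipToSync(const std::string& input, std::size_t pos) {
    while (pos < input.size() && input[pos] != ';' && input[pos] != '}')
        ++pos;                // discard one character at a time
    return pos;
}
```

Everything between the error and the synchronizing token is discarded unexamined, which is exactly the disadvantage noted above.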
Syntactic phase errors:
These errors are detected during the syntax analysis phase. Typical syntax
errors are:
Errors in structure
Missing operator
Misspelled keywords
Unbalanced parentheses
Example : swich(ch)
{
.......
.......
}
Here, we did the work of the function addtwonum inside the subtract function itself. This is function inlining.
5. Function Cloning
Specialized versions of a function are constructed for different calling arguments. Function overloading is an example of this. We can understand it with the following snippet:
void solve(int a){
    ...
}
void solve(int a, int b){
    ...
}
void solve(int a, float b, long c){
    ...
}
We can see that the function's name is the same (solve), but one of the three versions will be called according to the parameters passed to it.
6. Partial Redundancy
An expression is fully redundant if it is computed more than once along every path, with no change to its operands in between. A partially redundant expression, on the other hand, is recomputed along some paths but not all. By employing a code-motion approach, loop-invariant code can be treated as partially redundant and eliminated.
An example of partially redundant code (y OP z is recomputed after the if, but only the true branch already computed it):
if (condition) {
    a = y OP z;
} else {
    ...
}
c = y OP z;
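After partial-redundancy elimination, the computation is inserted on the path where it was missing, so every path computes it exactly once and the later occurrence just reuses the saved value. A concrete sketch with + standing in for OP (the function name and variables are illustrative):

```cpp
// Before the transformation, y + z is computed on the if-path and
// again after the branch, making it partially redundant. After the
// transformation, each path computes it once into t, and the final
// use reads t instead of recomputing y + z.
int after_pre(bool condition, int y, int z) {
    int t;
    int a = 0;
    if (condition) {
        t = y + z;      // original occurrence
        a = t;
    } else {
        t = y + z;      // inserted copy completes this path
    }
    int c = t;          // reuses t; no recomputation of y + z
    return a + c;
}
```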
2. Heap Allocation
Heap allocation is used where stack allocation falls short: if we want to retain the values of local variables after an activation record ends, the stack's LIFO scheme of allocation and de-allocation cannot do it. The heap is the most flexible storage allocation strategy: variables can be dynamically allocated and de-allocated at run-time, whenever and however the program requires. C, C++, Python, and Java all support heap allocation.
For example:
int* ans = new int[5];
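A sketch of why heap storage is needed: the array below outlives the call that created it, which a stack-allocated local could not do (the function name is illustrative; storage from new[] must eventually be released with delete[]):

```cpp
// The heap-allocated array survives after makeArray's activation
// record is popped, so the caller can still use it.
int* makeArray(int n) {
    int* ans = new int[n];      // allocated on the heap, not the stack
    for (int i = 0; i < n; ++i)
        ans[i] = i * i;
    return ans;                 // still valid after the call ends
}
```

The caller is responsible for calling delete[] on the returned pointer when the storage is no longer needed.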
3. Stack Allocation
Stack allocation is a form of dynamic allocation, meaning memory is allocated at run-time. The stack is a data structure that follows the LIFO principle: activation records are pushed as activations begin and popped as they end. Local variables are bound to fresh storage each time an activation begins, because storage is allocated at run-time on every procedure or function call. When the activation record is popped, the values of its local variables are lost because the storage allocated for the record is reclaimed. C and C++ both support stack allocation.
For example:
void sum(int a, int b){
    int ans = a + b;  // when sum is called, memory is allotted for ans
    cout << ans;
}