MCS-202 Computer Organization



UNIT 2 DATA REPRESENTATION


Structure
2.0 Introduction
2.1 Objectives
2.2 Data Representation in Computer
2.3 Representation of Characters
2.4 Number Systems
2.5 Negative Number Representation Using Complements
2.5.1 Fixed Point Representation
2.5.2 Binary Arithmetic using Complement notation
2.5.3 Decimal Fixed Point Representation
2.6 Floating Point Representation
2.7 Error Detection and Correction Codes
2.8 Summary
2.9 Solutions/ Answers

2.0 INTRODUCTION
In the first Unit of this Block, you have learnt the concepts relating to different
architectures of a computer system. That Unit also explained the process of execution of an
instruction, highlighting the use of the various components of a computer system. This
Unit explains how data is represented in a computer system.
The Unit first briefly defines the concept of number systems, which is followed by a
discussion on the conversion of numbers between different number systems. An important
concept, the signed complement notation, which is used for arithmetic operations on
binary numbers, is also explained in this Unit. This is followed by a discussion of the
fixed point and floating point numbers, which are used to represent numerical data
in computer systems. This Unit also explains the error detection and correction codes
and introduces you to the basics of computer arithmetic operations.

2.1 OBJECTIVES
At the end of this unit, you will be able to:

• Represent numeric data using binary, octal and hexadecimal numbers;
• Perform number conversions among various number bases;
• Define the ASCII and UNICODE representations for a character set;
• Explain the fixed and floating point number formats;
• Perform arithmetic operations using fixed and floating point numbers; and
• Explain error detection and correction codes.

2.2 DATA REPRESENTATION IN COMPUTER


A computer system is an electronic device that processes data. The electronic circuits of
such a device, in general, have two stable states, represented as 0 and 1. Therefore, the basic
unit of data on a computer is called a Binary Digit or a Bit. With the advances in
quantum computing technology a new basic unit called the Qubit has emerged, which
also represents 0 and 1, but with the difference that it can also represent both states
at the same time. The concepts of quantum computing are beyond the scope of this
unit.

A computer performs three basic operations on data, viz. data input, processing and
data output. The data input and information output, in general, are presented in text,
graphics, audio or other human-recognizable form. Therefore, all human-readable
characters, graphics, audio and video should be coded using bits so that the computer is
able to interpret them. The most common codes used to represent characters in a computer
are ASCII and UNICODE. Pictures and graphs can be represented using pixels (picture
elements), while digital sound and video are represented by coding the frames in digital
formats. Since graphics, digital audio and digital video, which are stored on storage
devices as files, are very large in size, a large number of storage formats
that use data compression techniques are used to represent digital information. Some
of these concepts are explained in Unit 8.
Numeric data is used for computation in a computer. However, as a computer is an
electronic device, it can only process binary data; thus, in general, numeric data has to
be converted to binary for computation. Computers use fixed point and floating point
representations for numeric data. Data in a computer is stored in random
access memory (RAM) and has to be transferred in or out of the RAM for
processing; therefore, an error detection mechanism may be employed to
identify and correct simple errors during the transfer of binary data. The subsequent
sections of this Unit explain character representation, the representation of binary
numbers and error detection mechanisms.

2.3 REPRESENTATION OF CHARACTERS


A character can be represented in a computer using a binary code. This code should be the
same across different types of computers; otherwise, the information from one computer will
not be transferable to other computers. Thus, there is a need for a standard for character
representation. A coding standard has to address two basic issues, viz. the length of the
code, and the organisation of different types of characters, which include the printable
character sets of different languages and special characters. Two important character
representation schemes are ASCII and UNICODE, which are discussed next.

American Standard Code for Information Interchange (ASCII)


ASCII was among the first character encoding standards. ASCII is a 7-bit code; thus, it can
represent 2^7 = 128 characters. It represents printable characters - the English
alphabets (both lower case and upper case), decimal digits, special characters as
present on the present-day keyboard, certain graphical characters, etc. - and non-
printable control characters. ASCII was standardised by the American National Standards
Institute (ANSI), and the international standard ISO 646 is based on it.
However, as the basic unit of computer storage is 8, 16, 32 or 64 bits, ASCII
was extended to create an 8-bit code. This code can represent 2^8 = 256 characters;
most of the additional characters in the extended code are graphics characters. The
ISO 8859 family of standards defines extended ASCII character sets. It may be noted that
ASCII has many variants, which are based on the characters used in different countries.
In ASCII, the coding sequence of characters is very interesting. Simple binary
arithmetic operations can convert lower case characters to upper case.
For example, the character 'A' in ASCII is represented as the binary value 100 0001,
which is equivalent to the value 65 in decimal notation, whereas the character 'a' is
stored as the binary value 110 0001, which is 97 in decimal. Thus, conversion from
lower case to upper case and vice versa may be performed by subtracting or adding 32,
as the case may be.
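As a small illustration, the following Python sketch (not part of the original text; the helper names are my own) performs this case conversion using the fixed offset of 32:

# Convert single ASCII letters between cases using the fixed offset of 32.
def to_upper(ch):
    return chr(ord(ch) - 32) if 'a' <= ch <= 'z' else ch   # 'a' (97) -> 'A' (65)

def to_lower(ch):
    return chr(ord(ch) + 32) if 'A' <= ch <= 'Z' else ch   # 'A' (65) -> 'a' (97)

print(ord('A'), ord('a'))            # 65 97
print(to_upper('a'), to_lower('A'))  # A a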
ISO 8859, which is based on extended ASCII, is a good representation; however, all
languages cannot be represented using ASCII, as its code length is too small. Therefore, a
new standard that could represent almost all the characters of all languages was
developed. This is called UNICODE.
Unicode
Unicode is a standard for character representation, which provides a unique code, also
called a code point, for every character of almost all the languages of the world. The set
of all the codes is called the code space. The code space is divided into 17 contiguous
sequences of codes called code planes, with each code plane representing 2^16 codes.
Thus, Unicode code points range from U+0000 to U+10FFFF, where U+ indicates a Unicode
code point followed by its hexadecimal value. The code planes of the Unicode are
U+0000 to U+FFFF; U+10000 to U+1FFFF; U+20000 to U+2FFFF; …; U+F0000 to U+FFFFF; and
U+100000 to U+10FFFF. You can learn more about Unicode from the further readings. Also
read the hexadecimal number system given in the next section to understand the hexadecimal
values given above.
One of the major advantages of using Unicode is that it allows seamless digital data
transfer among the applications that use this character encoding, thus avoiding
compatibility problems.
A Unicode code point may require up to 21 binary digits; however, all of these bits may not
be required for a given set of data. In addition, a digital system requires data in units of
bytes. Thus, a number of encodings have been designed to represent Unicode code points in a
digital format. Two of these popular encodings - Unicode Transformation Formats - are
UTF-8 and UTF-16. UTF-8 uses 1 to 4 bytes to represent the code points of Unicode; the
1-byte UTF-8 codes are identical to ASCII. UTF-16 represents code points as one or two
16-bit code units. The standard ISO 10646 describes the Unicode character set and its
coding formats.
In general, if you are working with web pages containing mostly English text, UTF-8 may be
a good choice of character representation. However, if you are creating a multi-lingual
web page, it may be a good idea to use UTF-16.
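As a small illustration, the following Python sketch (assuming a Python 3 interpreter; it is not part of the original unit) shows how the same characters occupy different numbers of bytes in UTF-8 and UTF-16:

# Encode characters in UTF-8 and UTF-16 and compare the number of bytes used.
for ch in ['A', '€', '😀']:           # ASCII letter, euro sign, emoji
    utf8 = ch.encode('utf-8')
    utf16 = ch.encode('utf-16-be')    # big-endian, without a byte-order mark
    print(f"U+{ord(ch):04X}  UTF-8: {utf8.hex()} ({len(utf8)} bytes)  "
          f"UTF-16: {utf16.hex()} ({len(utf16)} bytes)")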
Indian Standard Code for Information Interchange (ISCII)
ISCII is an ASCII-compatible eight-bit code. The codes for the values 0 to 127 in ISCII are
the same as in ASCII; however, the values in the range 128 to 255 are used for the
characters of Indian scripts. The BIS standard IS 13194:1991 defines the details of
ISCII. However, with the popularity of Unicode, its use is now limited.

2.4 NUMBER SYSTEMS


A number system is used to represent quantitative information. This section
discusses the binary, octal and hexadecimal number systems.
Formally, a number system is defined by a base or radix, which is equal to the number of
distinct digits used by that system, and by the place value associated with the position of
a digit in a number. For example, the decimal number system has a base of 10. It consists
of ten decimal digits, viz. 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9, and place values as shown in
the following example:
9×10^4 + 8×10^3 + 7×10^2 + 6×10^1 + 5×10^0 + 4×10^-1 + 3×10^-2 + 2×10^-3 + 1×10^-4
= 98765.4321

Binary Numbers: A binary number system has a base of 2 and consists of only two
digits, 0 and 1, which are also called bits. For example, 1001₂ represents a binary
number with four binary digits. The subscript 2 indicates that the number 1001 has a
base of 2, or, in other words, is a binary number.

29
Introduction to Digital Note: The subscript shown in the numbers represents the base of the number. In case a
Circuits
subscript is not given then please assume it as per the context of discussion.
Conversion of binary number to Decimal equivalent:
A binary number is converted to its decimal equivalent by multiplying each binary
digit by its place value. For example, a seven digit binary number 10010012 can be
converted to decimal equivalent value as follows:
Binary Digits of Number 1 0 0 1 0 0 1
The place value 26 25 24 23 22 21 20
=64 =32 =16 =8 =4 =2 =1
Binary digit × Place value 1×64 0×32 0×16 1×8 0×4 0×2 1×1
Computed values 64 0 0 8 0 0 1
Sum of the computed values 64+0+0+8+0+0+1 = 73 in Decimal

You may now try converting few more numbers. Try 0010001, which will be
16+1=17; 1111111 will be 64+32+16+8+4+2+1=127. So a 7 bit binary number can
contain decimal values from 0 to 127.

Octal Numbers: An octal number system has a base of 8; therefore, it has eight
digits, which are 0, 1, 2, 3, 4, 5, 6 and 7. For example, 76543210₈ is an octal number.
Conversion of an octal number to its decimal equivalent:
An octal number is converted to its decimal equivalent by multiplying each octal digit
by its place value. For example, the octal number 5432₈ can be converted to its decimal
equivalent as follows:

Octal digits of the number       5       4      3     2
Place value                      8^3     8^2    8^1   8^0
                                 =512    =64    =8    =1
Octal digit × place value        5×512   4×64   3×8   2×1
Computed values                  2560    256    24    2
Sum of the computed values       2560+256+24+2 = 2842₁₀

Hexadecimal Numbers: A hexadecimal number system has a base of 16; therefore,
it uses sixteen digits, which are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A(10), B(11), C(12), D(13),
E(14) and F(15). For example, FDA9₁₆ is a hexadecimal number.

Conversion of a hexadecimal number to its decimal equivalent:

A hexadecimal number is converted to its decimal equivalent by multiplying each
hexadecimal digit by its place value. For example, the hexadecimal number 13AF₁₆ can
be converted to its decimal equivalent as follows:

Hexadecimal digits of the number     1        3       A=10    F=15
Place value                          16^3     16^2    16^1    16^0
                                     =4096    =256    =16     =1
Hexadecimal digit × place value      1×4096   3×256   10×16   15×1
Computed values                      4096     768     160     15
Sum of the computed values           4096+768+160+15 = 5039₁₀
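These conversions can be checked quickly in Python, whose built-in int() accepts a base argument (a small verification aid, not part of the original text):

# Convert digit strings written in base 2, 8 and 16 to their decimal values.
print(int('1001001', 2))   # 73
print(int('5432', 8))      # 2842
print(int('13AF', 16))     # 5039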

Conversion of Decimal to Binary: A decimal number can consist of an integer part and a
fractional part. Both are converted to binary separately.
Process:
For the integer part: Repeatedly divide the integer part by 2, keeping the
remainders separate, till the quotient becomes 0. Collect all the remainders, from the last
remainder to the first, to make the equivalent binary number.
For the fractional part: Repeatedly multiply the fraction by 2 and maintain the list of
integer digits that are obtained, till the fraction becomes 0. Collect all the integer
digits in the order they were obtained.
The following example explains the process of decimal to binary conversion.
Example 1: Convert the decimal number 22.25 to binary number.
Solution:
For the integer part (22): repeatedly divide by 2, keeping the remainders separate, till the
quotient is 0.

Integer part    Quotient after division by 2    Remainder
22              11                              0
11              5                               1
5               2                               1
2               1                               0
1               0                               1
The quotient is now 0, so STOP. Reading the remainders from last to first gives 10110.

For the fractional part (.25): repeatedly multiply by 2 and record the integer part of each
result, till the fraction becomes 0.

Fraction part   After multiplication by 2    Integer part of the result
.25             0.50                         0
.50             1.00                         1
The fraction is now .00, so STOP. Reading the integer parts from first to last gives 01.

Therefore, 22.25₁₀ in binary is 10110.01₂

Verification

Binary digits of the number    1      0     1     1     0     .   0       1
Place value                    2^4    2^3   2^2   2^1   2^0   .   2^-1    2^-2
                               =16    =8    =4    =2    =1        =1/2    =1/4
Digit × place value            1×16   0×8   1×4   1×2   0×1       0×0.5   1×0.25
Computed values                16     0     4     2     0         0       0.25
Sum of the computed values     16+4+2+0.25 = 22.25₁₀
The method shown above is the standard technique, and you should start using it for various
problems. However, you may also use the following simpler technique for converting the
integer part of a decimal number to binary.
Conversion from Decimal to Binary - a simpler process:
Assume a decimal number N is to be converted to binary. Now perform the following
steps:
1. If the decimal number N is equal to a binary place value, then assign that place value
   to P and move to step 3.
2. Else, find the binary place value which is just lower than the decimal number N.
   Assign this place value to P. For example, for the number 73 the just lower place value
   is 64, as 2^6 = 64 and 2^7 = 128.
3. Put 1 in the position of P and subtract the place value P from N.
4. If (N-P) ≠ 0, then repeat steps 1 to 3 taking the new N = N-P.
5. Put 0 in all the remaining places, where 1 has not been put.
The following example demonstrates this technique.
Example 2: Convert decimal numbers 73, 39 and 20 into binary using the method as
above.
The following working shows the process of the conversion.

Place values: 64  32  16  8  4  2  1   (i.e. 2^6 to 2^0)

N = 73: 128 > 73 > 64, therefore P = 64; put 1 under 64; new N = 73-64 = 9 (not 0, so repeat)
        16 > 9 > 8, so P = 8; put 1 under 8; new N = 9-8 = 1
        1 is a place value, so P = 1; put 1 under 1; new N = 1-1 = 0 (stop)
        Put 0 in all the remaining places.
Result: 1 0 0 1 0 0 1

N = 39: 64 > 39 > 32, so P = 32; put 1 under 32; new N = 39-32 = 7
        8 > 7 > 4, so P = 4; put 1 under 4; new N = 7-4 = 3
        4 > 3 > 2, so P = 2; put 1 under 2; new N = 3-2 = 1
        1 is a place value, so P = 1; put 1 under 1; new N = 0 (stop)
Result: 0 1 0 0 1 1 1

N = 20: 32 > 20 > 16, so P = 16; put 1 under 16; new N = 20-16 = 4
        4 is a place value, so P = 4; put 1 under 4; new N = 0 (stop)
Result: 0 0 1 0 1 0 0

The logic presented here can be extended to the fractional part; however, it is
recommended that you follow the repeated multiplication method explained earlier for
fractions. A small program sketch of the repeated division and multiplication method is
given below.
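The following Python sketch (an illustration only; the function name is my own) implements the repeated-division method for the integer part and the repeated-multiplication method for the fractional part:

def decimal_to_binary(value, max_fraction_bits=16):
    # Convert a non-negative decimal number to a binary string.
    integer_part = int(value)
    fraction = value - integer_part

    # Integer part: repeatedly divide by 2 and collect remainders (last to first).
    int_bits = '0' if integer_part == 0 else ''
    while integer_part > 0:
        int_bits = str(integer_part % 2) + int_bits
        integer_part //= 2

    # Fractional part: repeatedly multiply by 2 and collect the integer digits.
    frac_bits = ''
    while fraction > 0 and len(frac_bits) < max_fraction_bits:
        fraction *= 2
        frac_bits += str(int(fraction))
        fraction -= int(fraction)

    return int_bits + ('.' + frac_bits if frac_bits else '')

print(decimal_to_binary(22.25))   # 10110.01
print(decimal_to_binary(73))      # 1001001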
Conversion of a Binary number to an Octal Number
The base of a binary number is 2 and the base of an octal number is 8. Interestingly,
2^3 = 8. Thus, if you simply group the binary digits in threes, the value of each group forms
an octal digit. However, you may be wondering how to group the binary digits. This is
explained with the help of the following example.
Example 3: Convert the binary number 11001101.00111₂ into the equivalent octal number.
Process: The process is to group the binary digits in threes. The grouping before the binary
point is done from right to left, and after the binary point from left to right. Each group
is then converted to the equivalent octal digit. The following table shows this
conversion process.

Binary number                 - 1 1     0 0 1     1 0 1     .   0 0 1     1 1 -
Grouped (- replaced by 0)     0 1 1     0 0 1     1 0 1     .   0 0 1     1 1 0
Binary place values           4 2 1     4 2 1     4 2 1     .   4 2 1     4 2 1
Equivalent octal digit        0+2+1=3   0+0+1=1   4+0+1=5   .   0+0+1=1   4+2+0=6
Octal number                  3         1         5         .   1         6

Therefore, 11001101.00111₂ is equivalent to 315.16₈

Conversion of a Binary number to a Hexadecimal Number

The base of a binary number is 2 and the base of a hexadecimal number is 16. You may
notice that 2^4 = 16. Therefore, conversion of binary to hexadecimal notation requires
grouping of 4 binary digits. This is explained with the help of the following example.
Example 4: Convert the binary number 11001101.00111₂ into the equivalent hexadecimal
number.
Process: The process is almost similar to binary to octal conversion, except that now four
binary digits are grouped together, as given in the following table.

Binary number                 1 1 0 0      1 1 0 1      .   0 0 1 1     1 - - -
Grouped (- replaced by 0)     1 1 0 0      1 1 0 1      .   0 0 1 1     1 0 0 0
Binary place values           8 4 2 1      8 4 2 1      .   8 4 2 1     8 4 2 1
Hexadecimal digit value       8+4+0+0=12   8+4+0+1=13   .   0+0+2+1=3   8+0+0+0=8
Hexadecimal digit             12 is C      13 is D      .   3           8

Therefore, 11001101.00111₂ is equivalent to 315.16₈ and CD.38₁₆
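Python can verify the grouping of the integer part directly, since integers can be printed in octal and hexadecimal form (a quick check; the fractional part has to be grouped by hand as above):

# Verify the conversion of the integer part 11001101 to octal and hexadecimal.
n = int('11001101', 2)
print(oct(n), hex(n))   # 0o315 0xcd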


As the computer is a binary device, all the numbers of the different number systems may be
represented in binary form. This is shown in the following table.

Decimal   Binary Coded     Octal    Binary-coded   Hexadecimal   Binary-coded
Number    Decimal (BCD)    Number   Octal          Number        Hexadecimal
0         0000             0        000            0             0000
1         0001             1        001            1             0001
2         0010             2        010            2             0010
3         0011             3        011            3             0011
4         0100             4        100            4             0100
5         0101             5        101            5             0101
6         0110             6        110            6             0110
7         0111             7        111            7             0111
8         1000             10       001 000        8             1000
9         1001             11       001 001        9             1001
10        0001 0000        12       001 010        A             1010
11        0001 0001        13       001 011        B             1011
12        0001 0010        14       001 100        C             1100
13        0001 0011        15       001 101        D             1101
14        0001 0100        16       001 110        E             1110
15        0001 0101        17       001 111        F             1111
16        0001 0110        20       010 000        10            0001 0000
17        0001 0111        21       010 001        11            0001 0001
…         …                …        …              …             …
49        0100 1001        61       110 001        31            0011 0001
…         …                …        …              …             …
63        0110 0011        77       111 111        3F            0011 1111

Table 1: Decimal, Octal and Hexadecimal Numbers and their binary codings

33
Introduction to Digital Please note the following points in the Table 1 given above.
Circuits
 The Binary coded decimal (BCD) is the representation of each decimal digit
to a sequence of 4 bits. For example, a decimal number 12 in BCD is 0001
0010. This representation is used in several calculators for performing
computation.
 It may be noted that BCD is not binary equivalent value. For example, the
BCD value of decimal 49 is 0100 1001 but its binary equivalent value is 0011
0001.

 Please also note that binary coded hexadecimal values are equivalent to binary
value of a number. For example, decimal value 63 in hexadecimal binary
notation is 0011 1111, which is same as its binary value.
The conversion of decimal to octal and hexadecimal may be performed in the same
way as done using repeated division or multiplication of binary. The process is exactly
same except, in decimal number to octal or hexadecimal number conversion division
is done by 8 or 16 respectively.
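The following Python sketch (illustrative only; the helper name is my own) converts the integer part of a decimal number to any base up to 16 by repeated division:

DIGITS = '0123456789ABCDEF'

def decimal_to_base(n, base):
    # Convert a non-negative integer to the given base by repeated division.
    if n == 0:
        return '0'
    digits = ''
    while n > 0:
        digits = DIGITS[n % base] + digits   # the remainder becomes the next digit
        n //= base
    return digits

print(decimal_to_base(5039, 16))   # 13AF
print(decimal_to_base(2842, 8))    # 5432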
Check Your Progress 1
1) Perform the following conversions:
i) 11100.01101₂ to Octal and Hexadecimal
ii) 1101101010₂ to Octal and Hexadecimal
.........................................................................................................................................
.........................................................................................................................................

.........................................................................................................................................

2) Convert the following numbers to binary.


i) 119₁₀
ii) 19.125₁₀
iii) 325₁₀
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................

3) Convert the numbers to hexadecimal and octal.


i) 119₁₀
ii) 19.125₁₀
iii) 325₁₀
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................

2.5 NEGATIVE NUMBER REPRESENTATION
USING COMPLEMENTS
You have gone through the details of the binary representation of character data and of the
number systems. In general, you use positive and negative integers and real numbers
for computation. How can these numbers be represented in binary? This section
describes how positive and negative numbers can be represented in binary for
performing arithmetic operations.
In general, integer numbers can be represented using the sign and magnitude of the
number, whereas real numbers may be represented using a sign, a decimal point and the
magnitudes of the integer and fractional parts. Real numbers can also be represented using
a scientific exponential notation. This section explains how integers can be
represented as binary numbers in a computer system.
Integer representation in binary:
An integer is represented in binary using a fixed number of binary digits. One of the
simplest representations of an integer would be to represent the sign using one bit and the
magnitude using the remaining bits. Fortunately, the sign can only be positive or negative;
therefore, it can easily be represented in binary: the + sign can be represented using 0 and
the – sign using 1. For example, the decimal number 73 has the sign + (bit value 0) and
magnitude 73 (binary equivalent 1001001). The following table shows some of the numbers
using this sign-magnitude representation:

Number    Sign Bit    Magnitude
+73       0           1 0 0 1 0 0 1
-73       1           1 0 0 1 0 0 1
+39       0           0 1 0 0 1 1 1
-39       1           0 1 0 0 1 1 1
+127      0           1 1 1 1 1 1 1
-127      1           1 1 1 1 1 1 1
+0        0           0 0 0 0 0 0 0
-0        1           0 0 0 0 0 0 0

Table 2: 8-bit Sign-Magnitude representation

Please note the following points about this 8-bit sign-magnitude representation:
• It represents numbers in the range +0 to +127 and -0 to -127. Therefore, the range of
  numbers that can be represented using these 8 bits is -127 to +127.
• It represents 255 different numbers, viz. -127 to -1, ±0 and +1 to +127.
• The number of bits used to represent the magnitude determines the maximum and minimum
  numbers that can be represented.
• There are two representations of zero: +0 and -0.
Is this representation suitable for representing numbers for computation? It has one
basic problem: the sequence of steps needed to perform arithmetic operations is not
straightforward. For example, to add +73 and -39, you first need to compare the
signs of the numbers; as the signs are different in this case, you should
subtract the smaller number from the bigger number and finally assign
the sign of the bigger number to the result.

Is there any better representation? Yes, an interesting representation that uses the
complement of a number to represent negative numbers has been designed. What is the
complement of a number?
Complement notation: A complement, by definition, is a number that makes a given
number complete. For decimal numbers, this completeness can be defined with
respect to the highest value of a digit, i.e. 9, or the next higher value, i.e. 10. These
are called the 9's and 10's complements, respectively, of decimal numbers.
For example, for the decimal digit 3, the 9's complement would be 9-3 = 6 and the 10's
complement would be 10-3 = 7.
In general, for a number with base B, two types of complements are defined - the (B-1)'s
complement and the B's complement. For example, for the decimal system the base value B
is 10; therefore, for decimal numbers two complements, viz. the 9's and 10's complements,
are defined. Similarly, for the binary system, where the base is 2, the two complements,
viz. the 1's complement and the 2's complement, are defined. The following example
illustrates the steps of finding the 9's and 10's complement of decimal numbers.
Example 5: Compute the 9’s complement and the 10’s complement of the four-digit
decimal numbers 1095, 8567 and 0560.
Solution: The following table shows the process:

Complement         Operation                              The Number
                   Number                                 1 0 9 5
9’s Complement     Subtract each digit from 9             8 9 0 4
10’s Complement    Add 1 to the 9’s complement                  + 1
                   It results in the 10’s complement      8 9 0 5
                   Number                                 8 5 6 7
9’s Complement     Subtract each digit from 9             1 4 3 2
10’s Complement    Add 1 to the 9’s complement                  + 1
                   It results in the 10’s complement      1 4 3 3
                   Number                                 0 5 6 0
9’s Complement     Subtract each digit from 9             9 4 3 9
10’s Complement    Add 1 to the 9’s complement                  + 1
                   It results in the 10’s complement      9 4 4 0

Table 3: Computation of 9’s and 10’s complement

Please note that the sum of a four-digit number and its 9’s complement is 9999, and the sum
of a number and its 10’s complement is 10000. The 9’s and 10’s complements can be used in a
computer system when BCD numbers are used instead of binary numbers. Similarly, the 1’s and
2’s complements can be computed for binary numbers. The following example demonstrates the
complement notation in binary.
Example 6: Compute the 1’s and 2’s complement of the binary numbers 1001₂, 1111₂ and
0000₂ using a representation that has four bits.
Solution: The following table shows the process:

Complement        Operation                              The Number
                  Number                                 1 0 0 1
1’s Complement    Subtract each digit from 1             0 1 1 0
2’s Complement    Add 1 to the 1’s complement                  + 1
                  It results in the 2’s complement       0 1 1 1
                  Number                                 1 1 1 1
1’s Complement    Subtract each digit from 1             0 0 0 0
2’s Complement    Add 1 to the 1’s complement                  + 1
                  It results in the 2’s complement       0 0 0 1
                  Number                                 0 0 0 0
1’s Complement    Subtract each digit from 1             1 1 1 1
2’s Complement    Add 1 to the 1’s complement                  + 1
                  It results in the 2’s complement       0 0 0 0
                  (there is a carry out of the last digit, which is ignored)

Table 4: Computation of 1’s and 2’s complement


Please note the following in the table above:
• Subtracting a binary digit from 1 results in a change of the bit from 0 to 1 or from
  1 to 0.
• When you add the binary digit 1 to 1, it results in a sum bit of 0 and a carry bit of 1.
• The 1’s complement of 0000 is 1111; when 1 is added to it, you get 10000 as the 2’s
  complement. Since only 4 binary digits are used in the notation above, the fifth digit,
  which is 1, is ignored while taking the complement.

An interesting observation from Table 4 is that the 1’s complement can be obtained
simply by changing 1 to 0 and 0 to 1. For obtaining the 2’s complement, leave all the
trailing zeros and the first 1 intact, and after that complement the remaining bits. For
example, for the eight-bit binary number 10101100, the complements can be obtained as
follows:

Number                                                          1 0 1 0 1 1 0 0
1’s Complement: change every bit from 0 to 1 or 1 to 0          0 1 0 1 0 0 1 1

Number                                                          1 0 1 0 1 1 0 0
2’s Complement: leave the trailing 0’s and the first 1
unchanged (here 1 0 0), then complement the remaining
bits (here 1 0 1 0 1 becomes 0 1 0 1 0)                         0 1 0 1 0 1 0 0

Table 5: Computation of 1's and 2's complement
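The following Python sketch (illustrative only; it works on bit strings rather than integers) computes the 1's and 2's complement of a fixed-width binary number:

def ones_complement(bits):
    # Flip every bit: 0 becomes 1 and 1 becomes 0.
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits):
    # Add 1 to the 1's complement, keeping the same fixed width (ignore the carry out).
    width = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (2 ** width)
    return format(value, f'0{width}b')

print(ones_complement('10101100'))   # 01010011
print(twos_complement('10101100'))   # 01010100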


But how are these complement notations used in a computer system to represent
integers? The next sub-section explains the integer representation used in computers.

2.5.1 Fixed Point Representation


A computer system uses registers or memory locations to store arithmetic data like
numbers. The numbers stored in these locations are of a fixed size, such as 8, 16, 32,
64 or 128 bits. Interestingly, the binary point is not represented in the numbers;
rather, its location is assumed. The fixed point number representation assumes that the
binary point is at the end of all the binary digits, and thus it can be used to represent
integers. Since integers include both positive and negative numbers, fixed point numbers
also use one bit as the sign bit, as shown in Table 2. Fixed point numbers
may use either signed-magnitude notation or complement notation. However, as
explained in the previous section, signed-magnitude notation is not a natural notation
for binary arithmetic, so the complement notation is used in computers. The complement
notation works well for digital binary numbers as they are of fixed length. For the
sake of simplicity, in this unit we will use a complement notation having a length of
8 bits.
For fixed point number representation, signed 1's complement and signed 2's
complement notations can be used. The signed complement notation is the same as the
complement notation introduced in the previous section, except that it uses a sign
bit in addition to the magnitude. In signed 1's and 2's complement notation a
positive number has the same magnitude as the binary number, with the sign bit as
zero; however, negative numbers are represented in complement form. The
following example explains the process of conversion of decimal numbers to signed
1's or signed 2's complement notation.
Example 7: Represent +73, -73, +39, -39, +127, -127 and 0 using signed 1's
complement notation.

Solution: Table 6 shows the values in signed 1's complement notation of length 8
bits (S is the sign bit). Please note that even in signed 1's complement notation there
are two representations for 0. The number range of this 8-bit 1's complement
representation is -127 to -0 and +0 to +127, so it can represent 2^8 - 1 = 255 numbers
(as there are two representations of 0).

Number   Process                                                        S   7 bits
+73      Sign is 0 (positive); the 7-bit magnitude is the binary
         equivalent value of 73                                         0   1 0 0 1 0 0 1
-73      Take the 1's complement of all 8 bits of +73 (including
         the sign bit) to obtain -73                                    1   0 1 1 0 1 1 0
+39      Follow the same process as stated for +73                      0   0 1 0 0 1 1 1
-39      Follow the same process as stated for -73                      1   1 0 1 1 0 0 0
+127     Follow the same process as stated for +73                      0   1 1 1 1 1 1 1
-127     Follow the same process as stated for -73                      1   0 0 0 0 0 0 0
+0       Follow the same process as stated for +73                      0   0 0 0 0 0 0 0
-0       Follow the same process as stated for -73                      1   1 1 1 1 1 1 1

Table 6: 8-bit Signed 1's complement notation


Example 8: Represent +73, -73, +39, -39, +127, -127, 0 and -128 using signed 2's
complement notation.

Solution: Table 7 shows the values in signed 2's complement notation of length 8
bits (S is the sign bit). Please note that in signed 2's complement notation there is a
unique representation for 0; therefore, -128 can also be represented. Thus, the range
of numbers that can be represented using 8-bit signed 2's complement notation is -128 to
+127, i.e. a total of 256 numbers.

Number   Process                                                        S   7 bits
+73      Sign is 0 (positive); the 7-bit magnitude is the binary
         equivalent value of 73                                         0   1 0 0 1 0 0 1
-73      Take the 2's complement of +73 (including the sign bit)
         to obtain -73                                                  1   0 1 1 0 1 1 1
+39      Follow the same process as stated for +73                      0   0 1 0 0 1 1 1
-39      Follow the same process as stated for -73                      1   1 0 1 1 0 0 1
+127     Follow the same process as stated for +73                      0   1 1 1 1 1 1 1
-127     Follow the same process as stated for -73                      1   0 0 0 0 0 0 1
0        Follow the same process as stated for +73                      0   0 0 0 0 0 0 0
-0       Follow the same process as stated for -73 (same result as +0)  0   0 0 0 0 0 0 0
-128     -127 - 1 = -128                                                1   0 0 0 0 0 0 0

Table 7: 8-bit Signed 2's complement notation


In general, a signed 2's complement representation of n bits can represent numbers in the
range -2^(n-1) to +(2^(n-1) - 1). Therefore, an 8-bit representation can represent numbers
in the range -2^7 to +(2^7 - 1), which is -128 to +127. For a 16-bit representation this
range will be -2^15 to +(2^15 - 1), which is -32768 to +32767. Please relate these ranges
to the ranges of the integer data types of programming languages like C. A small sketch
relating these numbers to their bit patterns is given below.
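The following Python sketch (illustrative only) prints the 8-bit signed 2's complement bit pattern of a few integers and applies the range check described above:

def to_signed_twos_complement(n, bits=8):
    # Return the fixed-width 2's complement bit pattern of an integer.
    low, high = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    if not (low <= n <= high):
        raise OverflowError(f"{n} is outside the range {low} to {high}")
    return format(n & (2 ** bits - 1), f'0{bits}b')   # masking yields the 2's complement pattern

for value in (73, -73, 127, -127, -128, 0):
    print(value, to_signed_twos_complement(value))    # e.g. -73 -> 10110111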
Signed 2's complement notation is one of the best notations for performing arithmetic on
numbers. Next, we explain the process of performing arithmetic using fixed point
numbers.
2.5.2 Binary Arithmetic using Complement notation
In this section, we discuss binary arithmetic using fixed point complement
notation.

Arithmetic addition: The arithmetic addition operation can be performed using any
of the signed-magnitude, signed 1's complement and signed 2's complement notations.

Addition using signed-magnitude notation:

The process of addition of two numbers using signed-magnitude notation requires
the following steps:
Step 1: Check whether the numbers have the same or different signs.
Step 2: If the signs are the same, then just add the two magnitudes; otherwise identify
        the number having the bigger magnitude (in case both numbers have the same
        magnitude, the first number may be taken as the bigger number) and
        subtract the smaller magnitude from the bigger magnitude.
Step 3: If the signs of the numbers are the same, then check whether the result exceeds the
        number of magnitude bits of the representation; if it does, report an overflow.
        For numbers with different signs, overflow cannot occur.
Step 4: The sign of the final number is the sign of either operand if the signs are the
        same, or the sign of the bigger number if the signs are different.
The following example explains the process of addition using signed-magnitude
notation. The example uses an 8-bit representation.

Example 9: Add the decimal numbers 75 and -80 using signed-magnitude notation,
assuming an 8-bit length of the notations.
Solution: The numbers are (the leftmost bit is the sign bit):

Number   Signed Magnitude    Signed 1's Complement    Signed 2's Complement
+75      0 1 0 0 1 0 1 1     0 1 0 0 1 0 1 1          0 1 0 0 1 0 1 1
+80      0 1 0 1 0 0 0 0     0 1 0 1 0 0 0 0          0 1 0 1 0 0 0 0
-80      1 1 0 1 0 0 0 0     1 0 1 0 1 1 1 1          1 0 1 1 0 0 0 0

Table 8: 8-bit representation of +75, +80 and -80

For the present example, the signs of the two numbers are different and the magnitude of -80
is higher than that of 75; therefore, 75 is subtracted from 80, as shown in the following
table, and the sign of -80, which is minus, is selected for the result. Please note that
during the subtraction borrows are required, just as in decimal subtraction. In binary,
when a bit takes a borrow it becomes the two-digit value 10, which is the value 2 in
decimal.

Number                                     S    Magnitude bits
-80                                        1    1     0     1     0     0     0     0
+75                                        0    1     0     0     1     0     1     1
Borrow taken from this bit?                     No    No    yes   yes   yes   yes   No
Minuend bit value after the borrows             1     0     0     1     1     1     10
Bit-wise subtraction (80 - 75)                  1-1   0-0   0-0   1-1   1-0   1-1   10-1
Result = -5 (sign of the bigger number)    1    0     0     0     0     1     0     1

Table 9: 8-bit addition using signed-magnitude notation


Example 10: Add the decimal numbers 75 and 80 using signed-magnitude notation,
assuming an 8-bit length of the notations.
Solution: The process of addition of +75 and +80 in signed-magnitude representation
is shown below:

Number                           S        Magnitude bits
Carry from the previous bit      1        No      No    No    No    No    No    -
+75                              0        1       0     0     1     0     1     1
+80                              0        1       0     1     0     0     0     0
Bit-wise addition                0+0+1    1+1=10  0+0   0+1   1+0   0+0   1+0   1+0
Result read as -27 (error)       1        0       0     1     1     0     1     1

Table 10: 8-bit addition using signed-magnitude notation, with overflow

The addition of the 7-bit magnitudes has resulted in an 8-bit output, which cannot be stored
in this notation, as the notation has a length of 8 bits including 1 sign bit. The carry out
of the magnitude spills into the sign bit position and you obtain the incorrect result -27.
This problem has occurred because the size of the number is fixed; this is called overflow.
You may note that the actual sum of 75 and 80 is 155, which is beyond the range of the 8-bit
signed-magnitude representation, which is -127 to +127. This is why you should be careful
while selecting the integral data types in a programming language. For example, if you have
selected a small unsigned integer of byte size for a variable, then you can store only
values in the range 0 to 255 in that variable.
Addition using signed 1's complement notation:
In signed 1's complement notation the addition process is simpler than in signed-magnitude
representation. An interesting fact is that in this notation you do not have to check the
signs; you just add the numbers. Why? This is because the complement of a number, as
defined, makes it complete: the binary digits of a number and its complement are
complements of each other, and even the sign bits are complements. Therefore, the addition
of two numbers in signed 1's complement notation just requires the addition of the two
numbers, irrespective of sign. The process of addition in signed 1's complement
representation requires the following steps:
Step 1: Just add the numbers, including the sign bits, irrespective of sign.
Step 2: Now check the following conditions:

Carry in to the Sign Bit    Carry out of the Sign Bit    Comments
No                          No                           Result is fine
Yes                         Yes                          Add 1 to the result and it is fine
No                          Yes                          Overflow, incorrect result
Yes                         No                           Overflow, incorrect result

Table 11: The conditions of 1's complement notation, while adding

The following example demonstrates the process of addition.
Example 11: Add the decimal numbers 75 and -80 using signed 1's complement
notation, assuming the 8-bit length of the notations.

Solution: Table 8 shows the values of +75 and -80 in signed 1's complement notation
(the leftmost bit is the sign bit).

Number                        S        Remaining bits
Carry from the previous bit   No       No     No     yes      yes       yes       yes       -
+75                           0        1      0      0        1         0         1         1
-80                           1        0      1      0        1         1         1         1
Bit-wise addition             0+1=1    1+0=1  0+1=1  0+0+1=1  1+1+1=11  0+1+1=10  1+1+1=11  1+1=10
Result (-5)                   1        1      1      1        1         0         1         0
Since the result is negative, take its 1's complement to verify the magnitude:
-Result = +5                  0        0      0      0        0         1         0         1

There is no carry in to the sign bit and no carry out of the sign bit; therefore, as per
Table 11, the result is fine.

Table 12: 8-bit addition using signed 1's complement notation


Example 12: Add the decimal numbers 75 and 80 using signed 1's complement notation,
assuming the 8-bit length of the notations.
Solution: The process of addition of +75 and +80 in signed 1's complement representation
is shown below:

Number                        S          Remaining bits
Carry from the previous bit   Yes        No      No     No     No     No     No     -
+75                           0          1       0      0      1      0      1      1
+80                           0          1       0      1      0      0      0      0
Bit-wise addition             0+0+1=1    1+1=10  0+0=0  0+1=1  1+0=1  0+0=0  1+0=1  1+0=1
Result (a negative number)    1          0       0      1      1      0      1      1

There is a carry in to the sign bit (1) and no carry out of the sign bit. Therefore, as per
Table 11, there is an overflow and the result is incorrect. You can observe that the
result is negative for the addition of two positive numbers, which is NOT possible.

Table 13: 8-bit addition using signed 1's complement notation, with overflow

Once again, please observe that the range of numbers in the 8-bit signed 1's complement
notation is -127 to +127, and the sum of the two numbers, 155, cannot be represented in
8 bits. Hence, there is an overflow.
Addition using signed 2's complement notation:
In signed 2's complement notation the addition process is the simplest of the three
representations. In this notation also, you do not have to check the signs; just add the
numbers, including the sign bits. The process of addition in signed 2's complement
representation uses the following steps:
Step 1: Just add the numbers, irrespective of sign.
Step 2: Now check the following conditions:

Carry in to the Sign Bit    Carry out of the Sign Bit    Comments
No                          No                           Result is fine
Yes                         Yes                          Result is fine (discard the carry out)
No                          Yes                          Overflow, incorrect result
Yes                         No                           Overflow, incorrect result

Table 14: The conditions of 2's complement notation, while adding

The following example demonstrates the process of addition using signed 2's
complement notation.
Example 13: Add the decimal numbers (i) -69-59 (ii) -69+59 (iii) +69-59 and (iv) +69+59.
Solution: Table 15 shows the numbers in signed 2's complement notation (the leftmost bit is
the sign bit).

Number   Signed 2's Complement
+69      0 1 0 0 0 1 0 1
-69      1 0 1 1 1 0 1 1
+59      0 0 1 1 1 0 1 1
-59      1 1 0 0 0 1 0 1

Table 15: Numbers of Example 13 in 2's complement notation
(i) -69-59

Number                         Carry out (9th bit)   S    Remaining bits
Carry from the previous bit                          1    1    1    1    1    1    1    -
-69                                                  1    0    1    1    1    0    1    1
-59                                                  1    1    0    0    0    1    0    1
Bit-wise sums (with carries)                         11   10   10   10   10   10   10   10
Result                         1                     1    0    0    0    0    0    0    0

There is a carry in to the sign bit (1) and there is a carry out of the sign bit (1).
Therefore, as per Table 14, there is NO overflow; the result is correct and equal to -128.
Discard the carry out bit (the 9th bit).

Table 16: Addition of two negative numbers, without overflow
(ii) -69+59

Number                         Carry out (9th bit)   S    Remaining bits
Carry from the previous bit                          No   1    1    1    No   1    1    -
-69                                                  1    0    1    1    1    0    1    1
+59                                                  0    0    1    1    1    0    1    1
Bit-wise sums (with carries)                         1    1    11   11   10   1    11   10
Result                         -                     1    1    1    1    0    1    1    0

There is no carry in to the sign bit and no carry out of the sign bit. Therefore, as per
Table 14, there is NO overflow; the result is correct and equal to -10. Verify the result
yourself.

Table 17: Addition of a bigger negative number and a smaller positive number. No
overflow is possible.

(iii) +69-59

Number                         Carry out (9th bit)   S    Remaining bits
Carry from the previous bit                          1    No   No   No   1    No   1    -
+69                                                  0    1    0    0    0    1    0    1
-59                                                  1    1    0    0    0    1    0    1
Bit-wise sums (with carries)                         10   10   0    0    1    10   1    10
Result                         1                     0    0    0    0    1    0    1    0

There is a carry in to the sign bit (1) and there is a carry out of the sign bit (1).
Therefore, as per Table 14, there is NO overflow; the result is correct and equal to
+10. Discard the carry out bit (the 9th bit). Verify the result yourself.

Table 18: Addition of a smaller negative number and a bigger positive number. No
overflow is possible.
(iv) +69+59

Number                         Carry out (9th bit)   S    Remaining bits
Carry from the previous bit                          1    1    1    1    1    1    1    -
+69                                                  0    1    0    0    0    1    0    1
+59                                                  0    0    1    1    1    0    1    1
Bit-wise sums (with carries)                         1    10   10   10   10   10   10   10
Result                         -                     1    0    0    0    0    0    0    0

There is a carry in to the sign bit (1) but there is NO carry out of the sign bit.
Therefore, as per Table 14, there is an overflow and the result is incorrect. Verify the
result yourself. The overflow has occurred because the sum of the two numbers is +128,
which is outside the range of numbers that can be represented using 8-bit signed 2's
complement notation.

Table 19: Addition of two positive numbers, with overflow.
It may be noted that the range of the signed 2’s complement notation using 8 bits is -128
to +127, which can be checked from Table 16 and Table 19. Overflow is formally defined as
the situation where the result of an operation on two or more numbers, each of size n
digits, exceeds the size n.
Overflow may cause even your correct programs to output incorrect results; therefore, it
is a very risky error. One of the ways of avoiding overflow in programs is to select
appropriate data types and to verify the range of the results. A small sketch of 2's
complement addition with overflow detection is given below.
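The following Python sketch (illustrative; it imitates an 8-bit adder, whereas Python's own integers are unbounded) adds two integers in 8-bit signed 2's complement form and applies the overflow rule of Table 14:

def add_8bit_twos_complement(a, b):
    # Add two integers as 8-bit 2's complement values and detect overflow.
    raw = (a & 0xFF) + (b & 0xFF)            # add the 8-bit patterns
    result = raw & 0xFF                       # discard the carry out of the sign bit
    # Overflow occurs when both operands have the same sign but the result has a different sign.
    overflow = ((a ^ result) & (b ^ result) & 0x80) != 0
    signed = result - 256 if result & 0x80 else result
    return signed, overflow

print(add_8bit_twos_complement(-69, -59))   # (-128, False): no overflow, as in Table 16
print(add_8bit_twos_complement(69, 59))     # (-128, True): overflow, as in Table 19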
Arithmetic Subtraction: In general, a computer system uses the signed 2's
complement notation, which simplifies the process of addition and subtraction and also
has a single representation for 0. You can perform subtraction by taking the 2's
complement of the number that is to be subtracted, and thereafter adding the two
numbers, just as has been shown in this section.
Multiplication and division: Multiplication and division operations using signed 2's
complement notation are not straightforward. One of the simplest approaches to
multiplying two signed 2’s complement numbers is to multiply them as positive numbers
and then adjust the sign of the result. However, this approach is time
consuming and is not used for the implementation of the multiplication operation. There
are a number of algorithms for performing multiplication and division; one such
algorithm is Booth’s algorithm. A detailed discussion of these topics is beyond the
scope of this course.
In several arithmetic computations a binary coding of decimal numbers is used
for performing arithmetic operations. The next subsection briefly explains this
representation.

2.5.3 Decimal Fixed Point Representation


Decimal digits can be represented in binary directly using four bits, as there are only
10 decimal digits, whereas 2^4 = 16 different values can be expressed using 4 bits. Thus,
a BCD digit may be represented as 0000 (for the decimal digit 0) to 1001 (for the decimal
digit 9). In addition, the sign can be represented using a single bit; however, that would
change the format of the representation. Thus, in decimal fixed point representation even
the sign is represented using four bits. Interestingly, the positive sign is represented
using 1100 and the negative sign using 1101. Please note that these two combinations are
different from the representations of the decimal digits, which are 0000 to 1001.
Example 14: Represent +125 as a BCD number and as a binary number.
+125 in BCD is given below:

Sign    1       2       5
1100    0001    0010    0101

+125 in binary (8-bit sign-magnitude):

S   64  32  16  8   4   2   1
0   1   1   1   1   1   0   1

Why is this representation needed? In several computing devices computations are
performed on binary coded decimals directly, without conversion to binary; one such
device was the old calculator. You may refer to the further readings for more details on
BCD arithmetic. A small sketch of BCD encoding is given below.
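The following Python sketch (illustrative only; the sign codes 1100 and 1101 are the ones given in the text above) encodes a signed decimal integer as a BCD string:

def to_signed_bcd(n):
    # Encode a signed decimal integer as a sign nibble followed by one nibble per digit.
    sign = '1100' if n >= 0 else '1101'      # 1100 = plus, 1101 = minus
    digits = [format(int(d), '04b') for d in str(abs(n))]
    return ' '.join([sign] + digits)

print(to_signed_bcd(+125))    # 1100 0001 0010 0101
print(to_signed_bcd(-23456))  # 1101 0010 0011 0100 0101 0110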
Check Your Progress 2
1) Write the BCD for the following decimal numbers:
i) -23456
ii) 17.89
iii) 299
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................

.........................................................................................................................................

2) Compute the 1’s and 2’s complement of the following binary numbers. Also
find the decimal equivalent of the number.

i) 1110 0010
ii) 0111 1110
iii) 0000 0000
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................
……………………………………………………………………………………….

3) Add the following decimal numbers by converting them to 8-bit signed 2’s
complement notation.
i) +56 and – 56
ii) +65 and –75
iii) +121 and +8
Identify, if there is overflow.
.........................................................................................................................................
……………….. ..............................................................................................................
…….................................................................................................................................
…………………………………………………………………………………………

2.6 FLOATING POINT REPRESENTATION


For real numbers, you normally use a decimal point to separate the integer and fractional
parts. However, in a computer system the position of the binary point is assumed in the
numbers. Fixed point representation, in general, fixes the location of the point at the
rightmost end; thus, integer values are represented using fixed point representation.
What about real numbers? A real number can be represented using an exponential
notation. This forms the basis of the binary number representation called floating point
representation. For example, the decimal real number 29.25 can be represented as
0.2925×10^2 or 2925×10^-2.
The first part of the number is called the “mantissa” or “significand”, and the second part
of the number is called the exponent. You may please note that the mantissa can either
be an integer or a fraction, as shown in the example; the exponent value is adjusted
accordingly. In a computer the mantissa and the exponent are both represented as binary
numbers and the location of the binary point is assumed. The following discussion explains
the binary floating point representation defined by the IEEE 754 standard for 32-bit
floating point numbers.
It has the following format:

(a) Basic details

Bit positions from the left   1                       2 to 9                       10 to 32
Length of field               1 bit                   8 bits                       23 bits
Purpose                       To store the sign bit   To store the exponent        To store the fractional
                                                                                   significand of the number
Comment                       The sign bit is for     The exponent is stored in    The significand is stored as
                              the significand         biased form, with a bias     a normalized binary number
                                                      of 127

(b) Single Precision 32-bit IEEE-754 Standard

Exponent (exp, 8 bits; possible    Significand (M, 23 bits)             The number represented
values 0 to 255; a bias of 127
is assumed)
exp = 0                            All the bits of M are zero           The number is ±0, depending on the sign bit
exp = 0                            M is NOT zero (M may not be          The number is ±0.M × 2^-126
                                   normalized)
exp = 1 to 254                     Normalized representation is used;   The number is ±1.M × 2^(exp-127)
                                   therefore, the first bit (before
                                   the binary point) is assumed to be 1
exp = 255                          All the bits of M are zero           The number is ±∞, depending on the sign bit
exp = 255                          M is NOT zero                        It does NOT represent a valid number

Table 20: IEEE 754 Floating Point 32-bit Number Representation
The three terms used in Table 20 – fractional significand, bias and normalized – are
explained below:
Fractional significand: A floating point number assumes that the position of the binary
point is just before the stored significand; therefore, the stored significand is a
fraction (refer to Example 15).
Bias: It is an interesting way of storing signed numbers without using any sign bit. The
number is stored after adding a fixed value, the bias, to the exponent. For example, a
4-bit binary field can store the values 0000 to 1111, i.e. the values 0 to 15. A bias of 8
allows the values -8 to +7 to be stored in this range by adding the bias: the exponent
value -8 will be coded as (-8+8) = 0, -7 will be coded as (-7+8) = 1, and so on till +7,
which will be coded as (+7+8) = 15. But why is biasing used for the exponent? The basic
reason is that biased exponents simplify floating point arithmetic. This is
explained later with the help of an example.
Normalized: A fraction is called normalized if it starts with a bit value of 1 and not with
a bit value of 0. For example, the values .1001, .1111, .1000 and .1010 are normalized, but
the values .0100, .0001, .0010 and .0011 are not normalized.
The following example explains the process of converting a decimal real number to a
floating point representation using the IEEE-754 standard (32-bit representation). You may
solve similar problems using the double precision representation also, where only the sizes
of the exponent (and its bias) and the significand are different.
Example 15: Represent the number -29.25 using the IEEE 754 (32-bit) representation
shown in Table 20.
Solution:

Step 1: Convert the number to binary

The number should first be converted to binary as follows:
Sign bit = 1, as the number is negative
29 can be represented in 7 bits as 001 1101
.25 can be represented in 4 bits as .0100
Thus, 29.25 without the sign is 001 1101.0100
Step 2: Normalize the number
Normalizing the number requires the binary point to be moved to just before the most
significant 1; here the point has to be shifted to the left by 5 places. Thus, the
normalized number is 0.111010100 × 2^5.
Step 3: Adjust the normalized number
It may be noted from Table 20(b) that in the IEEE-754 representation, when the
exponent is between 1 and 254, the first bit is assumed to be 1; therefore, the
significand, whose stored size is 23 bits, actually represents a 24-bit significand. In
addition, as the number is assumed to be ±1.M × 2^(exp-127), the value to be represented
(0.111010100 × 2^5) must be adjusted to this format by shifting the binary point one place
to the right and adjusting the exponent. Thus, the adjusted number is 1.11010100 × 2^4.
Step 4: Compute the exponent using the bias
Finally, add the bias to the exponent value to obtain the exp value of IEEE-754. In this
case, exp = 4 + 127 = 131 (127 is the bias value).
Step 5: Represent the final number
Represent the sign bit (S), exp in 8 bits and the significand in 23 bits, as follows:

S   exp of length 8 bits   Significand of length 23 bits
    (value 131)            (value 1.11010100, stored without the leading 1)
1   1000 0011              110 1010 0000 0000 0000 0000

Example 16: A number using the IEEE 754 (32-bit) representation is given below. What is the
equivalent decimal value?

S   exp of length 8 bits   Significand of length 23 bits (M)
1   1000 1001              111 1000 0000 0000 0000 0000

Solution:
The number is represented as ±1.M × 2^(exp-127).
The sign bit states that it is a negative number.
M is 111 1000 0000 0000 0000 0000
exp is 1000 1001 = 137 in decimal.
The number is -1.11110000000000000000000 × 2^(137-127)
= -1.1111 × 2^10
= -11111000000.0
= -1984 in decimal
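Such conversions can be cross-checked with Python's struct module, which packs a float into its IEEE-754 single-precision bit pattern (a verification aid, not part of the worked examples):

import struct

def float_to_fields(x):
    # Return the sign, exponent and significand fields of x in IEEE-754 single precision.
    (packed,) = struct.unpack('>I', struct.pack('>f', x))
    bits = format(packed, '032b')
    return bits[0], bits[1:9], bits[9:]

print(float_to_fields(-29.25))    # ('1', '10000011', '11010100000000000000000')
print(float_to_fields(-1984.0))   # ('1', '10001001', '11110000000000000000000')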

In floating point numbers the term precision is very important. What is precision?
Precision defines the correctness of the representation. For example, suppose you use just
2 decimal digits in the fractional part of a decimal number; then you can represent the
numbers 0.10, 0.11, 0.99, etc. precisely, but the number 0.985 must either be truncated to
0.98 or rounded off to 0.99. This introduces an error in the number, which is due to the
fact that the size of the significand is limited. For scientific computations such errors
may lead to failure. Therefore, IEEE-754 defines several different precisions of numbers.
A few popular precisions are: the single precision IEEE-754 number, which is the 32-bit
representation explained above; the IEEE-754 double precision number, which is a 64-bit
representation with 1 sign bit, an 11-bit exponent and a 52-bit significand; and the
IEEE-754 quadruple precision number, which is a 128-bit representation with 1 sign bit, a
15-bit exponent and a 112-bit significand. It may be noted that in programming languages
you use the data types float and double, which correspond to the IEEE-754 single and
double precision representations respectively.
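A familiar consequence of limited precision can be seen directly in Python, whose floats are IEEE-754 double precision values (a small illustration):

# 0.1 and 0.2 have no exact binary representation, so their sum is only approximate.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False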
Finally, what is the range of the numbers that can be represented using the IEEE-754
representation? As stated in Table 20, the minimum exponent value for a normalized
number is 1 and the maximum is 254. Therefore, the negative normalized number of smallest
magnitude will be:

S   exp of length 8 bits   Significand of length 23 bits (M)
1   0000 0001              000 0000 0000 0000 0000 0000

This is equal to ±1.M × 2^(exp-127)
= -1.000 0000 0000 0000 0000 0000 × 2^(1-127)
= -1 × 2^-126
The maximum (positive) number will be:

S   exp of length 8 bits   Significand of length 23 bits (M)
0   1111 1110              111 1111 1111 1111 1111 1111

This is equal to ±1.M × 2^(exp-127)
= +1.111 1111 1111 1111 1111 1111 × 2^(254-127)
= +(1.111 1111 1111 1111 1111 1111
   + 0.000 0000 0000 0000 0000 0001
   - 0.000 0000 0000 0000 0000 0001) × 2^127
= +(10.000 0000 0000 0000 0000 0000 - 0.000 0000 0000 0000 0000 0001) × 2^127
= +(2 - 2^-23) × 2^127
You may please note that IEEE-754 also has representations for 0 and infinity.
Arithmetic Using Floating Point Numbers:
As you have noticed, addition and subtraction using 2’s complement notation are direct,
but the addition and subtraction of floating point numbers require several steps. These
steps are explained with the help of the following example.
Example 17: Add the following floating point numbers:

Equivalent Numbers                   IEEE 754 32-bit representation
Decimal   Binary                     S   exp         Significand (M)
-7        -1.11×2^(129-127)          1   1000 0001   110 0000 0000 0000 0000 0000
          = -1.11×2^2
          = -111.0
+24       +1.1×2^(131-127)           0   1000 0011   100 0000 0000 0000 0000 0000
          = +1.1×2^4
          = +11000.0
Solution:
Step 1: Find the difference in exponents of the numbers
1000 0011 - 1000 0001 = 0000 0010 = 2 in decimal
Step 2: Align the significand of the smaller number by denormalizing it.
Shift the significand of the smaller number to the right by the difference of the exponents, as shown:

                                          The value of 1.M
Significand of first number (smaller)     1.110 0000 0000 0000 0000
Shift it right twice (denormalized)       0.011 1000 0000 0000 0000
Step 3: Check the signs of the two numbers; if they are the same, add, else subtract the smaller number from the bigger number.
The signs are different; therefore, subtract the smaller (denormalized) number from the larger number.

                                          The value of 1.M
Significand of second number (larger)     1.100 0000 0000 0000 0000
Denormalized first number (smaller)       0.011 1000 0000 0000 0000
Result of subtraction                     1.000 1000 0000 0000 0000

Step 4: Select the sign and exponent of the bigger number as the sign and exponent of the result, and normalize the significand by adjusting the exponent.
The result is shown below. Please note that in this case there is no need to normalize the result, as it is already normalized.

Result of addition operation (verification), IEEE 754 32-bit representation:
Decimal   Binary                  S   exp         1.M
+17       +1.0001×2^(131-127)     0   1000 0011   1.000 1000 0000 0000 0000
          = +1.0001×2^4 = +10001.0

Likewise, subtraction operation can be performed.
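The steps of Example 17 can also be expressed as a small program. The following is a simplified illustrative sketch (not real hardware, ignoring rounding and special values) that works on (sign, biased exponent, significand) triples, with the significand 1.M held as an integer scaled by 2^23:

def fp_add(sign_a, exp_a, sig_a, sign_b, exp_b, sig_b):
    # Step 1: difference of biased exponents; make operand a the larger one.
    if exp_a < exp_b or (exp_a == exp_b and sig_a < sig_b):
        sign_a, exp_a, sig_a, sign_b, exp_b, sig_b = sign_b, exp_b, sig_b, sign_a, exp_a, sig_a
    shift = exp_a - exp_b
    # Step 2: denormalize (right shift) the smaller significand.
    sig_b >>= shift
    # Step 3: same signs -> add significands, else subtract smaller from larger.
    sig = sig_a + sig_b if sign_a == sign_b else sig_a - sig_b
    # Step 4: take sign/exponent of the larger number and renormalize.
    sign, exp = sign_a, exp_a
    while sig >= (2 << 23):          # significand overflowed past the 1.M range
        sig >>= 1
        exp += 1
    while sig and sig < (1 << 23):   # leading 1 has moved to the right
        sig <<= 1
        exp -= 1
    return sign, exp, sig

# -7 = (sign 1, exp 129, 1.11) and +24 = (sign 0, exp 131, 1.1):
print(fp_add(1, 129, 0b111 << 21, 0, 131, 0b11 << 22))   # (0, 131, 0b10001 << 19), i.e. +1.0001 x 2^4 = +17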

Multiplication and division operations on floating point numbers require multiplication or division of the significands as well as addition or subtraction of the exponents. In addition, these operations may require normalizing the result. The following example shows the multiplication of two floating point numbers.

Example 18: Multiply the following floating point numbers

Equivalent Numbers                IEEE 754 32-bit representation
Decimal   Binary                  S   exp         Significand (M)
-7        -111.0                  1   1000 0001   110 0000 0000 0000 0000 0000
+24       +11000.0                0   1000 0011   100 0000 0000 0000 0000 0000
Solution:
Step 1: Multiply the significand values of the two numbers and truncate to 23 bits (plus one assumed bit).

                                      The value of 1.M
Significand of first number           1.110 0000 0000 0000 0000
Significand of second number          1.100 0000 0000 0000 0000
Significand after multiplication      10.101 0000 0000 0000 0000
Step 2: For multiplication, add the exponents and subtract the bias, as both the numbers have biased exponents; for division, subtract the exponent of the divisor from the exponent of the dividend and add the bias, as the bias gets cancelled in the subtraction.

Exponent of first number                    1000 0001
Exponent of second number                   1000 0011
Multiplication operation, so add          1 0000 0100
Subtract the bias (127)                    -0111 1111
The new exponent                            1000 0101
Step 3: Check the signs of the two numbers; if they are the same, the result has a + sign, else a - sign.
The signs are different; therefore, the result has a negative sign. Also, normalize the significand of the result.

Result of multiplication operation (IEEE 754 32-bit representation):
                              S   exp         1.M
Result before normalization   1   1000 0101   10.10 1000 0000 0000 0000
Normalized result             1   1000 0110   1.010 1000 0000 0000 0000
Verification: -168 in decimal = -1.0101×2^(134-127) = -1.0101×2^7 = -10101000.0
Likewise, division operation can be performed. You may refer to further readings for
finding more details on floating point numbers and arithmetic.
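Multiplication can be sketched in the same illustrative style (again ignoring rounding and special values): the exponents are added and the bias subtracted, the significands multiplied, and the result normalized:

def fp_mul(sign_a, exp_a, sig_a, sign_b, exp_b, sig_b):
    sign = sign_a ^ sign_b              # Step 3: unlike signs give a negative result
    exp = exp_a + exp_b - 127           # Step 2: add exponents, subtract the bias
    sig = (sig_a * sig_b) >> 23         # Step 1: multiply significands, rescale to 1.M x 2^23
    while sig >= (2 << 23):             # normalize: shift right, bump the exponent
        sig >>= 1
        exp += 1
    return sign, exp, sig

# -7 (1.11 x 2^2) times +24 (1.1 x 2^4):
print(fp_mul(1, 129, 0b111 << 21, 0, 131, 0b11 << 22))   # sign 1, exp 134 -> -1.0101 x 2^7 = -168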

2.7 ERROR DETECTION AND CORRECTION CODES
In the previous sections you have gone through various binary codes and
representations. A computer works on these binary numbers and during the operations
of the computer data is transferred from a source to one or more destinations. During
this process of transmission of data, there is a possibility of transmission errors. The
purpose of error detection and correction codes is to identify those data transmission
errors and correct the data, as far as possible. As the data in computer consists of
binary digits, therefore, an error in a bit can result in change of its value from 0 to 1 or
vice versa. This section explains one error detection code called Parity and one error
detection and correction code called Hamming Error correction code.
Parity bit: The purpose of a parity bit is to detect an error in a group of bits. But how
does it perform the task of checking error? It is explained with the help of following
process:
Step 1 (Source side): A parity bit is generated at the source, which ensures that either the number of 1's in the source data and the source parity bit together is odd (called odd parity), OR the number of 1's in the source data and the source parity bit together is even (called even parity).
Step 2 (Source side): The source data and the source parity bit are sent to the destination.
Step 3 (Destination side): The source data and the source parity bit are received at the destination, and a destination parity bit is generated using only the data received (not the source parity bit), by using the same process, i.e. even or odd parity, as used at the source.
Step 4 (Destination side): The source parity bit and the destination parity bit are compared. If they are the same, then no error in the data is detected; else either the data or the parity bit is in error, which is reported.
Table 21: Error detection using parity bit
Example 19 explains the process as given above.
Example 19: 7-bit data 010 1001 is sent from a source, such as a CPU register, to a destination, such as RAM. The data is received at the destination as 010 1000, having an error in one bit. How is this error detected by the parity bit?
Solution:

Step 1 (at source): Data to be sent: 010 1001. The odd parity bit is computed as follows: the data has 3 bits with value 1, so the odd parity bit = 0.
Step 2 (at source): The source parity bit + source data is sent as: 0 010 1001
Step 3 (at destination): As per the statement of the example, the data is received as: 0 010 1000. The source parity bit = 0 is received correctly, as the error is in one bit only. The destination parity bit is computed on the data received, which is 010 1000; it has 2 bits as 1, therefore the odd parity bit at the destination = 1.
Step 4 (at destination): Source parity bit (0) ≠ destination parity bit (1), so there is an ERROR in the data.

It may be noted that a parity bit can detect an error when one bit is in error. In case two bits are in error, it will fail to detect the error.
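The parity process of Example 19 can be summarised in a few lines of code. The following is an illustrative sketch (not part of the unit) using odd parity:

def odd_parity(bits: str) -> int:
    # The odd-parity bit makes the total number of 1s (data + parity) odd.
    return 0 if bits.count('1') % 2 == 1 else 1

source_data = '0101001'
source_parity = odd_parity(source_data)            # 0, since the data already has 3 ones
received_data = '0101000'                          # one bit got flipped in transit
destination_parity = odd_parity(received_data)     # 1

print("error detected" if source_parity != destination_parity else "no error detected")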
Hamming Error-Correcting Code: The Hamming code was conceptualized by Richard Hamming at Bell Laboratories. This code is used to identify and correct an error in 1 bit. Thus, unlike the parity bit, which just identifies the existence of an error, this code also identifies the bit that is in error. The idea of Hamming's code is to divide the bits into a number of groups, use a parity bit per group to identify which groups are in error, and, based on the groups in error, identify the bit which has caused the error. Thus, the grouping process has to be very special, which is explained below.
How to group the data bits? Before grouping, you may assume the placement of data and parity bits using the following considerations.
A bit position that is an exact power of 2 will be used for storing a parity bit. For example, 2^0 = 1, that is, the 1st bit position will be used to store a parity bit; likewise 2^1 = 2, 2^2 = 4 and 2^3 = 8, i.e. the 2nd, 4th and 8th bit positions will also be used to store parity bits. Thus, for the 8 data bits used below you have 4 parity bits, so a total of 12 bit positions. (p indicates a parity bit and d indicates a data bit.)

Bit Position   12  11  10   9   8   7   6   5   4   3   2   1
Stores         d8  d7  d6  d5  p4  d4  d3  d2  p3  d1  p2  p1

For grouping, the data bit position number is used to identify the parity bits of whose groups that data bit should be a member:
Bit position 12 (8+4) contains d8:     groups 8 and 4
Bit position 11 (8+2+1) contains d7:   groups 8, 2 and 1
Bit position 10 (8+2) contains d6:     groups 8 and 2
Bit position 9 (8+1) contains d5:      groups 8 and 1
Bit position 8 contains p4
Bit position 7 (4+2+1) contains d4:    groups 4, 2 and 1
Bit position 6 (4+2) contains d3:      groups 4 and 2
Bit position 5 (4+1) contains d2:      groups 4 and 1
Bit position 4 contains p3
Bit position 3 (2+1) contains d1:      groups 2 and 1
Bit position 2 contains p2
Bit position 1 contains p1
Table 22: Placement of data and parity bits for Hamming's error detection and correction code
Groups for parity bits: The groups are made, one for each parity bit, on the basis of the bit positions given in the above table. A bit position whose position number includes a parity bit position (as a power of 2) is included in the group of that parity bit. For example, the bit at bit position 12 (= 8 + 4) will be included in the groups of parity bits p4 and p3; similarly, bit position 7 (= 4 + 2 + 1) will be included in the groups of parity bits p3, p2 and p1. But why this grouping? You may please note that each data bit is part of a unique combination of groups, so if it is in error, it will cause parity errors in all those groups of which it is a part. Thus, by identifying all the groups which have a parity mismatch, you can identify the bit which is in error. The following table shows these groups for 8-bit data.

Group for parity bit   Bit positions and data bits
p4                     bit position 12 (d8), bit position 11 (d7), bit position 10 (d6) and bit position 9 (d5)
p3                     bit position 12 (d8), bit position 7 (d4), bit position 6 (d3) and bit position 5 (d2)
p2                     bit position 11 (d7), bit position 10 (d6), bit position 7 (d4), bit position 6 (d3) and bit position 3 (d1)
p1                     bit position 11 (d7), bit position 9 (d5), bit position 7 (d4), bit position 5 (d2) and bit position 3 (d1)

Therefore, the parity bits will be generated using the following data bits:
Parity bit Compute Odd parity of Data bits
p4 d8, d7, d6 and d5
p3 d8, d4, d3 and d2
p2 d7, d6, d4, d3 and d1
p1 d7, d5, d4, d2 and d1

So, how is the data bit in error recognised? It is illustrated with the help of the following example.
Example 20: 8-bit data 1010 1001 is sent from a source to a destination. The data is received at the destination as 1000 1001, having an error in only one bit. How is this error detected and corrected by Hamming's error detection and correction code?
Solution:
Step 1: Place the bits as shown in Table 22 and generate parity bits at the source, for
example, the odd parity bit p4 is computed using d8, d7, d6 and d5 (shown as shaded
cells in the following table). Their values are 1, 0, 1, 0 as shown in the table, as there
are only two bits containing 1, therefore, the odd parity value for p4 is 1. Likewise
compute the other parity bits as shown in Table 23.
Step 2: The data and the associated parity bits, in the sequence shown below, are sent to the destination, where once again parity bits are computed for the received data.
Step 3: Compare the source parity bits and the destination parity bits as shown in Table 23. Please note that when two parity bits match, a 0 is put in the comparison word, else a 1 is put. The magnitude of the comparison word indicates the bit position that is in error.
Step 4: If there is an error, then the data bit at the position that is in error is complemented.
Step 5: The data is used at the destination after omitting the parity bits.
Bit Position                                          12  11  10   9   8   7   6   5   4   3   2   1
Stores                                                d8  d7  d6  d5  p4  d4  d3  d2  p3  d1  p2  p1
Data bits                                              1   0   1   0       1   0   0       1
Odd parity bit p4 (using d8, d7, d6, d5)                               1
Odd parity bit p3 (using d8, d4, d3, d2)                                               1
Odd parity bit p2 (using d7, d6, d4, d3, d1)                                                       0
Odd parity bit p1 (using d7, d5, d4, d2, d1)                                                               1
Data and parity bits at source                         1   0   1   0   1   1   0   0   1   1   0   1

The data is sent to the destination, where it is received with 1 bit in error (given); therefore, all the source parity bits are received without any error.

Data received at destination (including parity bits)  1   0   0   0   1   1   0   0   1   1   0   1

Step 2: Compute the parity bits using the data received at the destination.
Data bits received                                     1   0   0   0       1   0   0       1
Odd parity bit p4 (using d8, d7, d6, d5) = 0
Odd parity bit p3 (using d8, d4, d3, d2) = 1
Odd parity bit p2 (using d7, d6, d4, d3, d1) = 1
Odd parity bit p1 (using d7, d5, d4, d2, d1) = 1

Step 3: Compare the source parity bits and the destination parity bits.
                                                      p4  p3  p2  p1
Source parity bits                                     1   1   0   1
Destination parity bits                                0   1   1   1
Parity comparison word (0 if source and
destination parity match, else 1)                      1   0   1   0

The comparison word is 1010 = 10 in decimal, i.e. bit position 10 is in error. The error in this bit can be corrected by complementing bit position 10.
Corrected data (d8..d1)                                1   0   1   0       1   0   0       1
Table 23: Example of Hamming's error detection and correction code

It may be noted in Table 23 that a comparison word value of 0000 would mean that there is no error in the transmission of the data. In addition, the values 1000, 0100, 0010 and 0001 would mean that a one-bit error has occurred in the transmission of the source parity bits p4, p3, p2 and p1 respectively. Thus, no change would be needed in the received data bits at the destination in such cases.
It may please be noted that the Hamming code presented in this section can detect and correct errors in a single bit ONLY. It will not work in case two or more bits are in error. One final question is about the size of the code needed to correct a single-bit error. The size will depend on the size of the data. A simple rule is that the number of code and data bits together should be less than the number of bit positions that can be flagged by the comparison word. If the data to be transmitted is of size D bits and P is the number of parity bits needed for the given Hamming code, then the size of the code is the smallest value of P which satisfies the following equation:
D + P < 2^P
For example, for D = 4 bits, the value of P would be 3, as:
4 + 3 < 2^3, since 7 < 8
and for D = 8 bits, the value of P would be 4, as:
8 + 4 < 2^4, since 12 < 16
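The whole scheme for 8 data bits and 4 parity bits can be put together as a short program. The following is an illustrative sketch (not part of the unit); the bit layout and parity groups are exactly those of Tables 22 and 23, with odd parity as in Example 20:

def hamming_encode(data):                       # data: list of 8 bits, d8..d1
    d8, d7, d6, d5, d4, d3, d2, d1 = data
    odd = lambda *bits: 0 if sum(bits) % 2 == 1 else 1
    p4 = odd(d8, d7, d6, d5)
    p3 = odd(d8, d4, d3, d2)
    p2 = odd(d7, d6, d4, d3, d1)
    p1 = odd(d7, d5, d4, d2, d1)
    # positions 12..1: d8 d7 d6 d5 p4 d4 d3 d2 p3 d1 p2 p1
    return [d8, d7, d6, d5, p4, d4, d3, d2, p3, d1, p2, p1]

def hamming_correct(word):                      # word: 12 bits, positions 12..1
    d8, d7, d6, d5, p4, d4, d3, d2, p3, d1, p2, p1 = word
    recomputed = hamming_encode([d8, d7, d6, d5, d4, d3, d2, d1])
    # comparison word: 1 where a received parity bit disagrees with the recomputed one
    c4, c3, c2, c1 = (p4 ^ recomputed[4], p3 ^ recomputed[8],
                      p2 ^ recomputed[10], p1 ^ recomputed[11])
    error_pos = c4 * 8 + c3 * 4 + c2 * 2 + c1    # 0 means no error detected
    if error_pos:
        word[12 - error_pos] ^= 1                # list index 0 holds bit position 12
    return word

sent = hamming_encode([1, 0, 1, 0, 1, 0, 0, 1])  # data 1010 1001 from Example 20
received = sent.copy()
received[2] ^= 1                                 # flip bit position 10 (d6)
print(hamming_correct(received) == sent)         # True: the single-bit error was fixed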
Check Your Progress 3
1) Represent the following numbers using the IEEE-754 32-bit standard:
i) 39.125
ii) –0.000011000₂
2) Compute the Odd and Even parity bits for the following data:
i) 0111110
ii) 0110000
iii) 1110111
iv) 1001100

.........................................................................................................................................
.........................................................................................................................................

3) A 4 bit data 1011 is received at the destination as 1111, assuming single bit is in
error, illustrate how Hamming's single error correction code will detect and
correct the error
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................

2.8 SUMMARY
This Unit has introduced you to the basic aspects of data representation. It introduces character representation, including ASCII and Unicode. In addition, the Unit explains number conversion and the fixed point representation of binary numbers. The Unit also highlights the related arithmetic operations. This was followed by a detailed discussion on floating point numbers. Though only IEEE 754 32-bit single precision numbers are explained, the logic discussed is applicable to double precision numbers too. The Unit finally introduces you to an error detection code (the parity bit) and an error detection and correction code (the Hamming code). You must practice the data conversions and these codes, as they would be useful when you deal with binary numbers.

You should refer to the further readings for more detailed information on these topics.
You are advised to take the help of further readings, Massive Open Online Courses
(MOOCs), and other online resources as Computer Science is a dynamic area.

2.9 SOLUTIONS/ANSWERS

Check Your Progress 1


1) i) 11100.01101₂ to octal and hexadecimal

Group the binary digits in threes around the binary point (padding with 0s):
Grouped binary              011        100      .   011        010
Binary place values         421        421      .   421        421
Equivalent octal digit      0+2+1=3    4+0+0=4  .   0+2+1=3    0+2+0=2
Octal number: 34.32

Group the binary digits in fours around the binary point (padding with 0s):
Grouped binary              0001         1100       .   0110         1000
Binary place values         8421         8421       .   8421         8421
Equivalent hex digit        0+0+0+1=1    8+4+0+0=C  .   0+4+2+0=6    8+0+0+0=8
Hexadecimal number: 1C.68

ii) 1101101010₂ to octal and hexadecimal

Grouped binary              001        101        101        010
Binary place values         421        421        421        421
Equivalent octal digit      0+0+1=1    4+0+1=5    4+0+1=5    0+2+0=2
Octal number: 1552

Grouped binary              0011         0110         1010
Binary place values         8421         8421         8421
Equivalent hex digit        0+0+2+1=3    0+4+2+0=6    8+0+2+0=A
Hexadecimal number: 36A

2) i) 119₁₀ to binary
The place value      2^6=64  2^5=32  2^4=16  2^3=8  2^2=4  2^1=2  2^0=1
N = 119              119-64=55; 55-32=23; 23-16=7; 7-4=3; 3-2=1; 1-1=0
Equivalent binary    1       1       1       0      1      1      1

ii) 19.125₁₀ to binary
The place value      2^4=16  2^3=8  2^2=4  2^1=2  2^0=1  .  2^-1=0.5  2^-2=0.25  2^-3=0.125
N = 19.125           19-16=3; 3-2=1; 1-1=0; and 2^-3 = 0.125
Equivalent binary    1       0      0      1      1   .     0         0          1

iii) 325₁₀ to binary
The place value      2^8=256  2^7=128  2^6=64  2^5=32  2^4=16  2^3=8  2^2=4  2^1=2  2^0=1
N = 325              325-256=69; 69-64=5; 5-4=1; 1-1=0
Equivalent binary    1        0        1       0       0       0      1      0      1

3) i) 119₁₀
The place value            2^6=64  2^5=32  2^4=16  2^3=8  2^2=4  2^1=2  2^0=1
Equivalent binary          1       1       1       0      1      1      1
Equivalent octal           167
Equivalent hexadecimal     77

ii) 19.125₁₀
The place value            2^4=16  2^3=8  2^2=4  2^1=2  2^0=1  .  2^-1=0.5  2^-2=0.25  2^-3=0.125
Equivalent binary          1       0      0      1      1   .     0         0          1
Equivalent octal           23.1
Equivalent hexadecimal     13.2 (the fraction 0010 = 2)

iii) 325₁₀
The place value            2^8=256  2^7=128  2^6=64  2^5=32  2^4=16  2^3=8  2^2=4  2^1=2  2^0=1
Equivalent binary          1        0        1       0       0       0      1      0      1
Equivalent octal           505
Equivalent hexadecimal     145

Check Your Progress 2
1) i) -23456 to BCD
Sign Digit 2 3 4 5 6
1101 0010 0011 0100 0101 0110

ii) 17.89
Sign Digit 1 7 . 8 9
1100 0001 0111 . 1000 1001

iii) 299
Sign Digit 2 9 9
1100 0010 1001 1001
2)
Decimal   The Number           Signed 1's Complement   Signed 2's Complement
-30       1 1 1 0 0 0 1 0      0 0 0 1 1 1 0 1         0 0 0 1 1 1 1 0
+126      0 1 1 1 1 1 1 0      1 0 0 0 0 0 0 1         1 0 0 0 0 0 1 0
0         0 0 0 0 0 0 0 0      1 1 1 1 1 1 1 1         0 0 0 0 0 0 0 0

3) i) +56 and -56

Number (signed 2's complement notation)
Carry for addition            1 1 1 1 1 -
+56                           0 0 1 1 1 0 0 0
-56                           1 1 0 0 1 0 0 0
Addition of the bits above    1+0+1=10  1+0+1=10  1+1+0=10  1+1+0=10  1+1=10  0+0=0  0+0=0  0+0=0
Result                        1 0 0 0 0 0 0 0 0   (the leading 1 is the carry out, the 9th bit)

There is a carry in to the sign bit (1) and there is a carry out of the sign bit (1). Therefore, as per Table 14, there is NO overflow and the result is correct and equal to 0. Discard the carry out bit (the 9th bit). Verify the result yourself.

ii) +65 and -75

Number (signed 2's complement notation)
Carry for addition                          1 -
+65                           0 1 0 0 0 0 0 1
-75                           1 0 1 1 0 1 0 1
Addition of the bits above    0+1=1  1+0=1  0+1=1  0+1=1  0+0=0  0+1=1  1+0+0=1  1+1=10
Result                        1 1 1 1 0 1 1 0

There is no carry in to the sign bit and there is no carry out of the sign bit. Therefore, as per Table 14, there is NO overflow and the result is correct and equal to -10. Verify the result yourself.

iii) +121 and +8

Number (signed 2's complement notation)
Carry for addition            1 1 1 1 -
+121                          0 1 1 1 1 0 0 1
+8                            0 0 0 0 1 0 0 0
Addition of the bits above    1+0+0=1  1+1+0=10  1+1+0=10  1+1+0=10  1+1=10  0+0=0  0+0=0  0+1=1
Result                        1 0 0 0 0 0 0 1

There is a carry in to the sign bit (1) and there is NO carry out of the sign bit. Therefore, as per Table 14, there is OVERFLOW and the result is incorrect.

Check Your Progress 3

1) i) 39.125
Step 1: Convert the number to binary
   Sign bit = 0, as the number is positive.
   39 can be represented in 7 bits as 010 0111
   .125 can be represented in 4 bits as .0010
   Thus, 39.125 without sign is 0100111.0010
Step 2: Normalize the number
   Normalizing the number requires the binary point to be moved before the most significant 1; this requires the point to be shifted left by 6 places. Thus, the normalized number now is 0.1001110010 × 2^6.
Step 3: Adjust the normalized number
   The number is assumed to be of the form ±1.M × 2^(exp−127); therefore, the value to be represented (0.1001110010 × 2^6) must be adjusted. The adjusted number is 1.001110010 × 2^5.
Step 4: Compute the exponent using the bias
   Add the bias to the exponent value: exp = 5 + 127 = 132 (127 is the bias value).
Step 5: Represent the final number
   Represent the sign bit (S), exp in 8 bits and the significand in 23 bits, as follows:
   S   exp of length 8 bits   Significand of length 23 bits
   (value 132) (value 1.001110010) Represented as 1.001110010
   0   1000 0100              001 1100 1000 0000 0000 0000

ii) –0.000011000₂
Sign bit = 1, as the number is negative.
The number is of the format ±1.M × 2^(exp−127); normalized, it is −1.1000 × 2^−5.
exp = −5 + 127 = 122
S   exp of length 8 bits   Significand of length 23 bits
(value 122) (value 1.1000) Represented as 1.1000
1   0111 1010              100 0000 0000 0000 0000 0000

2)
The Number Even Parity Odd Parity
0111110 1 0
0110000 0 1
1110111 0 1
1001100 1 0

3) The size of the data is D = 4 bits, so the value of P would be 3, as 4 + 3 < 2^3 (7 < 8).
The bit positions for this code would be:

Bit Position   7   6   5   4   3   2   1
Stores        d4  d3  d2  p3  d1  p2  p1

For grouping, the data bit position number is used to identify the parity bits of whose groups that data bit should be a member:
Bit position 7 (4+2+1) contains d4:   groups 4, 2 and 1
Bit position 6 (4+2) contains d3:     groups 4 and 2
Bit position 5 (4+1) contains d2:     groups 4 and 1
Bit position 4 contains p3
Bit position 3 (2+1) contains d1:     groups 2 and 1
Bit position 2 contains p2
Bit position 1 contains p1

Parity bit Compute Odd parity of Data bits


p3 d4, d3 and d2
p2 d4, d3 and d1
p1 d4, d2 and d1

Bit Position                                          7   6   5   4   3   2   1
Stores                                               d4  d3  d2  p3  d1  p2  p1
Data bits                                             1   0   1       1
Odd parity bit p3 (using d4, d3, d2)                              1
Odd parity bit p2 (using d4, d3, d1)                                      1
Odd parity bit p1 (using d4, d2, d1)                                              0
Data and parity bits at source                        1   0   1   1   1   1   0
Data received at destination (including parity bits)  1   1   1   1   1   1   0
Data bits received                                    1   1   1       1
Odd parity bit p3 (using d4, d3, d2) = 0
Odd parity bit p2 (using d4, d3, d1) = 0
Odd parity bit p1 (using d4, d2, d1) = 0
Source parity bits (p3 p2 p1)                         1 1 0
Destination parity bits (p3 p2 p1)                    0 0 0
Parity comparison word (0 if source and
destination parity match, else 1)                     1 1 0

The comparison word 110 is 6 in decimal, so bit position 6 (data bit d3) is in error. Complementing it gives the corrected data: 1 0 1 1.

UNIT 3 LOGIC CIRCUITS - AN
INTRODUCTION
Structure Page Nos.
3.0 Introduction
3.1 Objectives
3.2 Logic Gates
3.3 Boolean Algebra
3.4 Logic Circuits
3.5 Combinational Circuits
3.5.1 Canonical and Standard Forms of a Boolean expression.
3.5.2 Minimization of Gates
3.6 Design of Combinational Circuits
3.7 Examples of Logic Combinational Circuits
3.7.1 Adders
3.7.2 Decoders
3.7.3 Multiplexer
3.7.4 Encoder
3.7.5 Programmable Logic Array
3.7.6 Read Only Memory (ROM)
3.8 Summary
3.9 Solutions/ Answers

3.0 INTRODUCTION

In the previous units, we have discussed the basic configuration of a computer system, simple instruction execution, data representation and different computer organisations. In addition, the ISA and micro-architecture were discussed in Unit 1 of this Block. It may be noted that, as stated in Unit 1, gates and logic circuits are the building blocks of a computer system. This unit introduces you to some of the basic components of a computer system that are essential for learning the logic of binary computing devices. In this unit, you will be introduced to the concepts of logic gates, binary adders, logic circuits and combinational circuits. These circuits form the backbone of any computer system, and knowing them will be useful in lower level programming.

3.1 OBJECTIVES

After going through this unit you will be able to:


 explain the basic functions of logic gates;
 describe role of Boolean algebra in digital circuits;
 perform minimization of the number of gates for a Boolean expression;
 identify and explain the basic circuits in a computer system; and
 design simple combinational circuits.

3.2 LOGIC GATES


A computer system is a binary device that uses electronic signals to perform basic computation on digital data, which is also stored electronically. The basis of such computation, called digital logic, is the electronic circuits fabricated on semiconductor chips that are used to formulate a set of operations. These basic sets of operations are then used to create complex circuitry, which is able to perform arithmetic, logical and control operations in a computer system. Thus, the simplest form of binary logic is: how a set of inputs can be used to create a typical output sequence, which is achieved using electronic gates.
A logic gate is an electronic circuit made of transistors, which produces a characteristic output signal for a typical input. In general, a gate accepts one to several input values and produces a specific output. This output can be 0 or 1. Logic gates are used for implementing basic Boolean operations, which are explained in the subsequent sections. A logic gate is represented using a graphic symbol and performs a simple binary function, which can be represented with the help of a truth table. Figure 3.1 shows the basic logic gates. Please note that in Figure 3.1, the character I represents input values and F represents output values.

Gate   Truth Table        Description
NOT    I   F              It simply inverts the input value, i.e. an input of 0 will be converted to 1 and vice-versa.
       0   1
       1   0
OR     I1  I2  F          This gate outputs the value 1 if at least one of its inputs is 1.
       0   0   0
       0   1   1
       1   0   1
       1   1   1
AND    I1  I2  F          This gate outputs 1 if both the input values are 1, else the output is 0.
       0   0   0
       0   1   0
       1   0   0
       1   1   1
NAND   I1  I2  F          This is NOT AND, so wherever the AND gate produces 0, this gate outputs 1.
       0   0   1
       0   1   1
       1   0   1
       1   1   0
NOR    I1  I2  F          This is NOT OR.
       0   0   1
       0   1   0
       1   0   0
       1   1   0
XOR    I1  I2  F          Exclusive OR produces the output 1 when the two inputs are dissimilar.
       0   0   0
       0   1   1
       1   0   1
       1   1   0
(The graphical symbol of each gate is not reproduced here.)
Figure 3.1: Logic Gates

In the next few sections, we explain how these simple logic gates can be used
to construct logic circuits. The next section explains the mathematics of logic
circuits.

3.3 BOOLEAN ALGEBRA

Boolean algebra was designed by George Boole in the 19th century. It presents
mathematical foundation for performing various functions on binary variables.
Please recall that binary variables can have only two values 0 or 1. The value 0
by convention is taken as False and 1 as True. Please also refer to Figure 3.1,
which shows the truth table for various gates. These truth tables can also be
represented using the Boolean function. Figure 3.2 shows the Boolean
algebraic representation of logic gates of Figure 3.1.
Gate   Boolean Representation   Explanation
NOT    F = I′                    The symbol ′ in a Boolean expression represents the negation operator. Thus, the output F is the complement or negation of the value of I.
OR     F = I1 + I2               OR is represented by the Boolean operator +, which means that the value of F is zero only if both I1 and I2 are zero, else it is 1.
AND    F = I1 . I2               The Boolean operator . represents the AND operation. The value of F is 1 if both I1 and I2 are 1, else F is 0.
NAND   F = (I1 . I2)′            NOT of AND
NOR    F = (I1 + I2)′            NOT of OR
XOR    F = I1 ⊕ I2               ⊕ is the exclusive-OR operator.
Figure 3.2: Gates and related Boolean algebraic expressions
The Boolean algebra is very useful for mathematically representing a binary
operation. For example, addition of two binary digits can be represented in
truth table form as:
I1 I2 Carry (C) Sum (S)
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0

 It can be represented using two Boolean functions, one for each output,
viz. Carry and Sum, as:
C = I1 . I2 and
S = I1 ⊕ I2 (Please refer to Fig. 3.1 & Fig 3.2)
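These two Boolean functions are all that is needed to reproduce the addition truth table. A tiny illustrative sketch (in Python, using & for AND and ^ for XOR) is shown below:

for i1 in (0, 1):
    for i2 in (0, 1):
        carry = i1 & i2          # C = I1 . I2
        s = i1 ^ i2              # S = I1 XOR I2
        print(i1, i2, carry, s)  # reproduces the four rows of the table above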
The Boolean algebra is used to simplify logic circuits that are made of logic
gates. However, before we demonstrate this process of simplification, first you
may go through the basic rules of Boolean algebra. Figure 3.3 shows these
rules. Please note that some of the rules are shown with proof using truth table.
You can make truth table yourself for the cases for which the proof is not
shown.

62
Principles of Logic
Circuits I
Input Identities
(i) I I+0=I I+1=1 I.0=0 I.1=I
0 0+0= 0 0+1=1 0.0=0 0.1= 0
1 1+0= 1 1+1=1 1.0=0 1.1= 1

Input Identities
I I+I=I I + I′ = 1 I.I=I I . I′ = 0
(ii)
0 0+0= 0 0+1=1 0.0= 0 0.1=0
1 1+1= 1 1+0=1 1.1= 1 1.0=0
(Please note 0′= 1 and 1′ = 0)
(iii) The rules (given without proof)
I1+I2=I2+I1 ;
I 1 . I2 = I2 . I1 ;
I1+(I2+I3)=(I1+I2)+I3 ;
I1.(I2.I3)=(I1.I2).I3
(iv) The rules (given without proof)
I1. (I2+I3) = (I1. I2 + I1.I3) ;
I1+I2.I3=(I1+I2) . (I1+I3)
(v) Demorgan’s Laws:
(I1+I2)′ = I1′.I2′
(I1.I2) ′ = I1′ +I2′
(Very important laws for algebraic simplification.)
(vi) Complement of complement of a number is the Number itself
I I′ (I′) ′
0 1 0 so (I′)′ = I
1 0 1
Figure 3.3: The Rules of Boolean algebra
All the rules and identities as given in Figure 3.3 can be used for simplification
of Boolean function. This is explained with the help of following example.
Example: Simplify the Boolean function:
F = ((A′+B′)′ + (A.B)′)′
Solution:
F = ((A′+B′)′ + (A.B)′)′
  = ((A′+B′)′)′ . ((A.B)′)′     (using De Morgan's law)
  = (A′+B′) . (A.B)             (using rule (vi))
  = (A.B) . (A′+B′)             (reversing the terms - rule (iii))
  = ((A.B).A′) + ((A.B).B′)     (using rule (iv), taking (A.B) as I1)
  = ((A.A′).B) + (A.(B.B′))     (using rule (iii))
  = 0.B + A.0                   (using rule (ii))
  = 0 + 0                       (using rule (i))
  = 0
F = 0
You can check the above using the following truth table:

A  B  A′  B′  (A′+B′)  (A.B)  (A′+B′)′  (A.B)′  (A′+B′)′ + (A.B)′  ((A′+B′)′+(A.B)′)′
0  0  1   1   1        0      0         1       1                  0
0  1  1   0   1        0      0         1       1                  0
1  0  0   1   1        0      0         1       1                  0
1  1  0   0   0        1      1         0       1                  0
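The same check can be automated by evaluating the original expression for all input combinations. An illustrative sketch (in Python, where the complement of a single bit x is taken as x ^ 1):

for a in (0, 1):
    for b in (0, 1):
        not_a, not_b = a ^ 1, b ^ 1
        f = (((not_a | not_b) ^ 1) | ((a & b) ^ 1)) ^ 1   # ((A'+B')' + (A.B)')'
        print(a, b, f)           # f is 0 for every row, as derived above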

The Boolean algebra is very useful in simplification of logic circuits. It is


explained in the next section.

3.4 LOGIC CIRCUITS


A logic circuit performs the basic operation on binary data. The operation of a logic
circuit can be represented using a Boolean function, or in other words a Boolean
function can be implemented as a logic circuit. The three basic gates that can be used
for drawing logic circuits are - AND, OR and NOT gates. Consider, for example, the
Boolean function: -
F (A,B,C) = A.B+C
The relationship between this function and its binary variables A, B, C can be
represented in a truth table as shown in Figure 3.4(a). Figure 3.4(b) shows the
corresponding logic circuit.

Inputs Output
A B C F= A.B+C

0 0 0 F=0.0+0=0
0 0 1 F=0.0+1=1
0 1 0 F=0.1+0=0
0 1 1 F=0.1+1=1
1 0 0 F=1.0+0=0
1 0 1 F=1.0+1=1
1 1 0 F=1.1+0=1
1 1 1 F=1.1+1=1
(a) Truth Table

A A.B
B A.B+C

C
C
Input Output F

(b) Logic Circuit

Figure 3.4: Truth table & logic diagram for F = A . B + C

64
While fabricating these logic circuits, it is expected that fewer gate types are used; Principles of Logic
Circuits I
however, these gate types should be able to create all kinds of circuits. Therefore,
functionally complete set of gates, which are a set of gates by which any Boolean
function may be implemented, are used to fabricate the logic circuits. Examples of
functionally complete sets are: [AND, OR, NOT]; [NOR]; [NAND] etc. NAND gate,
also called universal gate, is a special gate and can be used for fabrication of all kinds
of circuits. You may refer to further readings for more details on Universal gates.
Check Your Progress 1
1) What is a logic gate? What is the meaning of term Universal gate?
.........................................................................................................................................
.........................................................................................................................................

2) Prove the identity I1+I2.I3 = (I1+I2).(I1+I3) using Truth Table


……………………………………………………………………………………….
……………………………………………………………………………………….
3) Simplify the function F = ((A′+B) ′+(A.B′)′)′
……………………………………………………………………………………….
……………………………………………………………………………………….
……………………………………………………………………………………….
……………………………………………………………………………………….
……………………………………………………………………………………….

4) Draw the logic diagram of the function before simplification.


.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................
………………………………………………………………………………………..
5) Draw the logic diagram of the simplified function.
.........................................................................................................................................
.........................................................................................................................................

……………………………………………………………………………………….

3.5 COMBINATIONAL CIRCUITS


Combinational circuit is an interconnection of gates, which produces one or more
output based on some Boolean function for which it has been designed. A good
combinational circuit does not include feedback loops. A combinational circuit is also
represented using equivalent truth table or a Boolean function.

The output of the combinational circuit changes instantaneously with respect of input,
though some delay is introduced due to transfer of signal from the circuit. This delay
is dependent on the depth which is computed as number of gates in the longest path
from input to output. For example, the depth of the combinational circuit of Figure 3.5
is 2.

65
Introduction to Digital
Circuits
A A.B
B ((A.B) + (A′+B))

A′ A′+B
B

Figure 3.5: A two level AND-OR combinational circuit

Combinational circuits are primarily used to create the computational circuits


of computer system logic; therefore, efficient design of combinational circuit
may enhance the performance of a computer. Thus, one of the design goals of
combinational circuits design is to minimize the number of gates in a
combinational circuit. The constraints for combinational circuit design are:
 a combinational circuit should have limited depth.
 The number of input lines to a gate and number of gates to which the
output of gate is fed, should be limited.
How can it be achieved? The following sub-sections explains the basic issues
for combinational circuit design.

3.5.1 Canonical and Standard Forms of a Boolean expression.


An algebraic expression can exist in two standard forms:
i) Sum of Products (SOP) form
ii) Product of Sums (POS) form
Sum of product form
A SOP expression consists of terms consisting of operator (AND) which are
joined by + operator (OR). For example, the expression A′.B.C + A.B is an
expression consisting of three variables A, B and C. This expression is in SOP
from with two terms A′.B.C and A.B. Each of these terms consists of product
of variables using AND (.) operator, and the two terms are joined by an OR(+)
operator, that is why the name Sum of Products form. In a SOP expression, a
term which includes every variable in normal or complement form is called a
minterm or standard product term. For example, in the above expression the
term A′.B.C is a minterm, but A.B is not a minterm. However, if needed the
term A.B can be converted to two minterms as:
A.B = A.B.(C+C′)
= A.B.C + A.B.C′.
In addition, please note that the value of minterm be ONE for exactly one
possible combination of input values of A, B and C variables. For example, the
minterm A′.B.C will have a value 1 if A′=1 and B=1 and C=1. i.e., A=0, B=1
and C=1; for any other combination of values of A, B, C the minterm will have
a ZERO value.
Interestingly, the number of minterms depends on number of variables. Given,
n variables, the number of minterms will be 2n. For example, for two variables,

66
Principles of Logic
n=2 ⇒ 2n=22=4. The possible minterms for two variables are shown in the Circuits I
Figure 3.6.
Variables
Minterm
A B

0 0 A′B′ m0

0 1 A′B m1

1 0 AB′ m2

1 1 AB m3
Figure 3.6: Minterms for two variables
A function can be represented as a sum of minterms, for example a function F
in two variables using minterms A′B + AB can be represented as:

F(A,B)= A′B + AB
which can be represented as:
F(A,B) = ∑ (1,3)
(Please note that A′B is minterm m1 or 1 and AB is minterm m3 or 3,

Product of Sum form


In this form an expression is written as the product (AND) of the terms, which
use OR(+) as the basic operation, e.g. a three variables expression
(A+B′+C).(A′+B′) is in POS form having the two terms (A+B′+C) and
(A′+B′). Both the terms uses + operator and are joined by the AND operator,
thus, the name Product of Sum form. In POS form a term, which include all the
variables either in normal or complemented form is called a maxterm. For
example, the expression (A+B′+C).(A′+B′) has a maxterm (A+B′+C). A
maxterm can have a value 0 for exactly one combination of input. For example
the maxterm A+B′+C will have value 0 if A=0, B′=0 and C=0, which is A=0,
B=1 and C=0.
For any other combination of values of A, B and C, it will have a value 1.
The following Figure shows the maxterms. Please note that the output of
maxterm is 0, only for the given combination of input.
A B Maxterm
0 0 A+B M0
0 1 A+B′ M1
1 0 A′+B M2
1 1 A′+B′ M3
Figure 3.7: Maxterms for two variables

Example: Represent the function, whose SOP form is given below into an equivalent
function in POS form.

F(A,B) = A′.B + A.B or F(A,B) = ∑ (1,3) or the truth table representation is:

67
Introduction to Digital A B F(A,B)
Circuits
0 0 0
0 1 1
1 0 0
1 1 1
Solution:
The complement of this function in SOP form is represented as (the minterms that has
0 as function output).

F′(A,B) = A′.B′ + A.B′ --------(1)


Taking complement of equation (1) , you will get the function F in POS form.

(F′(A,B))′ = (A′.B′+A.B′)′
 F(A,B) = (A′.B′)′ . (A.B′)′
= ((A′)′+(B′)′).(A′+(B′)′)
= (A+B) . (A′+B)
From the table you can determine that the function in POS form is:
F(A,B) = ∏ (0,2) as the terms are M0 and M2
Thus, you can see:
F(A,B) = ∑ (1,3) = ∏ (0,2)
(SOP form) (POS form)
With this background of minterm and maxterm, you now are ready to perform the
process of grouping of minterms, which will result in minimization of gates needed
for a digital circuit. This is discussed in the next section.
3.5.2 Minimization of Gates
The simplification of Boolean expression is useful for the design of a good
combinational circuit. There are several methods of doing so, however in this unit
only the following two methods are discussed in details.

 Algebraic Simplification
 Karnaugh Maps

Algebraic Simplification
The following example explains the process of algebra simplification
Example : Simplify the function: F(A,B,C) = ∑ (0,1,4,5,6,7)
Solution: Expanding the Minterms of the functions as:

F(A,B,C) = A′.B′.C′+A′.B′.C + A.B′.C′ + A.B′.C + A.B.C′ + A.B.C


= A′.B′(C′+C) + A.B′.(C′+C) +A.B.(C′+C)
= A′.B′.1 +A.B′.1+A.B.1 (as C′+C = 1)
= (A′.B′+A.B′) + A.B
= B′ (A′+A) + A.B
= B′+A.B
Please note that C input has no effect on the function.

The truth table for the function and the equivalent expression is:

68
Principles of Logic
Circuits I
A B C F(A,B,C) = ∑ (0,1,4,5,6,7) B′+A.B
0 0 0 1 1
0 0 1 1 1
0 1 0 0 0
0 1 1 0 0
1 0 0 1 1
1 0 1 1 1
1 1 0 1 1
1 1 1 1 1

Thus, the logic circuit for the simplified equation F(A,B,C) = AB+B′

A
A.B
AB+B′
B

B′
Figure 3.8: Simplified logic function using algebraic Simplifications
The logic diagram of the simplified expression is drawn using one NOT, OR and
AND gate each.
The algebraic simplification becomes cumbersome because it is not clear which
simplification should be applied next. The Karnaugh map is a simplified process of
design of logic circuit using graphical approach. This is discussed next.

Karnaugh Maps
Karnaugh map is graphical way of representing and simplifying a Boolean function.
They are useful for design of circuits involving 2 to 6 variables. The following is the
process for simplification of logic circuit using Karnaugh map (K map).

Step 1: Create a Rectangular K-map and Assign binary and decimal equivalent values
to each cell
Create a rectangular grid of variables in a function. Figure 3.9 shows the map
of two, three and four variables. A map of 2 variables consists of a grid 22 = 4
elements or cells, while a map of 3 variables has 23 = 8 cells and 4 variables
has 24 =16 cells. Please note that the number of cells are same as the
maximum possible number of minterms for those number of variables.
Each cell corresponds to a set of variable values, shown on the top or left of
the K-map. For example, the values 00, 01, 11, 10 are written on the top of the
cells of K-maps of 3 and 4 variables. These represent the values of the
variables. For example, for the 3-variable k-map values written on BC side for
the first cell 00 indicate B=0 and C=0. Please note that variable values are
assigned such that any two adjacent cells (horizontal or vertical) differ only in
one variable. For example, cell values 01 and 11 differ in 1 bit only, so are the
values 11 and 10. The decimal equivalent values are shown inside the cells.
For example, for a 3-varaible K map cell having A=1 and BC=11, which is
ABC as 111 is 7. Please note that the sequence of the number is not sequential
in 3 variable and 4 variable K maps. This is because of the condition of
change in only one variable between two adjacent cells. The decimal
equivalent of minterm varaible values are marked inside the cells. For
example, decimal equivalent (or minterm equivalent) number placed in the
cell having ABCD values as 1111 in the 4 variable k-map is 15.

69
Introduction to Digital Please note that bottom row is adjacent to top row; and last column is
Circuits
adjacent to first column as they differ in only one variable respectively.

CD
B AB 00 01 11 10
A 0 1 BC
00 0 1 3 2
0 0 1 A 00 01 11 10
0 0 1 3 2 01 4 5 7 6
1 2 3
1 4 5 7 6 11 12 13 15 14

10 8 9 11 10

2-variables K map 3-variables K map 4-variables K-map


Fig. 3.9: K-map of 2, 3 and 4 variables
Step 2: MAP the Boolean function or truth table of Boolean function into K-map.
Put a value 1 for every minterm for which the function output is 1.
Step 3: Simplify algebraic expression: Find adjacency of 1’s in the K-map. You
must find maximum adjacency in a sequence of …, 8, 4, 2. A cell having 1
can appear in more than one adjacencies. Find the maximal adjacencies till
all 1’s are part of at least one adjacency.
Step 4: Write the Boolean term for each adjacency and join these terms using OR
operator.
Resultant function is the simplified Boolean expression.
Example: Use K-map for finding the simplified Boolean function for the function
F(A,B,C,D) = ∑ (0,2,8,9,10,11,15)
Solution: The Truth table for the function is given below.

Decimal A B C D F
0 0 0 0 0 1
1 0 0 0 1 0
2 0 0 1 0 1
3 0 0 1 1 0
4 0 1 0 0 0
5 0 1 0 1 0
6 0 1 1 0 0
7 0 1 1 1 0
8 1 0 0 0 1
9 1 0 0 1 1
10 1 0 1 0 1
11 1 0 1 1 1
12 1 1 0 0 0
13 1 1 0 1 0
14 1 1 1 0 0
15 1 1 1 1 1

(a) Truth table

       CD=00  CD=01  CD=11  CD=10
AB=00    1      0      0      1
AB=01    0      0      0      0
AB=11    0      0      1      0
AB=10    1      1      1      1
(Cells are numbered as in Figure 3.9; 1s are placed in cells 0, 2, 8, 9, 10, 11 and 15.)

(i) Adjacency 1: the four corners (cells numbered 0, 2, 8, 10)
(ii) Adjacency 2: the bottom row (cells numbered 8, 9, 11, 10)
(iii) Adjacency 3: cell 11 and cell 15
(b) Karnaugh map

Figure 3.10: Truth table & K-Map of Function F =  (0, 2, 8, 9, 10, 11, 15)
The three adjacencies of the K-map are shown in Figure 3.10. You can write the Boolean expression for each adjacency.
1) The adjacency 1 of four corners (cells Numbered 0, 2, 8, 10) can be written
algebrically as:
A′.B′.C′.D′ + A′.B′.C.D′ + A.B′.C′.D′ + A.B′.C.D′
= A′.B′.D′ .(C′+C) + A.B′.D′.(C′+C)
= A′.B′.D′ + A.B′.D′ (as C′+C=1)
= (A′+A).B′.D′
= B′.D′ (as A′+A=1)
Please note that an adjacency of 8/4/2 reduces the variables by 3/2/1 respectively.

A direct way of doing so is to identify the variables values of the adjacent cells which
does not change, e.g. for this adjacency cell variable ABCD are 0000, 0010, 1000 and
1010. Thus the variable values of B and D does not change in all these 4 cells. In
addition, since B and D have zero values among all these four cells, therefore, the
expression is B′D′.

2) The four 1’s in the bottom row (cells Numbered 8, 9, 11, 10)
The values of variable AB does not change and is 10 for the entire row, therefore, the
expression for this adjacency would be A.B′

3) The two 1’s in cell 11 and 15


A.B.C.D + A.B′.C.D
= A.C.D.(B+B′)
= A.C.D
You can also infer this directly from the cell values 1111 and 1011: only B changes, so the term is A.C.D.

Thus, the simplified Boolean expression using K-Map is


F(A,B,C,D) = B′.D′+A.B′+A.C.D
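Such a simplification can always be verified by evaluating both forms over all 16 input combinations. An illustrative sketch (in Python) for this function:

# Check that B'D' + AB' + ACD produces exactly the minterms (0,2,8,9,10,11,15).
minterms = {0, 2, 8, 9, 10, 11, 15}
for m in range(16):
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    simplified = ((b ^ 1) & (d ^ 1)) | (a & (b ^ 1)) | (a & c & d)
    assert simplified == (1 if m in minterms else 0)
print("the simplified expression matches all 16 rows of the truth table")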
The simplified expression using the K-map is in SOP form. In order to obtain the expression in POS form, the K-map is created for the 0 values and the adjacencies are identified. The following example explains these steps.

Example: Use K-map to find the simplified Boolean function for the function
F(A,B,C,D) = ∑ (0,2,8,9,10,11,15) in POS form.

Solution: The truth table is shown in previous example. It can be used to draw K-map
for 0 values, which will be for the complement of the function, i.e. F′(A,B,C,D), as:

       CD=00  CD=01  CD=11  CD=10
AB=00            0      0
AB=01     0      0      0      0
AB=11     0      0             0
AB=10

K-map for F′ (the 0s of F are marked; cells are numbered as in Figure 3.9)
Four Adjacencies:
(i) Cells 1, 3, 5, 7: A′ and D do not change, so the term is A′.D
(ii) 2nd row (cells 4, 5, 7, 6): A′ and B do not change, so the term is A′.B
(iii) Cells 4, 5, 12, 13: B and C′ do not change, so the term is B.C′
(iv) Cells 6 and 14: B, C and D′ do not change, so the term is B.C.D′

Since, you have found the adjacencies of 0’s, therefore


F′(A,B,C,D) = A′.D+A′.B+B.C′+B.C.D′
or F(A,B,C,D) = (A′.D+A′.B+B.C′+B.C.D′)′
= (A′.D+A′.B)′ . (B.C′+B.C.D′)′
= (A′.D)′ . (A′.B)′ . (B.C′)′ . (B.C.D′)′
= ((A′)′+D′) . ((A′)′+B′) . (B′+(C′)′) . [(B.C)′+(D′)′]
F(A,B,C,D) = (A+D′) . (A+B′) . (B′+C) . (B′+C′+D), which is in POS form.

In certain digital design situations, some of the input combinations have no significance. For example, while designing a circuit for BCD, only the outputs for the input combinations 0000 (digit 0) to 1001 (decimal digit 9) are needed; for the remaining inputs, 1010 to 1111, the output does not matter. Such K-maps are designed using the DON'T CARE condition. The output for DON'T CARE input combinations is marked as X in the K-map. The cells marked X can be used for determining the maximal adjacencies, but need not be covered, as is the case for cells with output 1. A detailed discussion on this is beyond the scope of this unit.

What will happen if you want to design circuits for more than 6 variables? With the increase in the number of variables, K-maps become more cumbersome and are not suitable. Other methods have been designed for this, which are beyond the scope of this course.
Check Your Progress 2
1) Draw the truth table for the following Boolean functions:
(i) F(A,B,C) = A′.B.C′+A.B.C+A.B.C′+B.C+A.C
(ii) F(A,B,C) = (A+B) . (A′+C′) . (C′+B′)
…………………………………………………………………………………………
…………………………………………………………………………………………
2 Simplify the following using algebraic simplification. And draw the logic
diagram for the function so obtained
(i) F(A,B) = (A′.B′+B′)′
(ii) F(A,B) = (A.B+A′.B′)′
…………………………………………………………………………………………
…………………………………………………………………………………………

3) Simplify the following Boolean functions in SOP and POS forms using K-
Maps. Draw the logic diagram for the resultant function.
F (A,B,C,D) =  (0,2,5,7,12,13,15)
…………………………………………………………………………………………
…………………………………………………………………………………………

3.6 DESIGN OF COMBINATIONAL CIRCUITS

Digital circuits are often constructed with NAND or NOR gates instead of AND-OR-NOT gates, as they are universal gates; therefore, any digital circuit can be implemented using these gates. To prove this point, in the following diagrams the AND, OR and NOT gates are implemented using NAND and NOR gates. This is shown in Figures 3.11 to 3.15 below.

NOT Operation:

A F=A′ A F=A′
A A
A A F A A F
0 0 1 0 0 1
1 1 0 1 1 0
Figure 3.11: NOT Operation using NAND or NOR gates

AND Operation:
Performing AND using NAND gates can be achieved by first performing the NAND
or the input followed by inverting the output as shown in Figure 3.12
F = A .B
= ((A.B)′)′
F = (A NAND B) ′

A
(A.B)′ (A.B)

B
Figure 3.12: Logic circuit of AND Operation using NAND gates
AND operation can also be implemented using NOR gates. The following Boolean
expression identifies that first NOR gates are used to invert the A and B input
followed by taking NOR of A′ with B′
F = A.B
F = ((A.B)′)′
= (A′+B′)′
= A′ NOR B′

A A′
(A′+B′)′≡A.B

B
B′

Figure 3.13: Logic circuit of AND Operation using NOR gates

73
Introduction to Digital OR Operation:
Circuits
OR operation can be performed using NAND gate. Please refer to following Boolean
expressions:
F = A+B
= ((A+B)′)′
F = (A′.B′)′  A′ NAND B′

A A′
(A′.B′)′

B
B′
Figure 3.14: Logic circuit of OR Operation using NAND gates
F=(A+B)
F= ((A+B)′)′.
F = (A NOR B)′

A (A+B)′ (A+B)
B

Figure 3.15: Logic circuit of OR Operation using NOR gates


A Boolean function can be implemented using the universal NAND or NOR gates by
expressing the function in sum of product form as explained in the following example.
Example: Draw the circuit for F(A,B,C) = ∑ (0,1,3,7) using NAND gates.
Solution: Find the optimal Boolean function using K-map in SOP form:

      BC=00  BC=01  BC=11  BC=10
A=0     1      1      1
A=1                   1
(1s are placed in cells 0, 1, 3 and 7.)

F(A,B,C) = A′B′+BC
The AND – OR gate logic circuit for this is:
B

A′B′+BC
C
A′
B′
Figure 3.16 Logic Circuit Using AND-OR gate
For the NAND gate logic circuit:
F(A,B,C) = (A′B′+BC)
= ((A′.B′)′)′+((B.C)′)′
= ((A′. B′)′.(B.C)′)′ ( Use of Demorgan's law)
= ((A′ NAND B′).(B NAND C))′
= (A′ NAND B′) NAND (B NAND C)

Thus, the circuit can be made simply by replacing two levels AND-OR circuit by
NAND gates:
B

C (A′B′+BC)
A′

B′

Figure 3.17: Logic Circuit by Replacing AND - OR circuit by NAND gates


A combinational circuit is required to produce a specific set of output for a given step
of input. The design of a combinational circuit simply requires the following steps:
Step 1: Make the Truth table for the required design. You must draw the truth table
for every output value.
Step 2: Use K-map or any other method to create optimal Boolean function that
creates the desired output. One function is designed for each output.
Step 3: Draw the resultant circuit using universal gates.
The next section first design the design of half adder circuit as a combinational circuit.
In addition, next section discusses some of the combinational circuits, the design of
which is not detailed in this Unit..

3.7 EXAMPLES OF COMBINATIONAL CIRCUITS
In this section, first the combinational circuits design is demonstrated using basic
combinational circuits like half and full adders. This is followed by discussion on
combinational circuits like decoders, multiplexers etc.

3.7.1 Adders
Addition is one of the most common arithmetic operations. In this section two different kinds of addition circuits are designed. The first of the two circuits adds two binary digits and is called a half adder, while the second adds three bits (two addend bits and one carry bit) and is called a full adder.
Half Adder:
Let us assume that a half adder circuit is adding two bits a and b to produce one sum bit (s) and one carry bit (c). The following truth table shows this operation. Please note that on adding a = 1 and b = 1 you get a carry of 1 and a sum bit of 0, as shown in the truth table. The K-maps for the addition are shown in Figure 3.18.

75
Introduction to Digital a b c s
Circuits
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
(a) Truth table

s c
b b
0 1 0 1
a a
0 0 1
0 11 0
2 3 2 3
1 1 1 1

(b) K- map for sum bit (c) K-map for carry bit
Figure 3.18: Truth table and K-maps for half adder
The Boolean expression for them from the k-maps are:
s = a′b+ab′ and
c = a.b
The logic circuit for the half adder is based on the Boolean expressions are given are
shown in Figure 3.19.

a′ a′b
b
s = a′b+ab′
a
a
a ab′
a′
b′
b b

b′
a
c = ab

b
Figure 3.19: The half adder circuit-input addend bits a, b; output sum bit (s) and
carry bit (c)
Full Adder:
Full adder is a circuit that adder 3 bits, viz. 2 addend bits and one carry bit. The truth
table for full adder is shown in Figure 3.20. Please note that in figure 3.20, cin is carry
in bit and cout is carry out bit.

76
Principles of Logic
Input Output Circuits I
Decimal a b cin Carry Sum
equivalent out (cout) (s)
0 0 0 0 0 0
1 0 0 1 0 1
2 0 1 0 0 1
3 0 1 1 1 0
4 1 0 0 0 1
5 1 0 1 1 0
6 1 1 0 1 0
7 1 1 1 1 1
Figure 3.20: The Truth Table for Full Adder
Please note that in the truth table, when a = 1, b = 1 and cin = 1, then the output is 11, which means the sum bit (s) is 1 and the carry out bit is also 1. The K-maps for these are also shown in Figure 3.21.

cin cout
0 1 cin
ab 0 1
00 0
11 ab
0 1
2 3
00
01 1 2 3
6 7
01 1
11 1 6
4 5
11 1 17
10 1 4 5
10 1

K-map for sum bit K – Maps for cout


No adjacency Three adjacencies
Figure 3.21: The K-maps for Full Adder
The Boolean functions based on the K-maps given in Figure 3.21 are given
below:
s = a′.b′.cin + a′.b.cin′ + a b cin + a b′ cin′
cout = a b + b cin + a cin
Figure 3.22 shows the full adder circuit. Please note that for simplicity the
circuits for inverting the input values are not drawn.

a′
b′
cin
a′
b
cin′
a
b
cin
a
b′
cin′ (a) Sum bit

77
Introduction to Digital
Circuits
a
b
b
cin
a
cin
(b) Carry Out bit
Figure 3.22: Full Adder
Full adder and half adder only perform bit addition of two operands without or
with carry bit respectively. However, binary numbers have several bits e.g.
integers can be 4 byte long. How will they be added? This is performed by
creating a sequence of full adders, where carry out bit of the lower bit addition
is fed as carry in bit of next higher bit addition, as shown in figure 3.23.

a0 b0 a1 b1 a2 b2 a3 b3

cin = 0 Full c0 cin Full c1 cin Full c2 cin Full c3 cout


Adder Adder Adder Adder
(bit 0) (bit 1) (bit 2) (bit 3)

s0 s1 s2 s3

Figure 3.23: Addition of a 4 bit number using 4 full adders


Please note that in the Figure 3.23, the carry out of the previous bit addition is
input as carry in bit of the next bit addition. For example, the value of c0 bit,
which is carry out of bit 0 addition, will be available if the full adder of bit 0
has been completed. Drawback of this circuit is the time taken to add the
number is large as each full adder will take some signal propagation time, and
the addition of the next bit cannot be performed till the cin bit is available. A
faster binary adder circuit would predict the carry bit. These are called look-
ahead carry adders. It may be noted that carry out of the 0th bit can be
computed using the Boolean function c0 = a0.b0. The value c0 is input as cin of
full adder of bit 1. The truth table of Figure 3.24 has been drawn for full adder
of bit 1. This truth table can be used to design circuit that computes the value
of c1 prior to actual addition.
Input Output
Decimal c0 a1 b1 c1 s1
equivalent
0 0 0 0 0 0
1 0 0 1 0 1
2 0 1 0 0 1
3 0 1 1 1 0
4 1 0 0 0 1
5 1 0 1 1 0
6 1 1 0 1 0
7 1 1 1 1 1
Figure 3.24: Truth table for Full adder (bit 1)
The K-map for c1 is shown in Figure 3.24:

        a1b1=00  a1b1=01  a1b1=11  a1b1=10
c0=0                         1
c0=1               1         1         1
(1s are placed in cells 3, 5, 6 and 7.)
Figure 3.24: K-map for c1 output of Full adder (bit 1)
There are three adjacencies in the K-map of Figure 3.24. The resultant Boolean
function for c1 would be:
c1 = a1.b1+c0.a1+c0.b1
c1 = a1b1+c0.(a1+b1) (Taking c0 common)
c1 = a1.b1+a0.b0.(a1+b1) (Replacing c0 by its equivalent)
Logic circuits can be designed for prediction of carry bits c0, c1, etc. and
resultant circuits can be implemented along with full adder circuits. You can
observe that Boolean expression for higher order carry bits like c2, c3 etc. will
become more complex, which results in complex logic circuits. Thus, look
ahead carry bit adders may be implemented for addition of binary numbers of
size 4-8 bits.
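The ripple-carry arrangement of Figure 3.23 is easy to model in software. The following is an illustrative sketch (in Python) in which each stage evaluates the full adder's Boolean functions and passes its carry to the next stage:

def full_adder(a, b, cin):
    s = a ^ b ^ cin                              # sum bit (equivalent to the SOP expression above)
    cout = (a & b) | (b & cin) | (a & cin)       # carry out, as derived above
    return s, cout

def ripple_add(a_bits, b_bits):                  # bit lists, index 0 = least significant bit
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)       # carry out of one stage feeds the next
        result.append(s)
    return result, carry

print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))    # 0101 + 0011 = 1000 -> ([0, 0, 0, 1], 0)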
Adder-Subtractor Circuit
Adder subtractor circuit is an interesting design, in which a same circuit is used
for addition as well as subtraction. This example shows how with some
additional logic, you may be able to perform additional operations. ALU is a
fine example of extension of such logic. Figure 3.25 shows the circuit of 4 bit
adder-subtractor circuit by using full adders.
s3 s2 s1 s0
Carry in
sign bit
0/1

0 No overflow Full c2 Full Full c0 Full


1 Overflow Adder Adder Adder Adder
Carry out of (bit 3) (bit 2) c1 (bit 1) (bit 0) cin
sign bit

a3 a2 a1 a0

b3 b2 b1 b0

Mode bit: Addition


Operation mode bit = 0
Subtraction operation mode
bit = 1
Figure 3.25: Adder Subtractor circuit using full adders for
addition and subtraction of 4-bit 2′s complement numbers

79
Introduction to Digital
Circuits
You may please note that the mode bit controls the b input. The following
Figure shows the details of operation.
Mode bit = 0 (addition):
Each bit of input b is XORed with 0, so the adder receives b unchanged
(0 XOR 0 = 0, 1 XOR 0 = 1). As cin = 0 when the mode bit is 0, the circuit
adds input a and input b, i.e. r = a + b.

Mode bit = 1 (subtraction):
Each bit of input b is XORed with 1, so the adder receives the 1′s complement of b
(0 XOR 1 = 1, 1 XOR 1 = 0). As cin = 1 when the mode bit is 1, the addition becomes
r = a + b′ + 1, i.e. r = a + 2′s complement of b = a − b, the subtraction of b from a.
Figure 3.26: Use of Mode bit to control b input in Adder subtractor circuit

Please also note that in 2’s complement notation the most significant bit is treated as the sign
bit. The overflow condition is checked by testing whether the carry into the sign bit
and the carry out of the sign bit are the same or not. If the carry into the sign bit is
not the same as the carry out of the sign bit, then overflow is set to 1 (by the XOR
gate); else overflow is set to 0 (no overflow).
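The following Python sketch mimics the behaviour of the 4-bit adder-subtractor described above (the mode bit XORs each b bit and supplies the initial carry, and overflow is the XOR of the carries into and out of the sign bit); the function name and bit ordering are only illustrative.

```python
# 4-bit adder-subtractor on 2's complement numbers (a minimal sketch of Figure 3.25).
def add_sub_4bit(a_bits, b_bits, mode):
    """a_bits/b_bits are [bit0..bit3] (LSB first); mode 0 = add, 1 = subtract (a - b)."""
    carry = mode                              # mode bit is also the initial carry in
    result = []
    for i in range(4):
        b = b_bits[i] ^ mode                  # XOR gate: passes b or its complement
        s = a_bits[i] ^ b ^ carry
        carry_out = (a_bits[i] & b) | (carry & (a_bits[i] | b))
        if i == 3:
            carry_into_sign = carry           # carry entering the sign-bit adder
        result.append(s)
        carry = carry_out
    overflow = carry_into_sign ^ carry        # XOR of carry into / out of sign bit
    return result, overflow

# 0010 (2) - 0100 (4) = 1110 (-2), no overflow; bits are listed LSB first.
print(add_sub_4bit([0, 1, 0, 0], [0, 0, 1, 0], mode=1))   # ([0, 1, 1, 1], 0)
```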
3.7.2 Decoders
A decoder, as the name suggests, decodes its binary input and activates exactly one of its output lines.
Figure 3.27 shows the truth table and logic circuit of a 2 × 4 decoder.
Truth table
Input Output
a b c d e f
0 0 1 0 0 0
0 1 0 1 0 0
1 0 0 0 1 0
1 1 0 0 0 1
The Boolean functions for the various output values are:
c = a′.b′
d = a′.b
e = a.b′
f = a.b
[Logic diagram: inputs a and b, together with their complements, feed four 2-input AND gates that produce the outputs c (00), d (01), e (10) and f (11)]
Figure 3.27: 2 × 4 decoder
A decoder output line is selected if its output is 1. In general, a decoder is a
very useful circuit for selecting lines and forms the basis of Random Access
Memory.
Please note that the number of outputs for a 2-bit decoder is 2² = 4; hence the name
2 × 4 decoder. Similarly, the number of outputs for a 3-bit input would be 2³ = 8,
and it is called a 3 × 8 decoder.
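A small Python sketch of the 2 × 4 decoder behaviour described above (names are illustrative):

```python
# 2 x 4 decoder: exactly one output line goes high for each input combination.
def decoder_2x4(a, b):
    not_a, not_b = 1 - a, 1 - b
    c = not_a & not_b      # line 00
    d = not_a & b          # line 01
    e = a & not_b          # line 10
    f = a & b              # line 11
    return [c, d, e, f]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, decoder_2x4(a, b))
# 0 0 [1, 0, 0, 0]
# 0 1 [0, 1, 0, 0]
# 1 0 [0, 0, 1, 0]
# 1 1 [0, 0, 0, 1]
```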
3.7.3 Multiplexer
A multiplexer allows sharing of a line by multiple inputs. It is very useful
for serialization of data bits over a single output line. The design of a
multiplexer is, however, different from other combinational circuits, as it is the
selection lines which control which input line is selected. The following is the
truth table of a 4 × 1 multiplexer. A 4 × 1 multiplexer selects one of the 4 input
lines to be transmitted over a single output. Which of these 4 lines is
selected is determined by 2 selection lines. How many selection lines
would be required for an 8 × 1 multiplexer? Since 2³ = 8, three selection lines would
be required for an 8 × 1 multiplexer.

Selection lines      Selected input      Output
  s1    s0
   0     0               I0                I0
   0     1               I1                I1
   1     0               I2                I2
   1     1               I3                I3

Please note the values of output can be Ii , where the value of subscript i can
vary from 0 to 3.

[Logic diagram: four AND gates, each gated by a different combination of the selection lines s1 and s0, receive inputs I0–I3; their outputs feed an OR gate whose output is the selected input I0/I1/I2/I3]
Figure 3.28: 4 × 1 Multiplexer


Please note this is a very important circuit for sharing of an output line across
many sources of input.
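The following Python sketch reproduces the AND-OR structure of the 4 × 1 multiplexer of Figure 3.28 (illustrative names only):

```python
# 4 x 1 multiplexer: the two selection lines choose which input reaches the output.
def mux_4x1(inputs, s1, s0):
    i0, i1, i2, i3 = inputs
    return ((i0 & (1 - s1) & (1 - s0)) |
            (i1 & (1 - s1) & s0) |
            (i2 & s1 & (1 - s0)) |
            (i3 & s1 & s0))

data = [1, 0, 1, 1]          # I0..I3
print(mux_4x1(data, 1, 0))   # selects I2 -> 1
```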
3.7.4 Encoders
An encoder, in general, is the inverse of a decoder. Based on its input it
produces a specific output. For example, the truth table of a 4 × 2 encoder is
shown in Figure 3.29

Input                          Output
I0    I1    I2    I3          O1    O0
1 0 0 0 0 0
0 1 0 0 0 1
0 0 1 0 1 0
0 0 0 1 1 1
Figure 3.29: Truth Table of 4 × 2 encoder
The simple expressions for the outputs, as per the truth table, are
O1 = I2 + I3
and O0 = I1 + I3 (input I0 produces the code 00 and hence need not be connected).
Thus, the simple circuit for this encoder uses two OR gates:
[Logic diagram: OR gates combine the appropriate input lines to produce O0 and O1]
Figure 3.30: Logic Diagram of a simple encoder
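A minimal Python sketch of the 4 × 2 encoder, assuming (as the truth table does) that exactly one input line is active at a time:

```python
# 4 x 2 encoder: outputs the 2-bit code of the single active input line.
def encoder_4x2(i0, i1, i2, i3):
    o1 = i2 | i3
    o0 = i1 | i3
    return o1, o0

print(encoder_4x2(0, 0, 1, 0))   # input I2 active -> code (1, 0)
```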

3.7.5 Programmable Logic Array


The basic combinational circuits can be implemented using AND-OR-NOT
gates. PLAs are prefabricated circuits containing arrays of AND, OR and NOT gates
whose interconnections can be programmed. They can be used to fabricate any kind of
logic circuit and are primarily designed for the SOP form of logic functions.

Figure 3.31: Programmable Logic Array


Figure 3.31(a) shows a PLA of 3 inputs and 2 outputs. Figure 3.31(b) shows an
implementation of the logic functions given below using the PLA of Figure 3.31(a):
O0 = I0 . I1 . I2 + I0′ . I1′ . I2′
O1 = I0′ . I1′ . I2′ + I0′ . I1′

3.7.6 Read-Only Memory (ROM)
ROM is an example of the use of Programmable Logic Devices (PLDs). It stores
binary information using a combinational circuit and follows a simple design.
Figure 3.32 shows a ROM of size 4 × 2, which has 4 words of 2 bits each.
Please note the use of the 2 × 4 decoder. Also note that an output appears wherever a
decoder line is connected to an output OR gate. These connections are embedded within the
hardware; thus, the information in the ROM is not lost even after a power failure.
[Figure: address lines A1 and A0 feed a 2 × 4 decoder; the decoder outputs (00, 01, 10, 11) are selectively connected to two OR gates producing O1 and O0]
Figure 3.32: ROM Design


Please note that the ROM shown in Figure 3.32 has the following content:

Address selection       ROM content
   A1   A0               O1    O0
    0    0                1     0
    0    1                0     0
    1    0                1     0
    1    1                1     0

Please note that the number of words in the ROM is 4, which is 2² = 4, as it has 2
address lines. A ROM with 3 address lines will have 8 words (2³ = 8).
The size of the word can be chosen by the designer and, in general, it can be 8, 16,
32 or 64 bits as decided by the machine firmware designer. The address lines of the ROM
select any one line as per the address.
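Seen from outside, the 4 × 2 ROM of Figure 3.32 behaves as a fixed combinational lookup: the decoder selects one word line and the hard-wired connections give the output. The following is a minimal Python sketch of that behaviour, using the content table above.

```python
# Contents of the 4 x 2 ROM: (A1, A0) -> (O1, O0), as per the table above.
ROM_CONTENT = {
    (0, 0): (1, 0),
    (0, 1): (0, 0),
    (1, 0): (1, 0),
    (1, 1): (1, 0),
}

def rom_read(a1, a0):
    return ROM_CONTENT[(a1, a0)]   # read-only: no write operation exists

print(rom_read(1, 1))   # (1, 0)
```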
Check Your Progress 3
1) Design a combinational circuit, which takes four bit input and produces an
output 1 if the input contains three consecutive 1 bits.
………………………………………………………………………………..
………………………………………………………………………………..

2) Draw the logic diagram of the function as above using


(i) AND-OR-NOT gates &
(ii) NAND gates
………………………………………………………………………………..
……………………………………………………………………………….
……………………………………………………………………………….
3) Consider the adder-subtractor circuit of Figure 3.25; what would be the output of the circuit if:
(i) Input a is 1010 and input b is 1100 and mode bit is 0
(ii) Input a is 0010 and input b is 0100 and mode bit is 1
………………………………………………………………………………..
……………………………………………………………………………….
……………………………………………………………………………….

4) Why is PLA needed?


……………………………………………………………………………….
……………………………………………………………………………….
……………………………………………………………………………….
5) Design a full adder using two half adder circuits.
………………………………………………………………………………
………………………………………………………………………………
………………………………………………………………………………

3.8 SUMMARY

This Unit introduces you to some of the basic concepts relating to computer logic. The
Unit first introduces the concept of logic gates, the most fundamental units of logic
circuits. The Unit then explains the process of making simple logic circuits, including
combinational circuits. The mathematical foundation of logic circuit design, Boolean
algebra, is also introduced, and the Karnaugh map is used to design simpler
circuits. The Unit also explains the design of different kinds of adder circuits,
highlighting how complex circuits can be designed using K-maps. Finally, the Unit
explains some of the most fundamental combinational circuits like decoders,
multiplexers, encoders, PLAs, etc. It may be noted that the objective of this Unit is not
to make you a computer hardware designer, but to introduce you to some of the basic
concepts of circuit design.
You can refer to the latest trends of design and development, including VHDL (a hardware
design language), in the further readings.

3.9 SOLUTIONS/ANSWERS

Check Your Progress 1


1) A logic gate is the most fundamental circuit that can be fabricated on a silicon chip.
A logic gate operates at the signal level to produce a simple logic function like AND, OR,
NOT etc. A universal gate can be used to implement every kind of
logic circuit. Two examples of universal gates are NAND and NOR.
2)
I1 I2 I3 I1+I2.I3 (I1+I2).(I1+I3)
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 1 1
1 0 1 1 1
1 1 0 1 1
1 1 1 1 1

3) F = ((A′+B)′ + (A.B′)′)′
= ((A′+B)′)′ . ((A.B′)′)′   (by DeMorgan's Law)
= (A′+B) . (A.B′)   (as (a′)′ = a)
= ((A′+B).A) . ((A′+B).B′)
= ((A′.A) + (B.A)) . ((A′.B′) + (B.B′))
= (0 + A.B) . (A′.B′ + 0)
= A.B.A′.B′ = 0

4) Draw the logic diagram of the function before simplification.

[Logic diagram: gates generate (A′+B)′ and (A.B′)′, which are then NORed to produce F]
5) Since F = 0, the output F can simply be connected to the logical 0 input.


Check Your Progress 2
1)
A B C F=A′.B.C′+A.B.C+A.B.C′+B.C+A.C F= (A+B) . (A′+C′) . (C′+B′)
0 0 0 0 0
0 0 1 0 0
0 1 0 1 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 1
1 1 1 1 0

2 (i) F(A,B) = (A′.B′+B′)′


= (A'.B')' . (B')' (DeMorgan's Law)
= ((A')'+(B')').B (DeMorgan's Law )
= (A+B).B
= A.B + B = B.(A + 1) = B

B F

(ii) F(A,B) = (A.B+A′.B′)′


= (A.B)' . (A'.B')'
= (A'+B').(A+B)
= (A'+B').A + (A'+B').B
= A'.A+B'.A+A'.B+B'.B
= A.B' + A'.B
The logic diagram is the same as the sum bit of a half adder; the function is A XOR B.

3) Simplify the following boolean functions in SOP and POS forms using K-Maps.
Draw the logic diagram for the resultant function.
F(A,B,C,D) = Σ(0, 2, 5, 7, 12, 13, 15)

[K-map: CD along the top (00, 01, 11, 10) and AB along the side (00, 01, 11, 10); cells 0, 2, 5, 7, 12, 13 and 15 contain 1]

Three adjacencies:
i) Cells 0 and 2: the variables that do not change are A′, B′, D′
ii) Cells 12 and 13: the variables that do not change are A, B, C′
iii) Cells 5, 7, 13, 15: the variables that do not change are B, D
The expression is F = A′.B′.D′ + A.B.C′ + B.D
Check Your Progress 3
1) The Truth table:

Decimal A B C D F
0 0 0 0 0 0
1 0 0 0 1 0
2 0 0 1 0 0
3 0 0 1 1 0
4 0 1 0 0 0
5 0 1 0 1 0
6 0 1 1 0 0
7 0 1 1 1 1
8 1 0 0 0 0
9 1 0 0 1 0
10 1 0 1 0 0
11 1 0 1 1 0
12 1 1 0 0 0
13 1 1 0 1 0
14 1 1 1 0 1
15 1 1 1 1 1

The K-map for the truth table:
[K-map: CD along the top and AB along the side; cells 7, 14 and 15 contain 1]

Only two adjacencies: cells 7 and 15 give B.C.D, and cells 15 and 14 give A.B.C.
Therefore, the function is: F = A.B.C + B.C.D
2) Logic diagram of the function as above
(i) [Logic diagram using AND-OR gates: two 3-input AND gates realize A.B.C and B.C.D; their outputs feed an OR gate producing F]
(ii) [Equivalent diagram using NAND gates: two 3-input NAND gates (for A, B, C and B, C, D) feed a 2-input NAND gate producing F]
3) (i) Input a is 1010 and input b is 1100 and mode bit is 0
Bit wise addition will be as follows:
a 1 0 1 0
b 1 1 0 0
c 0 0 0 0
sum bit 0 1 1 0
carry out of sign bit 1
carry in to sign bit NOT equal to carry out of sign bit,
OVERFLOW
(ii) Input a is 0010 and input b is 0100 and mode bit is 1
Bit wise addition (a + 1's complement of b + 1) will be as follows:
a                        0 0 1 0
b (1's complement)       1 0 1 1
carry into each bit      0 1 1 1   (cin of bit 0 = 1, as mode bit = 1)
sum bit                  1 1 1 0
carry out of sign bit    0
carry in to sign bit IS EQUAL to carry out of sign bit,
NO OVERFLOW

4) PLAs can be fabricated as a chip that can be customised as per the needs of the
required SOP logic.

5) A half adder adds two addend bits, whereas a full adder adds two addend bits
and the previous carry bit. Therefore, one half adder is needed to add the two
addend bits, and a second half adder is needed to add the sum of the first half
adder and the previous carry bit. The output carry is set if either of the two half
adders produces a carry out. The following block diagram shows this
construction:

[Block diagram: A and B feed the first half adder; its sum and the carry-in bit feed the second half adder, whose sum output is the final sum bit; the carry outputs of the two half adders are ORed to give the carry out]
UNIT 4 LOGIC CIRCUITS – SEQUENTIAL CIRCUITS

Structure Page Nos.


4.0 Introduction
4.1 Objectives
4.2 Sequential Circuits: The Definition
4.3 Latches and flip-flops
4.3.1 Latches
4.3.2 Flip-Flop
4.3.3 Excitation Tables
4.3.4 Master Slave Flip Flops
4.3.5 Edge Triggered Flip-flops
4.4 Sequential Circuit Design
4.5 Examples of Sequential Circuits
4.5.1 Registers
4.5.2 Counters Circuit
4.5.3 Synchronous Counters
4.5.4 Random Access Memory
4.6 Summary
4.7 Solutions/ Answers

4.0 INTRODUCTION
The first Unit of this Block explained the basic structure and process of instruction
execution. Unit 2 provided a detailed description of data representation and Unit 3
presented the concepts of the basic functional units of a computer, viz. the logic gates and
combinational circuits. In this unit, you will be introduced to one of the most
fundamental circuits that can store one bit of data, called the flip-flop. The unit also
explains how flip-flops and additional logic circuits can be used to make registers,
counters and other sequential circuits. Finally, the Unit also introduces you to the simple design
of a sequential circuit.

4.1 OBJECTIVES
After going through this unit you will be able to:

 explain the functioning of flip-flops;


 determine the behaviour of various latches;
 construct excitation table of a flip-flop;
 explain circuits of a computer system like registers, counters etc.

4.2 SEQUENTIAL CIRCUITS: THE DEFINITION


A sequential circuit is an interconnection of combinational circuits and storage
elements. The storage elements, called flip-flops, store binary information that
indicates the state of sequential circuit at that point of time.

Figure 4.1 highlights that a sequential circuit may involve combinational circuits
(which were discussed in Unit 3) the flip-flops (which are discussed in this unit) and a
system clock, which is a useful timing device of a computer system.

[Figure: external inputs and the present flip-flop outputs feed a block of combinational circuits, which produces the external outputs and the next inputs to the flip-flops]

Figure 4.1: Block Diagram of sequential circuits.

(Ref: M. Morris Mano, Charles R. Kime: Logic and Computer Design Fundamentals, 2nd Edition,
Pearson Education)

The sequential circuits are time dependent. The present state of a sequential
circuit is identified by the present outputs of its flip-flops. These outputs may change over a
passage of time and can also be used as inputs. This change in state can
occur either in a synchronous or an asynchronous manner with respect to the system clock.
Synchronous circuits use flip-flops and their state can change only at discrete
intervals. Asynchronous sequential circuits are regarded as combinational circuits with
feedback paths. Such circuits may be unstable at times, when the propagation delays from
output to input are small. Thus, complex asynchronous circuits are difficult to design.
Clock Pulse and sequential circuits
A sequential circuit uses clock pulse generator, which gives continuous clock pulse to
synchronize change in the state of the circuit. Figure 4.2 shows the form of a clock
pulse.

Clock pulse
Figure 4.2: Clock signals of clock pulse generator

A clock pulse can have two states, viz. 0 or 1, which are also called disabled or active
state. Flip-flops are allowed to change their states, in general, with the rising or falling
edge of the clock pulse, so as to make stable changes in states of the flip-flops.

4.3 LATCHES AND FLIP-FLOPS


A latch is a storage element with a basic logic circuit, which can store 1 bit of data. It
is itself a sequential circuit. Flip-flops are constructed using latches and have more
complex timing sequences than latches. Therefore, in order to learn flip-flops,
learning the basic concept of latches is very useful, which is discussed next.

4.3.1 Latches
A basic latch can be constructed using either two NOR or two NAND gates. Figure
4.3(a) shows the logic diagram of an S-R latch using NOR gates. This latch has two inputs,
viz. S and R, for Set and Reset respectively, and one output Q. Please note that the Q′ output is
the complement of the output Q. This latch exhibits two states, called the SET state (when
the latch output Q is 1, that is Q′ = 0) and the RESET or clear state (Q = 0; Q′ = 1).

[Logic diagram: two cross-coupled NOR gates; R and the feedback from Q′ feed gate 'a', whose output is Q; S and the feedback from Q feed gate 'b', whose output is Q′]

S  R    Q     Q′    Comment
0  0    0/1   0/1   No change in state
0  1    0     1     Reset state
1  0    1     0     Set state
1  1    -     -     Undefined input

(a) Logic Diagram                       (b) Truth Table

Figure 4.3: SR Latch using NOR gates


The following table shows the truth table for NOR gates using in the S-R latch of
Figure 4.3 (a).

The truth tables for the NOR gates of Figure 4.3:

NOR gate marked 'a'                 NOR gate marked 'b'
Input (R, Q′)    Output (Q)         Input (S, Q)    Output (Q′)
0  0             1                  0  0            1
0  1             0                  0  1            0
1  0             0                  1  0            0
1  1             0                  1  1            0

Let us examine the latch in more details. Assume that initially latch is in clear state,
i.e. Q=0 and Q′=1; also assume that both S and R input are 0. The states of the latch
will be as follows (refer to the NOR gate truth table given above):

Gate ‘a’
Input R Q′ :: 0 1 ⇒ Output (Q) 0
Gate ‘b’ Output of latch stays in CLEAR state
Input S Q :: 0 0 ⇒ Output (Q′) 1

(i) Setting the latch:


Now assume that S is changed to 1 and R remains 0 during this time; then the
output of Gate ‘b’ will change first:
S Q :: 1 0 ⇒ Q′ will become 0
Gate ‘a’ now has the following input:
R Q′ :: 0 0 ⇒ Q will be set to 1.
Gate ‘b’ now has the following input SET state
S Q :: 1 1 ⇒ Q′ will stay at 0.
Thus, Flip-flop will be in SET state.
Finally, after some time S will become 0;
At that time, gate ‘a’
R Q′ :: 0 0 Q stays at 1
Gate ‘b’ Latch will stay in SET State
S Q :: 0 1 Q′ stay at 0

(ii) Reset the latch:
Now assume that input S remains at 0 and input R is changed to 1, also
assume that at this time the latch is in Set state (Q = 1 & Q′ = 0), then the
output of Gate ‘a’ will change as
Gate ‘a’
R Q′:: 1 0 ⇒ Q will become 0.
Gate ‘b’ Latch is in Reset state.
S Q :: 0 0 ⇒ Q′ will become 1
Once again, when S and R both input will become 0, latch will remain in
RESET state.

(iii) What happens when both S and R become 1 simultaneously? Both Q and Q′ are forced to 0, which is an invalid condition; therefore, this input combination is not allowed for the S-R latch.
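The feedback behaviour described above can be traced with a short Python sketch of the cross-coupled NOR latch (the settling loop is illustrative; real hardware settles through propagation delays):

```python
# Behaviour of the basic S-R latch built from two cross-coupled NOR gates.
def nor(x, y):
    return 1 - (x | y)

def sr_latch(s, r, q, q_bar):
    """Apply inputs S, R to a latch currently holding (Q, Q'); return the new state."""
    for _ in range(4):                    # a few passes let the feedback settle
        q_new = nor(r, q_bar)             # gate 'a'
        q_bar_new = nor(s, q_new)         # gate 'b'
        q, q_bar = q_new, q_bar_new
    return q, q_bar

state = (0, 1)                            # start in the RESET (clear) state
state = sr_latch(1, 0, *state);  print(state)   # (1, 0) -> SET
state = sr_latch(0, 0, *state);  print(state)   # (1, 0) -> no change
state = sr_latch(0, 1, *state);  print(state)   # (0, 1) -> RESET
```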

A basic S-R latch, in general, changes state at any time, which may result in
asynchronous changes in Q output, which can make system unstable.
Therefore, latches are constructed with controlled input using clock. This is
explained next.
SR latch with Clock
The following diagram shows an SR latch which changes its data only with the
occurrence of a clock pulse.

[Logic diagram: the clock signal gates S and R through two AND gates before they reach the basic SR latch]
(a) Logic Diagram
Clock   S    R    Present state Qt          Next state Qt+1           Comments
                  (before the clock pulse)  (after the clock pulse)
0       any  any  0/1                       0/1                       No change in state
1       0    0    0/1                       0/1                       No change in state
1       0    1    0/1                       0                         Reset the latch
1       1    0    0/1                       1                         Set the latch
1       1    1    0/1                       -                         Not defined
(b) Characteristic Table

Figure 4.4: R-S latch with clock.

Operations on this clocked SR latch are given below:

1) If no clock signal i.e. clock=0 ⇒ No change in state of latch.


2) Presence of clock signal
(i) if S=0 and R=0, No change in state/output stays same as earlier state.
(ii) if S=1, R=0, then next state is the SET state Q=1 & Q′ =0
(iii) if R=1 S=0, then next state is the RESET state Q=0 & Q′=1
(iv) if both S and R become 1, then next state/output is not defined.

D Latch
The D (data) latch is a modification of the RS latch. The D latch uses only one input, named D, and it
stores the value of D in the latch; e.g. if the D input is 1, then the next state of the latch
will also be 1. Figure 4.5 shows the clocked D latch.

[Logic diagram: the input D and its complement are gated by the clock into an SR latch]

D    Qt     Qt+1 (when clock pulse occurs)   Comment
0    0/1    0                                Clear the latch
1    0/1    1                                Set the latch

(a) Logic Diagram                            (b) Characteristic Table

Figure 4.5: D latch with clock


You may please go through the circuit and identify various changes in Q, Q′ with D as
shown for SR latch.

4.3.2 Flip-Flops
Latches suffer from the problem of frequent changes of output; e.g. the output of a
latch may change depending on the values of the R and S inputs, which may change from 1
to 0 or vice-versa during a single clock pulse. Therefore, they are less suitable for
sequential circuits. Flip-flops add more circuitry to latches so that changes in state
occur only during the rising or falling edge of the clock pulse (these are called edge-triggered
flip-flops). The R-S latch with clock can be used with additional circuits to make an R-S flip-flop.
Flip-flops can also be represented using block diagrams. Figure 4.6 shows
the block diagrams of the basic flip-flops. Please note that in the block diagram the
arrowhead in front of the clock signal indicates that the flip-flop will respond to input
during the leading or rising edge (when the transition from 0 to 1 takes place) of the clock pulse.
[Block diagrams: S-R flip-flop (inputs S, R and Clock), D flip-flop (inputs D and Clock), J-K flip-flop (inputs J, K and Clock) and T flip-flop (inputs T and Clock); each has outputs Q and Q′, and the edge trigger is shown by an arrowhead on the clock input]
Figure 4.6: Graphical Symbols of basic Flip-Flops

The JK flip-flop is almost identical to the SR flip-flop, except that the last combination, J = 1 and K
= 1, is used to complement the current state of the flip-flop. The T flip-flop is obtained by
joining the J and K inputs; thus, it has just two input values. When T = 0, there is no
change of state, and when T = 1, the current state is complemented. The following figure
(Figure 4.7) shows the characteristic tables for the basic flip-flops of Figure 4.6.

SR Flip-flop                                    JK Flip-flop
S  R   Qt+1   Comments                          J  K   Qt+1   Comments
0  0   Qt     No change in state                0  0   Qt     No change in state
0  1   0      Clear state                       0  1   0      Clear state
1  0   1      Set state                         1  0   1      Set state
1  1   -      Not defined                       1  1   Q′t    Complement of Qt

D Flip-flop                                     T Flip-flop
D  Qt+1   Comments                              T  Qt+1   Comments
0  0      Clear state                           0  Qt     No change in state
1  1      Set state                             1  Q′t    Complement of Qt
Figure 4.7: Characteristic Table for flip-flops
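The characteristic tables above can be expressed as small next-state functions. The Python sketch below is one way of writing them (names are illustrative; qt is the present state):

```python
# Next-state (characteristic) functions of the basic flip-flops of Figure 4.7.
def next_sr(s, r, qt):
    assert not (s and r), "S = R = 1 is not defined"
    return s | (qt & (1 - r))

def next_jk(j, k, qt):
    return (j & (1 - qt)) | ((1 - k) & qt)   # J = K = 1 complements the state

def next_d(d, qt):
    return d

def next_t(t, qt):
    return qt ^ t                            # T = 1 complements, T = 0 holds

print(next_jk(1, 1, 0), next_jk(1, 1, 1))    # 1 0  (complement)
print(next_t(1, 0), next_t(0, 1))            # 1 1
```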

4.3.3 Excitation Tables


The characteristic tables of the flip-flops, as shown in Figure 4.7, show how the state of a
flip-flop changes to the next state based on the present state and the input values. The
characteristic tables are used for the analysis of sequential circuits. While designing
sequential circuits, you instead need to know which input combinations are required to
produce a desired state transition. This is done with the help of the excitation table. Figure
4.8 shows the excitation tables for the basic flip-flops.

(a) JK flip-flop                   (b) SR flip-flop
Qt  Qt+1   J  K                    Qt  Qt+1   S  R
0   0      0  X                    0   0      0  X
0   1      1  X                    0   1      1  0
1   0      X  1                    1   0      0  1
1   1      X  0                    1   1      X  0

(c) D flip-flop                    (d) T flip-flop
Qt  Qt+1   D                       Qt  Qt+1   T
0   0      0                       0   0      0
0   1      1                       0   1      1
1   0      0                       1   0      1
1   1      1                       1   1      0

X denotes a DON'T CARE condition.

Figure 4.8: Excitation Tables for basic flip-flops


Qt and Qt+1 indicate the present and next state of a flip-flop, respectively. The symbol X in the
table means a don't care condition, i.e. it does not matter whether the input is 0 or 1.
How are these excitation tables created? This is explained with the example of the creation
of the excitation table of the JK flip-flop.
a) The state transition from Qt = 0 to Qt+1 = 0
(i) As both Qt and Qt+1 are 0 it means that there is no change in the state of
flip flop, which can be achieved by J=0, K=0;
(ii) Using the input, J=0, K=1, the flip flop can be RESET, i.e. Qt+1 = 0.
b) The state transition from Qt = 0 to Qt+1 = 1
(a) Using the input, J=1, K=0, the flip flop is SET, i.e. Qt+1 = 1
(b) Using the input, J=1, K=1, the flip flop is complemented from Qt having a
value 0 to Qt+1 = 1
c) State transition from Qt = 1 to Qt+1 = 0
(a) Using the input, J=0, K=1, flip flop is RESET, i.e. Qt+1 = 0
(b) Using the input, J=1, K=1, the flip flop is complemented from Qt having a
value 1 to Qt+1 = 0

d) For state transition from Qt = 1 to Qt+1 = 1


(a) Using the input, J=0, K=0, no change in flip flop so Qt+1 = 1
(b) Using the input, J=1, K=0, flip flop is SET, i.e. Qt+1 = 1
These entire set of input for various transitions can be summarized in the table below:
Present state (Qt)   Next state (Qt+1)   Inputs J and K                     Input using DON'T CARE
0                    0                   (i) J=0, K=0   (ii) J=0, K=1       J=0, K=X
0                    1                   (i) J=1, K=0   (ii) J=1, K=1       J=1, K=X
1                    0                   (i) J=0, K=1   (ii) J=1, K=1       J=X, K=1
1                    1                   (i) J=0, K=0   (ii) J=1, K=0       J=X, K=0

The excitation table has been derived for J-K flip-flop as above. You may draw the
excitation table for all other flip-flops using the same method.
Check Your Progress 1
1. What is a sequential circuit? How are sequential circuits different from
combinational circuits?
.........................................................................................................................................
…………………………………………………………………………………………
…………………………………………………………………………………………
2. What is a latch? How is different from a flip-flop?
.........................................................................................................................................
…………………………………………………………………………………………
…………………………………………………………………………………………

3. What is an excitation table? Draw the excitation table for SR, D and T flip-
flops.
.........................................................................................................................................
…………………………………………………………………………………………

4.3.4 Master-Slave Flip-Flop
The master slave flip-flop is constructed using two or more latches. Figure 4.9 shows
how two S-R flip-flops can be used to construct a master-slave flip-flop.
[Figure: two S-R latches in cascade; the master receives S, R and the clock, while the slave receives the master's outputs and the inverted clock; the slave's outputs are Q and Q′]
Figure 4.9: Master – Slave flip- flop
You may please note that you can construct a master-slave flip-flop using D or JK
flip-flop also. This flip-flop consists of master which changes state when clock pulse
occurs. The slave flip flop goes to the state of master flip-flop when the clock signal is
0. (Refer to figure 4.9) This is explained below:
The flip-flop operates in two steps:
(i) When the clock pulse input is 1: At this time the master flip-flop, based on the
values of S and R, goes to the Set or Clear state, as the case may be. At this time the
slave flip-flop cannot change its state, as it receives the inverse of the clock pulse.
Thus, on the occurrence of the clock pulse the ‘Master’ flip-flop goes to the next state
(Qt+1), whereas the output of the slave flip-flop is still the present state (Qt).
(ii) When the clock pulse input is 0: In this time the input to Master flip-flop will not
have any effect on the Master flip-flop output, which has been put in the next
state (Qt+1) in the previous step. However, now this Qt+1 output of master flip-flop
will be applied on the slave flip, which will result in transition of state of slave
flip flop to Qt+1. Thus, on completion of a clock cycle master and slave flip-flops
both will be in Qt+1. Please note that for slave flip flop only following transitions
are possible:

Master output ≡ Slave input Slave Output

Q Q′ S R Q Q′
1 0 1 0 1 0 (Set)
0 1 0 1 0 1 (Reset)
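The two-phase update can be sketched in a few lines of Python (an illustrative model, not a gate-level one):

```python
# Two-phase behaviour of the master-slave S-R flip-flop: the master captures S, R
# while the clock is 1; the slave copies the master when the clock returns to 0.
def master_slave_cycle(s, r, master, slave):
    # clock = 1: master follows S, R (the slave is blocked by the inverted clock)
    if s and not r:
        master = 1
    elif r and not s:
        master = 0
    # clock = 0: slave copies the master's state
    slave = master
    return master, slave

master = slave = 0
master, slave = master_slave_cycle(1, 0, master, slave)
print(master, slave)    # 1 1 -> the output Q takes the new value at the end of the cycle
```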

4.3.5 Edge-Triggered flip-flops


An edge-triggered flip-flop triggers the change in state either during the rising edge or
positive transition of the clock (0 to 1 transition), or during the falling edge or negative
transition (1 to 0 transition). Figure 4.10 shows the clock pulse signal for positive and
negative edge-triggered flip-flops.

[Figure: clock waveform; for positive edge-triggering the output may change only at the positive (0 to 1) transition, and for negative edge-triggering only at the negative (1 to 0) transition; there is no change in output during the rest of the pulse]
(a) Positive edge-triggering      (b) Negative edge-triggering
Figure 4.10: Clock Pulse Signal
The following figure shows the block diagram of edge triggered D flip-flop.

[Block diagrams: D flip-flop symbols with an arrowhead on the clock input; a bubble on the clock input of (b) indicates triggering on the negative edge]
(a) Positive edge-triggered D flip-flop (b) Negative edge–triggered D flip-flop


Figure 4.11: Edge triggered and master slave D flip-flop
More detailed discussion on these flip-flops are beyond the scope of this unit. You
may refer to further readings for the same.
Check Your Progress 2
1. List the advantages of master- slave flip-flop.
.........................................................................................................................................
.........................................................................................................................................

2. How edge- triggered flip-flops are different to master-slave flip-flops?


.........................................................................................................................................
.........................................................................................................................................

4.4 SEQUENTIAL CIRCUIT DESIGN


A sequential circuit not only consists of external input and external output, but also an
internal state which is characterised by the state of flip flops internal to the circuit.
The state of sequential circuit changes as per its design based on some control signal
like the clock control. Therefore, design of a sequential circuit is required to address
the changes in the internal state of itself. Therefore, in addition to the logic circuit,
sequential circuit design requires the information about the changes in state of flip-
flops. The process of design of a sequential circuit is explained with the help of an
example of design of a 2-bit counter circuit given below.

Example: Design a 2-bit counter circuit.


Solution: A counter is a special circuit which counts the timing sequences. A 2-bit
counter will require two flip-flops. The state sequence of 2-bit counter would be 00,
01, 10, 11, 00 and so on. Thus, using a 2-bit counter, you can have 4 distinct internal
states of the circuit and counter should move in each transition from one state to next
as:
00 → 01 → 10 → 11 → 00
Assuming that these transitions are triggered by a control signal, say Z which can be a
clock signal or any other signal generated for this purpose, a state change sequence
can be presented as shown in the diagram given below:

[State diagram: states 00, 01, 10 and 11; each Z = 1 transition moves the counter to the next state (11 returns to 00), while Z = 0 keeps it in the same state]

This circuit uses two bits to store the state and therefore requires two flip-flops. The state
of the circuit changes to the next state when Z = 1, else it stays in the same state. Thus,
this sequential circuit requires 2 flip-flops and one control signal Z. But what
would be the other inputs and outputs of this sequential circuit? The other inputs will
be the current states of the flip-flops, which govern the next states of the flip-flops.
Next, taking D flip-flops to design the circuit, a rough design of the circuit
would be:

[Rough design: two D flip-flops X and Y driven by a common clock; a combinational circuit with inputs Z, Qt(X) and Qt(Y) generates Dx and Dy]

In order to design the logic circuit, which generates the signal Dx and Dy, let us first
draw a truth table for flip-flop’s X and Y. This truth table is shown in the following
table:
       Present state         Input    Next state              Required flip-flop inputs
       Qt(X)   Qt(Y)          Z       Qt+1(X)   Qt+1(Y)       Dx    Dy
0        0       0            0          0         0           0     0
1        0       0            1          0         1           0     1
2        0       1            0          0         1           0     1
3        0       1            1          1         0           1     0
4        1       0            0          1         0           1     0
5        1       0            1          1         1           1     1
6        1       1            0          1         1           1     1
7        1       1            1          0         0           0     0

Interestingly, it is the Dx and Dy inputs that should be generated from the present state
and the Z input, so that the next state (Qt+1) of the flip-flops can be derived from the
present state (Qt). Thus, for the design of the counter circuit, you can draw
K-maps for Dx and Dy with inputs Qt(X), Qt(Y) and Z. The K-maps for Dx
and Dy can be drawn as:


[K-maps for Dx and Dy: Qt(X)Qt(Y) along the side (00, 01, 11, 10) and Z along the top (0, 1); Dx has 1s in cells 3, 4, 5 and 6, and Dy has 1s in cells 1, 2, 5 and 6]

Dx = Terms of (adjacency of 4,5 + adjacency of 4,6 + cell 3)
Dx = Qt(X).Qt′(Y) + Qt(X).Z′ + Qt′(X).Qt(Y).Z
Dy = Terms of (adjacency of 2,6 + adjacency of 1,5)
Dy = Qt(Y).Z′ + Qt′(Y).Z
Thus, the final 2-bit counter circuit will be drawn as shown in Figure 4.12

[Figure: two D flip-flops X and Y driven by a common clock; combinational logic implementing Dx and Dy from Z, Qt(X) and Qt(Y) feeds their D inputs]
Figure 4.12: 2-bit counter
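The derived equations can be checked by simulation. The following Python sketch applies the Dx and Dy expressions on each clock pulse (a D flip-flop simply copies its D input); the names are illustrative.

```python
# Simulation of the 2-bit counter of Figure 4.12 using the Dx and Dy equations above.
def next_state(x, y, z):
    dx = (x & (1 - y)) | (x & (1 - z)) | ((1 - x) & y & z)
    dy = (y & (1 - z)) | ((1 - y) & z)
    return dx, dy            # the D inputs become Qt+1 of X and Y

state = (0, 0)
for z in (1, 1, 1, 1, 0, 1):
    state = next_state(*state, z)
    print(z, state)
# Counts 00 -> 01 -> 10 -> 11 -> 00, holds when Z = 0, then counts again.
```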

4.5 EXAMPLES OF SEQUENTIAL CIRCUITS
Let us now explain the basic function of some of the useful examples of sequential
circuits like registers, counters etc.

4.5.1 Registers
Registers are the basic storage units of a computer. Since a register temporarily
stores values, it requires flip-flops. The size of a register is given by the number
of bits it stores. Storing one bit requires at least one flip-flop; thus, in general,
an n-bit register uses n flip-flops. Two common operations on a register are:
 To load all bits of the register simultaneously (parallel load).
 To shift the bits of the register towards the left or the right.
Figure 4.13 shows a parallel load register.

[Figure: four D flip-flops (bit 3 to bit 0) with data inputs I3–I0 and outputs O3–O0; a common clock signal loads all four bits simultaneously and a common clear signal resets them]
Figure: 4.13 A register with parallel load.


Please note the following point about the register circuit as above:
(1) The 4-bit register is made up of 4 D flip-flops.
(2) Clock signal is applied to all flip-flops simultaneously; therefore, loading
operation will load the values I3, I2, I1, and I0 respectively into the four flip-
flops, simultaneously.
(3) Special clear signal is used, which can clear all the bits of the register
simultaneously, if needed.
(4) The output of register O3, O2, O1, O0 can be used for any arithmetic
operations. Please note that registers output changes synchronously.

Shift register: Shift operation is very special operation for a computer ALU. A
shift register is capable of shifting the content of a register either to left or to
the right by one bit at a time. The following figure shows a right shift register,
however, you can construct a left shift register in a similar manner.

[Figure: four D flip-flops (bit 3 to bit 0); the serial input feeds D3, the output of each flip-flop feeds the D input of the next lower bit, the shift-enable signal acts as the clock, and a common clear input is provided; outputs O3–O0]
Figure 4.14: 4-bit Right Shift Register

Please note the following points.


 The external input is applied to D3. The output of D3 is applied to D2,
and so on.
 The shift enable is applied as clock input. It enables the shift operation.
 For example, assume the shift register had state 1 0 0 1
and input bit is 1, then after the right shift operation the output will
change as:

I O3 O2 O1 O0
Before Shift 1 1 0 0 1

After Shift 1 1 1 0 0

A single register can include the facilities of left shift, right shift and
parallel load. Such a register is called a bi-directional shift register with parallel
load. You may create its block diagram as an exercise.
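A one-line Python sketch captures the right-shift behaviour shown in the table above (illustrative only):

```python
# 4-bit right shift register: on every shift pulse each flip-flop takes the value of
# the flip-flop to its left, and bit 3 takes the serial input.
def right_shift(bits, serial_in):
    """bits = [O3, O2, O1, O0]; returns the register contents after one shift."""
    return [serial_in] + bits[:-1]

reg = [1, 0, 0, 1]
reg = right_shift(reg, 1)
print(reg)    # [1, 1, 0, 0], matching the example above
```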

4.5.2 Counters Circuit


Counters are sequential circuits, which produce output in a sequence on the
occurrence of a transition signal. The counters may be used in keeping sequence such
as steps of execution of a single instruction. There are two types of counters-
asynchronous and synchronous.
In an asynchronous counter the flip-flops change their state one by one, while in a
synchronous counter all flip-flops may change their state simultaneously.
Asynchronous Counter: An asynchronous counter is also called a ripple counter, as
the changes in the states of the flip-flops ripple through one by one like a wave. Figure 4.15 shows
a 3-bit ripple counter using T flip-flops (please note that the earlier 2-bit counter was
designed using D flip-flops). These counters also have a clear input, but for simplicity
it is omitted in the figure.

[Figure: three T flip-flops (bit 0, bit 1, bit 2) with their T inputs tied to logical 1; the external clock drives bit 0 and the Q output of each flip-flop clocks the next higher bit; outputs O0, O1, O2]
Figure: 4.15: 3-bit ripple counter


Please note the following points in the figure.
(i) Bit 0 will be complemented each time a clock pulse occurs, as its clock input is
connected to the external clock.
(ii) For bit 1 flip flop the transition will be triggered, if Qt of bit 0 flip-flop is
1. In that case Qt+1 of bit 1 will be complemented. This occurs because the
transition signal of (bit 1) flip-flop is connected to Qt output of bit 0 flip-
flop. Similarly, the transition of bit 2 flip flop will occur, if Qt of bit 1
flip-flop was in state 1.
(iii) The transition is expected to occur with the falling edge (indicated by o
before the clock input).
(iv) Please note change in states would be as follows. Assuming initial state to
be 0 0 0

bit 0 bit 1 bit 2


0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
0 0 0

} indicates the falling edge.
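A short Python sketch of the rippling behaviour (each stage toggles only on the falling edge of the previous stage's output); the structure mirrors Figure 4.15 but is only an illustrative model.

```python
# 3-bit ripple counter: the carries "ripple" through the T flip-flops.
def clock_pulse(bits):
    """bits = [bit0, bit1, bit2]; apply one external clock pulse."""
    for i in range(len(bits)):
        old = bits[i]
        bits[i] ^= 1                 # T input tied to 1: this stage toggles
        if not (old == 1 and bits[i] == 0):
            break                    # no falling edge, so the ripple stops here
    return bits

state = [0, 0, 0]
for _ in range(8):
    print(clock_pulse(state))
# Runs through 100, 010, 110, 001, ... back to 000 (bit 0 listed first),
# matching the state table above.
```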


4.5.3 Synchronous Counter:
The flip-flops of the synchronous counter can change their state
simultaneously. A 3-bit synchronous counter with rising edge of clock signal is
shown in figure 4.16

[Figure: three T flip-flops driven by a common clock; T of bit 0 is tied to logical 1, T of bit 1 is the Q output of bit 0, and T of bit 2 is the AND of the Q outputs of bit 0 and bit 1; outputs O0, O1, O2]
Figure 4.16: Synchronous Counter


Please note in the figure above
(i) bit 0 is complemented on the occurrence of a clock pulse
(ii) bit 1 is complemented, if (Qt of bit 0) was 1.
(iii) bit 2 is complemented, if Qt of bit 0 and Qt of bit 1, both are 1.
Flip-Flop
bit 0 bit 1 bit 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
0 0 0

4.5.4 Random Access Memory


In this section, a general configuration for flip-flop based random access memory
(RAM) is presented. A RAM essentially stores bits; a RAM built from flip-flops is a
sequential circuit. Two basic operations are performed on
RAM: reading information from RAM, which requires a decoding operation
that identifies the cells or lines to be read; and writing to RAM, which, in addition to
identifying the cell, also requires changing the state of the selected RAM flip-flops
based on the input value. Figure 4.17 shows the block diagram and logic
diagram of a RAM cell, which is a single flip-flop.
Select (S)

Input (I) Output (O)


Cell

Read/Write′ (R/W′)
(a) Block Diagram

[Logic diagram: AND gates 'a' and 'b' gate the Input (and its complement) together with the Select and Read/Write′ signals onto the K and J inputs of a JK flip-flop; AND gate 'c' gates the flip-flop output Q with Select and Read/Write′ to produce the cell Output]

(b) Logic Diagram

Figure 4.17: Binary Cell


A RAM cell as shown consists of one flip-flop. The behavior of this cell is
exhibited in the following table.

(i) Read/Write′ bit 1 ⇒ Operation is Read

Select bit Read bit Qt Output of cell (c flip-flop)


0 1 0/1 Not activated
1 1 0/1 Qt

(ii) Read/Write′ bit 0 ⇒ Write Operation

Assume Input bit to the circuit in Figure 4.17 (b) is I


Select Input Qt of Gate Flip-flop Input Qt+1
Comments
Bit (S) Bit (I) ‘a’ ‘b’ J K
0 - Any Any 0 0 Qt Not selected
1 0 0 1 0 1 0 Clear memory flip-flop
1 1 1 0 1 0 1 Set the memory cell

The write operation as shown in the table above changes the content of
memory cell to the value of Input (I), or in other words memory cell has been
written into by the value of input (I).

In addition to reading and writing a single memory cell, a RAM has to be
organized as an array of RAM cells, with a decoder to decode the address of the cells,
as shown in Figure 4.18.

[Figure: a 2 × 4 decoder receives the 2-bit word address (two address selection lines) and activates the Select line of one of four rows (words 00, 01, 10, 11); each row contains two RAM cells (bit 1 and bit 0) that share common input, output and Read/Write′ lines]
Figure 4.18: Two-dimensional Array based 4  2 RAM
The RAM has 4 words, which are decoded by the address decoder. Please note that,
as there are 4 words or lines, you require a 2×4 decoder. This logic
can be extended; e.g. a RAM of size 1024×8 would require a 10×1024 decoder,
as 2¹⁰ = 1024. So it will have 10 address lines, which decide which word of
the RAM array is to be selected.
For the implementation shown, the number of bits stored in each word is 2
only, which is why every memory line has 2 cells. Please note that for a
word size of 2 bits, the RAM array requires 2 input and 2 output lines.
For this memory array, if the address 01 is given on the address
selection bits, it will activate the Select input of the cells at address 01 for a read or
write operation. Please note that current RAM chip designs do not use the simple
two-dimensional organization shown in Figure 4.18; they may follow a different, more
optimal organization, discussion of which is beyond the scope of this unit.
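A small Python sketch of the 4 × 2 array of Figure 4.18, showing how the decoded address selects a word and how the Read/Write′ signal chooses the operation (class and method names are illustrative):

```python
# A tiny 4 x 2 RAM modelled on Figure 4.18.
class TinyRAM:
    def __init__(self, words=4, bits=2):
        self.cells = [[0] * bits for _ in range(words)]   # each cell is one flip-flop

    def decode(self, a1, a0):
        return a1 * 2 + a0            # 2 x 4 decoder: address -> selected word line

    def access(self, a1, a0, read_write, data=None):
        word = self.decode(a1, a0)
        if read_write == 1:           # Read/Write' = 1 -> read
            return self.cells[word][:]
        self.cells[word] = list(data) # Read/Write' = 0 -> write the input bits
        return None

ram = TinyRAM()
ram.access(0, 1, 0, data=[1, 0])      # write word 01
print(ram.access(0, 1, 1))            # read word 01 -> [1, 0]
```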
Check Your Progress 3
1) What are the differences between synchronous & asynchronous counters?
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................
2) Is ripple counter same as shift register?
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................
3) Design a two bit counter, which has the states 00, 01, 10, 00, 01, 10…..
.........................................................................................................................................
.........................................................................................................................................
.........................................................................................................................................

4.6 SUMMARY
This unit introduces you to the concepts of sequential circuits, which are a foundation of
digital design. Flip-flops are themselves sequential circuits and the basic storage units of a
computer system. This unit also explains the working of a latch, which is the basic
circuit that can be used for storing one bit of information. A sequential circuit can
be formed using combinational circuits (discussed in the last unit) and flip-flops. The
unit also discusses the construction of some of the important sequential circuits like
registers, counters and RAM. For more details, you may refer to the further readings.

4.7 SOLUTIONS / ANSWERS


Check Your Progress 1
1. A sequential circuit is designed to process and store data. Therefore, it consists
of flip-flops for storing a state representing 0 or 1 and additional combinational
circuit that may result in change of state depending on the combinational logic.
Example of sequential circuits are - registers, counters etc. The main difference
is that a sequential circuit also has a state.
2. Latch is a basic asynchronous sequential circuit designed with feedback to
exhibit two different states, viz. 0 or 1. These states can be modified as per the
input to latch. Example of latch is SR latch. Latches can change their state at
any point of time based on input, whereas flip-flops are designed to change their
states at specific time, for example on the occurrence of a clock pulse.
Therefore, flip-flops have more complex circuitry than latch.
3. Excitation table are used for analysis and design of sequential circuits. They
represent different combination of input to a flip-flop that may cause a specific
state transition in the flip-flop. The following are the excitation tables of
different flip-flops:
SR Flip-flop

Present state (Qt)   Next state (Qt+1)   Inputs S and R                     Input using DON'T CARE
0                    0                   (i) S=0, R=0   (ii) S=0, R=1       S=0, R=X
0                    1                   S=1, R=0                           S=1, R=0
1                    0                   S=0, R=1                           S=0, R=1
1                    1                   (i) S=0, R=0   (ii) S=1, R=0       S=X, R=0

D Flip-flop
Present state (Qt)   Next state (Qt+1)   Input D   Input using DON'T CARE
0                    0                   D=0       D=0
0                    1                   D=1       D=1
1                    0                   D=0       D=0
1                    1                   D=1       D=1

T Flip-flop
Present state (Qt)   Next state (Qt+1)   Input T   Input using DON'T CARE
0                    0                   T=0       T=0
0                    1                   T=1       T=1
1                    0                   T=1       T=1
1                    1                   T=0       T=0

Check Your Progress 2


1. The master-slave flip-flop is a simple structure whose output changes only when the
clock pulse is 0 (after the master has captured the inputs), thus resulting in stable,
synchronous state transitions of the flip-flop.
2. An edge-triggered flip-flop changes its state either during the rising or the falling
edge of the clock pulse and thus has a different construction than the master-slave
flip-flop.
Check Your Progress 3
1. All the flip-flops in a synchronous counter may change their state
simultaneously, whereas in an asynchronous counter the change of state of one
flip-flop triggers the change of state of the next flip-flop.

2. No. A shift register transfers the state of each flip-flop to the next flip-flop on every
clock pulse, whereas a ripple counter toggles a flip-flop based on the change of state
of the previous flip-flop.

3. The states 00, 01, 10, 00, 01, 10…..

00 → 01 → 10 → 00 → 01 → …
Assuming the control signal, say Z, the state transitions are:
[State diagram: states 00, 01 and 10; Z = 1 moves to the next state (10 returns to 00), while Z = 0 keeps the same state]


Rough design of the circuit would be:

[Rough design: two D flip-flops X and Y; combinational logic with inputs Z, Qt(X) and Qt(Y) generates Dx and Dy]

Truth table for flip-flops X and Y (for a D flip-flop, Dx = Qt+1 of X and Dy = Qt+1 of Y; states 11 never occur and are don't cares):

       Present state         Input    Next state              Required flip-flop inputs
       Qt(X)   Qt(Y)          Z       Qt+1(X)   Qt+1(Y)       Dx    Dy
0        0       0            0          0         0           0     0
1        0       0            1          0         1           0     1
2        0       1            0          0         1           0     1
3        0       1            1          1         0           1     0
4        1       0            0          1         0           1     0
5        1       0            1          0         0           0     0
6        1       1            0          -         -           X     X
7        1       1            1          -         -           X     X

The K-maps for Dx and Dy can be drawn as:

[K-maps for Dx and Dy: Qt(X)Qt(Y) along the side (00, 01, 11, 10) and Z along the top (0, 1); Dx has 1s in cells 3 and 4 and don't cares (X) in cells 6 and 7; Dy has 1s in cells 1 and 2 and don't cares (X) in cells 6 and 7]

Dx = Terms of (adjacency of 4,6 + adjacency of 3,7)
Dx = Qt(X).Z′ + Qt(Y).Z
Dy = Terms of (adjacency of 2,6 + cell 1)
Dy = Qt(Y).Z′ + Qt′(X).Qt′(Y).Z

Thus, the final counter circuit for the given states would be:

[Figure: two D flip-flops X and Y driven by a common clock; combinational logic implementing Dx and Dy from Z, Qt(X) and Qt(Y) feeds their D inputs]
UNIT 5 THE MEMORY SYSTEM
Structure Page Nos.
5.0 Introduction
5.1 Objectives
5.2 The Memory Hierarchy
5.3 SRAM, DRAM, ROM, Flash Memory
5.4 Secondary Memory and Characteristics
5.4.1 Hard Disk Drives
5.4.2 Optical Memories
5.4.3 Charge-coupled Devices, Bubble Memories and Solid State Devices
5.5 RAID and its Levels
5.6 Summary
5.7 Answers

5.0 INTRODUCTION
In the previous block, fundamentals of a computer system were discussed. These
fundamentals included discussion on von-Neumann architecture based machines,
instruction execution, representation of digital data and logic circuits etc. This Block
explains the most important component of memory and Input/output systems of a
computer. This unit covers the details of the Memory. This unit discusses issues
associated with various components of the memory system, the design issues of main
memory and the secondary memory. Various characteristics of secondary memory and
its types that are used in a computer system are also discussed. The unit also
describes how multiple disks can be combined into a redundant array of disks, which can
provide faster and more reliable storage.

5.1 OBJECTIVES
After going through this Unit, you will be able to:

 explain the key characteristics of various types of memories and memory hierar-
chy;
 explain and differentiate among various types of random access memories;
 explain the characteristics of secondary storage devices and technologies;
 explain the latest secondary storage technologies;
 identify the various levels of RAID technologies

5.2 THE MEMORY HIERARCHY


In computers, memory is a device used to store data in binary form. Smallest unit of
binary data is called ‘bit’. Each bit of binary data is stored in a different cell or storage
unit and collection of these cells is defined as the memory. A memory system is
composed of a memory of fixed size and procedures which tells how to access the
data stored in the memory. Based on the persistence of the stored data, memory is
classified into two categories:

 Volatile memory: which loses its data in the absence of power.

 Non-volatile memory: Do not lose data when power is switched off.


Another classification of memory devices, which is also the objective of this unit is
based on the way they interact with the CPU which can be determined from figure 5.1
Main/ Primary memory interact directly with the CPU e.g. RAM and ROM.
Auxiliary/secondary memory needs an I/O interface to interact with the CPU, e.g.
magnetic disks and magnetic tapes. There are other memories like cache and registers,
which directly interacts with the CPU. Such memories are used to speed up the
program execution. For execution, a program must be loaded into the main memory
and should be stored on the secondary storage when it completes its execution.
Auxiliary memory is used as a backup storage, whereas main memory contains data
and program only when it is required by the CPU.

Figure 5.1: Memory Interaction with CPU

Various memory devices in a computer system form a hierarchy of components


which can be visualised in a pyramidal structure as shown in Figure 5.2. As you can
observe in the Figure 5.2 that at the bottom of the pyramid, you have magnetic tapes
and magnetic disks; and registers are at the top of the pyramid. Main memory lies at
the middle as it can interact directly with the CPU, cache memory and the secondary
memory. As you go up in the pyramid, the size of the memory device decreases, the
access speed, however, increases and cost per bit also increases. Different memories
have different access speeds. CPU registers or simply registers are fastest among all
and are used for holding the data being processed by the CPU temporarily but because
of very high cost per bit they are limited in size. Instruction execution speed of the
CPU is very high as compared to the data access speed of main memory. So, to
compensate the speed difference between main memory and the CPU, a very high
speed special memory known as cache is used. The cache memory stores current data
and program plus frequently accessed data which is required in ongoing instruction
execution.

You may note the following points about memory hierarchy:


 The size of the memory increases as you go down the memory hierarchy.
 The cost of per unit of memory increases as you go up in the memory hierarchy
i.e. Memory tapes and auxiliary memory are the cheapest and CPU Registers are
the costliest amongst the memory types.
 The amount of data that can be transferred between two consecutive memory
layers at a time decreases as you move up in the pyramid. For example, in a
main memory to cache transfer only one or a few memory words (of the order of
kilobytes) are accessed at a time, whereas in a hard disk to main memory transfer,
a block of data of the order of a megabyte is transferred in a single access.
 One interesting question about the memory hierarchy is: why does keeping only a
small amount of data in the faster, smaller memories not slow down the computer?
This is primarily due to the fact that there is a very high probability that a program
will access instructions and data in the close vicinity of the presently executing
instruction and its data. This concept (locality of reference) is further explained in the next unit.

Figure 5.2: Memory Hierarchy

In subsequent sections and next unit, we will discuss various types of memories in
more detail.

5.3 SRAM, DRAM, ROM, FLASH MEMORY


The main memory is divided into fixed size memory blocks called words. Size of the
memory word may be limited by the communication path and the processing unit size.
As word size/ length denotes the amount of bits that can be processed by the processor
at one time. Each memory word is addressed uniquely in the memory. A 32-bit
processor uses a word size of 32 bits whereas 64-bit processor uses a word of 64 bits.
RAM (random access memory) is a volatile memory i.e. content of the RAM vanishes
when power is switched off. RAM is a major constituent of the main memory. Both
read and write operations can be performed on RAM, therefore, it is also known as
read-write memory. Access time of each memory word is constant in random access
memory. RAM can be constructed from two types of technologies - Static Random
Access Memory (SRAM) and Dynamic Random Access Memory (DRAM). The main
difference is that DRAM loses its content over time even if power is on and therefore requires
periodic refreshing of the stored bits. Thus, DRAM is slower than SRAM; however,
DRAM chips are cheaper. In general, DRAM is used as the main memory of the
computer, while SRAM is used as the cache memory, which is discussed in detail in
the next unit.
SRAM
SRAM can be constructed using flip-flops. It is a sequential circuit. A SRAM cell
using SR flip flop is shown in figure 5.3. As you can observe, this sequential circuit
has three inputs: select, read/write, and input and single output: output. When select
input is high “1” circuit is selected for read/write operation and when select input is
low “0” neither read nor write operation can be performed by the binary cell. Thus,
select input must be high in order to perform read/write operation by the binary cell.
Binary cell reads a bit when read/write input is low “0” and writes when read/write
input is high “1”. Third input input is used to write into the cell. The only caution over
here is that when read/write input is low “0” i.e. we want to perform a read operation,
then read operation must not be affected by the input input. This is ensured by
7
Basic Computer Orga
anisation inverted inpput to the firstt AND gate which
w guaranttees the inputt to both R annd S to be
low and thuus prevents anny modificatioon to the flip flop value. The
T characterristic table
of SR flip flop
f is given iin Unit 4 Blo
ock 1 for bettter understandding of the fuunctioning
of the binaryy cell.

Figure 5.3: Logic


L Diagram of
o RAM cell

Read operation: select is high “1”, read/write is low “0” and input is either low “0” or
high “1”; then the inputs to R and S will both be 0, the flip-flop will keep its previous state
and that state will be the output.
Write operation: select is high “1” and read/write is high “1”. If input is low “0”, then
R will be high “1” and S will be low “0” and the flip-flop will store “0”; if input is
high “1”, then R will go low “0” and S will go high “1” and the flip-flop will store “1”.

A RAM chip is composed of several read/write binary cells. A block diagram of a
2^m × n RAM is shown in Figure 5.4. The RAM shown has a total capacity of 2^m words
and each word is n bits long, e.g. in a 64 × 4 RAM, the RAM has 64 words and each
word is 4 bits long. To address 64, i.e. 2^6, words, we need 6 address lines. So in a 2^m × n
RAM, we have 2^m words, where each word has n bits, and the RAM has an m-bit address
which requires m address lines. The RAM is functional only when the chip select (CS1)
signal = 1 and CS2 = 0. If the chip select signals are not enabled, or the chip select signals are
enabled but neither the read nor the write input is enabled, then the data bus will be in a high
impedance state and no operation can be performed. During the high impedance state,
other input signals will be ignored, which means the output has no logical significance and
does not carry a signal.

Figure 5.4: Block Diagram of 2^m × n RAM
DRAM
Dynamic Random Access Memory (DRAM) is a type of RAM which uses 1 transistor
and 1 capacitor (1T1C cell) for storing one bit. A block diagram of a single DRAM
cell is shown in Figure 5.5. In DRAM, transistor is used as a gate which opens and
closes the circuit and thus stops and allows the current to flow. Charging level of the
capacitor is used to represent the bit “1” and bit “0”. As capacitors tends to discharge
in a very short time period DRAM cells need to be refreshed periodically to store the
binary information despite continuous power supply. Hence they are called dynamic
random access memory. With low power consumption and very compact in size
because of 1T1C architecture DRAM offers larger storage capacity in a single chip.
Each DRAM cell in the memory is connected with Word Line (Rows) and Bit Line
(Columns) as shown in Figure 5.5. Word line (rows) controls the gates of the transfer
lines while Bit lines (columns) are connected to sense amplifiers i.e. to determine “0”
or “1”.

Figure 5.5: A DRAM cell


Figure 5.6 presents the general block diagram of a 2^M × 2^M × N DRAM, where binary
cells are arranged in a square of 2^M × 2^M words of N bits each. For example, a 4 megabit
DRAM is represented in a square arrangement of (1024 × 1024), or (2^10 × 2^10), words
of 4 bits each. Thus, in the given example we have 1024 horizontal/word lines and
1024 × 4 column/bit lines. In other words, each element, which consists of 4 bits of the
array, is connected by horizontal row lines and vertical column lines.

Figure 5.6: Block Diagram of DRAM


9
Basic Computer Organisation
Selection and role of various signals for read and write operation is as follows:

1. RAS (Row Address Strobe): On the falling edge of RAS signal, it opens or strobe
the address lines (rows) to be addressed.

2. /CAS (Column Address Strobe): Similar to /RAS, on the falling edge, this enables a
column to be selected as mentioned in the column address from the rows opened by
the /RAS to complete the read-write operation.

3. R/(/W), Write enable: This signal determines whether to perform a read operation
or a write operation. While the signal is low, write operation is enabled and data input
is also captured on falling edge of /CAS whereas high enables the read operation.

4. Sense amplifier compares the charge of the capacitor to a threshold value and
returns either logic “0” or logic “1”.

For a read operation once the address line is selected, transistor turns ON and opens
the gate for the charge of the capacitor to move to the bit line where it is sensed by the
sense amplifier. Write operation is performed by applying a voltage signal to the bit
line followed by the address line allowing a capacitor to be charged by the voltage
signal.
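To illustrate how the row and column addressing described above works, the following Python sketch splits a full cell address into the row part latched on /RAS and the column part latched on /CAS; the 1024 × 1024 array size (10 + 10 address bits) is an assumption taken from the example above, and the names are illustrative.

```python
# Splitting a DRAM address into row and column parts (a minimal sketch, assuming a
# 1024 x 1024 cell array, i.e. 10 row-address bits and 10 column-address bits).
ROW_BITS = COL_BITS = 10

def split_address(addr):
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)   # latched when /RAS falls
    col = addr & ((1 << COL_BITS) - 1)                 # latched when /CAS falls
    return row, col

print(split_address(0b1100110011_0000001111))   # (819, 15)
```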

ROM (Read-Only Memory)


Another constituent of the main memory is ROM (read only memory). Unlike RAM,
which is read-write memory and volatile, ROM’s are read only and non-volatile
memory i.e. content of the ROM persist even if power is switched-off. Once data is
stored at the time of fabrication, it cannot be modified. This is why, ROM is used to
store the constants and the programs that are not going to change or get modified
during their lifetime and will reside permanently in the computer. For example,
bootstrap loader, which loads the part of the operating system from secondary storage
to the main memory and starts the computer system when power is switched on, is
stored in ROM.

A block diagram of a 2^m × n ROM looks similar to that of RAM. As ROM is a read-only
memory, there is no need for explicit read and write signals. Once the chip is selected
using the chip select signal, a data word is read and placed on the data bus. Hence, in
the case of ROM, you need a unidirectional data bus, i.e. one that works only in output mode,
as shown in Figure 5.7. Another interesting fact about ROM is that it offers more
memory cells, and thus more memory, than a RAM on a chip of the same size.

Figure 5.7: Block Diagram of a 2^m × n ROM

As shown in Figure 5.7, a 2^m × n ROM has 2^m words of n bits each, for which it has m
address lines and n output data lines. For example, in a 128 × 8 ROM, you have 128
memory words of 8 bits each. For a 128 × 8 ROM, i.e. 2^m = 2^7, m = 7, you need 7 address
lines (the minimum number of bits required to represent 128) and an 8-bit output data bus.
Figure 5.8 shows a 32 × 8 ROM.

Figure 5.8: Internal diagram of a 32 × 8 ROM

Unlike RAMs, which are sequential circuits, ROMs are combinational circuits.
Typically, to design a ROM of a specific size you need a decoder and OR gates. For
example, to design a ROM of size 32 × 8 bits you need a decoder of size 5 × 32 and 8
OR gates. The 5 × 32 decoder will have 5 input lines, which will act as the 5 address lines of
the ROM; the decoder will convert the 5-bit input address to 32 different outputs. Figure
5.8 shows the construction of a 32 × 8 ROM using a 5 × 32 decoder and eight OR gates for
data output. ROMs of other sizes can be constructed similarly. For example, to
construct a ROM of 64 × 4, you need a 6 × 64 decoder and four OR gates, and to
construct a ROM of size 256 × 8, you need an 8 × 256 decoder and 8 OR gates.
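
To make the decoder-plus-OR-gate construction concrete, here is a minimal Python sketch of a
32 × 8 ROM; the stored contents and the function names are purely illustrative. The point is
that a read is a purely combinational lookup: the 5-bit address activates exactly one decoder
output, and each of the eight output bits is the OR of the bits of the selected word.

    # A 32 x 8 ROM modelled as a 5x32 decoder feeding eight OR gates (illustrative contents)
    CONTENTS = [i & 0xFF for i in range(32)]

    def decode_5_to_32(address):
        """Return 32 decoder outputs; exactly one line is 1 for a valid 5-bit address."""
        return [1 if line == address else 0 for line in range(32)]

    def rom_read(address):
        """Combinational read: OR together the bits of every selected word."""
        select = decode_5_to_32(address)
        word = 0
        for line, enabled in enumerate(select):
            if enabled:                    # only one decoder line is ever enabled
                word |= CONTENTS[line]     # the eight OR gates, one per output bit
        return word

    print(format(rom_read(0b10110), '08b'))   # reads word 22 -> '00010110'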
As discussed, ROMs are non-volatile memories and the content of the ROM, once written,
cannot be changed. Therefore, ROMs are used to store the look-up tables of constants
to speed up computation. In addition, a ROM can store boot loader programs
and gaming programs. All this requires zero error in the writing of such programs and,
therefore, ROM device fabrication requires very high precision. Constructing a ROM,
as shown in Figure 5.8, requires a decision about which interconnections in the circuit
should be open and which interconnections should be closed. There are four ways you
can program a ROM, which are as follows:
1. Mask ROM (MROM): Masking of a ROM is done by the device manufacturer in
the very last phase of the fabrication process on the customer's special request. Mask
ROMs are customised as per the user requirements and are thus very costly, as
different masks are required for different specifications. Because of the very high cost
of masking, this customisation is generally used in the manufacturing of ROMs at a very
large scale.
2. Programmable ROM (PROM): MROMs are not cost effective for small
productions; PROMs are preferred for small quantities. PROMs are programmed
using special hardware which blows fuses with a very high voltage to produce
logic “0”, while an intact fuse defines logic “1”. The content of a PROM is irreversible
once programmed.
3. Erasable PROM (EPROM): EPROMs are the third type of ROMs, which are
restructured or reprogrammed using shortwave radiation. Ultraviolet light of
a specific duration is applied to the EPROM, which destroys/erases the internal
information, after which the EPROM can be programmed again by the user.
4. Electrically Erasable PROM (EEPROM): EEPROMs are similar to EPROMs except that,
instead of using ultraviolet radiation for erasing the PROM, an EEPROM uses electrical
signals to erase the content. An EEPROM can be erased or reprogrammed by the user
without removing it from its socket.

Flash Memory
Flash memory is a non-volatile semiconductor memory which uses the programming
method of EPROM and is erased electrically like EEPROM. Flash memory was
designed in the 1980s. Unlike EEPROM, where the user can erase a byte using electrical
signals, in flash memory a section of the memory or a set of memory words is erased
at a time; hence the name flash memory, i.e. a memory which erases a large block
at once. Flash memory is easily portable and mechanically robust, as there is no
mechanical movement in the memory while reading or writing data. Flash memory is widely
used in USB memory sticks and in the SD and micro SD memory cards used in cameras and
mobile phones.
There are two types of flash memory, viz. NAND flash memory, where a read operation
is performed by paging the contents to the RAM, i.e. only a block of data is accessed,
not an individual byte or word; and NOR flash memory, which is able to read an
individual memory byte/word or cell.

The features of various semiconductor memories are summarised in Table 1.

Memory Type                          | Category    | Erase Mechanism / Level | Write Mechanism | Volatile / Non-volatile
Random-access Memory (RAM)           | Read-Write  | Electrical / Byte       | Electrical      | Volatile
Read-only Memory (ROM)               | Read-Only   | Not Applicable          | Masks           | Non-volatile
Programmable ROM (PROM)              | Read-Only   | Not Applicable          | Electrical      | Non-volatile
Erasable PROM (EPROM)                | Read-mostly | UV light / Chip         | Electrical      | Non-volatile
Electrically Erasable PROM (EEPROM)  | Read-mostly | Electrical / Byte       | Electrical      | Non-volatile
Flash memory                         | Read-mostly | Electrical / Block      | Electrical      | Non-volatile

Table 1: Features of Semiconductor Memories

Check Your Progress 1


1. Differentiate among RAM, ROM, PROM and EPROM.
……………………………………………………………………………………

……………………………………………………………………………………

2. What is a flash memory? Give a few of its typical uses.


……………………………………………………………………………………

……………………………………………………………………………………
3. A memory has a capacity of 16K × 16.
(a) How many data input and data output lines does it have?
(b) How many address lines does it have?
……………………………………………………………………………………

……………………………………………………………………………………
4. A DRAM stores 4K bytes on a chip and uses a square register array, where each
array element is of size 4 bits. How many address lines will be needed? If the same
configuration is used for a chip which does not use a square array, then how many
address lines would be needed?
………………………………………………………………………………………
………………………………………………………………………………………
5. How many RAM chips of size 256K × 4 bits are required to build a 1M Byte
memory?
……………………………………………………………………………………...
……………………………………………………………………………………...

5.4 SECONDARY MEMORY AND CHARACTERISTICS
In the previous section, we discussed various types of random access and read-only
memories in detail. RAM and ROM together make up the main memory of the computer
system. You know that a program is loaded into the main memory to complete its
execution. Computational units, or the CPU, can directly interact with the main memory.
Hence, a faster main memory, which can match the speed of the CPU, is always
desirable. In the previous section the configurations of two types of RAM, viz. SRAM and
DRAM, were discussed. As you may observe, SRAM consists of flip-flop based
circuits and is therefore quite fast in comparison to DRAM. However, the cost per bit of
DRAM is much less than that of SRAM. Thus, you may observe that the size of the main
memory is much more than that of the cache; this is discussed in more detail in the next
unit. To achieve high speed, the cost per bit of the main memory is generally high, which
also limits its size. On the other hand, as we have discussed, RAM, which is a major
constituent of the main memory, is volatile, i.e. the content of the main memory is lost
when power is switched off. Because of the above mentioned issues, you need a low cost,
high capacity, non-volatile memory to store program files and data for later use.
Secondary memory devices, which are at the bottom of the memory hierarchy pyramid,
are ideal for this purpose. We will discuss various secondary storage devices in
this section.

5.4.1 Hard Disk Drive


In the era of Big Data, in which a variety of data is generated rapidly, large secondary
storage has become an important component of every computer system. Today, hard
disk drives (HDDs) are the primary type of secondary storage. The size of hard disk
drives in modern computer systems ranges from Gigabytes (GB) to Terabytes (TB).
Internal hard drives extend the internal storage of a computer system, whereas
external hard drives are used for backup storage.
HDDs are electro-mechanical storage devices, which store digital data in the form of
small magnetic fields induced on the surface of magnetic disks. Data recorded on
the surface of the magnetic disks is read by the disk's read/write heads, which transform
a magnetic signal into an electrical signal for reading and an electrical signal into a
magnetic field for writing. An HDD is composed of many concentric magnetic disks
mounted on a central shaft, as shown in Figure 5.8.

Figure 5.8: Internal structure of Hard disk drives (HDD)

Figure 5.8 shows the internal structure of an HDD. An HDD is made of several
concentric magnetic disks mounted on a central shaft called the spindle. Each magnetic
disk, called a platter, is made of either glass or an aluminium disk. Each platter is coated
with ferromagnetic material for storing data. The platter itself is made of non-
ferromagnetic material so that its own magnetic field does not interfere with the magnetic
field of the data. Generally, both sides of the platter are coated with magnetic material
for good storage capacity at low cost.
Data recorded on the disk is accessed through a read/write head. Each side of the disk
has its own read/write head. Each read/write head is positioned at a distance of tens of
nanometers, called the flying height, from the platter so that it can easily sense or detect
the polarisation of the magnetic field.

Figure 5.9: Read/Write Head

Two motors are used in an HDD. The first one is called the spindle motor, which is used to
rotate the spindle on which all the platters are mounted. The second motor is used to move
the read/write heads radially across the entire surface of the platter and is called the
actuator or access arm.
Magnetic Read and Write Mechanisms
During a read/write operation, the read/write head is kept stationary while the platter is
rotated by the spindle motor. As you know, data on the disk is recorded in the form of a
magnetic field. A current is passed through the read/write head, which induces a
magnetic field on the surface of the platter and thus records a bit on the surface. Different
directions of current generate magnetic fields with different polarities and hence are
used for storing “1” and “0”. Similarly, to read a bit from the surface, the magnetic
field is sensed by the read/write head, which produces an electric current of the
corresponding polarity, and hence the bit value is read.
Data Organization and Formatting
As discussed and shown in Figure 5.8, a hard disk drive consists of a number of
concentric platters which are mounted on a spindle, forming a cylindrical structure.
Data is written in the form of magnetic fields on both surfaces of these platters and is
read by the read/write heads, which are connected to an actuator. In this section, we will
discuss the structure of the magnetic disk in detail.
The structure of the disk is shown in Figure 5.10. As you know, each magnetic disk is a
circular disk mounted on a common spindle, but the entire disk space is not used for data.
The disk surface is divided into thousands of concentric circular regions called tracks.
The width of every track is kept the same. Data is stored in these tracks. The magnetic
field of one track should not affect the magnetic region of another track; thus, two
tracks are kept apart from each other by a constant distance. Further, each track is
divided into a number of sectors, and two sectors are kept apart by an inter-sector gap.
Data is stored in these sectors. Each track forms a cylindrical structure with the
tracks on the platters below or above it. For example, the outermost cylinder will
have the outermost tracks of all the platters. So, if we have n tracks on a platter then there
will be n concentric cylinders too.
The components of the drive are controlled by a disk controller. Nowadays, disk
controllers are built into the disk drive. A new or blank magnetic disk is divided into
sectors. Each sector has three components: a header, a 512 byte (or more) data area and a
trailer. This process is called physical/low-level formatting. The header and trailer
contain metadata about the sector, e.g. the sector number, error correcting code etc. The
disk controller uses this information whenever it writes or reads a data item to or from a
sector.
Data is stored in a series of logical blocks. The disk controller maps the logical blocks
on to the physical disk space and also manages which sectors have been used for
storing data and which are still free. This is done by the operating system after
partitioning the disk into one or more groups of cylinders. The disk controller stores the
initial data structure file of every sector on to the disk. This data structure file contains
a list of used and free sectors, a list of bad sectors etc. Windows uses the File Allocation
Table (FAT) for this purpose.

Figure 5.10: Magnetic Disk Structure of CAV

There are two arrangements with which the platters are divided into tracks and sectors.
The first arrangement is called constant linear velocity (CLV), in which the density
of bits per track is kept uniform, i.e. the outer tracks are longer than the inner tracks and
hence contain a larger number of sectors and more data. The outermost tracks are generally
40% longer than the innermost track. In this arrangement, in order to maintain a uniform
bit/data rate among the tracks, the rotation speed is increased as the head moves from the
outermost to the innermost track. This approach is used by CD-ROM and DVD-ROM drives.

In the other approach, called constant angular velocity (CAV), the density of bits/data
per track decreases as we move from the innermost track to the outermost track, while
keeping the disk rotation speed constant. As the disk is moving at a constant speed, the
width of the data bits increases in the outer tracks, which results in a constant data
rate. Figure 5.10 shows that the width of the sectors in the outer tracks increases and the
density of bits decreases.

Disk Performance

Data is read from and written to the disks by the operating system for usage at a later stage.
A disk stores the programs and related data. However, the disk is a much slower device
and the programs stored on it cannot be executed by the processing unit directly.
Therefore, the programs and their related data, which are not in the main memory, are
loaded into the main memory from the secondary storage. Since the speed of disk
read/write is very slow compared to RAM, the time to read or write a byte from or to
the disk affects the overall efficiency of the computer. Therefore, in a single read/write
operation on the disk, the data of one or more sectors is transferred to/from the memory. An
operating system, in general, requests a read/write of one or more sectors on the disk.
The time taken by the disk to complete a read/write request of the operating system is
known as the disk access time. There are a number of factors which affect the performance
of the disk. These factors are:
1. Seek Time: It is defined as the time taken by the read/write head, or simply the
head, to reach the desired track on which the requested sector is located. The head
should reach the desired track in minimum time. A shorter seek time leads to faster
I/O operation.
2. Rotational Latency: Since every track consists of a number of sectors, the
read/write operation can be completed only when the desired sector is
available under the read/write head for the I/O operation. It depends on the
rotational speed of the spindle and is defined as the time taken by a particular sector
to get underneath the read/write head.
3. Data Transfer Rate: Since a large amount of data is transferred in one read/write
operation, the data transfer rate is also a factor in I/O performance. It is
defined as the amount of data read or written by the read/write head per unit time.
4. Controller Overhead: It is the time taken by the disk controller for mapping logical
blocks to physical storage and for keeping track of which sectors are free and which
are used.
5. Queuing Delay: It is the time spent waiting for the disk to become free.
The disk access time is defined as the summation of the seek time, rotational latency, data
transfer time, controller overhead and queuing delay, and is given by the equation:

Disk access time = seek time + rotational latency + data transfer time
                   + controller overhead + queuing delay

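As a quick numerical sketch of the above equation, the following Python lines compute an
access time from assumed parameter values (a 4 ms average seek, a 7200 rpm spindle, a
200 MB/s transfer rate and a 4 KB request); all of these figures are illustrative and are not
taken from the text.

    # Illustrative disk access time calculation (all parameter values are assumptions)
    seek_time = 4e-3                       # average seek time: 4 ms
    rpm = 7200
    rotational_latency = 0.5 * 60 / rpm    # average latency = half a rotation
    transfer_rate = 200e6                  # 200 MB/s sustained transfer rate
    request_size = 4096                    # bytes per request
    transfer_time = request_size / transfer_rate
    controller_overhead = 0.2e-3           # 0.2 ms
    queuing_delay = 0.0                    # assume an idle disk

    access_time = (seek_time + rotational_latency + transfer_time
                   + controller_overhead + queuing_delay)
    print(f"Access time = {access_time * 1000:.2f} ms")   # about 8.39 ms
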
Out of the five parameters mentioned in the above equation, most of the time of the
disk controller goes in moving the read/write head to the desired location and thus seeking
the information. If the disk access requests are processed efficiently, then the performance
of the system can be improved. The aim of a disk scheduling algorithm is to serve all
the disk access requests with the least possible head movement. There are a number of disk
scheduling algorithms, which are presented here in brief.

First Come First Serve (FCFS) scheduling: This approach serves the disk access
requests in the order in which they arrive in the queue.

Shortest Seek Time First (SSTF) scheduling: The Shortest Seek Time First disk scheduling
algorithm selects the request from the queue which requires the least movement of the
head.

SCAN scheduling: The current head position and the head direction are the necessary
inputs to this algorithm. Disk access requests are serviced by the disk arm as the arm
starts from one end of the disk and moves towards the other end. On reaching the
other end, the direction of the head is reversed and the requests continue to be
serviced.

C-SCAN scheduling: Unlike the SCAN algorithm, C-SCAN does not serve any request on
the return trip. Instead, on reaching the end, it moves back to the beginning of the
disk and then serves the requests.

LOOK scheduling: LOOK is similar to the SCAN algorithm with a single difference:
after serving the last request in the current direction, the LOOK algorithm does not go all
the way to the end; instead, it immediately reverses its direction and starts serving the
requests towards the other end.
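
The head-movement cost of these policies can be compared with a small simulation. The
Python sketch below implements FCFS and SSTF for an assumed request queue and start
position (the track numbers are arbitrary example values, not taken from the text); SCAN,
C-SCAN and LOOK can be added in the same style.

    def fcfs_distance(requests, start):
        """Total head movement when requests are served in arrival order."""
        total, pos = 0, start
        for track in requests:
            total += abs(track - pos)
            pos = track
        return total

    def sstf_distance(requests, start):
        """Total head movement when the nearest pending request is served first."""
        pending, total, pos = list(requests), 0, start
        while pending:
            nearest = min(pending, key=lambda t: abs(t - pos))
            total += abs(nearest - pos)
            pos = nearest
            pending.remove(nearest)
        return total

    queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed pending track requests
    print(fcfs_distance(queue, start=53))          # 640 tracks of head movement
    print(sstf_distance(queue, start=53))          # 236 tracks of head movement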

5.4.2 Optical Memories


So far, the storage devices you have studied are based on either electric charge or
magnetic fields. Magnetic memories are primarily used as secondary storage devices,
but they can easily be damaged. However, they have a lower cost per bit than solid state
devices.
The first laser-based memory was developed in 1982 by Philips and Sony. Laser-based
storage devices use a laser beam to read or write data and are called optical
memories or optical storage devices. Laser beams can be controlled more precisely
and accurately than magnetic read/write heads, and data stored on optical drives remains
unaffected by magnetic disturbances in its surroundings.

Initially, these optical storage devices, commonly known as compact disks (CD) or CD-DA
(Digital Audio), were used to store only audio data of 60 minutes duration. Later,
the huge commercial success of the CD led to the development of low cost optical disk
technology. These CDs can be used as auxiliary storage and can store any type of
digital data. A variety of optical-disk devices have been introduced. We briefly review
some of these types.

Compact Disk ROM (CD-ROM)
Compact Disks or CD-ROMs are made of a 1.2 mm thick sheet of a polycarbonate
material. Each disk surface is coated with a reflective material, generally aluminium.
The standard size of a compact disk is 120 mm in diameter. An acrylic coat is applied
on top of the reflective surface to protect the disk from scratches and dust.

Figure 5.11: Outer Layout of a CD

Unlike magnetic disks, data on an optical disk is recorded in a spiral-shaped track.
Adjacent turns of the track are separated by a distance of 1.6 µm. Data in a track is recorded
in the form of lands and pits, as shown in Figure 5.13. When a focused laser beam is incident
on the optical disk, the disk is burned as per the digitally recorded data, forming a
pit and land structure. The data is read from the surface by measuring the intensity of
the reflected beam. The pit area scatters the incident beam, whereas the land reflects
the incident beam; these are read as “0” and “1” respectively.

Figure 5.12: Spiral track of CD    Figure 5.13: Land & Pit formation in CD track

As shown in Figure 5.12, the tracks in a CD are in a spiral shape. The tracks in CDs are
further divided into sectors. All sectors in a CD are equal in length. This means that the
density of data recorded on the disk is uniform across all the tracks. Inner tracks have
fewer sectors whereas outer tracks have more sectors. CD-ROM devices use the
constant linear velocity (CLV) method for reading the disk content. In this method, the
disk is rotated at a lower velocity as the head moves away from the centre of the disk. This
ensures a constant linear velocity at each track of the CD. The format of a sector of a
CD is shown in Figure 5.14.

SYNC      | HEADER   | DATA        | L-ECC
12 Bytes  | 4 Bytes  | 2048 Bytes  | 288 Bytes

Figure 5.14: Sector format of CD


Data on a CD-ROM is stored in a track as a sequence of sectors. As shown in
Figure 5.14, each sector has four fields, viz. sync, header and user data, followed by an error
correcting code. Each part of the sector is described below:

• Sync: It is the first field in every sector. The sync field is 12 bytes long. The
first byte of the sync field contains a sequence of 0s, followed by 10 bytes of all 1s
and 1 byte of all 0s.

• Header: The header is a four byte field in the sector. Three bytes are used to
represent the sector address and one byte is used to represent the mode, i.e.
how the subsequent fields in the sector are going to be used. There are 3 modes:

• Mode Zero: Specifies no user data, i.e. a blank data field.

• Mode One: Specifies user data of 2048 bytes followed by 288 bytes of
error correcting code.
• Mode Two: No error correcting code will be used; thus the subsequent
fields contain 2336 bytes of user data.

• Data: The data field contains 2048 bytes of user data when the mode is 1 or
mode 2.

• L-ECC: The layered error correcting code field is a 288 byte long field which is
used for error detection and correction in mode 1. In mode 2, this field is used
to carry an additional 288 bytes of user data.

Compact Disk Recordable (CD-R)

CD-Recordables are compact disks which are capable of storing any type of digital
data. The physical structure of a CD-R is the same as that of a CD-ROM, as discussed in the
previous section, except that the polycarbonate disk has a very thin layer of an organic dye
before the aluminium coating. A CD-R can record user data only once, but the user can read
the data many times; thus these are also known as CD-WO (write once), or WORM
(write once read many). Many CD writers allow the users to write a CD-R in multiple
sessions until the CD is full. In each writing session, a partition is created on the CD-R.
But once written, data on a CD-R cannot be changed or deleted. There are three types of
organic dyes used in CD-Rs.
Cyanine dyes are the most sensitive amongst the three types. CD-Rs that use cyanine
dyes are green in colour. They are very sensitive to UV rays and can even lose the data if
exposed to direct sunlight for a few days.
Phthalocyanine dyes do not need a stabiliser, unlike cyanine dyes. They are
silver, gold or light green in colour. They are much less sensitive than cyanine
dyes, but if exposed to direct sunlight for a few weeks, the disk may lose the data.
Azo dye is the most stable among all the types. It is the most resistant to UV rays, but if
exposed to direct sunlight for 3-4 weeks, the CD-R may lose the data.
Compact Disk Rewritable (CD-RW)
CD-RWs are re-writable optical disks. The data on a CD-RW can be read or written
multiple times. But before writing again on an already written CD-RW, the disk data
must be erased first. There are two approaches to erasing the data written on a CD-RW.
In the first approach, the entire disk data is erased completely, i.e. all traces of any
previous data are erased. This is called full blanking. In the other approach,
called fast blanking, only the metadata is erased. The latter approach is faster and
allows rewriting of the disk; the first approach is used for confidentiality purposes.
Phase change technology is used in CD-RWs. A phase change disk uses a
material that has significantly different reflectivity in two different phase states. There
is an amorphous state, in which the molecules exhibit a random orientation and which
reflects light poorly, and a crystalline state, which has a smooth surface that reflects
light well. A beam of laser light can change the material from one phase to the other.
The phase change technology of CD-RW works with a 15-25% degree of reflection, whereas
CD-R works with a 40-70% degree of reflection.

Digital Versatile Disk (DVD)

The digital versatile disk, commonly known as DVD, is also an optical storage device like
the CD, CD-R and CD-RW. Among the three, DVDs have the highest storage capacity,
ranging from 1.4 GB to 17 GB. The higher storage capacity is enabled by the
use of laser beams of shorter wavelength as compared to compact disks. A DVD uses a
laser beam of 650 nm, whereas a compact disk uses a laser beam of 780 nm. A shorter
wavelength laser beam creates smaller pits on the polycarbonate disk and thus offers
higher storage capacity for similar dimensions. DVD-Audio and DVD-Video are
standard formats for recording audio and video data on DVDs. Like compact disks,
DVDs also come in various variants like DVD-ROM, DVD-R, DVD-RW etc.

Blu-ray Disk

A Blu-ray disk is an optical disk that can store several hours of high-definition video.
A Blu-ray disk is of the same size as a DVD, but can store 25 GB to 128 GB of data. The
Blu-ray disk was designed to replace DVD technology. It finds applications in gaming,
which uses very high quality animations.

5.4.3 Charge-coupled Devices, Bubble Memories and Solid State Devices

Charge-coupled Devices (CCDs)
Charge-coupled devices are photo-sensitive devices which are used to store digital data.
A CCD is an integrated circuit of MOS capacitors, called cells, which are arranged in an
array-like structure in which each cell is connected to its neighbouring cell. Each
capacitor can hold a charge, which is used to represent logic “1”. While reading
the array of capacitors, each capacitor moves its charge to the neighbouring capacitor
on the next clock pulse. CCD arrays are mainly used for representing image and video
data, where the presence or absence of charge in a capacitor represents the
corresponding pixel intensity.
As mentioned, CCDs are highly photo-sensitive in nature and thus produce a good
quality picture even if the light is dim or the illumination intensity is low. Nowadays, CCDs
are widely used in digital cameras, satellite imagery, radar images and other high
resolution imagery applications.

Magnetic Bubble Memories


The working principle of magnetic bubble memory is similar to that of the charge coupled
devices (CCD) discussed in the previous section. Magnetic bubble memory is an
arrangement of small magnetised areas, called bubbles, on a series of parallel tracks made of
magnetic material. Each bubble represents a binary “1” and the absence of a bubble on the
magnetic material is interpreted as “0”. Binary data is read from the memory by
moving these bubbles towards the edge of a track under the influence of an external
magnetic field. The bubbles produced by the magnetic field remain persistent and do not
demagnetise on their own. So, magnetic bubble memories are non-volatile
memories.

Solid State Devices (SSD)

Solid state drives, also known as solid state storage devices, are based on flash memory.
As discussed, flash memory is a non-volatile memory that uses semiconductor devices
to store data. The major advantage of an SSD is that it is purely an electronic device,
i.e. unlike an HDD, an SSD does not have a mechanical read/write head or other mechanical
components. Hence, reading and writing through an SSD is faster than through an HDD.
Nowadays, SSDs have replaced HDDs in many computer systems; however, SSDs are more
expensive than HDDs.

Check Your Progress 2


1. What will be the storage capacity of a disk which has 8 recording surfaces and 32
tracks per surface, with each track having 64 sectors? Also, what would be the size of
one cylinder of the disk? You may assume that each sector can store 1 MB of data.
……………………………………………………………………………………

……………………………………………………………………………………
2. What would be the rotational latency for the disk specified above, if it has a
rotational speed of 6000 rpm?
……………………………………………………………………………………

……………………………………………………………………………………

3. What are the advantages and disadvantages of using SSD over HDD?
………………………………………………………………………………………………

………………………………………………………………………………

4. What are the differences between CLV and CAV disks?


……………………………………………………………………………………...
……………………………………………………………………………………...

5.5 RAID AND ITS LEVELS


Continuous efforts have been made by researchers to enhance the performance of
secondary storage devices. As pointed out in previous sections, the performance of the
secondary storage is inversely affected by the disk access time: the lower the disk access
time, the higher the performance. What about the idea of providing parallel
access to a group of disks? With the use of parallel access, the amount of data that can
be accessed per unit time can be enhanced by a significant factor. A mechanism which
splits the data over multiple disks is known as data striping. Parallel
access allows users to access data stored on multiple disks simultaneously, and thus
reduces the effective reading time. Does data striping ensure protection of data against
disk failure?

Another important factor for secondary storage is the reliability of the data storage
system. Storing the same data on more than one disk enhances reliability: if one disk
fails, then the data can be accessed through another disk. Replicating data on multiple
disks is called mirroring. Mirroring brings redundancy to the data. Many schemes have
been employed to enhance the performance and reliability of data storage, and collectively
they are called redundant arrays of inexpensive disks (RAID). Based on the trade-off
between reliability and performance, RAID schemes have been categorised into
various RAID levels.
Data striping increases the data transfer speed, as different data bytes are accessed in
parallel from different disks in a single disk access time, whereas mirroring protects
data from disk failures. If one disk fails, then the same data is accessed from the copy of
the data stored on another disk.

RAID Levels

RAID Level-0: RAID level-0 implements block striping (splitting) of data with no protection
against disk failures. In block striping, each block is stored on a different disk in the
array. For example, the ith block of a file will be stored on disk (i mod n) + 1, where n is
the total number of disks in the array; a small sketch of this mapping is given after the
figure below. In this case, a significant enhancement of performance can be observed, as
n blocks can be accessed (one from each disk) in a single disk access time.

(a) RAID Level 0
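
A minimal Python sketch of the RAID level-0 block-to-disk mapping described above; the
1-based disk numbering follows the (i mod n) + 1 expression in the text, while the number of
disks and blocks used in the example are arbitrary.

    def raid0_disk_for_block(i, n):
        """Return the 1-based disk number that holds block i under RAID level-0 striping."""
        return (i % n) + 1

    # Distribute the first 8 blocks of a file over an array of 4 disks
    n_disks = 4
    for block in range(8):
        print(f"block {block} -> disk {raid0_disk_for_block(block, n_disks)}")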


RAID Level-1: This level protects data by implementing mirroring. If a system has 2
disks, then each block of information will be stored on both of the disks. This ensures that
if one disk fails, then the same copy of the block can be accessed from the second disk.
Mirroring introduces redundancy, unlike level-0, which only increases the data transfer rate.

Strip 0 Strip 1 Strip 2 Strip 3 Strip 0 Strip 1 Strip 2 Strip 3


Strip 4 Strip 5 Strip 6 Strip 7 Strip 4 Strip 5 Strip 6 Strip 7
Strip 8 Strip 9 Strip 10 Strip 11 Strip 8 Strip 9 Strip 10 Strip 11

Strip 12 Strip 13 Strip 14 Strip 15 Strip 12 Strip 13 Strip 14 Strip 15

(b) RAID Level 1


RAID Level-2: This level uses error detection and correction bits, which are extra bits
used for the detection and correction of a single-bit error in a byte. This is why this level
is also known as the memory-style error correcting code organisation. If one of the disks
fails, then the parity bits and the remaining bits of the byte are used to recover the bit value.


b0 b1 b2 b3 f0(b) f1(b) f2(b)

(c) RAID 2 (Redundancy through Hamming Code)


RAID Level-3: A single parity disk is used in this scheme. The parity bit for a sector is
computed and stored on the parity disk. During recovery, the parity bit of the sector is
recomputed, and if the computed parity bit is equal to the stored parity, the missing bit is 0,
otherwise it is 1. This RAID level is also known as the bit-interleaved parity organisation.
Thus, it has the advantage over level-2 that only a single parity disk is used, as compared to
a number of parity disks in level-2. The biggest drawback of this approach is that all the
disks are used in a single I/O operation for the computation of the parity bits, which slows
down disk access and also restricts parallel access.

b0 b1 b2 b3 Parity(b)

(d) RAID Level 3

RAID Level-4: This level uses block striping, and one disk is used to keep the parity
blocks. This is also called the block-interleaved parity organisation. The advantage of
block interleaving is that the parity block, along with the corresponding blocks on the other
disks, is used to retrieve a damaged block or the blocks of a failed disk. Unlike in level-3,
a block access reads only one disk, which allows parallel access to other blocks stored on
other disks in the array. A sketch of this parity computation and recovery is given after the
figure below.

Block 0 Block 1 Block 2 Block 3 Parity (0-3)

Block 4 Block 5 Block 6 Block 7 Parity (4-7)

Block 8 Block 9 Block 10 Block 11 Parity (8-11)

Block 12 Block 13 Block 14 Block 15 Parity (12-15)

(e) RAID 4 (Block level Parity)
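
The block-interleaved parity used in levels 3, 4 and 5 is simply a bitwise XOR across the
corresponding blocks of the data disks, so any single lost block can be rebuilt by XOR-ing the
surviving blocks with the parity block. The following Python sketch illustrates this with small
byte strings; the block contents are invented for the example.

    from functools import reduce

    def xor_blocks(blocks):
        """Bitwise XOR of equally sized blocks; used both to build and to rebuild parity."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # Four data blocks on four disks (illustrative contents); parity goes on a fifth disk
    data = [b"\x10\x20\x30\x40", b"\x01\x02\x03\x04",
            b"\xAA\xBB\xCC\xDD", b"\x0F\x0E\x0D\x0C"]
    parity = xor_blocks(data)

    # Suppose the third disk fails: rebuild its block from the survivors and the parity
    recovered = xor_blocks([data[0], data[1], data[3], parity])
    print(recovered == data[2])   # True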

RAID Level-5: This level distributes both the data blocks and the parity blocks over all the
disks in the array. For each set of blocks, one disk stores the parity while the data is spread
out on the other disks, and the disk holding the parity changes from one set of blocks to the
next. This structure is also known as block-interleaved distributed parity.

Block 0 Block 1 Block 2 Block 3 Parity (0-3)

Block 4 Block 5 Block 6 Parity (4-7) Block 7

Block 8 Block 9 Parity (8-11) Block 10 Block 11

Block 12 Parity (12-15) Block 13 Block 14 Block 15

Parity (16-19) Block 16 Block 17 Block 18 Block 19

(f) RAID 5 (Block-level Distributed Parity)


RAID Level-6: Level-6 uses error correcting codes, in addition to parity, for the recovery of
damaged data, and it therefore provides protection against multiple disk failures. For
recovery purposes, two redundant blocks are stored for each set of data blocks on two
different disks; hence this arrangement is also called the p + q redundancy scheme. Here, p
denotes the parity blocks while q denotes the additional blocks of error correcting code
computed over the same data.

Block 0     Block 1      Block 2      Block 3     P (0-3)     Q (0-3)

Block 4     Block 5      Block 6      P (4-7)     Q (4-7)     Block 7

Block 8     Block 9      P (8-11)     Q (8-11)    Block 10    Block 11

Block 12    P (12-15)    Q (12-15)    Block 13    Block 14    Block 15

(g) RAID Level 6

The table below summarises the characteristics of the various RAID levels; the I/O request
rate and the data transfer rate are given as read/write.

RAID Level 0 (Category: Striping)
Features: a) The disk is divided into blocks or sectors. b) Non-redundant.
I/O request rate: Large blocks: Excellent. Data transfer rate: Small blocks: Excellent.
Typical application: Applications which require high performance for non-critical data.

RAID Level 1 (Category: Mirroring)
Features: a) A mirror disk, which contains the same data, is associated with every disk.
b) Data recovery is simple: on failure, data is recovered from the mirror disk.
I/O request rate: Good / fair. Data transfer rate: Fair / fair.
Typical application: May be used for critical files.

RAID Level 2 (Category: Parallel access)
Features: a) All member disks participate in every I/O request. b) The spindles of all the
disks are synchronised to the same position. c) The blocks are very small in size (byte or
word). d) Hamming code is used to detect double-bit errors and correct single-bit errors.
I/O request rate: Poor. Data transfer rate: Excellent.
Typical application: Not useful for commercial purposes.

RAID Level 3 (Category: Parallel access)
Features: a) Parallel access as in level 2, with small data blocks. b) A simple parity bit is
computed for the set of data for error correction.
I/O request rate: Poor. Data transfer rate: Excellent.
Typical application: Large I/O request size applications, such as imaging and CAD.

RAID Level 4 (Category: Independent access)
Features: a) Each member disk operates independently, which enables multiple
input/output requests in parallel. b) Blocks are large and a parity strip is created for the bits
of the blocks of each disk. c) The parity strip is stored on a separate disk.
I/O request rate: Excellent / fair. Data transfer rate: Fair / poor.
Typical application: Not useful for commercial purposes.

RAID Level 5 (Category: Independent access)
Features: a) Allows independent access as in level 4. b) Parity strips are distributed across
all disks. c) The distribution avoids the potential input/output bottleneck found in level 4.
I/O request rate: Excellent / fair. Data transfer rate: Fair / poor.
Typical application: High request rate, read intensive, data lookup.

RAID Level 6 (Category: Independent access)
Features: Also called the p+q redundancy scheme; much like level 5, but stores extra
redundant information to guard against multiple disk failures.
I/O request rate: Excellent / poor. Data transfer rate: Fair / poor.
Typical application: Applications requiring extremely high availability.

Check Your Progress 3


1. What is the need of RAID?
……………………………………………………………………………………

……………………………………………………………………………………

2. Which RAID levels provide good data transfer rate?


……………………………………………………………………………………
……………………………………………………………………………………
3. Which RAID level is able to fulfil large number of I/O requests?
……………………………………………………………………………………

……………………………………………………………………………………

5.6 SUMMARY

This unit introduces the concept of the memory hierarchy, which is primarily required due
to the high cost per bit of high speed memory. The processing unit has registers,
cache, main memory and secondary or auxiliary memory. The main memory consists
of RAM and ROM. This unit explains the logic circuits and organisation of RAM and
ROM. The unit also explains several different types of secondary storage memories.
The unit provides details on the hard disk and its characteristics. It also gives details of
different kinds of optical disks. The concepts of access time and constant linear and
angular velocity have also been explained in detail. For larger computer systems a
simple hard disk is not sufficient; rather, an array of disks called RAID is used in
such systems to provide good performance and reliability. The concept of RAID and the
various levels of RAID have been defined in this unit. The next unit will introduce you
to the concept of high speed memories.

5.7 ANSWERS

Check Your Progress 1


1. RAM is a sequential circuit, volatile, requires refreshing (in the case of DRAM) and is a
read/write memory; ROM, PROM and EPROM are non-volatile memories.
ROM is a combinational circuit. All these ROMs are written mostly once and
read many times.
2. Flash memory is a non-volatile semiconductor memory, where a section of the
memory or a set of memory words can be erased at once. It is portable and
mechanically robust as there is no mechanical movement in the memory to read or
write data. Flash memory is used in USB memory sticks and in the SD and micro SD
memory cards used in cameras and mobile phones.
3. (a) Since a word of data is 16 bits, it will have 16 data input and 16 data output
lines, if not multiplexed.
(b) The number of words is 16K, which is 2^14. Thus, 14 address lines would be
required.
4. The memory must select one of the 4K bytes, which is 2^12. In case a square array
is used (as shown in Figure 5.6), then 6 row address and 6 column address lines
would be needed, which can be multiplexed. So just 6 address lines would be
sufficient. However, for a non-square memory you may require all 12 address lines.
5. Two chips placed side by side will be required to make a 256K × 8 memory. Four such
combinations would be required to make 1 MB of memory. Thus, you will require 8
such chips.
Check Your Progress 2
1. Storage capacity of a disk = recording surfaces × tracks per surface × sectors per
track × size of each sector
Storage capacity of the disk = 8 × 32 × 64 × 1 MB = 2^3 × 2^5 × 2^6 × 2^20 Bytes = 2^34 Bytes = 16 GB
One cylinder will have = 8 × 64 × 1 MB = 2^3 × 2^6 × 2^20 Bytes = 2^29 Bytes = 512 MB

2. The time of one rotation = 1/6000 min = 60/6000 sec = 1/100 sec = 10 milliseconds
Rotational latency = on an average, the time of half a rotation = 5 ms

3. SSD drives do not require any mechanical rotation and are therefore less prone to
failure. In addition, they are much faster than HDDs. But they are more expensive
than HDDs.

4. The size of the sectors on CLV disks is the same on the entire disk; therefore, these
disks are rotated at different speeds. The density of data is the same in all the sectors.
In CAV disks the rotation speed is the same; thus, the sector size is larger in the outer
tracks. However, the reading/writing process, in general, is faster.
Check Your Progress 3
1. RAID is a set of storage devices put together for better performance and
reliability. Different RAID levels have different objectives.

2. Good data transfer rates are provided by RAID levels 0, 2 and 3.

3. Large numbers of I/O requests are fulfilled by RAID levels 0, 1, 4, 5 and 6.

UNIT 6 ADVANCE MEMORY
ORGANISATION
Structure Page Nos.
6.0 Introduction
6.1 Objectives
6.2 Locality of Reference
6.3 Cache Memory
6.4 Cache Organisation
6.4.1 Issues of Cache Design
6.4.2 Cache Mapping
6.4.3 Write Policy
6.5 Associative Memory
6.6 Interleaved Memory
6.7 Virtual Memory
6.8 Summary
6.9 Answers

6.0 INTRODUCTION
In the last unit, the concept of the memory hierarchy was discussed. The unit also
discussed different types of memories, including RAM, ROM, flash memory,
secondary storage technologies etc. The memory system of a computer uses a variety of
memories for program execution. These memories vary in size, access speed, cost and
type, such as volatility (volatile/non-volatile), read-only or read-write memories etc.
As you know, a program is loaded into the main memory for execution. Thus, the size
and speed of the main memory affect the performance of a computer system. This
unit will introduce you to the concept of cache memory, which is a small memory between
the processing unit and the main memory. Cache memory enhances the performance of a
computer system. Interleaved memories and associative memories are also used as
faster memories. Finally, the unit discusses the concept of virtual memory, which
allows programs larger than the physical memory to be executed.

6.1 OBJECTIVES
After going through this Unit, you will be able to:
 explain the concept of locality of reference;
 explain the different cache organisation schemes;
 explain the characteristics of interleaved and associative memories;
 explain the concept of virtual memory.

6.2 LOCALITY OF REFERENCE


The memory system is one of the important components of a computer. A program is
loaded into the main memory for execution. Therefore, a computer should have a
main memory which is as fast as its processor and which has a large size. In
general, the main memory is constructed using DRAM technology, which is about 50
to 100 times slower than the processor speed. This may slow down the process of
instruction execution of a computer. Using SRAM may change this situation, as it is
almost as fast as a processor; however, it is a costly memory. So, what can you do? Is
it possible to use a large main memory built from DRAM, but use a faster small memory
between the processor and the main memory? Will such a configuration enhance the
performance of a computer? This section will try to answer these questions.

The important task of a computer is to execute instructions. It has been observed that,
on an average, 80-85 percent of the execution time is spent by the processor in
accessing the instructions or data from the main memory. The situation becomes even
worse when the instruction to be executed or the data to be processed is not present in the
main memory.
Another factor, which has been observed by analysing various programs, is that during
program execution the processor tends to access a section of the program
instructions or data for a specific time period. For example, when a program enters
a loop structure, it continues to access and execute the loop statements as long as the
looping condition is satisfied. Similarly, whenever a program calls a subroutine, the
subroutine statements are going to execute. In another case, when a data item stored
in an array or array-like structure is accessed, it is very likely that either the next data
item or the previous data item will be accessed by the processor. All these phenomena
are known as Locality of Reference or the Principle of Locality.
So, according to the principle of locality, for a specific time period the processor
tends to make memory references close to each other, or accesses the same memory
addresses again and again. The earlier type is known as spatial locality. Spatial
locality specifies that if a data item is accessed, then a data item stored in a location near
the data item just accessed may be accessed in the near future. There can be a special
case of spatial locality, which is termed sequence locality. Consider a program that
accesses the elements of a single dimensional array, which is a linear data structure,
in the sequence of its index. Such accesses will read/write a sequence of memory
locations one after the other. This type of locality, which is a case of spatial locality, is
referred to as sequence locality.
Another type of locality is temporal locality: if a data item is accessed or
referenced at a particular time, then the same data item is expected to be accessed again for
some time in the near future. Typically, it is observed in loop structures and subroutine
calls.
As shown in Figure 6.1, when the program enters the loop structure at line 7, it will
execute the loop statements again and again, multiple times, till the loop terminates. In
this case, the processor needs to access instructions 9 and 10 frequently. On the other
hand, when a program accesses a data item stored in an array, then in the next iteration
it accesses a data item stored in a memory location adjacent to the previous one.

Figure 6.1: Loop structure

The locality of reference, be it spatial or temporal, suggests that in most cases
accesses to program instructions and data are confined to a locality; hence, a very fast
memory that captures the instructions and data nearer to the current instruction and
data accesses can potentially enhance the overall performance of a computer. Thus,
attempts are continuously made to utilize the precious time of the processor efficiently.
A high speed memory, called cache memory, was developed. Cache memory utilises
the principle of locality to reduce the memory references to the main memory by
keeping not only the currently referenced data item but also the nearby data items. The
cache memory and its organisation is discussed in the next sections.
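
The loop-and-array pattern of Figure 6.1 can be illustrated with a few lines of Python (the
array values and the computation are invented for the example): the sequential pass over the
array exhibits spatial (sequence) locality, while the repeated fetching of the loop body and the
reuse of the accumulator exhibit temporal locality.

    # Illustrative loop showing both kinds of locality (values are arbitrary)
    marks = [55, 72, 68, 90, 81, 47, 63, 88]

    total = 0
    for i in range(len(marks)):   # the loop body is fetched again and again: temporal locality
        total += marks[i]         # consecutive array elements are read: spatial/sequence locality

    print(total / len(marks))     # average of the marks: 70.5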

6.3 CACHE MEMORY


A processor makes many memory references to execute an instruction. A memory
reference to the main memory is time consuming, as the main memory is slow compared
to the processing speed of the processor. These memory references, in general, tend
to form a cluster in the memory, whether it is a loop structure, the execution of a
subroutine or an access to a data item stored in an array.

If you keep the content of the cluster of expected memory references in a small,
extremely fast memory, then the processing time of an instruction can be reduced by a
significant amount. Cache memory is a very high speed and expensive memory as
compared to the main memory, and its access time is closer to the processing speed of
the processor. Cache memory acts as a buffer memory between the processor and the
main memory.

Because cache is an expensive memory, its size in a computer system is also very
small as compared to the main memory. Thus, the cache stores only those memory
clusters containing data/instructions which have just been accessed or are going to be
accessed in the near future. Data in the cache is updated based on the principle of locality
explained in the previous section.

How is the data access time reduced significantly by using cache memory?

Data in the main memory is stored in the form of fixed size blocks/pages. Cache memory
contains some blocks of the main memory. When the processor wants to read a data item
from the main memory, a check is made in the cache whether the data item to be accessed
is present in the cache or not. If the data item to be accessed is present in the cache, then it
is read by the processor from the cache. If the data item is not found in the cache, a
memory reference is made to read the data item from the main memory, and a copy of
the block containing the data item is also copied into the cache for near-future references,
as explained by the principle of locality. So, whenever the processor attempts to read the
data item the next time, it is likely that the data item is found in the cache, which saves the
time of a memory reference to the main memory.

Figure 6.2: Cache Memory

As shown in Figure 6.2, if the requested data item is found in the cache, it is called a
cache hit and the data item will be read by the processor from the cache. If the requested
data item is not found in the cache, called a cache miss, then a reference to the main
memory is made, the requested data item is read, and the block containing the data item is
also copied into the cache.
The average access time for any data item is reduced significantly by using a cache, as
compared to not using a cache. For example, if a memory reference takes 200 ns and the
cache takes 20 ns to read a data item, then five continuous references to the same data item
will take:
Time taken with cache : 20 (for cache miss) + 200 (memory reference)
+ 4 x 20 (cache hit for subsequent access)
= 300 ns

Time without cache : 5 x 200 = 1000 ns


In the given example, the system first looks into the cache for the requested data item.
As it is the first reference to the data item, it will not be present in the cache, called a
cache miss, and thus the requested data item will be read from the main memory. For
subsequent requests of the same data item, the data item will be read from the cache
only and no references will be made to the main memory as long as the requested data
remains in the cache.

Effective access time is defined as the average access time of a memory access when a
cache is used. The access time of a memory access is reduced in the case of a cache hit,
whereas it increases in the case of a cache miss. In the above mentioned example the
processor takes 20 + 200 ns for a cache miss, whereas it takes only 20 ns for each cache hit.
Now suppose we have a hit ratio of 80%, i.e. 80 percent of the time a data item would
be found in the cache and 20% of the time it would be accessed from the main
memory. The effective access time (EAT) will be computed as:

effective access time = (cache hit x data access time from cache only )
+(cache miss x data access time from cache and main memory)

effective access time = 0.8 (hit ratio) x 20 (cache hit time)


+ 0.2( miss ratio) x 220 (cache miss and memory reference)

effective access time = 0.8 x 20 + 0.2 x 220


= 16 + 44
= 60 ns

From the example it is clear that the cache reduces the average access time and effective
access time for a data item significantly and enhances the computer's performance.
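
The two calculations above can be reproduced with a few lines of Python; the numbers
(20 ns cache access, 200 ns memory access, five references, 80% hit ratio) are the ones used
in the text.

    cache_time = 20        # ns per cache access
    memory_time = 200      # ns per main memory access

    # Five references to the same item: one miss followed by four hits
    five_refs_with_cache = (cache_time + memory_time) + 4 * cache_time
    five_refs_without_cache = 5 * memory_time
    print(five_refs_with_cache, five_refs_without_cache)   # 300 1000

    # Effective access time for an 80% hit ratio
    hit_ratio = 0.8
    eat = hit_ratio * cache_time + (1 - hit_ratio) * (cache_time + memory_time)
    print(eat)   # 60.0 ns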

Check Your Progress 1

1. What is the importance of locality of reference?


……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
2. What is block size of main memory for cache transfer?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

3. The hit ratio of a computer system is 90%. The cache has an access time of 10 ns,
whereas the main memory has an access time of 50 ns. Compute the effective
access time for the system.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

6.4 CACHE ORGANISATION


The main objective of using a cache is to decrease the number of memory references to
the main memory to a significant level by keeping the frequently accessed data/instructions
in the cache. The higher the hit ratio (the number of times the requested data item is found
in the cache / the total number of times the data item is requested), the lower the number of
references to the main memory. So there are a number of questions that need to be
answered while designing the cache memory. These cache design issues are discussed in the
next subsection.

6.4.1 Issues of Cache Design


In this section, we present some of the basic questions that should be asked for
designing cache memory.

What should be the size of cache memory?

Cache is an extremely fast but very expensive memory as compared to the main
memory. So a large cache memory may shoot up the cost of the computer system, and
too small a cache might not be very useful in practice. So, based on various statistical
analyses, if a computer system has 4 GB of main memory then the size of the cache
may go up to 1 MB.

What would be the block size for data transfer between cache and main memory?

Block size directly affects the cache performance. A larger block size means that only a
smaller number of blocks fit in the cache, whereas a small block size means each block
contains fewer data items. As you increase the block size, the hit ratio first increases, but it
decreases as you increase the block size further. A further increase in block size will not
necessarily result in access to newer data items, as the probability of accessing the data
items in a block with a larger number of data items tends to decrease. So, an optimal block
size should be chosen to maximise the hit ratio.

How are blocks going to be replaced in the cache?

As execution of the process continues, the processor requests new data items. For
new data items, and thus new blocks, to be present in the cache, the blocks containing
old data items must be replaced. So there must be a mechanism which selects for
replacement the block which is least likely to be needed in the near future.

When will the changes in the blocks be written back to the main memory?

During program execution, the value of a data item in a cache block may get
changed. So the changed block must be written back to the main memory in order to
reflect those changes and to ensure data consistency. So there must be a policy which
decides when a changed cache block is written back to the main memory.
In certain computer organisations, the cache memories for data and instructions are
placed separately. This results in separate address spaces for the instructions and data.
These separate caches for instructions and data are known as the instruction cache and the
data cache respectively. If the processor requests an instruction, then it is provided by the
instruction cache, whereas a requested data item is provided by the data cache. Using
separate cache memories for instructions and data enhances computer performance.
While some computer systems implement different cache memories for data and
instructions, others implement multiple levels of cache memory. Two-level cache,
popularly known as L1 cache and L2 cache, is most commonly used. The size of the
level 1 cache or L1 cache is smaller than that of the level 2 or L2 cache. Comparatively
more frequently used data/instructions are stored in the L1 cache.

As discussed earlier, the main memory is divided into blocks/frames/pages of k
words each. Each word of the memory unit has a unique address. A processor requests
the read/write of a memory word. When a processor's request for a data item cannot be
serviced by the cache memory, i.e. a cache miss occurs, the block containing the requested
data item is read from the main memory and a copy of the same is stored in the cache
memory. A cache memory is organised as a sequence of lines. Each cache line is
identified by a cache line number. A cache line stores a tag and a block of data. The cache
and main memory structure is shown in Figure 6.3. The general structure of a cache
memory having M lines and of a main memory of size N = 2^n words is shown in
Figure 6.3(a) and Figure 6.3(b) respectively.

(a) Cache structure
(b) Main Memory structure
Figure 6.3: Structure of Cache and Main Memory
An example of a cache memory of size 512 words is shown in Figure 6.4. The example
shown in Figure 6.4 has a main memory of 64K words of 16 bits each, and the cache
memory can have 512 words of 16 bits each. To read a data item, the processor sends a
16-bit address to the cache, and if the cache misses, then the data item/word is fetched from
the main memory and the accessed data item/word is also copied into the cache. Please
note that the size of a block is just 1 memory word in this case.

Figure 6.4: Example of Cache and Main Memory


6.4.2 Cache Mapping
As discussed earlier, a processor's request to access the main memory is first checked in the
cache memory. Where can a block of main memory be placed in the cache, and how can the
processor determine if the requested data is present in the cache? Answers to these
questions are provided by the cache mapping scheme. A mapping mechanism maps a
block from the main memory to a cache line. The mapping is required as the cache is
much smaller in size than the main memory, so only a few blocks from the
main memory can be stored in the cache. There are three types of mapping in caches:
Direct Mapping:
Direct mapping is the simplest amongst all three mapping schemes. It maps each block of main memory into only one possible cache line. Direct mapping can be expressed as a modulo M function, where M is the total number of cache lines, as shown:
i = j modulo M
where, i = number of cache line to which main memory block would
be mapped.
j = the block address of main memory, which is being requested
M = total number of cache lines
So, line 0 of the cache will store block 0, M, 2M….. and so on. Similarly, line 1 of
cache will store block 1, M + 1, 2M + 1, and so on.
An address of a main memory word, as shown in Figure 6.3(b), consists of n bits. This address of each word of main memory has two parts: a block number (n-k bits) and a word number within the block (k bits). Here, each block of the main memory contains 2^k words. The cache memory interprets the (n-k) bit block number into two parts: tag and line number. As indicated in Figure 6.3(a), the cache memory contains M lines. Assuming m address bits (2^m = M) are used to identify each line, then the most significant (n-k) - m bits of the (n-k) bit block number are interpreted as the tag and m bits are used as the line number of the cache. Please note that the tag bits are used to identify which of the main memory blocks is presently in that cache line. The following diagram summarizes the bits of a main memory address and the related cache address:
Main Memory Address (n bits):
Main Memory Block Address ((n-k) bits) | Address bits for identifying a word in a Block (k bits)

Address mapping to Cache Line:
Address for Tag (((n-k)-m) bits) | Address to identify the Cache line (m bits) | Address bits for identifying a word in a Block (k bits)

Figure 6.5: Main Memory address to Cache address mapping in Direct Mapping Scheme
Please note the following points in the diagram given above.
The size of a Main Memory address: n bits
Total number of words in main memory: 2^n
The size of a Main memory block address: most significant (n-k) bits
Number of words in each block: 2^k
In case a referenced memory address is in the cache, the bits in the tag field of the main memory address should match the tag field of the cache.
Example: Let us consider a main memory of 16 MB having a word size of a byte and a block size of 128 bits or 16 words (one word is one byte in this case). The cache memory can store 64 KB of data. Determine the following:
a) Size of various addresses and fields
b) How will you determine whether a given hexadecimal address is in the cache memory, assuming a direct mapping cache is used?
Solution:
a) Size of main memory = 16 MB = 2^24 Bytes
Each block consists of 16 words, thus the total number of blocks in main memory would be = 2^24/16 = 2^20 blocks, thus n = 24 and k = 4 (as 2^4 = 16). Therefore, the main memory block address is (n-k) = 20 bits.
Data size of the cache is 64 KB = 2^16 Bytes
Total number of cache lines (M) = cache size/block size = 2^16/2^4 = 2^12, therefore, the number of cache lines = 2^12 and m = 12
Length of the address field to identify a word in a Block (k) = 4 bits
Length of the address to identify a Cache line (m) = 12 bits
Length of the Tag field = (24 - 4) - 12 = 8 bits.
Thus, a main memory address to cache address mapping for the given example would
look like this:
Main Memory Address n = 24 bits
Address of a Block of data = 20 bits k=4 bits
Address mapping for direct cache mapping scheme
Tag = 8 bits Cache line number address m = 12 bits k=4 bits
Figure 6.6: Direct Cache mapping
b) Consider a 24-bit main memory address given in hexadecimal as FEDCBA. The following diagram will help in identifying whether this address is in the cache memory or not, in case the direct mapping scheme is used.

Main Memory Address n = 24 bits = 6 hex digits


F E D C B A
Address of a Block of data = 20 bits k=4 bits
FEDCB A
Address mapping for direct cache mapping scheme
Tag = 8 bits Cache line number address m = 12 bits k=4 bits
FE DCB A
1111 1110 1101 1100 1011 1010
Figure 6.7: Direct Cache mapping example

Now, the following steps will be taken by the processing logic of the processing unit and the hardware of the cache memory:
1. The tag number (FE in this case) is compared against the tag stored in the cache line (DCB in this case).
2. In case both are identical
then (this is the case of a cache hit): the Ath word from the cache line DCB is accessed by the processing logic.
else (this is a case of a cache miss): the 16-word block containing this address is read from the main memory into cache line DCB and its tag is set to FE. The required Ath word is then accessed by the processing logic.
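The tag/line/word split and the hit-or-miss check described in the steps above can be written in a few lines of Python. This is a minimal sketch for the parameters of this example (k = 4, m = 12); the function name and the assumed cache contents are illustrative only.

```python
def split_direct(addr: int, k: int = 4, m: int = 12):
    """Split a main memory address into (tag, line, word) for direct mapping."""
    word = addr & ((1 << k) - 1)          # least significant k bits
    line = (addr >> k) & ((1 << m) - 1)   # next m bits select the cache line
    tag = addr >> (k + m)                 # remaining most significant bits
    return tag, line, word

tag, line, word = split_direct(0xFEDCBA)
print(hex(tag), hex(line), hex(word))     # 0xfe 0xdcb 0xa

# Hit check: the request hits only if the tag stored in cache line 0xDCB equals 0xFE
cache_tags = {0xDCB: 0xFE}                # assumed current cache contents
hit = cache_tags.get(line) == tag
print("cache hit" if hit else "cache miss")
```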
Direct mapping is very easy to implement but has a disadvantage: the location in which a specific block is to be stored in the cache is fixed. This arrangement can lead to a low hit ratio when the processor wants to read two data items belonging to two different blocks that map to the same cache line; each time the other data item is requested, the block in the cache must be replaced by the requested one. This phenomenon is also known as thrashing.
Associative Mapping:
Associative mapping is the most flexible mapping in cache organisation, as it allows any block of the main memory to be stored in any cache line. It uses the complete (n-k) bits of the block address field as the tag field. Each cache line stores an (n-k) bit tag and (2^k × word size in bits) of data. When a data item/word is requested, the (n-k) bit tag field is used by the cache control logic to search all the tag fields stored in the cache simultaneously. If there is a match (cache hit), then the corresponding data item is read from the cache; otherwise (cache miss) the block of data that contains the word to be accessed is read from the main memory and may replace any of the cache lines. In addition, the block address of the accessed block from the main memory replaces the tag of that cache line. Associative mapping is also the fastest mapping amongst all types. Different block replacement policies are used for replacing the existing cache content by the newly read block; however, those are beyond the scope of this unit. This mapping requires the most complex circuitry, as it requires all the cache tags to be checked simultaneously against the block address of the access request.
Main Memory Address:
Address of a block of data, which is the same as the Tag ((n-k) bits) | Address bits for identifying a word in a Block (k bits)

Every line of an associative cache has the following format:

Tag ((n-k) bits) | Data block of 2^k words (data bits)

Figure 6.8: Associative mapping


The following example explains associative mapping.
Example: Let us consider a main memory of 16 MB having a word size of a byte and a block size of 128 bits or 16 words (one word is one byte in this case). The cache memory can store 64 KB of data. Determine the size of various addresses and fields, if associative mapping is used.
Solution:
Size of main memory = 16 MB = 2^24 Bytes
Each block consists of 16 words, thus the total number of blocks in main memory would be = 2^24/16 = 2^20 blocks, thus n = 24 and k = 4 (as 2^4 = 16). Therefore, the main memory block address is (n-k) = 20 bits.
Data size of the cache is 64 KB = 2^16 Bytes
Total number of cache lines (M) = cache size/block size = 2^16/2^4 = 2^12 lines
Length of the Tag field = (24 - 4) = 20 bits.
Size of data in a line = 2^k × word size in bits = 2^4 × 8 = 128 bits.
Thus, the size of one line of cache = 128 + 20 = 148 bits.
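The field sizes computed above can be cross-checked with a short Python sketch. In associative mapping the whole 20-bit block address acts as the tag, and a lookup is conceptually a parallel comparison against every stored tag (approximated here with a set membership test). The helper name and the assumed stored tags are illustrative.

```python
def split_associative(addr: int, k: int = 4):
    """Split an address into (tag, word): the full block address is the tag."""
    word = addr & ((1 << k) - 1)
    tag = addr >> k
    return tag, word

# A tiny simulated associative cache: tags of the blocks assumed to be cached
stored_tags = {0xFEDCB, 0x00012}

tag, word = split_associative(0xFEDCBA)
print(f"tag = {tag:05X}, word offset = {word:X}")   # tag = FEDCB, word offset = A
print("hit" if tag in stored_tags else "miss")      # hit

# Line size check from the example: 20-bit tag + 16 words x 8 bits = 148 bits
print(20 + (1 << 4) * 8)                            # 148
```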
Set Associative Mapping:
The major disadvantage of direct mapping is that the cache line onto which a memory block is mapped is fixed, which can result in a poor hit ratio and unused cache locations. Associative mapping removes these hurdles, as any block of memory can be stored in any cache location, but an associative cache uses a complex matching circuit and a big tag field. Set associative mapping reduces the disadvantages of both the above-mentioned cache mapping techniques and builds on their strengths. In the set associative mapping scheme, cache memory is divided into v sets, where each set contains w cache lines. So, the total number of cache lines M is given as:
M = v × w
where v is the number of sets and w is the number of cache lines per set.
The cache mapping is done using the formula:
i = j modulo v
where i is the set number and j is the block address of the block to be accessed.
Cache control logic interprets the address field as a combination of tag and set fields
as shown:

Tag Set Word

((n-k)-d) bits d bits k bits

Figure 6.8: Set Associative mapping

Cache mapping logic uses d bits to identify the set, as v = 2^d, and ((n-k)-d) bits are used to represent the tag field. In set-associative mapping, a block j can be stored in any of the cache lines of set i. To read a data item, the cache control logic simultaneously compares the ((n-k)-d) bits of the tag field against the tags of all the cache lines of the set identified by the d bits of the set field; if there is a match, the data item is read from that cache line, otherwise the data item is read from the main memory and the corresponding block is copied into the cache accordingly. Set associative mapping is also known as w-way set-associative mapping. It uses a smaller number of tag bits ((n-k)-d bits) as compared to the (n-k) tag bits of associative mapping.
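A similar sketch for w-way set-associative mapping is given below, assuming the same parameters as the earlier examples (n = 24, k = 4) with, say, 2^11 sets of 2 lines each (d = 11); these numbers and the function name are illustrative assumptions.

```python
def split_set_associative(addr: int, k: int = 4, d: int = 11):
    """Split an address into (tag, set, word) for a set-associative cache."""
    word = addr & ((1 << k) - 1)           # k bits: word within the block
    set_no = (addr >> k) & ((1 << d) - 1)  # d bits: set number (v = 2**d sets)
    tag = addr >> (k + d)                  # remaining (n-k)-d bits: tag
    return tag, set_no, word

# The set field is the same as i = j modulo v applied to the block address j
addr = 0xFEDCBA
block = addr >> 4
print(block % (1 << 11) == split_set_associative(addr)[1])   # True

tag, set_no, word = split_set_associative(addr)
print(tag, set_no, word)   # the block may sit in any of the w lines of this set
```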
A comprehensive example showing possible locations of main memory blocks
in Cache for different cache mapping schemes is discussed next.

Example: Assume that the main memory of a computer consists of 256 bytes, with each memory word of one byte. The memory has a block size of 4 words. This system has a cache which can store 32 bytes of data. Show how the main memory content will be mapped to the cache if (i) direct mapping, (ii) associative mapping and (iii) 2-way set associative mapping is used.
Solution:
Main memory size = 256 words (a word = one byte) = 2^8 ⇒ n = 8 bits
Block Size = 4 words = 2^2 ⇒ k = 2 bits
The visual representation of this main memory:
Block Number of memory (in decimal)   Block Address   Word Address   Assumed data stored in the location
0    000000   00   1001010
0    000000   01   1101010
0    000000   10   0001010
0    000000   11   0001010
1    000001   00   1111010
1    000001   01   0101010
1    000001   10   1001010
1    000001   11   1101010
2    000010   00   1101010
2    000010   01   0001010
2    000010   10   0101010
2    000010   11   0011010
…    …        …    …
7    000111   00   0000010
7    000111   01   0000011
7    000111   10   0000011
7    000111   11   0001110
…    …        …    …
63   111111   00   1111010
63   111111   01   1111011
63   111111   10   0101011
63   111111   11   0101110
Figure 6.9: An example of Main Memory Blocks

(i) Direct Mapping Cache:


The size of cache = 32 bytes
The block size of main memory = words in one line of cache =4 ⇒ k=2 bits
The cache has 32/4 = 8 lines, with each line storing 32 bits of data (4 words)
Therefore, m = 3 as 2^3 = 8
Thus, Tag size = (n-k) - m = (8 - 2) - 3 = 3 bits

The address mapping for an address: 11111101


Block Address of Main Address of a word in a
Memory Block
111 111 01
111 111 01
Tag Line
Number
Line Number = 111 = 7 in decimal
Tag = 111

The address mapping for an address: 00001011


Block Address of Main Address of a word in a
Memory Block
000 010 11
000 010 11
Tag Line
Number
Line Number = 010 = 2 in decimal
Tag = 000
The following cache memory, which uses direct mapping, shows these two words (along with their complete blocks) in the cache:
Line Number Contents of Cache Memory
of Cache Tag of Data in Cache = 4 words = 32 bits
in Decimal Data Word 11 Word 10 Word 01 Word 00
0
1
2 000 0011010 0101010 0001010 1101010
3
4
5
6
7 111 0101110 0101011 1111011 1111010
Figure 6.10: An example Cache memory with Direct mapping

The access for an address: 00011110


Block Address of Main Address of a word in a
Memory Block
000 111 10
000 111 10
Tag Line
Number

In case a word like 00011110 is to be accessed, which is not in the cache memory and as per the mapping should be mapped to line number 111 = 7, the cache access logic will compare the tags, which are 000 for this address and 111 in cache line 7. This is the situation of a cache miss, so accordingly this block will replace the content stored in line 7, which after replacement is shown below:
7 000 0001110 0000011 0000011 0000010
Please note the change in data values in cache line 7.

(ii) Associative Mapping Cache:


The size of cache = 32 bytes
The block size of main memory = words in one line of cache =4 ⇒ k=2 bits
Therefore, the cache has 32/4 = 8 lines, with each line storing 32 bits of data (4 words)
Tag size = n-k = (8 - 2) = 6 bits
The address mapping for an address: 11111101
Block Address of Main Address of a word in a
Memory Block
111111 01
111111 01
Tag

The address mapping for an address: 00001011


Block Address of Main Address of a word in a
Memory Block
000010 11
000010 11
Tag

The following associative cache shows these two words.


Line Number Contents of Cache Memory
of Cache Tag of Data in Cache = 4 words = 32 bits
in Decimal Data Word 11 Word 10 Word 01 Word 00
0 111111 0101110 0101011 1111011 1111010
1 000010 0011010 0101010 0001010 1101010
2 000111 0001110 0000011 0000011 0000010
3
4
5
6
7
Figure 6.11: An example Cache memory with Associative mapping

The access for an address: 00011110


Block Address of Main Address of a word in a
Memory Block
000111 10
000111 10
Tag Number

A word like 00011110 can be stored in any cache line; for example, in the cache memory shown above it is in line 2 and can be accessed from there.
(iii) 2-way Set Associative Mapping:
The size of cache = 32 bytes
The block size of main memory = words in one line of cache =4 ⇒ k=2 bits
The number of lines in a set (w) = 2 (this is a 2 way set associative memory)
The number of sets (v) = Size of cache in words/(words per line × w )
= 32/(4×2) =4
Thus, the set number can be identified using d = 2 bits as 2^2 = 4
Tag size = (n-k) - d = (8 - 2) - 2 = 4 bits
The address mapping for an address: 11111101
Block Address of Main Memory Address of a word in a
Block
1111 11 01
1111 11 01
Tag Set Number
Set number = 11 = 3 in decimal
Tag = 1111
The address mapping for an address: 00001011
Block Address of Main Memory Address of a word in a
Block
0000 10 11
0000 10 11
Tag Set Number
Set number = 10 = 2 in decimal
Tag = 0000

Contents of Cache Memory (each way of a set stores a tag and a data block of 4 words = 32 bits, shown as Words 11 10 01 00):
Set 0: Way 0 - empty;  Way 1 - empty
Set 1: Way 0 - empty;  Way 1 - empty
Set 2: Way 0 - empty;  Way 1 - Tag 0000, Data: 0011010 0101010 0001010 1101010
Set 3: Way 0 - Tag 1111, Data: 0101110 0101011 1111011 1111010;  Way 1 - Tag 0001, Data: 0001110 0000011 0000011 0000010
Figure 6.12: An example Cache memory with Set Associative mapping

The access for an address: 00011110


Block Address of Main Address of a word in a
Memory Block
0001 11 10
0001 11 10
Tag Set
Number
Set number = 11= 3 in decimal
Tag = 0001
Word 00011110 can be stored and accessed from the cache set 11 at the second
line (way 1).
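The three decompositions used in this example (n = 8, k = 2, m = 3, d = 2) can be reproduced with the small Python sketch below; it only re-derives the field values already shown above, and the function name is illustrative.

```python
def fields(addr: int):
    """Return the fields of an 8-bit address under the three mapping schemes."""
    word = addr & 0b11                 # k = 2 bits: word within the block
    block = addr >> 2                  # 6-bit block address
    return {
        "direct":          {"tag": block >> 3, "line": block & 0b111, "word": word},
        "associative":     {"tag": block, "word": word},
        "2-way set assoc": {"tag": block >> 2, "set": block & 0b11, "word": word},
    }

for a in (0b11111101, 0b00001011, 0b00011110):
    print(f"{a:08b} -> {fields(a)}")
# 11111101: direct line 7 (tag 111), set 3 (tag 1111)
# 00001011: direct line 2 (tag 000), set 2 (tag 0000)
# 00011110: direct line 7 (tag 000), set 3 (tag 0001)
```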

6.4.3 Write Policy


Data in the cache and in the main memory may be read and written by many processes, either through the processor or through the input/output devices. Multiple reads pose no challenge to the state of a data item; as you know, the cache maintains a copy of frequently required data items to improve system performance. However, whenever a process writes/updates the value of a data item in the cache or in the main memory, the other copy must be updated as well. Otherwise, it will lead to inconsistent data and the cache content may become invalid. Problems associated with writing in cache memories can be summarised as:

• Caches and main memory can be altered by multiple processes which may
result in inconsistency in the values of the data item in cache and main
memory.
• If there are multiple CPUs with individual cache memories, data item written
by one processor in one cache may invalidate the value of the data item in
other cache memories.
These issues can be addressed in two different ways:
1. Write through: This writing policy ensures that whenever a CPU updates a cache, it writes the change to the main memory as well. In multiple processor systems, the caches of the other CPUs need to watch the updates made by other processors' caches to the main memory and make suitable changes accordingly. It may create a bottleneck, as many CPUs try to access the main memory.
2. Write back: The cache control logic uses an update (dirty) bit. Changes are written only in the cache, and whenever a data item is updated in the cache, the update bit of its block is set. As long as the data item is in the cache, no update is made in the main memory. A block whose update bit is set is written back to the main memory only at the time the block is replaced in the cache. This policy ensures that all accesses to the main memory go through the cache, which may also create a bottleneck. A minimal sketch contrasting the two policies is given below.
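The following is a minimal Python sketch contrasting the two policies, assuming a toy single-level cache; the class and method names (SimpleCache, write_word, evict) are illustrative and not a standard API.

```python
class SimpleCache:
    """Toy one-level cache illustrating write-through vs. write-back."""
    def __init__(self, policy: str):
        self.policy = policy                 # "write-through" or "write-back"
        self.lines = {}                      # address -> (value, dirty_bit)
        self.main_memory = {}                # address -> value

    def write_word(self, addr: int, value: int) -> None:
        if self.policy == "write-through":
            self.lines[addr] = (value, False)
            self.main_memory[addr] = value   # main memory updated on every write
        else:                                # write-back
            self.lines[addr] = (value, True) # only the update (dirty) bit is set

    def evict(self, addr: int) -> None:
        value, dirty = self.lines.pop(addr)
        if self.policy == "write-back" and dirty:
            self.main_memory[addr] = value   # written back only at replacement

wb = SimpleCache("write-back")
wb.write_word(0x10, 99)
print(wb.main_memory)      # {} -- not yet written back
wb.evict(0x10)
print(wb.main_memory)      # {16: 99}
```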
You may refer to further readings for more details on cache memories.
Check Your Progress 2
1. Assume that a computer system has the following memories:
RAM of 64 words, with each word of 16 bits
Cache memory of 8 blocks (block size of cache is 32 bits)
Find in which location of cache memory a decimal address 21 can be found, if Associative Mapping is used.
……………………………………………………………………………………
……………………………………………………………………………………

2. For the system as given above, find in which location of cache memory a decimal
address 27 will be located if Direct Mapping is used.
…………………………………………………………………………………………
………………………………………………………………………………
3. For the system as given above, find in which location of cache memory a decimal
address 12 will be located if two way set associative Mapping is used.
……………………………………………………………………………………
……………………………………………………………………………………

6.5 MEMORY INTERLEAVING


As you know, cache memory is used as a buffer memory between the processor and the main memory to bridge the difference between the processor speed and the access time of the main memory. So, when the processor requests a data item, it is first looked up in the cache, and only if the data item is not present in the cache (called a cache miss) is the main memory accessed to read the data item. To further enhance the performance of the computer system and to reduce the memory access time of the main memory in case of a cache miss, the concept of memory interleaving is used. Memory interleaving is of three types, viz. lower order memory interleaving, higher order memory interleaving and hybrid memory interleaving. In this section we will discuss lower order memory interleaving only. Discussion of the other memory interleaving techniques is beyond the scope of this unit.
In the memory interleaving technique, the main memory is partitioned into n equal-sized modules called memory banks, and the technique is known as n-way memory interleaving. Each memory module has its own memory address register, base register and instruction register; thus each memory bank can be accessed individually and simultaneously. Instructions of a process are stored in successive memory banks. So, in a single memory access time, n successive instructions of the process can be accessed from the n memory banks. For example, suppose the main memory is divided into four modules or memory banks denoted as M1, M2, M3 and M4; then the instructions of a process will be stored as: first instruction in M1, second instruction in M2, third instruction in M3, fourth instruction in M4, the fifth instruction again in M1, and so on.
When the processor issues a memory fetch command during the execution of the program, the memory access system creates n consecutive memory addresses and places them in the respective memory address registers of all the memory banks in the right order. Instructions are read from all memory modules simultaneously and loaded into n instruction registers. Thus, each fetch for a new instruction results in the loading of n consecutive instructions into the n instruction registers of the CPU, in the time of a single memory access. Figure 6.13 shows the address mapping for 4-way memory interleaving. An address is resolved by interpreting its least significant bits to select the memory module; the remaining most significant bits form the address within the memory module. For example, with an 8-bit address and 4-way memory interleaving, the two least significant bits will be used for module selection and the six most significant bits will be used as the address within the module.
8-bit address in 4-way memory interleaving
Address in the module Module Selection
6 bits 2 bits

Figure 6.13: Address mapping for Memory interleaving


The following example demonstrates how the main memory words are distributed to different interleaved memory modules. For this example, a four-bit main memory address is used.
Main Memory
Address  Data
0000     10
0001     20
0010     30
0011     40
0100     50
0101     60
0110     80
0111     76
1000     46
1001     25
1010     58
1011     100
1100     23
1101     78
1110     35
1111     11

Module 00            Module 01
Address  Data        Address  Data
00       10          00       20
01       50          01       60
10       46          10       25
11       23          11       78

Module 10            Module 11
Address  Data        Address  Data
00       30          00       40
01       80          01       76
10       58          10       100
11       35          11       11

Figure 6.14: Example of Memory interleaving


Please note in the figure above how the various data values are distributed among the modules.
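The distribution in Figure 6.14 follows directly from lower-order interleaving: the two least significant address bits select the module and the remaining bits form the address within it. A short Python sketch using the data values of the figure:

```python
# Lower-order 4-way interleaving: module = addr mod 4, in-module address = addr div 4
main_memory = [10, 20, 30, 40, 50, 60, 80, 76, 46, 25, 58, 100, 23, 78, 35, 11]

modules = [[None] * 4 for _ in range(4)]
for addr, data in enumerate(main_memory):
    module = addr & 0b11          # two least significant bits select the module
    in_module = addr >> 2         # remaining bits address the word inside the module
    modules[module][in_module] = data

for m, contents in enumerate(modules):
    print(f"Module {m:02b}: {contents}")
# Module 00: [10, 50, 46, 23]
# Module 01: [20, 60, 25, 78]
# Module 10: [30, 80, 58, 35]
# Module 11: [40, 76, 100, 11]
```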

6.6 ASSOCIATIVE MEMORY


Though cache is a high speed memory, it still needs to search for the data items stored in it. Many search algorithms have been developed to reduce the search time in a sequential or random access memory. The search time for a data item can be reduced further to a significant extent if the data item is identified by its content rather than by its address. Associative memory is a content addressable memory (CAM); that is, the memory unit of an associative memory is addressed by the content of the data rather than by the physical address. The major advantage of this type of memory is that it can be searched in parallel on the basis of data. When a data item is to be read from an associative memory, the content of the data item, or part of it, is specified. The memory locates all data items which match the specified content and marks them for reading. Because of the architecture of the associative memory, a complete data item or a part of it can be searched in parallel.

Hardware Organization
Associative memory consists of a memory array and logic for m words with n bits per word, as shown in the block diagram in Figure 6.15. Both the argument register (A) and the key register (K) have n bits each. Each bit of the argument and key registers corresponds to one bit of a word. The match register M has m bits, one for each memory word.
The key register provides a mask for choosing a particular field or key in the argument
word. The entire argument is compared with each memory word only if the key
register contains all 1s. Otherwise, only those bits in the argument that have 1s in their
corresponding positions of the key register are compared. Thus, the key provides a
mask or identifying information, which specifies how reference to memory is made.
The content of the argument register is simultaneously matched with every word in the memory. The words that match the content of the argument register set the corresponding bits in the match register. The set bits of the match register indicate that the corresponding words have a match. Thereafter, the memory is accessed sequentially to read only those words whose corresponding bits in the match register have been set.


Figure 6.15: Block diagram of associative memory

Example: Consider an associative memory of just 2 bytes. The argument and key registers are also shown in the diagram.

Description The content of associative Memory Match Word


Argument Register 0 1 1 0 0 0 0 1
Key Register 1 1 1 1 0 0 0 0
Bits to be matched 0 1 1 0
Word 1 0 1 1 0 0 1 1 0 Match
Word 2 1 0 0 1 1 0 0 0 Not matched
Figure 6.16: An Example of Associative matching

Please note that as the four most significant bits of the key register are 1, only those bit positions are matched.
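The key-masked comparison of Figure 6.16 can be simulated bit-wise in a few lines of Python; the variable names are illustrative. Only the bit positions where the key register holds 1 take part in the match.

```python
argument = 0b01100001
key      = 0b11110000          # only the four most significant bits are compared
words    = [0b01100110, 0b10011000]

match_register = []
for w in words:
    # a word matches if it agrees with the argument in every position masked by the key
    match_register.append((w & key) == (argument & key))

print(match_register)          # [True, False] -- word 1 matches, word 2 does not
```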

6.7 VIRTUAL MEMORY


As we know, a program is loaded into the main memory for execution. The size of the program is limited by the size of the main memory, i.e. you cannot load a program into the main memory whose size is larger than that of the main memory. A virtual memory system allows users to write programs even larger than the main memory. Virtual memory works on the principle that portions of a program or its data are loaded into the main memory as per the requirement. This gives the programmer an illusion of having a very large main memory at their disposal. When an address is generated to reference a data item, the virtual address generated by the processor is mapped to a physical address in the main memory. The translation or mapping is handled automatically by the hardware by means of a mapping table.

Let us say you have a main memory of size 256K (2^18) words. This requires 18 bits to specify a physical address in the main memory. The system also has an auxiliary memory as large as the capacity of 16 main memories. So, the size of the auxiliary memory is 256K × 16 = 4096K words, which requires 24 bits to address. A 24-bit virtual address will be generated by the processor, which will be mapped into an 18-bit physical address by the address mapping mechanism, as shown in Figure 6.17.

Figure 6.17: Virtual Address Mapping to Physical Address


In a multiprogramming environment, programs and data are transferred to and from the auxiliary memory and the main memory based on demands imposed by the processor. For example, suppose program 1 is currently being executed by the CPU. Only program 1, or a portion of it, and its associated data as demanded by the processor are loaded from secondary memory into the main memory. As programs and data continuously move in and out of the main memory, free space keeps getting created, and thus programs or their portions and data will be scattered throughout the main memory.
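A minimal Python sketch of the virtual-to-physical translation of Figure 6.17 is given below, assuming a simple page table and a page size of 2^10 (1K) words; the page size and the page table contents are illustrative assumptions, as the unit does not specify them.

```python
PAGE_BITS = 10                       # assumed page size of 2**10 = 1K words

# Assumed mapping table: virtual page number -> physical frame number
page_table = {3: 5}                  # virtual page 3 -> physical frame 5

def translate(virtual_addr: int) -> int:
    """Map a 24-bit virtual address to an 18-bit physical address."""
    offset = virtual_addr & ((1 << PAGE_BITS) - 1)
    vpn = virtual_addr >> PAGE_BITS               # 14-bit virtual page number
    frame = page_table[vpn]                       # a missing entry would be a page fault
    return (frame << PAGE_BITS) | offset          # 8-bit frame + 10-bit offset

va = (3 << PAGE_BITS) | 0x2A
print(f"virtual {va:06X} -> physical {translate(va):05X}")
```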

Check Your Progress 3


1. How can interleaved memory be used to improve the speed of main memory access?
……………………………………………………………………………………
…………………………………………………………………………………
2. Explain the advantages of using associative memory.
……………………………………………………………………………………
……………………………………………………………………………………

3. What is the need for virtual memory?


……………………………………………………………………………………
……………………………………………………………………………………

6.8 SUMMARY
This unit introduces you to the concepts relating to cache memory. The unit defines some of the basic issues of cache design. The cache mapping schemes were explained in detail: the direct mapping cache uses a simple modulo function but has limited flexibility; associative mapping allows full flexibility but uses complex circuitry and more bits for the tag field; set-associative mapping combines the concepts of the associative and direct mapping caches. The unit also explains the use of memory interleaving, which allows multiple words to be accessed in a single access cycle. The concept of content addressable memories is also discussed. Cache memory, memory interleaving and associative memories are primarily used to increase the speed of memory access. Finally, the unit discusses the concept of virtual memory, which allows execution of programs requiring more than the physical memory space available on a computer. You may refer to the further readings of the block for more details on the memory system.

6.9 ANSWERS
Check Your Progress 1

1. While executing a program over a period of time, or during a specific set of instructions, it was found that memory references to instructions and data tend to cluster to a set of memory locations, which are accessed frequently. This is referred to as locality of reference. It allows you to use a small memory like a cache, which stores the most used instructions and data, to enhance the speed of main memory access.
2. Typical block size of main memory for cache transfer may be 1, 2, 4, 8, 16, 32
words.

3. effective access time = 0.9 (hit ratio) x 10 (cache hit time)


+ 0.1( miss ratio) x (50+10) (cache miss and memory reference)
effective access time = 0.9 x 10 + 0.1x 60
= 9+6
= 15 ns
Check Your Progress 2

1. Main memory size = 64 words (a word = 16 bits) = 2^6 ⇒ n = 6 bits

Block Size = 32 bits = 2 words = 2^1 ⇒ k = 1 bit
The size of cache = 8 blocks of 32 bits each = 8 lines
Tag size for associative mapping = n-k = (6 - 1) = 5
The address mapping for an address: 21 in decimal, that is 010101
Block Address Address of a word in a
Block
01010 1
01010 1
Tag

In associative mapping, the block with the given tag can be stored in any of the 8 lines.
2. Main memory size = 64 words (a word = 16 bits) = 2^6 ⇒ n = 6 bits
Block Size = 32 bits = 2 words = 2^1 ⇒ k = 1 bit
The size of cache = 8 blocks of 32 bits each = 8 lines ⇒ m = 3 bits
Tag size for direct mapping = (n-k) - m = (6 - 1) - 3 = 2
The address mapping for an address: 27 in decimal that is 011011

Block Address of Main Address of a word in a


Memory Block
01 101 1
01 101 1
Tag Line Number
The required word will be found in line number 101 or 5 (decimal)

3. Main memory size = 64 words (a word = 16 bits) = 2^6 ⇒ n = 6 bits

Block Size = 32 bits = 2 words = 2^1 ⇒ k = 1 bit
The number of sets (v) = 4 sets of 2 lines each, thus d = 2

Tag size for set associative mapping = (n-k) - d = (6 - 1) - 2 = 3


The address mapping for an address: 12 in decimal that is 001100
Block Address of Main Address of a word in a
Memory Block
001 10 0
001 10 0
Tag Set Number
Set number = 10 = 2 in decimal
Thus, the required word can be in any of the lines in set number 2.

Check Your Progress 3

1. Memory interleaving divides the main memory into modules. Each of these modules stores the words of main memory as follows (the example uses 4 modules and a 16-word main memory):
Module 0: Words 0, 4, 8, 12    Module 1: Words 1, 5, 9, 13
Module 2: Words 2, 6, 10, 14   Module 3: Words 3, 7, 11, 15
Thus, several consecutive memory words can be fetched from the interleaved memory in one access. For example, in a typical access, words 4, 5, 6 and 7 can be accessed simultaneously from Modules 0, 1, 2 and 3 respectively.

2. Associative memories do not use addresses; they are accessed by content. They are very fast.

3. Virtual memory is useful when large programs are to be executed by a computer having a smaller physical memory.

