Computer Organization and Architecture compiled by: Fikru Tafesse (MSc)

Chapter 1

Data Representation

Data in computers is represented in binary form. The represented data can be a number, text, a movie, a color
(picture), sound, or anything else; it is up to the application software that presents the data to portray it
accordingly. We enter data into a computer using letters, digits, and special symbols, but inside the
computer system unit there are no colors, letters, digits, or any other characters.
Just like any other electrical device, computers understand and respond only to the flow of electrical
charge. They also have storage devices that work based on magnetism. The overall structure of a computer
therefore works only with binary conditions (a semiconductor is conducting or not conducting, a switch is
closed or open, a magnetic spot is magnetized or demagnetized). Hence, data must be represented in the
form of binary code that has a corresponding electrical signal.
The form of binary data representation we are seeking is similar to the binary number system in
mathematics. Nevertheless, we humans are not accustomed to using binary numbers. The main focus
of this chapter is on how data is represented in computers. Since an understanding of the binary number
system is essential for understanding binary data representation, conversion of numbers from one base
to another is also discussed here. The number systems (bases) we will discuss are decimal, binary,
octal, and hexadecimal.

3.1 Number systems


Basically, there are two types of number systems.
I. Non-positional number system: The value of a symbol (digit) does not depend on its position in the
number; a symbol has the same value wherever it appears.
II. Positional number system: The value of a symbol in the number is determined by its position, the
symbol and the base of the number system. In all positional number systems, the base has the
following properties
I. It determines the number of different symbols it has. For example, there are 10 different symbols in
base 10 (decimal) number system and there are 2 symbols in base 2 (binary) number system.
II. The maximum value of a single digit is one less than the base value. For example, the largest single
digit number in the decimal (base 10) number system is 9.
III. The positional value of each symbol is expressed by a power of the base. For example, the value of
the symbol 7 in the decimal number 75 is 70 (7×10^1), while the value of 7 in the decimal number 756
is 700 (7×10^2). The source of the variation is the position of the digit in the number, and this value is
expressed as a multiple of powers of the base.


In a positional number system, the base of a number is indicated by writing it as a subscript at the right
side of the number. For instance, the number 251₁₀ is the decimal number two hundred fifty-one, and
251₈ is two-five-one octal (note that it is not read as two hundred fifty-one octal). Often the base of a
decimal number is not indicated; if the base of a number is not shown, it is assumed to be a decimal
number.

3.1.1 Decimal number system


The decimal number system, also called the base 10 number system, is the number system we use in our
day-to-day life. Its preference by humans is usually attributed to the fact that humans have ten fingers and
are believed to have started counting with them.
The decimal number system has 10 different symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. All
decimal numbers are written as combinations of these 10 digits.

3.1.2 Binary Number System


Although the decimal number system is easily understood by humans, it cannot be used to represent data in
computers because there are only two (binary) states in a computer system. The binary number system,
also known as the base 2 number system, has two digits, 0 and 1, which makes it suitable for data
representation in computers. The two digits of the binary number system correspond to the two
distinct states of digital electronics. A binary digit is referred to as a bit. We can associate the two
binary digits with the two states of electrical systems, magnetic systems, and switches. The following
table shows the conventional association; reversing the association is possible but confusing, since it
goes against the convention.
Data representation using the binary number system results in a large string of 0s and 1s. This makes
the represented data large and difficult to read. Writing such a binary string becomes tedious as well.
To write binary strings in shorthand form and make them readable, the octal and hexadecimal number
systems are used.

             0              1
Electronic   No current     There is current
Magnetic     Demagnetized   Magnetized
Switch       Off            On

3.1.3 Octal Number System


The octal number system, also called the base 8 number system, has 8 different symbols: 0, 1, 2, 3, 4, 5, 6,
and 7. The octal number system is used to write binary numbers in short form. An octal number has about
one-third of the digits of its binary equivalent.

3.1.4 Hexadecimal Number System


The hexadecimal number system, also called the base 16 number system, has 16 different symbols: 0, 1, 2, 3,
4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. It is usually referred to as hex for short. It is used to write binary
numbers in short form; a hex number has about one-fourth of the digits of its binary equivalent. Memory
addresses and MAC addresses are usually written in hex.

3.2 Converting from One Base to Another


i. Conversion from Decimal to Base m
Step 1: Divide the given decimal number by m (the desired base). The result will have a quotient and a
remainder.
Step 2: Divide the quotient by m. Again you get a quotient and a remainder.
Step 3: Repeat step 2 until the quotient becomes 0. Note that we are performing integer division; in
integer division n/m, the quotient is 0 whenever n < m.
Step 4: Collect and arrange the remainders so that the first remainder is the least significant digit and
the last remainder is the most significant digit (i.e., RnRn-1 … R2R1).
Example: Convert the decimal number 47 into binary, octal, and hexadecimal.
a. Conversion to binary
To convert the given decimal number into binary (base 2), it is repeatedly divided by 2.
            Quotient   Remainder
47 ÷ 2      23         1
23 ÷ 2      11         1
11 ÷ 2      5          1
5 ÷ 2       2          1
2 ÷ 2       1          0
1 ÷ 2       0          1
Since the quotient becomes 0 at the last division, the division stops and we collect the remainders
starting from the last one. Hence the result is 101111₂. Note that, starting from the second division,
the quotient of the previous division is used as the dividend.
b. Conversion to Octal
Here the number is divided by 8 because the required base is octal (base 8).
            Quotient   Remainder
47 ÷ 8      5          7
5 ÷ 8       0          5
Therefore, 47 = 57₈.
c. Conversion to Hexadecimal
Since the conversion now is into hexadecimal (base 16), the given decimal number is divided by 16.
            Quotient   Remainder
47 ÷ 16     2          15
2 ÷ 16      0          2
Remember that the remainders are all in decimal; when you write the result in the required base, the
remainders have to be converted into that base. The hexadecimal equivalent of the decimal 15 is F and
that of 2 is 2. For conversion of decimal numbers into binary or octal, there is no need to look up an
equivalent for each remainder; such a lookup is needed only when a remainder is a two-digit number.
Therefore, 47 = 2F₁₆.
Note: For the numbers 10 to 15 we use letters to represent the hexadecimal digits, as follows:
A = 10, B = 11, C = 12, D = 13, E = 14, F = 15.
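The repeated-division procedure above translates directly into a short program. The following Python sketch (an illustration added here, not part of the original notes; the function name decimal_to_base is made up) converts a decimal integer to any base from 2 to 16 by collecting remainders:

DIGITS = "0123456789ABCDEF"

def decimal_to_base(n, m):
    """Convert a non-negative decimal integer n to a string in base m (2..16)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:                          # repeat until the quotient becomes 0
        n, remainder = divmod(n, m)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits))      # last remainder is the most significant digit

print(decimal_to_base(47, 2))    # 101111
print(decimal_to_base(47, 8))    # 57
print(decimal_to_base(47, 16))   # 2F

The three calls reproduce the worked examples for 47 above.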
ii. Conversion from Base m to Decimal
Step 1: Multiply each digit by its positional value.
Step 2: Calculate the sum of the products you get in step 1. The resulting sum is the decimal
equivalent of the given number in base m.
Example 1: Convert the binary number 110001 into decimal.
110001₂ = (1 × 2^5) + (1 × 2^4) + (0 × 2^3) + (0 × 2^2) + (0 × 2^1) + (1 × 2^0)
        = (1 × 32) + (1 × 16) + (0 × 8) + (0 × 4) + (0 × 2) + (1 × 1)
        = 32 + 16 + 0 + 0 + 0 + 1
        = 49
Therefore, 110001₂ = 49.
It is evident that the products for digits that are 0 can be skipped, since they contribute nothing to the
final result. However, you should still keep track of the positional values of the digits that are skipped.
Example 2: Convert the octal number 22 into decimal.
22₈ = (2 × 8^1) + (2 × 8^0)
    = (2 × 8) + (2 × 1)
    = 16 + 2
    = 18
Therefore, 22₈ = 18.
Example 3: Convert the hexadecimal number D1 into decimal.
D1₁₆ = (13 × 16^1) + (1 × 16^0); note that the calculations are in decimal, so the hex digit
D must first be converted into its decimal equivalent (13).
     = (13 × 16) + (1 × 1)
     = 208 + 1
     = 209
Therefore, D1₁₆ = 209.
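The same two steps (multiply each digit by its positional value, then sum the products) can be written as a small Python function. This is only an illustrative sketch; base_to_decimal is an invented name:

DIGITS = "0123456789ABCDEF"

def base_to_decimal(s, m):
    """Convert the string s, written in base m (2..16), to a decimal integer."""
    value = 0
    for position, digit in enumerate(reversed(s.upper())):
        value += DIGITS.index(digit) * (m ** position)   # digit times its positional value
    return value

print(base_to_decimal("110001", 2))   # 49
print(base_to_decimal("22", 8))       # 18
print(base_to_decimal("D1", 16))      # 209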
iii. Conversion from Binary to Octal
It is possible to use the decimal number system as an intermediate base to convert from any base to any
other base. However, for conversion from binary to octal or vice versa, there is a much simpler method:
group the bits in threes, starting from the rightmost bit, and replace each group by its octal digit.
Example: Convert the binary numbers 110011 and 1101111 to octal.
A. 110011
   110 011
    6   3
The bits are grouped in threes, with the equivalent octal digit written below each three-bit group.
Thus, 110011₂ = 63₈.
B. 1101111
   001 101 111
    1   5   7
Since we are left with a single bit at the leftmost position, two 0s are added at the front to create a
three-bit group. The result shows that 1101111₂ = 157₈.
iv. Conversion from Octal to Binary
Step 1: For each octal digit, find the equivalent three digit binary number.
Step 2: If there are leading 0s for the binary equivalent of the leftmost octal digit, remove them.
Example: Find the binary equivalents of the octal numbers 73 and 160.
A. 73
    7   3
   111 011
Since the binary equivalent of the leftmost octal digit (7) has no leading 0s, there is nothing to
remove. Therefore, 73₈ = 111011₂.
B. 160
    1   6   0
   001 110 000
The binary equivalent of the leftmost octal digit 1 has two leading 0s. To get the final result, remove
them and concatenate the rest. Therefore, 160₈ = 1110000₂.
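The grouping method for binary/octal conversion can be sketched in Python as follows (illustrative only; the function names are made up). Bits are padded to a multiple of three and each 3-bit group becomes one octal digit:

def binary_to_octal(bits):
    """Convert a binary string to octal by grouping bits in threes from the right."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)              # pad with leading 0s to a multiple of 3
    groups = [bits[i:i+3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

def octal_to_binary(octal):
    """Convert an octal string to binary: each octal digit becomes three bits."""
    bits = "".join(format(int(d), "03b") for d in octal)
    return bits.lstrip("0") or "0"                           # remove leading 0s

print(binary_to_octal("110011"))    # 63
print(binary_to_octal("1101111"))   # 157
print(octal_to_binary("73"))        # 111011
print(octal_to_binary("160"))       # 1110000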
v. Conversion from Binary to Hexadecimal
One possible way to convert a binary number to hexadecimal is first to convert the binary
number to decimal and then from decimal to hex. However, the simpler way to convert binary
numbers to hex is by grouping, as used in the conversion to octal. Here a single group has 4 bits.
Step 1: Starting from the rightmost bit, group the bits in 4. If the remaining bits at the leftmost
position are fewer than 4, add 0s at the front.
Step 2: For each 4-bit group, find the corresponding hexadecimal number.
Example: Convert the binary numbers 1110110001 and 10011110 to hexadecimal.
A. 1110110001
   0011 1011 0001
    3    B    1
Therefore, 1110110001₂ = 3B1₁₆.

B. 10011110
   1001 1110
    9    E
Therefore, 10011110₂ = 9E₁₆.
vi. Conversion from Hexadecimal to Binary
Step 1: For each hexadecimal digit, find the equivalent four digit binary number.
Step 2: If there are leading 0s for the binary equivalent of the leftmost hexadecimal digit, remove
them.
Example: Find the binary equivalents of the hexadecimal numbers 1C and 823.
A. 1C
    1    C
   0001 1100
After removing the leading 0s of the binary equivalent of the leftmost hexadecimal digit 1,
the result becomes 11100. Therefore, 1C₁₆ = 11100₂.
B. 823
    8    2    3
   1000 0010 0011
There are no leading 0s in the binary equivalent of the hexadecimal digit 8, so we simply
concatenate the binary digits to get the final result. Hence, 823₁₆ = 100000100011₂.
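An analogous sketch for hexadecimal uses 4-bit groups (again just an illustration with invented function names):

def binary_to_hex(bits):
    """Convert a binary string to hexadecimal by grouping bits in fours from the right."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

def hex_to_binary(hexstr):
    """Convert a hexadecimal string to binary: each hex digit becomes four bits."""
    bits = "".join(format(int(d, 16), "04b") for d in hexstr)
    return bits.lstrip("0") or "0"

print(binary_to_hex("1110110001"))   # 3B1
print(hex_to_binary("1C"))           # 11100
print(hex_to_binary("823"))          # 100000100011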
vii. Conversion from Octal to Hexadecimal or Vice Versa
The decimal number system can be used as an intermediate conversion base. As shown in the
sections above, however, the binary number system is far more convenient for conversion to or
from octal and hexadecimal. To convert an octal number to a hexadecimal number, or a
hexadecimal number to octal, the binary number system is used as an intermediate base.
Step 1: Convert the given number into binary.
Step 2: Convert the binary number you got in step 1 into the required base.
Example 1: Convert the octal number 647 to hexadecimal.
Step 1: Convert 647₈ to binary.
    6   4   7
   110 100 111
Step 2: Convert 110100111₂ to hexadecimal.
   0001 1010 0111
    1    A    7
Therefore, 647₈ = 1A7₁₆.
Example 2: Find the octal equivalent of the hexadecimal number 3D5.
Step 1: Convert 3D5₁₆ to binary.
    3    D    5
   0011 1101 0101
Step 2: Convert 1111010101₂ to octal.
   001 111 010 101
    1   7   2   5
Therefore, 3D5₁₆ = 1725₈.

3.3 Binary Representation of Signed Numbers


If the numbers we want to represent are only positive (unsigned) integers, the solution is
straightforward: simply represent the unsigned integer with its binary value. For example, 34 is
represented as 00100010 in 8 bits. In this section, the discussion is on the representation of signed
integers. Signed integers can be represented in several alternative ways; these alternatives
are used for various purposes based on their convenience for particular applications.

Computer storage has a limited capacity to hold data. The number of bits available for data
representation determines the range of integers we can represent. With 4 bits, it is possible to
represent a total of 16 integers. If the number of available bits increases to 5, we can represent 32
integers. In fact, with every bit added, the number of integers we can represent doubles. In general,
the number of integers we can represent with n bits is 2^n. Signed integers include positive
integers, negative integers, and zero, and the 2^n bit patterns are partitioned among them. For
example, with 8 bits it is possible to represent 256 different integers. Typically, 128 of them are
the positive integers and zero, while the remaining 128 are negative integers.

The signed integer representations discussed in this section are sign-magnitude, 1's complement,
2's complement, and excess-N. We assume the number of bits available for representation is 8
unless explicitly specified otherwise.

3.3.1 Sign-Magnitude Representation


In mathematics, positive integers are indicated by a preceding + sign (although it is usually
omitted), and a preceding - sign identifies an integer as negative. In computers there is no
place for a + or a - character; there are only 0s and 1s. A similar way of representing the sign is
to treat the most significant bit as a sign bit, while the remaining bits represent the
magnitude of the integer. By convention, a 0 in the sign bit indicates that the integer is positive
and a 1 indicates that it is negative.
A problem of this representation method is that there are two representations for 0; a positive 0
and a negative 0. An integer with all the magnitude bits and the sign bit set to 0 is a positive 0
while an integer with all the magnitude bits set to 0 and a 1 on its sign bit is a negative 0. This
type of ambiguous representation is undesirable and as a solution, the negative zero can be
ignored. The biggest problem of this representation method is, however, its inconvenience in
binary arithmetic. Early computers, for instance IBM 7090, used this representation.
As an example, the sign-magnitude representations of 79 and -79 in 8 bits are 01001111 and
11001111 respectively. The only difference is in the sign (first) bit; for 79 it is 0 while for -79 it
is 1. Figure 1 shows the representation scheme with 8 bits. In sign-magnitude, the lower half of
the range of bit patterns represents the positive integers starting at positive 0, while the upper
half represents the negative integers starting at negative 0.

Figure 1: Sign-magnitude representation number line; (a) the unsigned decimal value of the
binary number, (b) the binary numbers, (c) the actual value the binary numbers represent in sign-
magnitude representation
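As a rough illustration (not from the original notes; sign_magnitude is an invented helper), the following Python sketch builds the 8-bit sign-magnitude pattern of a small integer; the first bit is the sign and the remaining seven bits hold the magnitude:

def sign_magnitude(value, bits=8):
    """Return the sign-magnitude bit string of value; |value| must fit in bits-1 bits."""
    sign = "1" if value < 0 else "0"
    magnitude = format(abs(value), "0{}b".format(bits - 1))
    assert len(magnitude) == bits - 1, "magnitude too large for the available bits"
    return sign + magnitude

print(sign_magnitude(79))    # 01001111
print(sign_magnitude(-79))   # 11001111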

3.3.2 One’s Complement Integer Representation


Every number system has two complement systems. For a given base n, the complements are the n's
complement and the (n-1)'s complement. Thus, in the decimal number system (base 10), the
complement systems are the 10's complement and the 9's complement. Similarly, in the binary number
system, the complements are the 2's complement and the 1's complement. Calculating the complement
of a binary integer is trivial, and using complements makes subtraction and logical negation, and hence
arithmetic operations in general, very simple.
The one's complement of a binary integer is found by inverting all 0s to 1s and all 1s to 0s, which
makes the complementing operation simple at the hardware level. In one's complement integer
representation, the negative of an integer is represented by its complement.
For example, the one's complement representations of 16 and -16 in 8 bits are 00010000 and
11101111 respectively.
As with sign-magnitude representation, there are two representations for 0. A one’s complement
representation with all 0s is a positive 0 while one with all 1s is a negative 0. Still representations
for positive integers begin with 0 while that of negative integers start with 1. The arrangement of
numbers on the number line is in such a way that it starts with positive 0, puts positive numbers
starting with 1, arranges negative numbers beginning with the smallest one (with the highest
magnitude) and ends with negative 0. Figure 2 shows the arrangement of integers with one’s
complement representation on the number line.

Figure 2: One’s complement number line; (a) the unsigned decimal value of the binary number,
(b) the binary numbers, (c) the actual value the binary numbers represent in one’s complement
representation.
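The inversion of every bit can be sketched as follows (illustrative Python only, assuming an 8-bit width; the function name is made up):

def ones_complement(value, bits=8):
    """Return the bits-wide one's complement representation of a (possibly negative) integer."""
    if value >= 0:
        return format(value, "0{}b".format(bits))
    # a negative number is represented by inverting every bit of its magnitude
    magnitude = format(-value, "0{}b".format(bits))
    return "".join("1" if b == "0" else "0" for b in magnitude)

print(ones_complement(16))    # 00010000
print(ones_complement(-16))   # 11101111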

3.3.3 Two’s Complement Integer Representation


The two’s complement of an integer is found by adding 1 to its one’s complement. As a
reminder, in binary arithmetic, 0+1 = 1 and 1+1 = 0 with a carry of 1 to the next higher
significant bit. A shortcut method to find the two’s complement of a number is to keep all the
bits up to and including the first 1 from the right and invert all the others. Two’s complement
representation of 19 and -19 in 8 bits are 00010011 and 11101101 respectively.
There is only one representation for 0, in which all the bits are set to 0. The consequence is that the
number of representable negative integers is one more than the number of positive ones. The
arrangement of two's complement integers on the number line is similar to that of one's complement,
except that the last value is -1 rather than -0. Figure 3 shows this.
Figure 3: Two’s complement number line; (a) the unsigned decimal value of the binary number,
(b) the binary numbers, (c) the actual value the binary numbers represent in two’s complement
representation.
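The two's complement of an n-bit value can also be computed arithmetically as 2^n minus the magnitude. The sketch below (an illustration, not part of the notes) shows both this view and the invert-and-add-one view:

def twos_complement(value, bits=8):
    """Return the bits-wide two's complement representation of an integer."""
    if value < 0:
        value = (1 << bits) + value       # i.e., 2^bits - |value|
    return format(value, "0{}b".format(bits))

print(twos_complement(19))    # 00010011
print(twos_complement(-19))   # 11101101

# invert-and-add-one view of -19:
inverted = int("".join("1" if b == "0" else "0" for b in twos_complement(19)), 2)
print(format(inverted + 1, "08b"))   # 11101101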

3.4 Floating Point Numbers


In ordinary notation for real numbers, the location of the radix point is indicated by a dot
(or, in some countries, a comma). For the decimal number system the radix point is
commonly known as the decimal point, and in the binary number system it is called the binary point.
The term radix point refers to this point irrespective of the number system.

In mathematics, numbers are written in two common ways: either the radix point is placed at a
fixed location, or the point may appear at any location in the number and extra information is
provided to indicate its actual position. With integers, the radix point is implicitly at the right end
of the number; this is an example of representation with the radix point at a fixed location. In
science, very large and very small numbers are quite common, and writing them with the radix
point at its actual position is inconvenient. Therefore, the common and convenient way of writing
such numbers is scientific notation. In scientific notation, a number is written with the radix point
placed after the first non-zero digit, and the actual position of the radix point is indicated by the
exponent. Scientific notation thus allows the radix point to "float" anywhere in the number, which
makes the representation of very small and very large numbers effective and efficient and makes it
possible to represent numbers over a large range of magnitudes. Manipulation of fixed-point
representations is less costly than that of floating-point numbers, but the range of numbers that can
be represented in fixed point is quite narrow.
In computers, a representation of numbers similar to scientific notation is available and is
known as floating-point representation. Unlike scientific notation, there is no radix point character
in floating-point numbers. For the representation of integers, the only required information is the
magnitude and the sign of the number. To represent a number in floating point or in scientific
notation, we need the following information:
 • the magnitude of the number
 • the sign of the number
 • the magnitude of the exponent
 • the sign of the exponent
 • the base of the number
Of these required pieces of information, only the first four are necessary to represent floating
point numbers in computers. The base of the number is not necessary for representation because
it is always 2 as the number system of computers is binary.
Among the variety of floating number representations in computers, the IEEE 754 standard is the
most widely used. It is commonly referred as the IEEE floating point. Two formats of IEEE
floating point are:
 • Single precision: a 32-bit wide representation with a 23-bit significand, one sign bit, and 8
bits of exponent. The accuracy of binary numbers with a 23-bit significand is equivalent to
about 7 decimal digits.
 • Double precision: a 64-bit wide representation of which 52 bits are for the significand,
one bit is for the sign of the number, and 11 bits are for the exponent. Double precision numbers
are accurate to about 16 decimal digits.
Modern high level languages have numerical data types for both single and double precision
IEEE floating point format. For example the float data type in C and C++ is for single precision
representation and the data type double is for the double precision representation. Figure 4
shows the layout of single and double precision formats of the IEEE floating point
representation. The two formats are exactly the same except for the number of bits they have.
Notice the layout of bits for the sign, exponent and significand.

(a) Single precision

(b) Double precision


Figure 4: Single and double precision IEEE floating point formats

3.4.1 The IEEE Floating Point


It has already been mentioned that to represent a floating-point number we need representations
for the magnitude of the number (the significand), the sign of the number, the magnitude of the
exponent, and the sign of the exponent. Of these four items, only three have a place of their own in
the IEEE floating-point format: the magnitude of the exponent and its sign are combined and
placed within the exponent bits. The magnitude of the number has its own field within the
representation; the set of bits for the magnitude is commonly referred to as the significand (the
word mantissa is also used as a synonym). The sign of the number (the sign of the significand) has
a separate bit: a 0 in the sign bit shows the number is positive, while a 1 is used for negative numbers.
In line with the above discussion, sign-magnitude representation is used for the significand, while
for the exponent a representation without a separate sign bit is used. One's complement, two's
complement, and excess-N all represent signed numbers without a separate sign bit; of these
three alternatives, excess-N is used in the IEEE floating-point format. The single precision
format uses excess-127 and the double precision format uses excess-1023 representation for the
exponent.
The IEEE 754 standard requires the significand to be in normal form. A number is in normal
form when the first digit of the number is non-zero (to be exact, the first digit should be 1, since
the only non-zero digit in the binary number system is 1.) This implies that we need a special
representation for 0. As the first digit of the significand is always 1, we need not represent it
explicitly, which gives us one extra bit of significand for free. Thus, with the 23 bits for the
significand in single precision, the actual number of bits of the significand's magnitude is 24,
and that of double precision is 53. In other words, there is an implied 1 at the beginning
of every floating-point number.
The range of numbers that can be represented with single precision is roughly from 10^-38 to 10^38,
since the range of the exponent in single precision is [-127, 127]: 2^-127 is roughly 10^-38 (because
-127 × log10 2 ≈ -38) and 2^127 is approximately 10^38. By a similar argument, the range of values of
double precision is roughly from 10^-308 to 10^308. However, this range does not include all possible
values, because there are some numbers that cannot be represented exactly. A good
example is the real number 2/3.
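The layout described above can be inspected with Python's struct module. The sketch below (an illustration added here, not part of the notes; the helper name is made up) packs a float into its single-precision bit pattern and splits off the sign, the excess-127 exponent, and the 23-bit significand field:

import struct

def single_precision_fields(x):
    """Return (sign, exponent, significand) fields of the IEEE 754 single-precision encoding of x."""
    (bit_pattern,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bit_pattern >> 31
    exponent = (bit_pattern >> 23) & 0xFF      # stored in excess-127 form
    significand = bit_pattern & 0x7FFFFF       # 23 explicit bits; the leading 1 is implied
    return sign, exponent, significand

sign, exponent, significand = single_precision_fields(-6.5)
print(sign)                          # 1 (negative)
print(exponent - 127)                # 2, since -6.5 = -1.625 x 2^2
print(format(significand, "023b"))   # 10100000000000000000000 (1.101 in binary = 1.625)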

3.5 Binary Coded Decimal (BCD)


For systems that do much I/O but little computation on numbers, it is efficient to use a mechanism
in which each digit of a number is represented by its binary equivalent. The BCD (Binary
Coded Decimal) representation, also called packed decimal, is based on this idea. In order to
have representations for the ten digits of the decimal number system, we need a four-bit string;
thus 0 = 0000, 1 = 0001, 2 = 0010, ..., and 9 = 1001. In BCD, multiples of 8 bits, in which the bits
are grouped in fours, are used to represent decimal numbers. Thus the decimal number 461 is
represented as 0000 0100 0110 0001.
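A BCD encoder is easy to sketch: each decimal digit is replaced by its 4-bit pattern and the result is padded to a multiple of 8 bits. The Python below is an illustration only (to_bcd is an invented name):

def to_bcd(number):
    """Encode a non-negative decimal integer as a BCD bit string, padded to whole bytes."""
    nibbles = "".join(format(int(d), "04b") for d in str(number))
    if len(nibbles) % 8 != 0:                        # pad with leading 0000 nibbles if needed
        nibbles = "0000" * ((8 - len(nibbles) % 8) // 4) + nibbles
    return nibbles

print(to_bcd(461))   # 0000010001100001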
Although BCD avoids complex conversions of binary numbers to decimal and vice versa, it is
inefficient in its use of memory space. Of the 16 possible combinations of a 4-bit string, only 10
are used to represent the digits. Two more bit patterns could be used to represent the positive and
negative signs (although in practice the bit patterns for the digits 0 and 9 are often used as the
positive and negative signs respectively), and the remaining patterns are wasted. Performing
arithmetic in BCD is similar to that of other representations, but the circuitry needed is considerably
more complex. Sign-magnitude as well as complement systems can be used with BCD; the
complement system is either the 9's complement or the 10's complement, and since complements
have the advantage of simplicity in arithmetic, they are preferred.

3.6 Characters
Text documents contain strings of characters. Characters refer to letters of the alphabet, the ten
digits (0 through 9), punctuation marks, characters that are used to format the layout of text on
pages such as the newline, space, and tab characters, and other characters that are useful for
communication. The most widely used character code is the International Reference Alphabet
(IRA), whose American version is called the American Standard Code for Information
Interchange (ASCII). Each character is represented by 7 bits, so a total of 128 different characters
can be represented; characters are usually stored one per 8-bit byte, with the eighth bit sometimes
used for parity.
Another character encoding system is EBCDIC (Extended Binary Coded Decimal Interchange
Code), which is used on IBM mainframes. It uses 8 bits per character (plus a ninth parity bit),
and thus represents 256 characters. As with IRA, EBCDIC is compatible with BCD; in the case of
EBCDIC, the codes 11110000 through 11111001 represent the digits 0 through 9.
ASCII is a standard for use in the United States, and many countries have adapted their own versions
of it. There are also 8-bit extensions of ASCII, which provide an additional 128 code points for
more characters, especially for languages based on the Latin alphabet. To allow encoding
of the characters of all the languages in the world, a character set known as Unicode was devised.
Unicode has several encodings, known as UTF-8, UTF-16, and UTF-32. UTF-8 encodes the first 128
characters exactly as ASCII does, using one byte each; UTF-16 and UTF-32 use 16-bit and 32-bit
code units respectively and can therefore encode many more characters directly. For backward
compatibility with ASCII, the first 128 code points of all the Unicode encodings correspond to the
ASCII characters.
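The relationship between the 7-bit code, the 8-bit bytes, and the Unicode encodings can be seen directly in Python (a small illustration; the byte counts below are those produced by the standard library's encoders):

text = "A"
print(ord(text))                    # 65: the 7-bit ASCII/IRA code point
print(format(ord(text), "07b"))     # 1000001

# The first 128 code points are shared by ASCII and the Unicode encodings,
# but the encodings differ in how many bytes they use per character.
print(text.encode("ascii"))         # b'A'               (1 byte)
print(text.encode("utf-8"))         # b'A'               (1 byte, same as ASCII here)
print(text.encode("utf-16-be"))     # b'\x00A'           (2 bytes)
print(text.encode("utf-32-be"))     # b'\x00\x00\x00A'   (4 bytes)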

Chapter 2: Boolean Algebra and Digital Logic Circuits


2.1 Boolean Algebra

The digital circuitry in digital computers and other digital systems is designed, and its behavior is
analyzed, with the use of a mathematical discipline known as Boolean algebra. The name honors
the English mathematician George Boole, who proposed the basic principles of this algebra in 1854 in
his treatise, An Investigation of the Laws of Thought on Which to Found the Mathematical Theories of
Logic and Probabilities. In 1938, Claude Shannon, a research assistant in the Electrical Engineering
Department at M.I.T., suggested that Boolean algebra could be used to solve problems in relay-switching
circuit design. Shannon's techniques were subsequently used in the analysis and design of electronic
digital circuits.
Boolean algebra turns out to be a convenient tool in two areas:
Analysis: It is an economical way of describing the function of digital circuitry.
Design: Given a desired function, Boolean algebra can be applied to develop a simplified
implementation of that function.
Boolean algebra makes use of logical variables and logical operators. The possible values of a logical
variable are TRUE and FALSE; for ease of use, these values are conventionally represented by 1
and 0 respectively. A system in which the only possible values are 0 and 1 matches both the binary
number system and the binary states of digital electronics, which is why Boolean algebra is used
to analyze digital circuits. The logical operators of Boolean algebra are AND, OR, and NOT, which are
symbolically represented by a dot (∙), a plus sign (+), and an overbar (¯). Often the dot is omitted in a
Boolean expression; hence, A∙B is written as AB.
The operation AND yields true (binary value 1) if and only if both of its operands are true. The
operation OR yields true if either or both of its operands are true. The unary operation NOT inverts the
value of its operand. Other useful derived operators are NAND, NOR, and XOR. NAND (NOT-AND)
is a combination of AND and NOT; its output is the complement of the AND output. NOR (NOT-OR) is
formed by combining OR and NOT; its output is the complement of the OR output. XOR is equivalent
to the expression (X ∙ NOT Y) + (NOT X ∙ Y); it yields 1 if and only if exactly one of its operands has
the value 1. NOT is a unary operator while the others are binary operators, and except for NOT, all of
the operators can be applied to more than two variables.
Table 1 shows the truth tables for these Boolean operators. A truth table shows the results of
an operation for every possible combination of values for its variables.
Table 2 shows important identities of Boolean algebra. These identities are useful in simplifying
Boolean functions in order to find simple circuit designs

Table 1: Truth table for Boolean operators

Table 2: Basic identities of Boolean algebra


Basic Postulates
A + 0 = A                          A . 1 = A                          Identity law
A + 1 = 1                          A . 0 = 0                          Boundedness law
A + A = A                          A . A = A                          Idempotent law
A + A' = 1                         A . A' = 0                         Complement law
A + B = B + A                      A . B = B . A                      Commutative law
A + (B . C) = (A + B) . (A + C)    A . (B + C) = (A . B) + (A . C)    Distributive law
A + (A . B) = A                    A . (A + B) = A                    Absorption law
(A + B) + C = A + (B + C)          (A . B) . C = A . (B . C)          Associative law
(A + B)' = A' . B'                 (A . B)' = A' + B'                 De Morgan's law
(A')' = A                                                             Involution law
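Because each variable can only be 0 or 1, any identity in Table 2 can be checked by brute force over all input combinations. The Python sketch below (an illustration, not part of the notes) verifies the distributive and De Morgan laws this way:

from itertools import product

def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b

for A, B, C in product((0, 1), repeat=3):
    # Distributive law: A + (B . C) = (A + B) . (A + C)
    assert OR(A, AND(B, C)) == AND(OR(A, B), OR(A, C))
    # De Morgan's laws
    assert NOT(OR(A, B)) == AND(NOT(A), NOT(B))
    assert NOT(AND(A, B)) == OR(NOT(A), NOT(B))

print("All checked identities hold for every combination of inputs.")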

2.2 Logic Gates


The fundamental building block of all digital logic circuits is the gate. Logical functions are
implemented by the interconnection of gates. A gate is an electronic circuit that produces an output
signal that is a simple Boolean operation on its input signals. The basic gates used in digital logic are
AND, OR, NOT, NAND, NOR, and XOR. Figure 1 depicts these six gates. Each gate is defined in
three ways: graphic symbol, algebraic notation, and truth table. The symbology used here is the IEEE
standard, IEEE Std 91. Note that the inversion (NOT) operation is indicated by a circle.
Each gate shown in Figure 1 has one or two inputs and one output. However, as already stated, all
of the gates except NOT can have more than two inputs; thus, for example, they can be implemented with
three inputs. When one or more of the input values change, the correct output signal appears almost
instantaneously, delayed only by the propagation time of signals through the gate (known as the gate
delay). In some cases, a gate is implemented with two outputs, one output being the negation of the
other.
Here we introduce a common term: to assert a signal is to cause the signal line to make a
transition from its logically false (0) state to its logically true (1) state. The true (1) state is either
a high or a low voltage state, depending on the type of electronic circuitry.

Figure 1: Basic logic gates


Typically, not all gate types are used in implementation. Design and fabrication are simpler if
only one or two types of gates are used. Thus, it is important to identify functionally complete
sets of gates. This means that any Boolean function can be implemented using only the gates in
the set. The following are functionally complete sets:
AND, OR, NOT
AND, NOT
OR, NOT
NAND
NOR
It should be clear that AND, OR, and NOT gates constitute a functionally complete set, because
they represent the three operations of Boolean algebra. For the AND and NOT gates to form a
functionally complete set, there must be a way to synthesize the OR operation from the AND and
NOT operations.
This can be done by applying De Morgan's theorem:
A + B = (A' ∙ B')'
that is,
A OR B = NOT ((NOT A) AND (NOT B))
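Since NAND by itself is functionally complete, NOT, AND, and OR can all be built from it. The sketch below (illustrative Python, in the spirit of Figure 2(a)) does exactly that and checks the results over all inputs:

from itertools import product

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):       return NAND(a, a)              # A NAND A = NOT A
def AND(a, b):    return NOT(NAND(a, b))         # invert the NAND output
def OR(a, b):     return NAND(NOT(a), NOT(b))    # De Morgan: A + B = (A' . B')'

for a, b in product((0, 1), repeat=2):
    assert NOT(a) == 1 - a
    assert AND(a, b) == (a & b)
    assert OR(a, b) == (a | b)
print("NOT, AND, and OR realized with NAND gates only.")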


Similarly, the OR and NOT operations are functionally complete because they can be used
to synthesize the AND operation. Figure 2 (a) shows how the AND, OR, and NOT
functions can be implemented solely with NAND gates, and Figure 2 (b) shows the same
thing for NOR gates. For this reason, digital circuits can be, and frequently are,
implemented solely with NAND gates or solely with NOR gates. Although this may not
be the minimum-gate implementation, it has the advantage of regularity, which can
simplify the manufacturing process.
With gates, we have reached the most primitive circuit level of computer hardware. An
examination of the transistor combinations used to construct gates departs from that realm
and enters the realm of electrical engineering. For our purposes, however, we are content
to describe how gates can be used as building blocks to implement the essential logical
circuits of a digital computer.

Figure 2: (a) The use of NAND gate (b) The use of NOR gate

2.3 Integrated Circuits


The integrated circuit (IC) is the basic building block of digital circuits. An integrated circuit is a
small silicon semiconductor crystal, called a chip, in which the various gates are interconnected to
form the required circuit. As the technology of ICs has improved, the number of gates
that can be put on a single chip has increased.
Small-scale integration (SSI) devices contain several (usually fewer than 10) independent gates in
a single package.
Medium-scale integration (MSI) devices contain approximately 10 to 200 gates in a single
package, e.g., decoders, adders, and registers.
Large-scale integration (LSI) devices contain between 200 and a few thousand gates in a
single package, e.g., processors, memory chips, and programmable modules.
Very-large-scale integration (VLSI) devices contain thousands of gates in a single package,
e.g., large memory arrays and complex microcomputer chips.
Digital integrated circuits are also classified based on the specific circuit technology to which
they belong. The basic circuit in each technology is either a NAND, a NOR, or an inverter gate.
The most popular logic families of integrated circuits are:
TTL (transistor-transistor logic)
 • Has been in use for many years and is considered a standard.
ECL (emitter-coupled logic)
 • Has an advantage in systems requiring high-speed operation.
MOS (metal-oxide semiconductor)
 • Is suitable for circuits that need high component density.
CMOS (complementary metal-oxide semiconductor)
 • Is preferable in systems requiring low power consumption.

2.3.1 Combinational Circuits


A combinational circuit is an interconnected set of gates whose output at any time is a function
only of the input at that time. As with a single gate, the appearance of the input is followed
almost immediately by the appearance of the output, with only gate delays. In general terms, a
combinational circuit consists of n binary inputs and m binary outputs. As with a gate, a
combinational circuit can be defined in three ways:
Truth table: For each of the 2^n possible combinations of input signals, the binary value of each
of the m output signals is listed.
Graphical symbols: The interconnected layout of gates is depicted.
Boolean equations: Each output signal is expressed as a Boolean function of its input signals.
Implementation of Boolean Functions
Any Boolean function can be implemented in electronic form as a network of gates. For any
given function, there are a number of alternative realizations. Consider the Boolean function
represented by the truth table in Table 3. We can express this function by simply itemizing the
combinations of values of A, B, and C that cause F to be 1:

There are three combinations of input values that cause F to be 1, and if any one of these
combinations occurs, the result is 1. This form of expression, for self-evident reasons, is known
as the sum of products (SOP) form. Figure 3 shows a straightforward implementation with AND,
OR, and NOT gates.
Table 3: Truth table for the function in Equation (1.1)

Figure 3: Sum of products implementation of Table 3


Another form can also be derived from the truth table. The SOP form expresses that the output is
1 if any of the input combinations that produce 1 is true. We can also say that the output is 1 if
none of the input combinations that produce 0 is true. Thus,

This is in the product of sums (POS) form, which is illustrated in Figure 4. For clarity, NOT
gates are not shown. Rather, it is assumed that each input signal and its complement are
available. This simplifies the logic diagram and makes the inputs to the gates more readily
apparent. Thus, a Boolean function can be realized in either SOP or POS form. At this point, it
would seem that the choice would depend on whether the truth table contains more 1s or 0s for
the output function: The SOP has one term for each 1, and the POS has one term for each 0.
However, there are other considerations:
 • It is often possible to derive a simpler Boolean expression from the truth table
than either SOP or POS.
 • It may be preferable to implement the function with a single gate type (NAND or
NOR).
The significance of the first point is that, with a simpler Boolean expression, fewer gates will be
needed to implement the function. Three methods that can be used to achieve simplification are:
 • Algebraic simplification
 • Karnaugh maps
 • Quine-McCluskey tables
Algebraic Simplification
Algebraic simplification involves the application of the identities of Table 2 to reduce the
Boolean expression to one with fewer elements. For example, Equation (1.1) can be simplified
to:

Figure 4: Product of sums implementation of Table 3


This expression can be implemented as shown in Figure 5. The simplification of Equation (1.1)
was done essentially by observation. For more complex expressions, some more systematic
approach is needed.
Common Combinational Circuits
1. Adders
Binary addition differs from Boolean algebra in that the result includes a carry term:
0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10 (a sum bit of 0 with a carry of 1).
However, addition can still be dealt with in Boolean terms. In Table 4, we show the logic for
adding two input bits to produce a 1-bit sum and a carry bit. This truth table could easily be
implemented in digital logic. A digital arithmetic circuit that carries out the addition of a pair of
bits is called a half adder.
However, we are not interested in performing addition on just a single pair of bits. Rather, we
wish to add two n-bit numbers along with a carry from a previous bitwise addition. Such a digital
circuit is called a full adder, and it can be created by combining two half adders. A multiple-bit
adder is then built by putting together a set of full adders so that the carry from one adder is
provided as an input to the next.

Figure 6 shows the block diagram and the logic diagram for a half adder, and Figure 7 shows
those of a full adder. Notice that the sum and carry columns of the half-adder truth table match
the truth tables of the XOR and AND operations respectively. Therefore, the sum and carry of a
pair of bits can be implemented with an XOR gate and an AND gate, as shown in Figures 6 and 7.
Table 4: Binary Addition Truth Table

Figure 6: Half Adder



Figure 7: Full Adder

Figure 8: Construction of a 32-bit Adder using 8-bit Adders


By combining a number of full adders, we can have the necessary logic to implement a multiple-
bit adder such as the one shown in Figure 8. Note that because the output from each adder depends on
the carry from the previous adder, there is an increasing delay from the least significant to the
most significant bit. Each single-bit adder experiences a certain amount of gate delay, and this
gate delay accumulates. For larger adders, the accumulated delay can become unacceptably high.
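The half adder, full adder, and ripple-carry chain can be modelled in a few lines of Python (an illustration of the behaviour, not a hardware description; the function names are invented):

def half_adder(a, b):
    """Sum is XOR of the inputs, carry is AND of the inputs."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Built from two half adders plus an OR for the carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists (most significant bit first), the carry rippling upward."""
    carry, result = 0, []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return list(reversed(result)), carry

print(ripple_carry_add([0, 1, 1, 1], [0, 0, 1, 1]))   # ([1, 0, 1, 0], 0): 7 + 3 = 10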
2. Multiplexers
The multiplexer connects multiple inputs to a single output. At any time, one of the inputs is
selected to be passed to the output. A general block diagram representation is shown in Figure 9,
which represents a 4-to-1 multiplexer. There are four input lines, labeled D0, D1, D2, and D3. One
of these lines is selected to provide the output signal F. To select one of the four possible inputs,
a 2-bit selection code is needed, and this is implemented as two select lines labeled S1 and S2.
Table 5: 4-to-1 Multiplexer Truth Table

Figure 9: Block Diagram of a 4-to-1 Multiplexer


An example 4-to-1 multiplexer is defined by the truth table in Table 5. This is a simplified form
of a truth table: instead of showing all possible combinations of input variables, it shows the
output as data from line D0, D1, D2, or D3. Figure 10 shows an implementation using AND, OR,
and NOT gates. S1 and S2 are connected to the AND gates in such a way that, for any
combination of S1 and S2, three of the AND gates will output 0. The fourth AND gate will output
the value of the selected line, which is either 0 or 1. Thus, three of the inputs to the OR gate are
always 0, and the output of the OR gate will equal the value of the selected input line. Using this
regular organization, it is easy to construct multiplexers of size 8-to-1, 16-to-1, and so on.

Figure 10: Implementation of a 4-to-1 Multiplexer


Multiplexers are used in digital circuits to control signal and data routing. An example is the
loading of the program counter (PC).
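The AND-OR structure of the 4-to-1 multiplexer translates directly into code. In the sketch below (illustrative Python, with S1 assumed to be the more significant select bit), each AND term is enabled by one combination of the select lines and the final OR merges the four products:

def mux_4_to_1(d0, d1, d2, d3, s1, s2):
    """4-to-1 multiplexer; S1 is taken here as the more significant select bit (an assumption)."""
    n1, n2 = 1 - s1, 1 - s2                     # complemented select lines
    return (d0 & n1 & n2) | (d1 & n1 & s2) | (d2 & s1 & n2) | (d3 & s1 & s2)

# Select code S1 S2 = 1 0 routes data line D2 to the output:
print(mux_4_to_1(0, 0, 1, 0, 1, 0))   # 1
print(mux_4_to_1(1, 1, 0, 1, 1, 0))   # 0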
3. Demultiplexer
The demultiplexer performs the inverse function of a multiplexer. It connects a single input to
one of several outputs.
4. Decoders
A decoder is a combinational circuit with a number of output lines, only one of which is selected
at any time, depending on the pattern of the input lines. In general, a decoder has n inputs and 2^n
outputs. Decoders find many uses in digital computers:
 • One example is address decoding.
 • Another is binary-to-octal conversion.

Figure 11: Decoder with 3 inputs and 2^3 = 8 outputs
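A 3-to-8 decoder asserts exactly one of its 2^3 output lines according to the binary pattern on the inputs. A minimal Python sketch (illustrative only, with an invented function name):

def decoder_3_to_8(a2, a1, a0):
    """Return the 8 output lines of a 3-to-8 decoder; exactly one line is 1."""
    index = (a2 << 2) | (a1 << 1) | a0
    return [1 if line == index else 0 for line in range(8)]

print(decoder_3_to_8(1, 0, 1))   # [0, 0, 0, 0, 0, 1, 0, 0]  (line 5 selected)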

2.5 Sequential Circuits


Combinational circuits implement the essential functions of a digital computer. However, except
for the special case of ROM, they provide no memory or state information, elements that are also
essential to the operation of a digital computer. For that purpose, a more complex form of digital
logic circuit is used: the sequential circuit. In a combinational circuit, the value of each output
depends only on the values of the signals currently applied to the inputs. In a sequential circuit,
the outputs depend not only on the present values of the inputs but also on the past behaviour of
the circuit; another, and generally more useful, way to view it is that the current output of a
sequential circuit depends on the current input and the current state of the circuit. Sequential
circuits include storage elements that store the values of logic signals, and the contents of these
storage elements are said to represent the state of the circuit. Flip-flops, examined below, are the
simplest examples; as will be seen, sequential circuits make use of combinational circuits.

2.5.1 Flip-Flops
The simplest form of sequential circuit is the flip-flop. There are a variety of flip-flops, all of
which share two properties: the flip-flop is a bistable device, i.e., it has two stable states, and in
the absence of input it remains in its current state, so it can function as a 1-bit memory; and the
flip-flop has two outputs, Q and the complement of Q.
E.g., S-R, J-K, and D flip-flops
A. S-R Flip-Flops
The circuit has two inputs, S (Set) and R (Reset), and two outputs, Q and the complement of Q.
Mostly, events in a digital computer are synchronized to a clock pulse, so that changes occur
only when a clock pulse occurs. The S and R inputs are passed to the NOR gates only during the
clock pulse; only when the clock signal changes (from 0 to 1) can the output be affected,
according to the values of the S and R inputs.

Figure 12: S-R Flip-Flop


B. D Flip-Flops
The D (data) flip-flop -data flip-flop uses for storage of one bit of data. The output of the D flip-
flop is always equal to the most recent value applied to the input. Hence, it remembers and
produces the last input.

Figure 13: D Flip-Flop
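The behaviour of a D flip-flop, remembering the most recent input value captured on a clock pulse, can be sketched as a small Python class (an illustration of the behaviour only, assuming an edge-triggered clocked variant, not a gate-level model):

class DFlipFlop:
    """Stores one bit; the output follows the D input only on a rising clock edge."""
    def __init__(self):
        self.q = 0                  # current state (output Q); the complement of Q is 1 - q
        self._last_clock = 0

    def tick(self, d, clock):
        if clock == 1 and self._last_clock == 0:   # rising edge: capture the input
            self.q = d
        self._last_clock = clock
        return self.q

ff = DFlipFlop()
print(ff.tick(d=1, clock=1))   # 1: the input is captured on the rising edge
print(ff.tick(d=0, clock=0))   # 1: no rising edge, the stored bit is remembered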


C. J-K Flip Flops
Like S–R flip-flops, it has two inputs. However, in this case all possible combinations of input
values are valid. Intermediate state of S-R type is defined in J-K flip- flops.

Figure 14: J-K Flip-Flop
