u-4

The document provides an overview of the Arithmetic Logic Unit (ALU) and data paths in computer architecture, detailing their roles in performing arithmetic and logical operations within the CPU. It explains the structure and function of ALUs, various types of buses, and registers, as well as different bus organizations like one, two, and three bus systems. Additionally, it discusses the design complexities of ALUs, including the historical context of the 74181 ALU integrated chip and the trade-offs in ALU design related to speed and cost.

Uploaded by

Tulsi Bhelkar

http://vlabs.iitkgp.ac.in/coa/exp8/index.html

Introduction to ALU and Data Path




Representing and storing numbers were the basic operations of early computers. The real leap came when computation, that is, manipulating numbers by adding and multiplying them, entered the picture. These operations are handled by the computer's arithmetic logic unit (ALU). The ALU is the mathematical brain of a computer. One of the first complete ALUs on a single chip was the 74181, implemented as part of the 7400 series TTL (Transistor-Transistor Logic) integrated circuits and introduced by Texas Instruments in 1970.
What is an ALU?
An ALU is a digital circuit that performs arithmetic and logic operations. It is the fundamental building block of the central processing unit of a computer. A modern central processing unit (CPU) has a very powerful ALU, and it is complex in design. In addition to the ALU, a modern CPU contains a control unit and a set of registers. Most operations are performed by one or more ALUs, which load data from input registers. Registers are a small amount of storage available to the CPU and can be accessed very quickly. The control unit tells the ALU what operation to perform on the available data. After the calculation, the ALU stores the result in an output register.
The CPU can be divided into two sections: the data section and
the control section. The data section is also known as the data
path.
An Arithmetic Logic Unit (ALU) is a key component of the CPU responsible for performing arithmetic and logical operations. The collection of functional units, such as ALUs, registers, and the buses that move data within the processor, is known as the data path; together they execute instructions and manipulate data during processing tasks.
BUS
In early computers, a bus consisted of parallel electrical wires with multiple hardware connections. More generally, a bus is a communication system that transfers data between components inside a computer, or between computers. It includes hardware components such as wires and optical fibers, as well as software, including communication protocols. The registers, the ALU, and the interconnecting bus are collectively referred to as the data path.
Types of buses
There are mainly three types of buses:
1. Address bus: Transfers memory addresses from the processor
to components like storage and input/output devices. It’s one-way
communication.
2. Data bus: carries the data between the processor and other
components. The data bus is bidirectional.
3. Control bus: carries control signals from the processor to other
components. The control bus also carries the clock’s pulses. The
control bus is unidirectional.
A bus can be dedicated, i.e., used for a single purpose, or multiplexed, i.e., used for multiple purposes. Different kinds of buses give rise to different bus organizations.
Registers
In computer architecture, registers are very fast computer memory used to execute programs and operations efficiently. In this scenario, registers serve as gateways, holding values and passing signals to various components to carry out small tasks. Register signals are directed by the control unit, which also operates the registers.
The following five registers are used for in-out signal data storage:
1. Program Counter
A program counter (PC) is a CPU register that holds the address of the next instruction to be executed from memory. As each instruction is fetched, the program counter increments its stored value. It is a digital counter needed for fast execution of tasks as well as for tracking the current execution point.
2. Instruction Register
In computing, an instruction register (IR) is the part of a CPU’s
control unit that holds the instruction currently being executed or
decoded. The instruction register specifically holds the instruction
and provides it to the instruction decoder circuit.
3. Memory Address Register
The Memory Address Register (MAR) is the CPU register that stores either the memory address from which data will be fetched into the CPU, or the address to which data will be sent and stored. It is a temporary storage component in the CPU that holds the address (location) of the data supplied by the memory unit until the instruction for that particular data is executed.
4. Memory Data Register
The memory data register (MDR) is the register in a computer’s
processor, or central processing unit, CPU, that stores the data
being transferred to and from the immediate access storage.
Memory data register (MDR) is also known as memory buffer
register (MBR).
5. General Purpose Register
General-purpose registers are used to store temporary data within the microprocessor. They are multipurpose registers, available to the programmer for holding operands and intermediate results.
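As a rough illustration of how these registers cooperate, here is a sketch of a single instruction fetch in Python. The memory contents and instruction strings are invented for the example; real hardware moves binary words, not text.

```python
# Illustrative sketch (not any real ISA): how PC, MAR, MDR and IR
# cooperate during an instruction fetch.
memory = {0: "LOAD R1, 100", 1: "ADD R1, R2", 2: "STORE R1, 101"}

pc = 0          # Program Counter: address of the next instruction
mar = 0         # Memory Address Register
mdr = None      # Memory Data Register
ir = None       # Instruction Register

def fetch():
    """One fetch step: PC -> MAR, memory[MAR] -> MDR -> IR, PC += 1."""
    global pc, mar, mdr, ir
    mar = pc                # address of the next instruction goes to MAR
    mdr = memory[mar]       # memory responds with the instruction word
    ir = mdr                # the instruction lands in IR for decoding
    pc += 1                 # PC now points at the following instruction

fetch()
print(ir, pc)   # the fetched instruction, with PC already advanced
```

After `fetch()`, the instruction sits in IR ready for the decoder, and PC already points at the next instruction, exactly the hand-off described above.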
What is Data Path?
Suppose that the CPU needs to carry out any data processing action,
such as copying data from memory to a register and vice versa,
moving register content from one register to another, or adding two
numbers in the ALU. Therefore, whenever a data processing action
takes place in the CPU, the data involved for that operation follows a
particular path, or data path.
Data paths are made up of various functional components, such as
multipliers or arithmetic logic units. Data path is required to do data
processing operations.
One Bus Organization

In a one bus organization, a single bus is used for multiple purposes. A set of general-purpose registers, the program counter, the instruction register, the memory address register (MAR), and the memory data register (MDR) are connected to the single bus. Memory reads and writes are done through the MAR and MDR. The program counter points to the memory location from which the next instruction is to be fetched. The instruction register holds a copy of the current instruction. In a one bus organization, only one operand can be read from the bus at a time.
As a result, if two operands are required for an operation, the read operation must be carried out twice, which makes the process slower. One advantage of the one bus organization is that it is one of the simplest and cheapest to implement. The disadvantage is that the single bus is shared by all general-purpose registers, the program counter, the instruction register, the MAR, and the MDR, making every operation sequential. This architecture is rarely recommended nowadays.
Two Bus Organization
To overcome the disadvantage of the one bus organization, another architecture was developed, known as the two bus organization, which uses two buses. The general-purpose registers can read from and write to both buses, so two operands can be fetched at the same time. One bus carries an operand to the ALU and the other carries an operand to a register. When both buses are busy fetching operands, the output can be stored in a temporary register, and when a bus becomes free, that output can be placed on it.
There are two versions of the two bus organization, with an in-bus and an out-bus. From the in-bus the general-purpose registers can read data, and to the out-bus the general-purpose registers can write data. Here the buses are dedicated.
Three Bus Organization
In a three bus organization we have three buses: OUT bus 1, OUT bus 2, and an IN bus. From the out buses we fetch the operands, which come from the general-purpose registers and are evaluated in the ALU; the output is placed on the IN bus so it can be sent to the respective registers. This implementation is more complex but faster, because two operands can flow into the ALU in parallel while a result flows out. It was developed to overcome the busy-waiting problem of the two bus organization: after execution, the output can be placed on the bus without waiting, thanks to the extra bus. The structure is given in the figure below.
The main advantages of multiple bus organizations over the single bus are as given below.
1. Increase in the size of the registers.
2. Reduction in the number of cycles needed for execution.
3. Increased speed of execution, i.e., faster execution.
Arithmetic Logic Unit Design
The Arithmetic Logic Unit (ALU) is the heart of any CPU. An ALU performs
three kinds of operations, i.e.

 Arithmetic operations such as Addition/Subtraction,


 Logical operations such as AND, OR, etc. and
 Data movement operations such as Load and Store

ALU derives its name because it performs arithmetic and logical operations.
A simple ALU design is constructed with Combinational circuits. ALUs that
perform multiplication and division are designed around the circuits
developed for these operations while implementing the desired algorithm.
More complex ALUs are designed for executing Floating point, Decimal
operations and other complex numerical operations. These are called
Coprocessors and work in tandem with the main processor.

The design specifications of an ALU are derived from the Instruction Set Architecture (ISA): the ALU must be capable of executing the instructions of the ISA. Instruction execution in a CPU is achieved by the movement of the data associated with the instruction, and this movement is facilitated by the datapath. For example, a LOAD instruction brings data from a memory location and writes it to a GPR; the movement of data over the datapath enables the execution of the LOAD instruction. We discuss the datapath in more detail in the next chapter, on Control Unit Design. Trade-offs in ALU design are necessitated by factors such as speed of execution, hardware cost, and the width of the ALU.

Combinational ALU
A primitive ALU supporting three functions AND, OR and ADD is explained in
figure 11.1. The ALU has two inputs A and B. These inputs are fed to AND
gate, OR Gate and Full ADDER. The Full Adder also has CARRY IN as an input.
The combinational logic output of A and B is statically available at the output
of AND, OR and Full Adder. The desired output is chosen by the Select
function, which in turn is decoded from the instruction under execution.
Multiplexer passes one of the inputs as output based on this select function.
Select Function essentially reflects the operation to be carried out on the
operands A and B. Thus A and B, A or B and A+B functions are supported by
this ALU. When ALU is to be extended for more bits the logic is duplicated for
as many bits and necessary cascading is done. The AND and OR logic are
part of the logical unit while the adder is part of the arithmetic unit.
Figure 11.1 A Primitive ALU supporting AND, OR and ADD function
Even the simplest practical ALU needs more functions than these to support the ISA of the CPU. Therefore the ALU also combines 2's complement, adder, and subtractor functions as part of the arithmetic unit. The logical unit generates logical functions of the form f(x, y), such as AND, OR, NOT, and XOR. Such a combination covers most of a CPU's fixed-point data processing instructions.
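The behaviour of figure 11.1 can be sketched in software: all three results are computed at once (as combinational logic would make them "statically available") and a select code, standing in for the decoded instruction, picks one via the multiplexer. The select encoding used here (0 = AND, 1 = OR, 2 = ADD) is our own assumption, not taken from the figure.

```python
# Sketch of the primitive ALU of figure 11.1 for a 4-bit slice.
WIDTH = 4
MASK = (1 << WIDTH) - 1  # keep results to 4 bits, like a 4-bit slice

def alu(a, b, select, carry_in=0):
    # all three combinational outputs exist simultaneously
    and_out = a & b
    or_out = a | b
    total = a + b + carry_in
    add_out = total & MASK
    carry_out = (total >> WIDTH) & 1
    # multiplexer: the select function chooses which result is passed on
    result = {0: and_out, 1: or_out, 2: add_out}[select]
    return result, carry_out

print(alu(0b0110, 0b0011, 0))  # AND
print(alu(0b0110, 0b0011, 1))  # OR
print(alu(0b1100, 0b0110, 2))  # ADD, sum wraps to 4 bits with carry out
```

Extending the ALU to more bits corresponds to widening `WIDTH`; in hardware, the per-bit logic is duplicated and the carries are cascaded.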
Figure 11.2
ALU Symbol
So far, what we have seen is a primitive ALU. An ALU can be as complex as the variety of functions it carries out. Powerful modern CPUs have powerful and versatile ALUs, and many include multiple ALUs to improve throughput.

74181 Arithmetic Logic Unit Integrated Chip


The 74181 is a cascadable ALU from the TTL era and the first of its kind on a single chip. ALU operation and complexity are well illustrated by the features of the 74181, although modern ALUs offer much more. Even today the 74181 is of academic interest in teaching computer architecture.

 4-bit arithmetic and logical unit for fixed-point operations
 Inputs: two operands A and B of 4-bit width
 Output: F0-F3 (4-bit width)
 Mode selection with M - defines arithmetic or logical mode
 Function select with 4 lines (S0-S3)
 16 arithmetic and 16 logical operations possible (as detailed in the function table)
 Carry-in used as a special input; Cin disabled for logical operations
 Look-ahead carry adder principle employed for faster output propagation, with P, G outputs
 Carry output
 A=B comparator output
Ref: Fairchild datasheet 74181
As we see from the table, the logical operations include AND, OR, NOT, NAND, NOR, XOR, NOT A, NOT B, etc. The arithmetic operations include add, subtract, shift, 2's complement, compare, double, etc. There are a few unusual functions too, which are rarely used. The functions and logic in the table are a good example of what an ALU is. In the microprocessor era the 74181 is no longer used, as the ALU is built into the microprocessor. The P, G, and Cout outputs are intended to allow k copies of the 74181 to be combined, using either ripple-carry propagation or carry lookahead, to form a 4k-bit ALU. This ALU is thus expandable to wider words by cascading, as shown in figure 11.3. The AMD 2901 is a 4-bit, bit-sliced, cascadable ALU of the microprocessor era; it supports 3 arithmetic and 5 logical functions.
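The ripple-carry expansion scheme can be sketched as follows: two 4-bit adder slices are chained by feeding the carry-out of the low slice into the carry-in of the high slice. This is a simplified model of the addition path only; the real 74181 also offers the faster carry-lookahead expansion through its P and G outputs.

```python
# Cascading two 4-bit slices into an 8-bit adder via ripple carry.
def add4(a, b, carry_in):
    """One 4-bit adder slice: returns (4-bit sum, carry out)."""
    total = a + b + carry_in
    return total & 0xF, (total >> 4) & 1

def add8(a, b):
    """Two cascaded slices: low nibble first, its carry feeds the high nibble."""
    lo, c = add4(a & 0xF, b & 0xF, 0)
    hi, c = add4((a >> 4) & 0xF, (b >> 4) & 0xF, c)
    return (hi << 4) | lo, c

print(add8(200, 100))  # 300 overflows 8 bits, so the final carry out is set
```

The same chaining applied k times yields a 4k-bit ALU, which is exactly the expansion the P, G, and Cout pins exist to support.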
What is ALU (Arithmetic Logic Unit)?
In a computer system, the ALU (arithmetic logic unit) is a main component of the central processing unit that performs arithmetic and logic operations. It is also known as an integer unit (IU): an integrated circuit within a CPU or GPU that is the last component to perform calculations in the processor. It can carry out all common arithmetic and logic operations, such as addition, subtraction, and shifting, as well as Boolean operations (XOR, OR, AND, and NOT). The ALU operates on binary numbers for both mathematical and bitwise operations. The arithmetic logic unit is sometimes split into an AU (arithmetic unit) and an LU (logic unit). The operands and the operation code supplied to the ALU tell it which operation to perform on the input data. When the ALU completes the processing of its input, the result is sent to the computer's memory.
Besides calculations related to addition and subtraction, ALUs handle the multiplication of two integers, since they are designed to execute integer calculations; the result is therefore also an integer. However, division is commonly not performed by the ALU, since division may produce a result that is a floating-point number. Instead, the floating-point unit (FPU) usually handles division; other non-integer calculations can also be performed by the FPU.

Additionally, engineers can design an ALU to perform almost any type of operation. However, the ALU becomes more costly as its operations become more complex, because it dissipates more heat and takes up more space in the CPU. This is why engineers aim for an ALU that is powerful enough to make the CPU fast, but not so complex that its cost and power draw become prohibitive.
The calculations needed by the CPU are handled by the arithmetic logic unit; many of them are logical in nature. Making the CPU more powerful largely depends on how the ALU is designed, but a more powerful ALU creates more heat and draws more power. There must therefore be a balance between how complex and powerful the ALU is and how costly it becomes. This is the main reason faster CPUs are more costly, draw more power, and dissipate more heat. Arithmetic and logic operations are the main operations performed by the ALU; it also performs bit-shifting operations.

Although the ALU is a major component of the processor, its design and function may differ between processors. For instance, some ALUs are designed to perform only integer calculations, while others handle floating-point operations. Some processors contain a single arithmetic logic unit, while others contain numerous ALUs to complete calculations in parallel. The operations performed by the ALU are:

o Logical operations: these consist of NOR, NOT, AND, NAND, OR, XOR, and more.
o Bit-shifting operations: moving the bit positions to the right or left by a certain number of places; a left shift acts as a multiplication by a power of two.
o Arithmetic operations: primarily bit addition and subtraction. Multiplication and division circuits are more costly to build, so multiplication can be implemented with repeated addition, and division with repeated subtraction.
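The two points above can be seen directly in code: shifting left by n multiplies by 2^n, shifting right divides by 2^n (discarding the remainder), and multiplication can be built from repeated addition when no multiplier circuit is available.

```python
# Left shift multiplies by a power of two; right shift divides.
x = 5
print(x << 1)    # 5 * 2
print(x << 3)    # 5 * 8
print(40 >> 2)   # 40 // 4

# Multiplication as repeated addition, the substitute an ALU
# without a multiplier circuit could fall back on.
def mul_by_addition(a, b):
    acc = 0
    for _ in range(b):   # add a to the accumulator b times
        acc += a
    return acc

print(mul_by_addition(7, 6))
```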

Arithmetic Logic Unit (ALU) Signals


The ALU has a variety of input and output electrical connections, which carry digital signals between the ALU and the external electronics.

The ALU receives input signals from external circuits and, in response, sends output signals to the external electronics.

o Data: the ALU has three parallel buses, comprising two input operands and one output operand. Each of these buses handles the same number of signals.
o Opcode: the operation selection code tells the ALU what type of operation it is to perform, arithmetic or logic.
o Status outputs: the status outputs provide supplemental information about the result of an ALU operation, as multiple individual signals. General-purpose ALUs usually have status signals such as overflow, zero, carry out, negative, and more. When the ALU completes an operation, the status output signals are stored in external registers, making them available for future ALU operations.
o Status inputs: the status inputs allow the ALU to access further information to complete an operation successfully, for example the stored carry-out from a previous ALU operation, known as the single "carry-in" bit.
Configurations of the ALU
The following describes how the ALU interacts with the processor. Every arithmetic logic unit falls under one of the following configurations:

o Instruction Set Architecture


o Accumulator
o Stack
o Register to Register
o Register Stack
o Register Memory

Accumulator
The accumulator holds the intermediate result of every operation, which keeps the Instruction Set Architecture (ISA) simple because each instruction only needs to name one explicit operand.
Accumulator machines are generally fast and less complex, but additional code must be written to load the accumulator with the proper values. Unfortunately, with a single accumulator it is very difficult to exploit parallelism. A desktop calculator is an example of an accumulator machine.

Stack
Operands and the results of the latest operations are stored on a stack, a small set of registers organized in last-in, first-out order. When new values are added, they are pushed on top and the older values move down the stack.

Register-Register Architecture
It includes fields for one destination operand and two source operands, and is also known as a 3-register operation machine. This Instruction Set Architecture needs a longer instruction format to encode the three operands, one destination and two sources. After an operation completes, the result must be written back to the registers, which adds work, and the instruction word must be longer. It can also cause synchronization issues if a write-back rule is enforced.

MIPS is an example of register-to-register architecture. For input it uses two operand registers, and for output it uses a third distinct register. Storage space is hard to maintain as each operand needs a distinct register, so registers are at a premium at all times. Moreover, some operations may be difficult to perform.

Register - Stack Architecture


Generally, the combination of register and accumulator operation is known as register-stack architecture. Operations to be performed in the register-stack architecture are pushed onto the top of the stack, and their results are held at the top of the stack. Using Reverse Polish notation, complex mathematical expressions can be broken down into simple steps. Some programmers use the concept of a binary tree to represent operands, so the Reverse Polish methodology comes easily to them, whereas it can be difficult for other programmers. New hardware is needed to carry out the push and pop operations.

Booth’s Algorithm




Booth's algorithm gives a procedure for multiplying binary integers in signed 2's complement representation efficiently, i.e., with fewer additions and subtractions. It exploits the fact that a string of 0's in the multiplier requires no addition, only shifting, and that a string of 1's running from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m. As in all multiplication schemes, Booth's algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged, according to the following rules:
1. The multiplicand is subtracted from the partial product upon
encountering the first least significant 1 in a string of 1’s in the
multiplier
2. The multiplicand is added to the partial product upon
encountering the first 0 (provided that there was a previous ‘1’) in
a string of 0’s in the multiplier.
3. The partial product does not change when the multiplier bit is
identical to the previous multiplier bit.
Booth’s Algorithm optimizes binary multiplication.
Hardware Implementation of Booth’s Algorithm – The hardware implementation of Booth's algorithm requires the register configuration shown in the figure below.
Booth’s Algorithm Flowchart –
We name the registers AC (accumulator), BR (multiplicand register), and QR (multiplier register). Qn designates the least significant bit of the multiplier in register QR. An extra flip-flop, Qn+1, is appended to QR so that two multiplier bits can be inspected at a time. The flowchart for Booth's algorithm is shown below.

Flow chart of Booth’s Algorithm.

AC and the appended bit Qn+1 are initially cleared to 0, and the sequence counter SC is set to a number n equal to the number of bits in the multiplier. The two multiplier bits in Qn and Qn+1 are inspected. If the two bits are equal to 10, it means that the first 1 in a string of 1's has been encountered; this requires subtraction of the multiplicand from the partial product in AC. If the two bits are equal to 01, it means that the first 0 in a string of 0's has been encountered; this requires the addition of the multiplicand to the partial product in AC. When the two bits are equal, the partial product does not change. An overflow cannot occur, because additions and subtractions of the multiplicand alternate; as a consequence, the two numbers that are added always have opposite signs, a condition that excludes overflow. The next step is to shift the partial product and the multiplier (including Qn+1) to the right. This is an arithmetic shift right (ashr) operation, which shifts AC and QR to the right and leaves the sign bit in AC unchanged. The sequence counter is decremented and the computational loop is repeated n times.
The product of negative numbers is important. When multiplying negative numbers, we find the 2's complement of the number to change its sign, because it is easier to add than to perform binary subtraction. The product of two negative numbers is demonstrated below, along with the 2's complements used.
Example – A numerical example of booth’s algorithm is shown
below for n = 4. It shows the step by step multiplication of -5 and -7.
BR = -5 = 1011,
BR' = 0100, <-- 1's Complement (change the values 0 to 1 and 1
to 0)
BR'+1 = 0101 <-- 2's Complement (add 1 to the Binary value
obtained after 1's complement)
QR = -7 = 1001 <-- 2's Complement of 0111 (7 = 0111 in Binary)
The explanation of the first step is as follows:

AC = 0000, QR = 1001, Qn+1 = 0, SC = 4
Qn Qn+1 = 10
So we do AC = AC + (BR)' + 1, which gives AC = 0101.
On arithmetic right shifting AC and QR, we get
AC = 0010, QR = 1100 and Qn+1 = 1
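The register-level procedure above can be sketched in Python. BR, AC, QR and the appended bit Qn+1 follow the flowchart, with all values kept to n bits; this is an illustrative model of the hardware, not an efficient software multiplier.

```python
def booth_multiply(multiplicand, multiplier, n):
    """Booth's algorithm for n-bit signed operands, following the
    AC / QR / Q(n+1) / SC register scheme described above."""
    mask = (1 << n) - 1
    br = multiplicand & mask          # BR holds the multiplicand
    br_neg = (-multiplicand) & mask   # BR' + 1, its 2's complement
    ac = 0
    qr = multiplier & mask            # QR holds the multiplier
    q_extra = 0                       # the appended bit Q(n+1)
    for _ in range(n):                # SC counts down from n
        pair = (qr & 1, q_extra)
        if pair == (1, 0):            # first 1 of a string: AC = AC - BR
            ac = (ac + br_neg) & mask
        elif pair == (0, 1):          # first 0 after 1's: AC = AC + BR
            ac = (ac + br) & mask
        # arithmetic shift right of AC:QR:Q(n+1), AC's sign bit preserved
        q_extra = qr & 1
        qr = (qr >> 1) | ((ac & 1) << (n - 1))
        ac = (ac >> 1) | (ac & (1 << (n - 1)))
    product = (ac << n) | qr          # AC:QR is the 2n-bit product
    if product & (1 << (2 * n - 1)):  # interpret the result as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-5, -7, 4))      # the worked example above: 35
```

Tracing the first iteration for -5 × -7 reproduces the step shown above: AC goes from 0000 to 0101 (AC + BR' + 1), and after the arithmetic right shift AC = 0010, QR = 1100, Qn+1 = 1.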
Advantages:
Faster than traditional multiplication: Booth’s algorithm is
faster than traditional multiplication methods, requiring fewer steps
to produce the same result.
Efficient for signed numbers: The algorithm is designed
specifically for multiplying signed binary numbers, making it a more
efficient method for multiplication of signed numbers than
traditional methods.
Lower hardware requirement: The algorithm requires fewer
hardware resources than traditional multiplication methods, making
it more suitable for applications with limited hardware resources.
Widely used in hardware: Booth’s algorithm is widely used in
hardware implementations of multiplication operations, including
digital signal processors, microprocessors, and FPGAs.
Disadvantages:
Complex to understand: The algorithm is more complex to
understand and implement than traditional multiplication methods.
Limited applicability: The algorithm is only applicable for
multiplication of signed binary numbers, and cannot be used for
multiplication of unsigned numbers or numbers in other formats
without additional modifications.
Higher latency: The algorithm requires multiple iterations to
calculate the result of a single multiplication operation, which
increases the latency or delay in the calculation of the result.
Higher power consumption: The algorithm consumes more
power compared to traditional multiplication methods, especially for
larger inputs.
Applications of Booth’s Algorithm:
1. Microprocessors and computer chips: Booth's algorithm is used in the hardware implementation of arithmetic logic units (ALUs) within microprocessors and computer chips. These components are responsible for performing arithmetic and logical operations on binary data. Efficient multiplication is fundamental in many applications, including scientific computing, graphics processing, and cryptography. Booth's algorithm reduces the number of bit shifts and additions needed to perform a multiplication, resulting in faster execution and better overall performance.
2. Digital Signal Processing (DSP): DSP applications frequently involve complex mathematical tasks such as filtering and convolution. Multiplying large binary numbers is a principal operation in these tasks. Booth's algorithm allows DSP systems to perform multiplications more efficiently, enabling real-time processing of audio, video, and other kinds of signals.
3. Hardware Accelerators: many specialized hardware accelerators are designed to perform specific tasks more efficiently than general-purpose processors. Booth's algorithm can be integrated into these accelerators to speed up multiplication in applications such as image processing, neural networks, and AI.
4. Cryptography: cryptographic algorithms, such as those used in encryption and digital signatures, often involve modular exponentiation, which requires efficient multiplication of large numbers. Booth's algorithm can be used to speed up the modular multiplication step in these algorithms, improving the overall efficiency of cryptographic operations.
5. High-Performance Computing (HPC): in scientific simulations and numerical computations, large-scale multiplications are frequently encountered. Booth's algorithm can be implemented in hardware or software to optimize these multiplication operations and improve the overall performance of HPC systems.
6. Embedded Systems: embedded systems often have limited resources in terms of processing power and memory. By using Booth's algorithm, designers can optimize multiplication in these systems, allowing them to perform more efficiently while consuming less energy.
7. Network Packet Processing: network devices and routers often need to perform computations on packet headers and payloads. Multiplication operations are regularly used in these computations, and Booth's algorithm can help reduce processing time and power consumption in these devices.
8. Digital Filters and Equalizers: digital filters and equalizers in applications such as audio processing and communication systems require efficient multiplication of coefficients with input samples. Booth's algorithm can be used to speed up these multiplications, leading to faster and more accurate filtering operations.
Essentially, Booth's algorithm finds application wherever efficient binary multiplication is required, especially in scenarios where speed, power efficiency, and hardware optimization are important factors.

IEEE Standard 754 Floating Point Numbers


The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point
computation which was established in 1985 by the Institute of Electrical and Electronics Engineers
(IEEE). The standard addressed many problems found in the diverse floating point implementations that
made them difficult to use reliably and reduced their portability. IEEE Standard 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms.

There are several ways to represent floating point numbers, but IEEE 754 is the most widely used. IEEE 754 has 3 basic components:

1. The Sign of Mantissa –


This is as simple as the name. 0 represents a positive number while 1 represents a negative
number.

2. The Biased exponent –


The exponent field needs to represent both positive and negative exponents. A bias is added to
the actual exponent in order to get the stored exponent.

3. The Normalised Mantissa –

The mantissa is the part of a number in scientific notation or a floating-point number consisting of its significant digits. In binary we have only 2 digits, 0 and 1, so a normalised mantissa is one with a single leading 1 to the left of the binary point.

IEEE 754 numbers are divided into two based on the above three components: single precision and
double precision.
TYPES              SIGN         BIASED EXPONENT   NORMALISED MANTISSA   BIAS

Single precision   1 (bit 31)   8 (bits 30-23)    23 (bits 22-0)        127

Double precision   1 (bit 63)   11 (bits 62-52)   52 (bits 51-0)        1023
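The field widths in the table above can be checked programmatically. As an illustrative sketch (the helper name `decode_single` is chosen here for clarity, it is not part of any standard API), Python's `struct` module can expose the raw bit fields of a single-precision encoding:

```python
import struct

def decode_single(x):
    """Split a number's IEEE 754 single-precision encoding into its
    sign bit, 8-bit biased exponent, and 23-bit mantissa fields."""
    bits = int.from_bytes(struct.pack('>f', x), 'big')  # big-endian float32
    sign = bits >> 31                # bit 31
    exponent = (bits >> 23) & 0xFF   # bits 30-23
    mantissa = bits & 0x7FFFFF       # bits 22-0
    return sign, exponent, mantissa

# sign = 0, biased exponent = 133, mantissa = 0b01010100100000000000000
print(decode_single(85.125))
```

The same split works for double precision with `'>d'`, an 11-bit exponent mask, and a 52-bit mantissa mask.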

Example –

85.125

85 = 1010101

0.125 = .001

85.125 = 1010101.001

=1.010101001 x 2^6

sign = 0
1. Single precision:

biased exponent 127+6=133

133 = 10000101

Normalised mantissa = 010101001

we will add 0's to complete the 23 bits

The IEEE 754 Single precision is:

= 0 10000101 01010100100000000000000

This can be written in hexadecimal form 42AA4000

2. Double precision:

biased exponent 1023+6=1029

1029 = 10000000101

Normalised mantissa = 010101001

we will add 0's to complete the 52 bits

The IEEE 754 Double precision is:

= 0 10000000101 0101010010000000000000000000000000000000000000000000

This can be written in hexadecimal form 4055480000000000
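Both hand-computed encodings above can be cross-checked with Python's `struct` module, which packs a value into its IEEE 754 byte representation (a quick sanity check, not part of the original derivation):

```python
import struct

# '>f' packs big-endian single precision, '>d' big-endian double precision.
print(struct.pack('>f', 85.125).hex().upper())  # 42AA4000
print(struct.pack('>d', 85.125).hex().upper())  # 4055480000000000
```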

Special Values: IEEE has reserved some bit patterns for special values that would otherwise be ambiguous.

 Zero –
Zero is a special value denoted with an exponent and mantissa of 0. -0 and +0 are distinct
bit patterns, though they compare as equal.

 Denormalised –
If the exponent is all zeros, but the mantissa is not then the value is a denormalized number.
This means this number does not have an assumed leading one before the binary point.
 Infinity –
The values +infinity and -infinity are denoted with an exponent of all ones and a mantissa of all
zeros. The sign bit distinguishes between negative infinity and positive infinity. Operations with
infinite values are well defined in IEEE.

 Not A Number (NAN) –


The value NaN (Not a Number) is used to represent the result of an invalid operation. It is
encoded with an exponent field of all ones and a non-zero mantissa (the sign bit may be either
value). NaN can also be used to denote a variable that does not yet hold a valid value.

EXPONENT   MANTISSA   VALUE

0          0          exact 0

255        0          Infinity

0          not 0      denormalised

255        not 0      Not a Number (NaN)
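These special values behave as described in any conforming IEEE 754 implementation; Python floats (which are IEEE 754 doubles) can illustrate them directly:

```python
import math

# Signed zeros: distinct bit patterns, but they compare as equal.
print(-0.0 == 0.0)                  # True
print(math.copysign(1.0, -0.0))     # -1.0  (the sign of -0 is preserved)

# Infinity compares greater than any finite value.
print(math.inf > 1.7e308)           # True

# NaN is unordered: it is not equal even to itself.
nan = float('nan')
print(nan == nan)                   # False
```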

The same scheme applies for double precision (just replace 255 by 2047, the all-ones value of the 11-bit exponent). Ranges of floating point numbers:

                   Denormalized                         Normalized                          Approximate Decimal

Single Precision   ± 2^-149 to (1 - 2^-23) × 2^-126     ± 2^-126 to (2 - 2^-23) × 2^127     approximately ± 10^-44.85 to 10^38.53

Double Precision   ± 2^-1074 to (1 - 2^-52) × 2^-1022   ± 2^-1022 to (2 - 2^-52) × 2^1023   approximately ± 10^-323.3 to 10^308.3

The range of positive floating point numbers can be split into normalized numbers, and denormalized
numbers, which use only a portion of the fraction's precision. Since every floating-point number has a
corresponding negated value, the ranges above are symmetric around zero.

There are five distinct numerical ranges that single-precision floating-point numbers are not able to
represent with the scheme presented so far:
1. Negative numbers less than -(2 - 2^-23) × 2^127 (negative overflow)

2. Negative numbers greater than -2^-149 (negative underflow)

3. Zero

4. Positive numbers less than 2^-149 (positive underflow)

5. Positive numbers greater than (2 - 2^-23) × 2^127 (positive overflow)

Overflow generally means that values have grown too large to be represented. Underflow is a less
serious problem because it just denotes a loss of precision, and an underflowed value is guaranteed
to be closely approximated by zero.
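Python floats are IEEE 754 doubles, so the overflow and underflow behaviour described above can be observed directly (a small illustrative sketch):

```python
import sys

# Largest finite double; doubling it overflows to +Infinity.
print(sys.float_info.max)        # about 1.7976931348623157e+308
print(sys.float_info.max * 2)    # inf   (positive overflow)

# Smallest positive denormalized double is 2^-1074 (about 5e-324);
# halving it rounds to zero (positive underflow).
print(5e-324 / 2)                # 0.0
```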

Table of the total effective range of finite IEEE floating-point numbers is shown below:

         Binary                     Decimal

Single   ± (2 - 2^-23) × 2^127     approximately ± 10^38.53

Double   ± (2 - 2^-52) × 2^1023    approximately ± 10^308.25

Special Operations –

Operation               Result

n ÷ ±Infinity           0

±Infinity × ±Infinity   ±Infinity

±nonZero ÷ ±0           ±Infinity

±finite × ±Infinity     ±Infinity


Operation                 Result

+Infinity + +Infinity     +Infinity

+Infinity − (−Infinity)   +Infinity

−Infinity − (+Infinity)   −Infinity

−Infinity + (−Infinity)   −Infinity

±0 ÷ ±0                   NaN

±Infinity ÷ ±Infinity     NaN

±Infinity × 0             NaN

NaN == NaN                False
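Most of the rules in these tables can be reproduced with Python floats (note one caveat: Python raises ZeroDivisionError for division by a literal zero instead of returning Infinity, a deliberate deviation from IEEE 754):

```python
import math

inf = math.inf

print(1.0 / inf)                     # 0.0   (n ÷ ±Infinity → 0)
print(inf * inf)                     # inf   (±Infinity × ±Infinity → ±Infinity)
print(math.isnan(inf - inf))         # True  (Infinity − Infinity → NaN)
print(math.isnan(inf / inf))         # True  (±Infinity ÷ ±Infinity → NaN)
print(math.isnan(inf * 0.0))         # True  (±Infinity × 0 → NaN)
print(float('nan') == float('nan'))  # False (NaN == NaN → False)
```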

Advantages of IEEE 754

1. Portability: Ensures consistent floating-point representation across different hardware
and software platforms.

2. Accuracy: Provides mechanisms for rounding and handling special values (like NaN and
infinity).

3. Flexibility: Supports various formats and precision levels to meet different computational
needs.

To evaluate the number 32.75 using single precision IEEE 754 representation, follow these steps:

Step 1: Convert the Decimal Number to Binary

1. Convert the integer part (32) to binary:

   o 32 in binary is 100000.

2. Convert the fractional part (0.75) to binary:

   o 0.75 × 2 = 1.5 → 1 (whole part)
   o 0.5 × 2 = 1.0 → 1 (whole part)

   Thus, 0.75 in binary is 0.11.

3. Combine both parts:

   o The binary representation of 32.75 is 100000.11.

Step 2: Normalize the Binary Number

Normalize the binary number 100000.11:

 Move the binary point left by 5 positions to get 1.0000011 × 2^5.

Step 3: Determine the Sign, Exponent, and Mantissa

1. Sign Bit:

   o Since 32.75 is positive, the sign bit is 0.

2. Exponent:

   o The exponent (5) needs to be biased. For single precision, the bias is 127:

     Biased Exponent = 5 + 127 = 132

   o 132 in binary is 10000100.

3. Mantissa:

   o The mantissa is taken from the normalized binary representation (excluding the
     leading 1).
   o From 1.0000011, the mantissa is 0000011, followed by zeros to fill 23 bits.
   o Thus, the mantissa is 00000110000000000000000.

Step 4: Combine All Parts

Now we can combine the sign bit, biased exponent, and mantissa:

 Sign Bit: 0
 Exponent: 10000100
 Mantissa: 00000110000000000000000

Final IEEE 754 Representation

Putting it all together, the single precision IEEE 754 representation of 32.75 is:

0 10000100 00000110000000000000000

In hexadecimal, this representation is 0x42030000.
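The hand computation can be cross-checked by packing 32.75 with Python's `struct` module (a quick verification sketch):

```python
import struct

# Big-endian single precision bytes of 32.75, shown as hexadecimal.
print(struct.pack('>f', 32.75).hex().upper())  # 42030000
```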
