Subject: COA (ECEg 3143) Chapter: 3 Memory Concepts

CHAPTER 3
Memory Concepts

1. Concept of Memory (Classification and hierarchy of Memory systems)


We use memory to store information, which includes both programs and data, and we use different kinds of memory at different levels. The memory of a computer is broadly divided into two categories:
 Internal
 External
Internal memory is used by the CPU to perform its tasks, while external memory is used to store bulk information, such as large software and data.
Memory stores information in digital form. The memory hierarchy is:
 Register (internal memory)
 Cache memory (internal memory)
 Main memory (internal memory)
 Magnetic disk (external memory)
 Removable media, e.g., magnetic tape (external memory)

Register: Registers are part of the central processing unit, so they reside inside the CPU. Information from main memory is brought to the CPU and kept in registers. These are the fastest storage devices.
Cache Memory: Cache memory is a storage device placed between the CPU and main memory. Caches are semiconductor memories and are faster than main memory. Because of the higher cost, we cannot replace the whole main memory with this faster memory; instead, the most recently used information is kept in the cache memory, brought in from main memory as it is used.
Main Memory: Like cache memory, main memory is also semiconductor memory, but it is relatively slower. Information (whether data or program) must first be brought into main memory, since the CPU can work only with information available in main memory.
Magnetic Disk: This is a bulk storage device. Many applications deal with huge amounts of data, and we do not have enough semiconductor memory to keep all this information in the computer. Moreover, semiconductor main memories are volatile: they lose their contents once the computer is switched off. For permanent storage we use magnetic disks, whose storage capacity is very high.
Removable media: Different applications use different data, and it may not be possible to keep all the information on magnetic disk. Data that is not currently in use can therefore be kept on removable media. Magnetic tape is one kind of removable medium; a CD, which is an optical device, is another.

Registers, cache memory, and main memory are internal memory; magnetic disks and removable media are external memory. Internal memories are semiconductor memories, which are categorised as volatile or non-volatile.
RAM: Random access memories are volatile in nature; as soon as the computer is switched off, the contents of memory are lost.
ROM: Read-only memories are non-volatile in nature.
Several types of ROM are available:
 PROM: Programmable Read-Only Memory; it can be programmed once as per user requirements.
 EPROM: Erasable Programmable Read-Only Memory; the contents can be erased and new data stored, but the whole memory must be erased at once.
 EEPROM: Electrically Erasable Programmable Read-Only Memory; the contents of a particular location can be changed without affecting the contents of other locations.

2. Main Memory
The main memory of a computer is semiconductor memory. It consists of two kinds of memory:
 RAM: random access memory, which is volatile in nature.
 ROM: read-only memory, which is non-volatile.
Permanent information is kept in ROM, while the user space is basically in RAM. The maximum size of main memory that can be used in any computer is determined by the addressing scheme. Main memory is usually designed to store and retrieve data in word-length quantities; the word length of a computer is generally defined by the number of bits actually stored or retrieved in one main memory access.
Data transfer between main memory and the CPU takes place through two CPU registers:
 MAR: Memory Address Register
 MBR: Memory Buffer Register
The transfer takes place over the memory bus, which also includes control lines such as Read, Write, and Memory Function Complete (MFC) for coordinating the data transfer.
The CPU initiates a memory operation by loading the appropriate address into MAR.
For a memory read operation: the CPU sets the Read control line to 1. The contents of the addressed memory location are then brought to MBR, and the memory control circuitry indicates this to the CPU by setting MFC to 1.
For a memory write operation: the CPU places the data into MBR and sets the Write control line to 1. Once the contents of MBR are stored in the specified memory location, the memory control circuitry indicates the end of the operation by setting MFC to 1.
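As a rough illustration, here is a minimal Python sketch of this handshake; the `Memory` class and its fields are simplified stand-ins for MAR, MBR, and the MFC flag, and the control lines are implicit in the method calls rather than modelled explicitly:

```python
class Memory:
    """Toy model of the CPU/main-memory transfer via MAR, MBR, and MFC."""
    def __init__(self, size):
        self.cells = [0] * size
        self.mar = 0      # Memory Address Register: the target address
        self.mbr = 0      # Memory Buffer Register: the data in transit
        self.mfc = False  # Memory Function Complete flag

    def read(self, address):
        self.mar = address               # CPU loads the address into MAR
        self.mfc = False                 # Read control line set to 1 (implicit)
        self.mbr = self.cells[self.mar]  # memory places the word in MBR
        self.mfc = True                  # control circuitry signals completion
        return self.mbr

    def write(self, address, data):
        self.mar = address               # CPU loads the address into MAR
        self.mbr = data                  # CPU places the data into MBR
        self.mfc = False                 # Write control line set to 1 (implicit)
        self.cells[self.mar] = self.mbr  # MBR contents stored at the location
        self.mfc = True                  # MFC set to 1: end of operation

mem = Memory(16)
mem.write(5, 42)
assert mem.read(5) == 42 and mem.mfc
```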

A useful measure of the speed of a memory unit is the time that elapses between the initiation of an operation and its completion; this is referred to as the memory access time. Another measure is the memory cycle time, the minimum time delay between the initiation of two independent memory operations. The memory cycle time is slightly longer than the memory access time.

2.1 Binary Storage Cell


The binary storage cell is the basic building block of a memory unit. A binary storage cell that stores one bit of information can be modelled by an SR latch with associated gates. This model of a binary storage cell is shown in the figure.

Memory constructed with the help of transistors is known as semiconductor memory. Semiconductor memories are termed Random Access Memory (RAM) because any memory location can be accessed at random.
Depending on the technology used to construct a RAM, there are two types:
 SRAM: Static Random Access Memory
 DRAM: Dynamic Random Access Memory

Dynamic RAM (DRAM)

A DRAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as binary 1 or 0. Because capacitors have a natural tendency to discharge due to leakage current, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to this tendency of the stored charge to leak away, even with power continuously applied. A typical DRAM structure for an individual cell that stores one bit of information is shown in the figure.

Write operation: A voltage signal is applied to the bit line B; a high voltage represents 1 and a low voltage represents 0. A signal is then applied to the address line, which turns on the transistor T, allowing charge to be transferred to the capacitor.
Read operation: When a signal is applied to the address line, the transistor T turns on and the charge stored on the capacitor is fed out onto the bit line B.
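The need for refresh can be illustrated with a toy model in which a cell's charge decays at an artificial rate; the leak rate and threshold below are purely illustrative, not real device parameters:

```python
LEAK_PER_TICK = 0.1   # illustrative leak rate per time step
THRESHOLD = 0.5       # charge above this level reads as 1, below as 0

class DramCell:
    """Toy DRAM cell: stored charge leaks away unless periodically refreshed."""
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        # leakage current slowly discharges the capacitor
        self.charge = max(0.0, self.charge - LEAK_PER_TICK)

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

    def refresh(self):
        self.write(self.read())  # read the bit and rewrite it at full charge

cell = DramCell()
cell.write(1)
for _ in range(20):
    cell.tick()
    cell.refresh()        # with periodic refresh, the stored 1 survives
assert cell.read() == 1   # without the refresh calls, it would decay to 0
```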

Static RAM (SRAM)


In an SRAM, binary values are stored using traditional flip-flops constructed from transistors. A static RAM holds its data as long as power is supplied to it. A typical SRAM cell constructed with transistors is shown in the figure.
Four transistors (T1, T2, T3, T4) are cross-connected in an arrangement that produces a stable logic state.
In logic state 1, point C1 is high and point C2 is low; in this state T1 and T4 are off, and T2 and T3 are on.
In logic state 0, point C1 is low and point C2 is high; in this state T1 and T4 are on, and T2 and T3 are off.
Both states are stable as long as the dc supply voltage is applied. The address line is used to open or close a switch, which is simply another pair of transistors: the address line controls T5 and T6, and when a signal is applied to the line, both transistors are switched on, allowing a read or write operation.
Write operation: The desired bit value is applied to line B, and its complement is applied to line B̄. This forces the four transistors (T1, T2, T3, T4) into the proper state.
Read operation: The bit value is read from line B. When a signal is applied to the address line, the signal at point C1 is transferred to the bit line B.

SRAM versus DRAM:


 Both static and dynamic RAMs are volatile: they retain information only as long as the power supply is applied.
 A dynamic memory cell is simpler and smaller than a static memory cell. A DRAM is therefore denser (higher packing density, i.e., more cells per unit area) and less expensive than a corresponding SRAM.

 DRAM requires supporting refresh circuitry. For larger memories, the fixed cost of the refresh circuitry is more than compensated for by the lower cost of DRAM cells.
 SRAM cells are generally faster than DRAM cells. Therefore, SRAM is used to construct faster memory modules (such as cache memory).

2.2 Internal Organization of Memory Chips


A memory cell is capable of storing 1 bit of information. A number of memory cells are organized in the form of a matrix to form a memory chip. One such organization is shown in the figure.

Figure: 16 X 8 Memory Organization

Each row of cells constitutes a memory word, and all cells of a row are connected to a common line referred to as the word line. An address decoder drives the word lines: at any particular instant, one word line is enabled, depending on the address present on the address bus. The cells in each column are connected by two lines known as bit lines, which are connected to the data input and data output lines through a Sense/Write circuit.
A memory chip consisting of 16 words of 8 bits each is usually referred to as a 16 x 8 organization. The data input and data output lines of each Sense/Write circuit are connected to a single bidirectional data line to reduce the number of pins required. For 16 words, we need an address bus of size 4. In addition to the address and data lines, two control lines, R/W and Chip Select (CS), are provided. The R/W line specifies the required operation (read or write), and the CS line selects a given chip in a multi-chip memory system.
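The relationship between the number of words, the address lines, and the control signals can be sketched in Python; the `MemoryChip` class below is an illustrative model of the 16 x 8 organization, not an actual hardware description:

```python
import math

class MemoryChip:
    """Sketch of a 16 x 8 chip: 4 address lines, 8 data lines, CS and R/W."""
    def __init__(self, words=16, bits_per_word=8):
        self.words = words
        self.address_lines = int(math.log2(words))  # 16 words -> 4 lines
        self.mask = (1 << bits_per_word) - 1        # 8-bit word mask
        self.data = [0] * words

    def access(self, cs, rw, address, data_in=None):
        if not cs:                 # chip not selected in a multi-chip system
            return None
        assert 0 <= address < self.words
        if rw:                     # R/W = 1: read the addressed word
            return self.data[address]
        self.data[address] = data_in & self.mask    # R/W = 0: write
        return None

chip = MemoryChip()
chip.access(cs=True, rw=False, address=9, data_in=0b10110001)
assert chip.access(cs=True, rw=True, address=9) == 0b10110001
```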

3. Cache Memory
We know that a number of instructions are executed repeatedly. This may be in the form of simple loops, nested loops, or a few procedures that repeatedly call each other. It is observed that many instructions in a few localized areas of the program are executed repeatedly, while the remainder of the program is accessed relatively infrequently. This phenomenon is referred to as locality of reference.
Memory access is the main bottleneck for performance. If a faster memory device is inserted between the CPU and main memory, efficiency can be increased. This faster memory is termed cache memory. To make the arrangement effective, the cache must be considerably faster than main memory; typically it is 5 to 10 times faster.

3.1 Operation of Cache Memory


The memory control circuitry is designed to take advantage of the property of locality of reference.
When a Read request is received from the CPU, the contents of a block of memory words containing
the location specified are transferred into the cache. When any of the locations in this block is
referenced by the program, its contents are read directly from the cache.
The cache memory can store a number of such blocks at any given time. The correspondence
between the Main Memory Blocks and those in the cache is specified by means of a mapping
function.
When the cache is full and a memory word not in the cache is referenced, a decision must be made as to which block should be removed from the cache to create space for the new block containing the referenced word. Replacement algorithms are used to make a proper selection of the block to be replaced.
When a write request is received from the CPU, there are two ways that the system can proceed. In
the first case, the cache location and the main memory location are updated simultaneously. This is
called the store through method or write through method.
The alternative is to update only the cache location. The cache block is then written back to main memory at replacement time. If no new write operation has occurred in the cache block, writing it back is unnecessary; this information is kept with the help of an associated bit, which is set whenever there is a write operation in the cache block. During replacement, this bit is checked: if it is set, the cache block is written back to main memory; otherwise it is not. This bit is known as the dirty bit: if the bit is dirty (set to one), a write to main memory is required.
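A minimal sketch of this write-back policy with a dirty bit follows; the class and field names are illustrative, and the address bookkeeping is simplified:

```python
class CacheBlock:
    """One cache block together with its dirty bit."""
    def __init__(self, base_address, data):
        self.base_address = base_address  # main-memory address of the block
        self.data = data
        self.dirty = False                # set when the block is written to

def cpu_write(block, offset, value):
    block.data[offset] = value
    block.dirty = True                    # cache copy now newer than memory

def evict(block, main_memory):
    """Write-back: copy to main memory only if the dirty bit is set."""
    if block.dirty:
        for i, word in enumerate(block.data):
            main_memory[block.base_address + i] = word
        block.dirty = False

main_memory = [0] * 64
blk = CacheBlock(base_address=32, data=main_memory[32:64])
cpu_write(blk, offset=3, value=99)    # only the cache copy is updated
evict(blk, main_memory)               # the dirty bit forces the write-back
assert main_memory[35] == 99
```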

3.2 Mapping Functions


A mapping function maps a particular block of main memory to a particular block of cache and is used when a block is transferred from main memory to cache memory. Two different mapping functions are available:
1. Direct mapping: a particular block of main memory can be brought only to one particular block of cache memory, so it is not flexible.
2. Associative mapping: any block of main memory can potentially reside in any cache block position. This is a much more flexible mapping method.
These mapping methods are explained with the help of an example.
Consider a cache of 4096 (4K) words with a block size of 32 words; the cache is therefore organized as 128 blocks. Addressing 4K words requires 12 address bits. To select one of the 128 blocks we need 7 address bits, and to select one word out of 32 we need 5 address bits. So the 12-bit address is divided into two groups: the lower 5 bits select a word within a block, and the higher 7 bits select a block of cache memory.
Now consider a main memory consisting of 64K words, so the address bus is 16 bits wide. Since the cache block size is 32 words, main memory is also organized in blocks of 32 words, giving 2048 blocks in total (2K blocks x 32 words = 64K words). Identifying one of these 2K blocks requires 11 address bits. Of the 16 main memory address bits, the lower 5 bits select a word within a block, and the higher 11 bits select one of the 2048 blocks.
The cache holds 128 blocks while main memory holds 2048, so at any instant only 128 of the 2048 blocks can reside in the cache. A mapping function is therefore needed to put a particular block of main memory into an appropriate block of cache memory.
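The field widths in this example can be checked with a few lines of arithmetic (all values taken from the text):

```python
import math

word_bits = int(math.log2(32))            # 5 bits select a word within a block
cache_blocks = 4096 // 32                 # 128 blocks in the cache
main_blocks = (64 * 1024) // 32           # 2048 blocks in main memory

cache_block_bits = int(math.log2(cache_blocks))  # 7 bits
main_block_bits = int(math.log2(main_blocks))    # 11 bits
address_bits = int(math.log2(64 * 1024))         # 16-bit main memory address

assert word_bits + main_block_bits == address_bits   # 5 + 11 = 16
print(word_bits, cache_block_bits, main_block_bits)  # -> 5 7 11
```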

A. Direct Mapping Technique


The simplest way of associating main memory blocks with cache blocks is the direct mapping technique. In this technique, block k of main memory maps into block k modulo m of the cache, where m is the total number of blocks in the cache; in this example, m is 128. Thus one particular block of main memory can be transferred only to the particular block of cache derived by the modulo function. Since more than one main memory block is mapped onto a given cache block position, conflicts may arise for that position, even when the cache is not full. A conflict is resolved by allowing the new block to overwrite the currently resident block, so the replacement algorithm is trivial.
The detailed operation of the direct mapping technique is as follows:
The main memory address is divided into three fields. The field sizes depend on the memory capacity and the block size of the cache. In this example, the lower 5 bits of the address identify a word within a block, the next 7 bits select a block out of the 128 blocks in the cache, and the remaining 4 bits are used as a TAG to identify the proper block of main memory that is mapped to the cache.
When a new block is first brought into the cache, the high-order 4 bits of the main memory address are stored in the four TAG bits associated with its location in the cache. When the CPU generates a memory request, the 7-bit block address determines the corresponding cache block, and the TAG field of that block is compared with the TAG field of the address. If they match, the desired word, specified by the low-order 5 bits of the address, is in that block of the cache.
If there is no match, the required word must be accessed from main memory: the contents of that cache block are replaced by the new block specified by the address generated by the CPU, and the TAG bits are correspondingly changed to the high-order 4 bits of the address. The whole arrangement for the direct mapping technique is shown in the figure.
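A small sketch of the direct-mapped lookup, using the 4/7/5 field split from this example (the helper names are illustrative):

```python
WORD_BITS, BLOCK_BITS, TAG_BITS = 5, 7, 4   # field sizes from the example

def split_direct(address):
    """Split a 16-bit main memory address into (tag, cache block, word)."""
    word = address & 0x1F             # low-order 5 bits: word within block
    block = (address >> 5) & 0x7F     # next 7 bits: equivalent to k mod 128
    tag = address >> 12               # high-order 4 bits
    return tag, block, word

tags = [None] * 128   # one TAG per cache block; None means the block is empty

def access(address):
    tag, block, word = split_direct(address)
    if tags[block] == tag:
        return "hit"                  # the desired word is in the cache
    # miss: overwrite the resident block (the trivial replacement policy)
    tags[block] = tag
    return "miss"

assert access(0x1234) == "miss"
assert access(0x1234) == "hit"
assert access(0x1234 ^ 0x9000) == "miss"   # same block, different tag: conflict
```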

B. Associative Mapping Technique:


In the associative mapping technique, a main memory block can reside in any cache block position. The main memory address is divided into two groups: the low-order bits identify the location of a word within a block, and the high-order bits identify the block. In the example here, 11 bits are required to identify a main memory block when it is resident in the cache, so the high-order 11 bits are used as TAG bits and the low-order 5 bits identify a word within a block. The TAG bits of an address received from the CPU must be compared with the TAG bits of every block of the cache to see if the desired block is present.

Since any block of main memory can go to any block of cache, associative mapping provides complete flexibility, and a proper replacement policy must be used to choose a block to evict when the currently accessed main memory block is not present in the cache. Exploiting this full flexibility can be impractical because of the searching overhead: the TAG field of the main memory address must be compared with the TAG field of every cache block. In this example, there are 128 blocks in the cache and the TAG is 11 bits. The whole arrangement of the associative mapping technique is shown in the figure above.
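A corresponding sketch for associative lookup, where the whole 11-bit tag is searched; a dictionary stands in for the parallel hardware comparison, and the placeholder eviction is illustrative:

```python
TAG_BITS, WORD_BITS = 11, 5   # field sizes from the example

def split_assoc(address):
    return address >> 5, address & 0x1F   # (11-bit tag, 5-bit word)

cache = {}   # tag -> block data; at most 128 resident blocks

def access(address):
    tag, word = split_assoc(address)
    if tag in cache:        # hardware compares all 128 TAGs in parallel
        return "hit"
    if len(cache) >= 128:   # cache full: a replacement policy picks a victim
        cache.pop(next(iter(cache)))   # placeholder policy (e.g. FIFO or LRU)
    cache[tag] = "block data"
    return "miss"

assert access(0x1234) == "miss"
assert access(0x1234) == "hit"    # any block may sit in any cache position
```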

3.3 Replacement Algorithms


When a new block must be brought into the cache and all the positions it may occupy are full, a decision must be made as to which of the old blocks is to be overwritten. In general, the policy should keep blocks in the cache that are likely to be referenced in the near future. However, it is not easy to determine directly which blocks are about to be referenced; the property of locality of reference gives some clue for designing a good replacement policy.

A. Least Recently Used (LRU) Replacement policy:


Since programs usually stay in localized areas for reasonable periods of time, there is a high probability that blocks referenced recently will be referenced again in the near future. Therefore, when a block is to be overwritten, it is a good decision to overwrite the one that has gone the longest without being referenced. This is the least recently used (LRU) block, and keeping track of it must be done as computation proceeds.
Consider a specific example of a four-block set, in which the LRU block must be tracked using a 2-bit counter for each block. When a hit occurs, that is, when a read request is received for a word that is in the cache, the counter of the referenced block is set to 0; all counters whose values were originally lower than that of the referenced block are incremented by 1, and all other counters remain unchanged.
When a miss occurs, that is, when a read request is received for a word and the word is not present
in the cache, we have to bring the block to cache.
There are two possibilities in case of a miss:
 If the set is not full, the counter associated with the new block loaded from the main memory is
set to 0, and the values of all other counters are incremented by 1.
 If the set is full and a miss occurs, the block with counter value 3 is removed, and the new block is put in its place with its counter set to 0. The other three block counters are incremented by 1.
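The 2-bit counter scheme just described can be sketched as follows (the block identifiers are arbitrary labels):

```python
class LruSet:
    """Four-block set tracked with the 2-bit counters described above."""
    def __init__(self):
        self.blocks = {}   # block id -> counter (0 = most recently used)

    def reference(self, block):
        if block in self.blocks:                 # hit
            old = self.blocks[block]
            for b, c in self.blocks.items():
                if c < old:
                    self.blocks[b] = c + 1       # only lower counters increase
            self.blocks[block] = 0
        elif len(self.blocks) < 4:               # miss, set not full
            for b in self.blocks:
                self.blocks[b] += 1
            self.blocks[block] = 0
        else:                                    # miss, set full
            victim = next(b for b, c in self.blocks.items() if c == 3)
            del self.blocks[victim]              # counter 3 marks the LRU block
            for b in self.blocks:
                self.blocks[b] += 1
            self.blocks[block] = 0

s = LruSet()
for b in "ABCD":
    s.reference(b)        # counters: D=0, C=1, B=2, A=3
s.reference("E")          # A, the least recently used block, is evicted
assert "A" not in s.blocks
```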

B. First In First Out (FIFO) replacement policy:


A reasonable rule is to remove the oldest block from a full set when a new block must be brought in. With this technique, no update is required when a hit occurs. When a miss occurs and the set is not full, the new block is put into an empty position and the counter values of the occupied blocks are incremented by one. When a miss occurs and the set is full, the block with the highest counter value is replaced by the new block, whose counter is set to 0; the counter values of all other blocks in the set are incremented by 1. The overhead of this policy is low, since no update is required on a hit.
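A matching sketch of the counter-based FIFO policy; note that, unlike LRU, a hit leaves the counters untouched:

```python
class FifoSet:
    """Four-block set with FIFO replacement, using age counters."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.counters = {}   # block id -> age counter (highest = oldest)

    def reference(self, block):
        if block in self.counters:
            return "hit"                    # FIFO: no update on a hit
        if len(self.counters) >= self.capacity:       # miss, set full
            victim = max(self.counters, key=self.counters.get)
            del self.counters[victim]       # replace the oldest block
        for b in self.counters:
            self.counters[b] += 1           # age every resident block
        self.counters[block] = 0            # the newcomer starts at 0
        return "miss"

s = FifoSet()
for b in "ABCD":
    s.reference(b)
s.reference("A")           # hit: arrival order is unchanged
s.reference("E")           # miss: A, the oldest block, is still the victim
assert "A" not in s.counters
```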

4. Memory Management
We keep all information in storage, mainly main memory, and the CPU interacts with main memory only. Therefore, memory management is an important issue in designing a computer system. The main memory of a computer is divided into two parts: one part is reserved for the operating system, and the other part is for user programs. The program currently being executed by the CPU is loaded into the user part of memory.
In a uniprogramming system, the program currently being executed is loaded into the user part of memory. In a multiprogramming system, the user part of memory is subdivided to accommodate multiple processes. This subdivision is carried out dynamically by the operating system and is known as memory management.
There are five defined states of a process, as shown in the figure. When we start to execute a process, it is placed in the process queue in the new state; as resources become available, the process is moved to the ready queue.
1. New: A program is admitted by the scheduler, but not yet ready to execute. The operating system
will initialize the process by moving it to the ready state.
2. Ready: The process is ready to execute and is waiting for access to the processor.
3. Running: The process is being executed by the processor. At any given time, only one process
is in running state.
4. Waiting: The process is suspended from execution, waiting for some system resource, such as
I/O.
5. Exit: The process has terminated and will be destroyed by the operating system.

4.1 Swapping:
Since the size of main memory is fixed, only a few processes can be accommodated in it. If all of them are waiting for I/O operations, the CPU remains idle. To utilize this idle time, some processes must be offloaded from memory and new processes brought into the vacated space. This is known as swapping. In swapping:
1. A process waiting for some I/O to complete must be stored back on disk.
2. A new ready process is swapped into main memory as space becomes available.
3. As a process completes, it is moved out of main memory.
4. If none of the processes in memory are ready:
 a blocked process is swapped out to an intermediate queue of blocked processes, and
 a ready process is swapped in from the ready queue.

4.2 Partitioning
Partitioning is the process of splitting memory into sections to be allocated to processes, including the operating system. There are two schemes for partitioning:
 Fixed-size partitions
 Variable-size partitions

Fixed-size partitions: The memory is divided into partitions of fixed size. Although the partitions are of fixed size, they need not be of equal size. When a process is brought into memory, it is placed in the smallest available partition that will hold it.
Even with unequal partition sizes, there is wastage of memory: in most cases, a process will not require exactly as much memory as the partition provides. For example, a process that requires 5 MB of memory would be placed in a 6-MB partition if that is the smallest available partition that can hold it; only 5 MB of the partition is used, and the remaining 1 MB cannot be used by any other process, so it is wasted. In this way, every partition may contain some unused memory. The unused portion of memory in each partition is termed a hole.
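A small sketch of placement into fixed, unequal partitions; the partition sizes are illustrative, and the 5-MB example from the text is reproduced at the end:

```python
# Fixed, unequal partitions (sizes in MB, illustrative).
partitions = [{"size": 2, "used": 0}, {"size": 4, "used": 0},
              {"size": 6, "used": 0}, {"size": 8, "used": 0}]

def place(process_mb):
    """Put the process in the smallest free partition that can hold it.
    Returns the size of the resulting hole, or None if nothing fits."""
    candidates = [p for p in partitions
                  if p["used"] == 0 and p["size"] >= process_mb]
    if not candidates:
        return None
    best = min(candidates, key=lambda p: p["size"])
    best["used"] = process_mb
    return best["size"] - process_mb    # the unused remainder: a hole

assert place(5) == 1   # 5-MB process -> 6-MB partition, leaving a 1-MB hole
```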

Variable-size partitions: When a process is brought into memory, it is allocated exactly as much memory as it requires and no more. This leaves a hole at the end of memory that is too small to use, so it might seem that there will be only one hole and little waste.
However, this is not the only hole that arises with variable-size partitions. When all processes are blocked, one process is swapped out and another brought in, and the newly swapped-in process may be smaller than the swapped-out one; we are unlikely to get two processes of exactly the same size, so another hole is created. As swap-outs and swap-ins occur repeatedly, more and more holes are created, leading to more memory wastage.
There are two simple ways to mitigate the problem of memory wastage:

Coalescing: join adjacent holes into one large hole, so that some process can be accommodated in the hole.
Compaction: from time to time, go through memory and move all holes into one free block of memory.

4.3 Paging
Both unequal fixed-size and variable-size partitions are inefficient in their use of memory and lead to memory wastage. Another scheme for the use of memory is known as paging.
In this scheme, memory is partitioned into equal fixed-size chunks that are relatively small, known as frames or page frames. Each process is also divided into small fixed-size chunks of the same size, known as pages. A page of a program can be assigned to any available page frame. At a given point in time some of the frames in memory are in use and some are free; the operating system maintains the list of free frames.
As shown in the figure below, process A, stored on disk, consists of pages. At the time of execution of process A, the operating system finds six free frames and loads the six pages of the process into them. These six frames need not be contiguous in main memory. The operating system maintains a page table for each process.
Within the program, each logical address consists of a page number and a relative address within the page. The page table is used to produce the physical address, which consists of a frame number and a relative address within the frame.
The following figure shows the allocation of frames to a new process in main memory. The page table maintained for each process lets us find the physical address in a frame that corresponds to a logical address within the process.

The conversion of a logical address to a physical address is shown in the figure for process A.
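The translation can be sketched in a few lines; the page size and the page-table contents below are illustrative, standing in for the six scattered frames of process A in the figure:

```python
PAGE_SIZE = 1024   # illustrative page/frame size in words

# Page table for process A: page number -> frame number.  The frames need
# not be contiguous, matching the figure.
page_table = {0: 7, 1: 2, 2: 9, 3: 4, 4: 11, 5: 6}

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # high-order part: page number
    offset = logical_address % PAGE_SIZE    # low-order part: offset in page
    frame = page_table[page]                # page table lookup
    return frame * PAGE_SIZE + offset       # physical address

assert translate(2 * PAGE_SIZE + 100) == 9 * PAGE_SIZE + 100
```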

5. Virtual Memory
Instead of loading all the pages of a process, each page is brought in only when it is needed, i.e., on demand. This scheme is known as demand paging. Demand paging also allows us to accommodate more processes in main memory.
Virtual memory involves the separation of logical memory, as perceived by users, from physical memory. This separation allows an extremely large virtual memory to be provided to programmers even when only a smaller physical memory is available. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available and can concentrate instead on the problem to be programmed.
The virtual address space is used to develop a process. A special hardware unit, called the Memory Management Unit (MMU), translates virtual addresses to physical addresses. When the desired data is in main memory, the CPU can work with it; if it is not, the MMU causes the operating system to bring it into memory from the disk.

5.1 Address Translation


The basic mechanism for reading a word from memory involves the translation of a virtual, or logical, address, consisting of a page number and an offset, into a physical address, consisting of a frame number and an offset, using a page table.

Most virtual memory schemes store the page table in virtual memory rather than in real memory, which means the page table is subject to paging just as other pages are. When a process is running, at least part of its page table must be in main memory, including the page table entry of the currently executing page.

Each virtual address generated by the processor is interpreted as a virtual page number (high-order bits) followed by an offset (low-order bits) that specifies the location of a particular word within the page. Information about the main memory location of each page is kept in the page table.
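A sketch of this translation including the demand-paging case follows; the page size, table contents, and the `PageFault` handling are illustrative simplifications of what the MMU and operating system do:

```python
PAGE_SIZE = 4096   # illustrative

class PageFault(Exception):
    pass

page_table = {0: {"frame": 3, "present": True},
              1: {"frame": None, "present": False}}   # page 1 is on disk

def mmu_translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None or not entry["present"]:
        raise PageFault(page)       # the OS must bring the page from disk
    return entry["frame"] * PAGE_SIZE + offset

try:
    mmu_translate(1 * PAGE_SIZE + 8)
except PageFault:
    # Demand paging: the OS finds a free frame, reads the page from disk,
    # updates the page table, and the access is restarted.
    page_table[1] = {"frame": 5, "present": True}
assert mmu_translate(1 * PAGE_SIZE + 8) == 5 * PAGE_SIZE + 8
```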

5.2 Inverted page table structures


There is one entry in the hash table and the inverted page table for each real memory page, rather than one per virtual page. Thus a fixed portion of real memory is required for the page tables, regardless of the number of processes or virtual pages supported. Because more than one virtual address may map to the same hash table entry, a chaining technique is used to manage the overflow. The hashing technique typically results in short chains of one or two entries.
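A minimal sketch of an inverted page table with hash chaining; the frame count and the use of Python's built-in `hash` are illustrative:

```python
NUM_FRAMES = 8   # one inverted-table entry per real frame (illustrative)

hash_anchor = [None] * NUM_FRAMES   # hash bucket -> first frame in the chain
inverted = [None] * NUM_FRAMES      # frame -> (pid, virtual page, next frame)

def insert(pid, vpage, frame):
    bucket = hash((pid, vpage)) % NUM_FRAMES
    inverted[frame] = (pid, vpage, hash_anchor[bucket])  # chain on collision
    hash_anchor[bucket] = frame

def lookup(pid, vpage):
    frame = hash_anchor[hash((pid, vpage)) % NUM_FRAMES]
    while frame is not None:        # chains are typically one or two entries
        entry_pid, entry_vpage, nxt = inverted[frame]
        if (entry_pid, entry_vpage) == (pid, vpage):
            return frame            # the frame number gives the real page
        frame = nxt
    return None                     # not resident: a page fault occurs

insert(pid=1, vpage=42, frame=3)
assert lookup(1, 42) == 3
```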

6. Magnetic Disk
A disk is a circular platter constructed of nonmagnetic material, called the substrate, coated with a magnetisable material. Traditionally, the substrate has been aluminium or an aluminium alloy.

6.1 Magnetic Read and Write Mechanisms


Data are recorded on and later retrieved from the disk via a
conducting coil named the head; in many systems, there are
two heads, a read head and a write head. During a read or
write operation, the head is stationary while the platter rotates
beneath it.
In the traditional write mechanism, electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.
In the traditional read mechanism, a magnetic field moving relative to a coil produces an electrical current in the coil. When the surface of the disk passes under the head, it generates a current of the same polarity as the one already recorded.

6.2 Data Organization and Formatting


The head is a relatively small device capable of reading from or writing to a portion of the platter rotating beneath it. The disk surface contains a concentric set of rings, called tracks, each the same width as the head; there are thousands of tracks per surface. Tracks are divided into sectors, the smallest addressable units on a disk. Data are transferred to and from the disk in sectors, and there are typically hundreds of sectors per track.

7. Magnetic Tape
Tape systems use the same reading and recording techniques as disk systems. The medium is
flexible polyester tape coated with magnetizable material. The coating may consist of particles of
pure metal in special binders or vapor-plated metal films.
Data on the tape are structured as a number of parallel tracks running lengthwise. Modern systems
use serial recording, in which data are laid out as a sequence of bits along each track. Data are read
and written in contiguous blocks, called physical records, on a tape. Blocks on the tape are separated
by gaps referred to as interrecord gaps.

The typical recording technique used in serial tapes is referred to as serpentine recording. In this technique, when data are being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues, back and forth, until the tape is full.
A tape drive is a sequential-access device. If the tape head is positioned at record 1, then to read
record N, it is necessary to read physical records 1 through N-1, one at a time. If the head is currently
positioned beyond the desired record, it is necessary to rewind the tape a certain distance and begin
reading forward. Unlike the disk, the tape is in motion only during a read or write operation.

Compiled By: Rituraj Jain, Wollega University, Nekemte
