
CHAPTER ONE

THE INFORMATION SYSTEM: AN ACCOUNTANT’S PERSPECTIVE

Unlike many other accounting subjects, an information system (IS) does not have a well-
defined body of knowledge about which there is general agreement; there are many
diverse opinions as to what IS is and what it is not.
AIS applications are distinguished from other information systems applications by
the legal and professional obligations they impose on an organization’s management and
accountants. The proper discharge of these responsibilities requires a precise
understanding of the objectives and functions of AIS.
The purpose of this chapter is to place the subject accounting information systems
in proper perspective for accountants.

THE INFORMATION ENVIRONMENT

Information is a business resource. Like the other business resources of raw materials,
capital and labor, information is vital to the survival of the contemporary business
organization. Every business day, vast quantities of information flow to decision makers
and other users to meet a variety of internal needs. In addition, information flows out of
the organization to external users, such as customers, suppliers, and stakeholders who
have an interest in the firm. People have relied on information systems to communicate
with each other using a variety of physical devices, information processing instructions
and procedures, communication channels, and stored data since the dawn of civilization.

Every individual in the organization, from business operations to top management, needs
information to accomplish his or her tasks. Information flows in several directions:
- Horizontal flow: supports operations-level tasks with highly detailed information about
the many business transactions affecting the firm.
- Vertical flow: information flows downward from senior management to lower levels of
management and upward from operations personnel to management.
- External flow: information flows from external parties into the organization and from
the organization out to external users.
Figure 1.1 presents an overview of these internal and external information flows.

[Figure 1.1: Internal and external flows of information. A pyramid of top management,
middle management, operations management, and operations personnel exchanges
information with external stakeholders, customers, and suppliers; day-to-day operations
information flows at the base of the pyramid.]

WHAT IS A SYSTEM?

Systems concepts underlie the field of information systems. That is why we need to
discuss how generic systems concepts apply to business firms and the components and
activities of information systems. Understanding systems concepts will help you
understand many other concepts in technology, applications, development, and
management of information systems.
The term system has much broader applicability. Some systems are naturally occurring,
while others are artificial. Natural systems range from the atom to plants. All life forms,
plants and animals, are examples of natural systems. Artificial systems are man-made.
These systems include everything from clocks to submarines and from social systems to
information systems.

ELEMENTS OF A SYSTEM –

Regardless of their origin, all systems possess some common elements, captured in the
following definitions:
A system can be most simply defined as a group of interrelated or interacting elements
forming a unified whole. However, the following generic concept provides a more
appropriate foundation concept for the field of information systems: a system is a group
of interrelated components working together toward a common goal by accepting inputs
and producing outputs in an organized transformation process. In other words, a system is
a group of two or more interrelated components or subsystems that operate within a
boundary to serve a common purpose.

Let’s analyze this general definition to gain an understanding of how it applies to


businesses and information systems.

Multiple components: - A system must contain more than one part. For example, a pen
made up of ink, an ink container, and a nib has multiple parts; the nib alone is not a system.

Relatedness – A common purpose relates the multiple parts of the system. Although each
part functions independently of the others, all parts serve a common objective. If a
particular component does not contribute to the common goal, then it is not part of the
system. For instance, a pair of ice skates and a volleyball do not form a system.

System versus Subsystem -- The distinction between the terms system and subsystem is a
matter of perspective. A system is called a subsystem when it is viewed in relation to the
larger system of which it is a part. Likewise, a subsystem is called a system when it is the
focus of attention.
Animals, plants, and other life forms are systems. They are also subsystems of the
ecosystem in which they exist.

Purpose- a system must serve at least one purpose, but it may serve several.

Two points of particular importance to the study of information systems: system


decomposition and subsystem interdependency.

System Decomposition – Decomposition is the process of dividing the system into


smaller subsystem parts. This is a convenient way of representing, viewing, and
understanding the relationships among subsystems. By decomposing a system, we can
present the overall system as a hierarchy and view the relationships between subordinate
and higher-level subsystems. Each subordinate subsystem performs one or more specific
functions to help the achievement of the overall objective of the higher-level systems.
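
To make the idea of decomposition concrete, the sketch below (in Python, purely as an
illustration) represents a system as a hierarchy whose higher levels list their subordinate
subsystems. The subsystem names are the AIS subsystems and transaction cycles described
later in this chapter; the code itself is a hypothetical representation, not part of the text.

# Hypothetical sketch: a decomposed system represented as a hierarchy.
ais = {
    "Transaction Processing System": {
        "Revenue Cycle": ["Sales Processing", "Cash Receipts"],
        "Expenditure Cycle": ["Purchases", "Cash Disbursements", "Payroll Processing"],
        "Conversion Cycle": ["Cost Accounting", "Materials Requirements Planning"],
    },
    "General Ledger/Financial Reporting System": {},
    "Fixed Asset System": {},
    "Management Reporting System": {},
}

def print_hierarchy(system, level=0):
    """Walk the hierarchy, printing each subsystem indented under its parent."""
    for name, subsystems in system.items():
        print("  " * level + name)
        if isinstance(subsystems, dict):
            print_hierarchy(subsystems, level + 1)
        else:
            for leaf in subsystems:
                print("  " * (level + 1) + leaf)

print_hierarchy(ais)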

Subsystem Interdependency – A system’s ability to achieve its goal depends upon the
effective functioning and harmonious interaction of its subsystems. If a vital subsystem
fails or becomes defective and can no longer meet its specific objective, the overall system
will fail to meet its objective. On the other hand, when a non-vital subsystem fails, the
primary objective can still be met.

Like designers of other systems, information system designers must identify critical
subsystems, anticipate the cost of their failure, and design cost-effective control
procedures.
A FRAMEWORK FOR INFORMATION SYSTEMS

The information system is the set of formal procedures by which data are collected,
processed into information, and distributed to users. Two broad classes of systems
emerge from the decomposition of a manufacturing firm’s information system: the
accounting information system (AIS) and the management information system (MIS). We will
use this framework to identify the domain of AIS and to distinguish it from that of MIS.

An information system can be any organized combination of people, hardware, software,


communications networks, and data resources that collects, transforms, and disseminates
information in an organization.

An information system (IS) is an arrangement of people, data, processes, information


presentation, and information technology that interact to support and improve day-to-day
operations in a business as well as support the problem-solving and decision-making
needs of management and users.

An information System (IS) is a formalized computer information system that can collect,
store, process, and report data from various sources to provide the information necessary
for management decision making. Not all information systems and organizations are
formal. There are informal information sources such as the office grapevine, gossip, and
so on. Neither do information systems have to be computer-based. Information is often
gathered and stored through manual processes, although manual information systems are
becoming relatively less important.

[Figure: A framework for information systems. The information system decomposes into
the accounting information system (AIS) and the management information system (MIS).
The AIS comprises the fixed asset system (FAS), the general ledger/financial reporting
system (GL/FRS), the transaction processing system (TPS), and the management reporting
system (MRS); the MIS comprises financial, marketing, production, and human resource
management systems. The TPS decomposes further into the expenditure, conversion, and
revenue cycles, which include the purchases, cash disbursements, and payroll processing
systems; the cost accounting and materials requirements planning systems; and the sales
processing and cash receipts systems, respectively.]
The distinction between AIS and MIS subsystems centers on the concept of a transaction. The
information system accepts input, called transactions, which are converted through various
processes into output information that goes to users. Transactions fall into two classes: financial
transactions and non-financial transactions.

A Financial transaction is an economic event that affects the assets and equities of the
organization, is reflected in its accounts, and is measured in monetary terms. E.g. sales of
products, purchase of inventory, payment of cash.

Non-Financial transactions include all events processed by the organization’s information system
that do not meet the narrow definition of financial transaction.

ACCOUNTING INFORMATION SYSTEM.

AIS subsystems process financial transactions and non-financial transactions that directly affect
the processing of financial transactions. The AIS comprises four major subsystems: (1) the
transaction processing system (TPS), which supports daily business operations with numerous
documents and messages for users throughout the organization; (2) the general ledger/financial
reporting system (GL/FRS), which produces traditional financial statements, such as the income
statement, balance sheet, statement of cash flows, tax returns, and other reports required by law;
(3) the fixed asset system (FAS), which processes transactions pertaining to the acquisition,
maintenance, and disposal of fixed assets; and (4) the management reporting system (MRS),
which provides internal management with special-purpose financial reports and information
needed for decision making, such as budgets, variance reports, responsibility reports, and
cost-volume-profit analyses.

THE MANAGEMENT INFORMATION SYSTEM

Management often requires information that goes beyond the capabilities of the AIS. The MIS
processes non-financial transactions that are not normally processed by the traditional AIS.

The two sets of data (that came from AIS and MIS) would need to be integrated and reported to
the manager. The task of supplying managers with integrated information is inefficient and
expensive when the supporting information systems are not integrated. Also, lack of coordination
between financial and non financial systems can produce unreliable information, resulting in
poor management decisions.
For example, a purchasing manager, evaluating the performance of suppliers, wants to know the
number and financial value of inventory orders placed with specific vendors during a period of
time. In addition, the manager needs to know the number of deliveries that exceeded the normal
lead-time, and any inventory stock out conditions that resulted from late deliveries.

Such integrated information, if it could be provided at all, would traditionally come from separate
AIS and MIS applications functioning independently. The AIS application would supply the cost
of purchase data, while delivery time and stock out data (if available) would come from an MIS
application.

Because of this problem, and to improve operational efficiency and gain competitive advantage in
the marketplace, many organizations have reengineered their information systems to include both
AIS and MIS features.

AIS Subsystems
The AIS comprises four major subsystems. These subsystems process financial and
non-financial transactions that directly affect the processing of financial transactions. They are:
1. The transaction processing system (TPS)
2. The general ledger /financial reporting system (GL/ FRS)
3. The fixed asset system (FAS)
4. The management reporting system (MRS)

Transaction Processing System – The transaction processing system is central to the overall
function of the information system by:
 Converting economic events into financial transactions
 Recording financial transactions in the accounting records (journals and ledgers)
 Distributing essential financial information to operations personnel to support their daily
operations.

The transaction processing system deals with business events that occur frequently. In a given
day, a firm may process thousands of transactions. To deal efficiently with such volume, similar
types of transactions are grouped together into transaction cycles. The TPS comprises three
transaction cycles: the revenue cycle, the expenditure cycle, and the conversion cycle. Each cycle
captures and processes different types of financial transactions.
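
As a minimal, hypothetical sketch of the grouping idea above, the Python fragment below sorts a
day's transactions into the three TPS cycles. The transaction types and amounts are invented for
illustration; the cycle assignments follow the subsystems listed in the framework figure earlier.

# Hypothetical sketch: group a day's transactions into the three TPS cycles.
transactions = [
    {"type": "sale",              "amount": 1200},
    {"type": "cash_receipt",      "amount":  900},
    {"type": "purchase",          "amount":  450},
    {"type": "cash_disbursement", "amount":  300},
    {"type": "payroll",           "amount": 2100},
    {"type": "cost_accounting",   "amount":  175},
]

CYCLE_OF = {
    "sale": "revenue", "cash_receipt": "revenue",
    "purchase": "expenditure", "cash_disbursement": "expenditure", "payroll": "expenditure",
    "cost_accounting": "conversion", "materials_requirements": "conversion",
}

cycles = {"revenue": [], "expenditure": [], "conversion": []}
for txn in transactions:
    cycles[CYCLE_OF[txn["type"]]].append(txn)

for cycle, txns in cycles.items():
    print(cycle, "cycle:", len(txns), "transactions, total", sum(t["amount"] for t in txns))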

A GENERAL MODEL FOR AIS

The following figure presents the general model for viewing accounting information system
applications. This is a general model because it describes all information systems, regardless of
their technological architecture.

[Figure: The general model for AIS. External and internal sources of data feed data collection;
data are processed, stored under database management, and turned into information by the
information generation function, which distributes it to external and internal end users.
Feedback flows from end users and from information generation back to the data sources. The
model sits within the business organization and its external environment.]

The elements of the general model are end users, data sources, data collection, data processing,
data base management, information generation, and feedback.

END USERS – End users fall into two general groups: external and internal. External users
include creditors, stockholders, potential investors, regulatory agencies, tax authorities, suppliers,
and customers. Trading partners (customers and suppliers) receive transaction oriented
information including purchase orders, billing statements, and shipping documents. Institutional
users such as banks, the SEC, and others receive information in the form of financial statements,
tax returns, and other reports that the firm has a legal obligation to produce.

Internal users include management at every level of the organization, as well as operations
personnel. In contrast to external reporting, the organization has a great deal of latitude in the
way it meets the needs of internal users. Internal reporting poses a less structured and, generally,
more difficult challenge than external reporting; on the other hand, it is an open field for
experimentation, invention, and innovation.

Data versus Information


Data are facts, which may or may not be processed (edited, summarized, or refined) and have no
direct effect on the user. By contrast, information causes the user to take an action that he or she
otherwise could not, or would not, have taken. Information is often defined simply as processed
data. This is an inadequate definition. Information is determined by the effect it has on the user,
not by its physical form.

One person’s information is another person’s data. Thus, information is not just a set of
processed facts arranged in a formal report. Information allows users to take action to resolve
conflicts, reduce uncertainty, and make decisions.

The distinction between data and information has pervasive implications for the study of
information systems. If output from the information system fails to cause users to act, the system
serves no purpose and has failed in its primary objective.

DATA SOURCES – Financial transactions enter the information system from both internal and
external sources. External financial transactions are economic exchanges with other business
entities and individuals outside the firm. Internal financial transactions involve the exchange or
movement of resources within the organization.

DATA COLLECTION – Data collection is the first operational stage in the information system.
The objective is to ensure that event data entering the system are valid, complete, and free from
material errors. In many respects, this is the most important stage in the system.
Two rules govern the design of data collection procedures: relevance and efficiency. A
fundamental task of the system designer is to determine what is and what is not relevant. He
or she does so by analyzing the user’s needs. Only data that ultimately contribute to information

(as defined previously) are relevant. The data collection stage should be designed to filter
irrelevant facts from the system.
Efficient data collection procedures are designed to collect data only once. These data can then
be made available to multiple users.

DATA PROCESSING – Once collected, data usually require processing to produce information.
Tasks in the data processing stage range from simple to complex. Examples include mathematical
algorithms (such as linear programming models) used for production scheduling applications,
statistical techniques for sales forecasting, and posting and summarizing procedures used for
accounting applications.
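
As a hedged illustration of the posting and summarizing procedures mentioned above, the sketch
below totals hypothetical journal entries by account, the kind of summarization an accounting
application performs. The entries and account names are invented for illustration.

from collections import defaultdict

# Hypothetical journal entries (account, amount).
journal = [("Sales", 500), ("Sales", 750), ("Rent Expense", 300), ("Sales", 125)]

ledger = defaultdict(float)          # account -> running balance
for account, amount in journal:
    ledger[account] += amount        # "post" each entry to its ledger account

for account, balance in ledger.items():
    print(account, balance)          # summarized information for the user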

DATA BASE MANAGEMENT – The organization’s data base is its physical repository for financial
and non-financial data. We use the term data base in the generic sense. It can be a filing cabinet
or a computer disk. Regardless of the data base’s physical form, we can represent its contents in
a logical hierarchy. The levels in the data hierarchy are the attribute, the record, and the file.

Data Attribute – The data attribute is the most elemental piece of potentially useful data in the
data base. An attribute is a logical and relevant characteristic of an entity about which the firm
captures data. The attributes are logical because they all relate sensibly to a common entity.

Record – A record is a complete set of attributes for a single occurrence within an entity class.
For example, the name, ID number, and department of one student are one occurrence, or
record. To find a particular record within the data base, we must be able to identify it uniquely.
We call this unique identifier attribute the key. The key for the student record is the student ID
number.

Files – A file is a complete set of records of an identical type. For example, all the accounts
receivable records of the organization constitute the accounts receivable file. Other classes of
records such as inventory, accounts payable, and payroll are not included in the accounts
receivable file.

Data Base Management Tasks – Data base management involves three fundamental tasks:
storage, retrieval, and deletion. The storage task assigns keys to new records and stores them in
their proper location in the data base.
Retrieval is the task of locating and extracting an existing record from the data base for
processing. After processing is complete, the storage task restores the updated record to its place
in the data base.

Deletion is the task of permanently removing obsolete or redundant records from the data base.

ATTRIBUTES OF INFORMATION SYSTEMS - In addition to the elements above, information
systems should also possess three attributes: efficiency, effectiveness, and flexibility.
Efficiency – The information system must use resources efficiently. The process of collecting
data and producing information is always costly, and the cost of providing information must not
exceed the benefit derived from having it.

Effectiveness – The information system must meet the needs of all users. To the extent the
system fails in this objective-either denying users access to information or inadequately serving
their needs – the system is ineffective.
Flexibility – The information system must be flexible enough to respond to changes in user
needs when they occur. However, for a system to be efficient and effective, it must deal very
specifically with user problems. The more problem-specific a system is, the more inflexible to
change it becomes. Therefore, the efficiency and effectiveness attributes conflict with flexibility.

An important task for the system designer is to seek a balance between these desirable attributes
to produce systems that economically meet both the long-and short-term needs of the
organization.

INFORMATION SYSTEM OBJECTIVES

Each firm must tailor its information system to the needs of its users. Therefore, specific
information system objectives will differ from firm to firm. However, all information systems
have three fundamental objectives in common. They are:
1- To support the stewardship function of management. Stewardship refers to management’s
responsibility over the resources of the firm. The information system provides
information about resource utilization to external users through traditional financial
statements and other mandated reports. Internally, management receives stewardship
information from various responsibility reports.
2- To support management decision making. The information system supplies managers
with the information they require to carry out their decision- making responsibilities.
3- To support the firm’s day-to-day operations. The information system provides
information to operations personnel to assist in the efficient and effective discharge of
their daily tasks.

ACQUISITION OF INFORMATION SYSTEMS

How do organizations obtain information systems? Usually, they do so in two ways:


(1) they develop customized systems from scratch through in-house systems development
activities, and
(2) They purchase predesigned commercial systems from software vendors. There are three basic
types of commercial software: turnkey systems, backbone systems, and vendor-supported
systems.

Turnkey Systems are completely finished and tested systems that are ready for implementation.
Typically, they are general-purpose systems or systems customized to a specific industry. The
better turnkey systems have built-in software options that allow the user to customize input,
output, and some processing.

Backbone Systems, as the name implies, consist of a basic system structure on which to build.
The primary processing logic is preprogrammed, and the vendor then designs the user interfaces
to suit the client’s unique needs. A backbone system is a compromise between a custom system
and a turnkey system. This approach can produce very satisfactory results, but customizing the
system is costly.

Vendor-Supported Systems – are custom systems that organizations purchase from commercial
vendors rather than develop in-house. Under this approach, the software vendor designs,
implements, and maintains systems for its clients. The systems themselves are custom products,
but the systems development service is commercially provided.

Both in-house systems development and the commercial software option have implications for
accountants. By whatever method they are obtained, accounting information systems must be
cost-justified and evaluated for compliance with organizational needs, accounting standards, and
internal control requirements.

ORGANIZATIONAL STRUCTURE

The structure of an organization reflects the distribution of responsibility, authority, and


accountability throughout the organization. Understanding the distribution pattern of
responsibility, authority, and accountability is essential to understanding users’ information
needs. Organizational structure thus provides a foundation for the study of accounting
information systems.

ACCOUNTING INDEPENDENCE - The need to ensure the reliability of accounting


information places the accounting function in a unique position within the organization.
Information reliability rests heavily on accounting independence. The record-keeping function
of accounting must be separate from the functional areas that have custody of physical resources.
For information to be reliable, it must possess certain attributes: relevance, accuracy,
completeness, summarization, and timeliness.

THE ROLE OF THE ACCOUNTANT


Accountants are primarily involved in three ways: as system users, as system designers, and as
system auditors.

ACCOUNTANTS AS USERS
In most organizations, the accounting function is the single largest user of computer services. All
systems that process financial transactions impact the accounting function in some way. As end
users, accountants must provide a clear picture of their needs to the professionals who design
their systems. The principal cause of design errors that result in system failure is passive user
involvement.

ACCOUNTANTS AS SYSTEM DESIGNERS
Today, the responsibility for systems design is divided between accountants and computer
professionals as follows; the accounting function is responsible for the conceptual system, and
the computer function is responsible for the physical system.

CONCEPTUAL AND PHYSICAL SYSTEMS – To illustrate the distinction between


conceptual and physical systems, consider the following example:

The credit department of a retail business requires information about delinquent


accounts from the accounts receivable department. This information supports
decisions made by the credit manager regarding the credit worthiness of
customers.

The design of the conceptual system involves identifying the specific attributes of the delinquent
account information and the decision rules for identifying delinquent customers. The physical
system is the medium and method for accomplishing this task. The physical system may assume
one of a variety of technological forms.

The accountant determines the nature of the information required, its sources, its destination, and
the accounting rules that must be applied. This is the conceptual system. The computer
professionals determine the most economically effective method of accomplishing the task. This
technological solution is the physical system.

ACCOUNTANTS AS SYSTEM AUDITORS

Auditing – is a form of independent attestation performed by an expert, the auditor, who expresses


an opinion about the fairness of a company’s financial statements.
Electronic data processing (EDP) auditing is usually performed as part of a broader financial
audit. The EDP auditor attests to the integrity of elements of the organization’s information
system that have become complicated by computer technology. Periodically, the auditor must
evaluate selected components of the accounting information system to establish their degree of
compliance with organizational objectives and internal control standards. This involves an
examination of both the computer environment (computer operations, systems development
procedures, and systems maintenance activities) and individual computer applications (for
example, accounts payable and accounts receivable systems).

CHAPTER TWO
Managing Data Resources

Data are a vital organizational resource that needs to be managed like other important business
assets. Today’s e-business enterprises cannot survive or succeed without quality data about their
internal operations and external environment.

With each online mouse click, either a fresh bit of data is created or already stored
data are retrieved from all those e-commerce websites, filled with data-rich photos,
stock graphs, and music videos. And the thousands of new web pages created each
day need a safe, stable managed environment to hang out. All that’s on top of the
heavy demand for industrial strength data storage already in use by scores of big
corporations. What’s driving the growth is a crushing imperative for corporations to
analyze every bit of information they can extract from their huge data warehouses for
competitive advantage. That has turned the data storage and management function
into a key strategic role of the information age.

That’s why organizations and their managers need to practice data resource management, a
managerial activity that applies information systems technologies like database management,
data warehousing, and other data management tools to the task of managing an organization’s
data resources to meet the information needs of their business stakeholders.

Conventional Files Vs the Database

All information systems create, read, update, and delete (sometimes abbreviated CRUD) data.
This data is stored in files and databases.

A file is a collection of similar records.


Examples include a customer file, order file, and product file. A database is a collection of
interrelated files. The key word is interrelated. A database is not merely a collection of files. The
records in each file must allow for relationships to the records in other files. For example, a sales
database might contain order records that are “linked” to their corresponding customers and
product records.

Let’s compare the file and database alternatives. The figure below illustrates the fundamental
difference between the file and database environments. In the file environment, data storage is
built around the applications that will use the files. In the database environment, applications will
be built around the integrated database. Accordingly, the database is not necessarily dependent
on the applications that will use it. In other words, given a database, new applications can be
built to share that database.
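
A hedged sketch of the "interrelated" idea from the sales database example above: order records
carry customer and product keys that link them to records in other files, so any application
sharing the database can follow those links. The names, keys, and values below are invented for
illustration.

# Three interrelated "files"; orders reference customers and products by key.
customers = {"C1": {"name": "Abebe Trading"}}
products  = {"P9": {"description": "Printer paper", "unit_price": 4.0}}
orders    = [{"order_no": 1001, "customer_id": "C1", "product_id": "P9", "qty": 10}]

# Follow the links from each order to its related customer and product records.
for order in orders:
    customer = customers[order["customer_id"]]
    product  = products[order["product_id"]]
    print(order["order_no"], customer["name"], product["description"],
          order["qty"] * product["unit_price"])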

[Figure: Flat file versus database approach to data management. In the flat file approach, each
information system maintains its own separate files; in the database approach, the information
systems share a single database that consolidates and integrates the data from those files.]

Conventional files are relatively easy to design and implement because they are normally
designed for use with a single application or information system, such as accounts receivable or
payroll. If you understand the end-user’s output requirements for that system, you can easily
determine the data that will have to be captured and stored to produce those outputs and define
the best file organization for those requirements.

Historically, another advantage of conventional files has been processing speed. They can be
optimized for the access of the application. At the same time, they can rarely be optimized for
shared use by different applications or systems.

The pros and cons of database

We’ve already stated the principal advantage of the database approach -- the ability to share the same data
across multiple applications and systems. A common misconception about the database approach
is that you can build a single super database that contains all data items of interest to an
organization. This notion, however desirable, is not currently practical. The reality of such a
solution is that it would take forever to build such a complex database. Realistically, most
organizations build several databases, each one sharing data with several information systems.
Thus, there will be some redundancy between databases. However, this redundancy is both
greatly reduced and, ultimately, controlled.

Database technology offers the advantage of storing data in flexible formats. This is made
possible because databases are defined separately from the information systems and application
programs that will use them. Theoretically, this allows us to use the data in ways not originally
specified by the end-users. Care must be taken to truly achieve this data independence. If the
database is well designed, different combinations of the same data can be easily accessed to
fulfill future report and query needs. The database scope can even be extended without changing
existing programs that use it. In other words, new fields and record types can be added to the
database without affecting current programs.

Database technology provides superior scalability, meaning that the database and the systems
that use it can be grown or expanded to meet the changing needs of an organization. Database
technology provides better technology for client/server and network computing architectures.
Typically, such architectures require the database to run in its own server.

On the other hand, database technology is more complex than file technology. Special software,
called a database management system (DBMS), is required. While a DBMS is still somewhat
slower than file technology, these performance limitations are rapidly disappearing.

An effective information system provides users with timely, accurate, and relevant information.
This information is stored in computer files. When the files are properly arranged and
maintained, users can easily access and retrieve the information they need.

Well-managed, carefully arranged file organization makes it easy to obtain data for business
decisions, whereas poorly managed files lead to chaos in information processing, high costs,
poor performance, and little, if any, flexibility. Despite the use of excellent hardware and
software, many organizations have inefficient information systems because of poor file
management.

File organization Terms and Concepts

A computer system organizes data in a hierarchy that starts with bits and bytes and progresses to
fields, records, files, and databases. A bit represents the smallest unit of data a computer can
handle. A group of bits, called a byte, represents a single character, which can be a letter, a
number, or another symbol. In other words, a byte represents a unit of information stored in a
computer, equal to eight bits. A computer’s memory is measured in bytes.

The ability to represent only binary information in a computer system is not sufficient for
business information processing. Numeric, alphabetic, and a wide variety of special characters
such as dollar signs, question marks, and quotation marks must be stored in this type of system.
In a computer system, a character of information is called a byte. A byte of information is stored
by using several bits in specified combinations. Computer codes such as EBCDIC (Extended
binary coded decimal interchange code) and ASCII (American Standard Code for Information
Interchange) use various arrangements of bits to form bytes that represent the numbers zero
through nine, the letters of the alphabet, and many other characters. See the following table:

Character    EBCDIC       ASCII
A            1100 0001    0100 0001
B            1100 0010    0100 0010
C            1100 0011    0100 0011
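
The ASCII column in the table above can be checked quickly in Python: each character occupies
one byte, and its bit pattern can be printed directly. This is only a small illustrative check.

# Show the 8-bit ASCII pattern for the characters in the table above.
for ch in "ABC":
    print(ch, format(ord(ch), "08b"))   # A -> 01000001, B -> 01000010, C -> 01000011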

The Traditional Approach to File Management

The traditional approach to information systems is file oriented. Before the advent of database
management systems, each application maintained its own master file and generally had its own
set of transaction files. Files are custom designed for each application, and generally there is
little sharing of data among the various applications. Programs are dependent on the files and vice
versa; that is, when the physical format of the file is changed, the program also has to be
changed. The traditional approach is file oriented because the primary purpose of many
applications is to maintain, on the master file, the data required to produce management
information. Therefore, the master file is the centerpiece of each application.

Figure 2-1 illustrates the principal features of the flat file approach to data management. Users
access data via computer programs that process the data and present information to the users.
User                        Application                Files
Marketing Manager           Market research            Customer Data, Economic Data,
                            Application                Industry Data
Accounts Receivable clerk   AR update Application      Customer Data, Sales Data,
                                                       Inventory Data
Purchasing Agent            Purchasing Application     Vendor Data, Sales Data,
                                                       Inventory Data
Disadvantages of the traditional Approach

Data redundancy – often identical data are stored in two or more files. Data redundancy occurs
when different divisions, functional areas, and groups in an organization independently collect
the same piece of information. Notice that in Table 2-1 each employee’s identification number,
name, and department are stored in both the payroll and personnel files. Such data redundancy
increases data editing, maintenance, and storage costs. In addition, data stored on two master
files (which should in theory be identical) are often different for good reason; but such
differences inevitably create confusion.
Lack of Data Integration – Data on different master files may be related, as in the case of
payroll and personnel master files. For example, from Table 2-1 management may want a report
displaying employee name, department, pay rate, and occupation. However, the application
approach does not have the mechanisms for associating these data in a logical way to make them
useful for management’s needs. This lack of data integration leads to inefficient use of stored
data.

Payroll File
Identification Employee name Pay Rate Year-to-date Department
number earnings
3859 Joseph Hawkins $12.50 4005 380
3903 Samuel Smith $13.25 5100 390
4106 Jemal Ahmed $25.80 8135 312

Personnel File
Identification Employee Department Age Date Hired Occupation
number Name
3856 Joe Hawkins 380 25 03 Jan 83 Coordinator
3903 Sam Smith 390 55 05 Sep 65 Sales market
4106 Jemal Ahmed 312 38 31 Dec 70 The Boss

Table 2-1 Data Redundancy and Lack of Integration among Files
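
To make the lack-of-integration point concrete, the sketch below associates the two flat files from
Table 2-1 on the identification number to produce the report management wants (name,
department, pay rate, occupation). In a flat file environment this association logic must be coded
by hand for every such request; the code is a hypothetical illustration using an abridged copy of
the table's data.

# Flat "files" abridged from Table 2-1.
payroll = [
    {"id": 3903, "name": "Samuel Smith", "pay_rate": 13.25, "department": 390},
    {"id": 4106, "name": "Jemal Ahmed",  "pay_rate": 25.80, "department": 312},
]
personnel = [
    {"id": 3903, "name": "Sam Smith",   "department": 390, "occupation": "Sales market"},
    {"id": 4106, "name": "Jemal Ahmed", "department": 312, "occupation": "The Boss"},
]

# Associate the redundant files by the shared identification number.
personnel_by_id = {rec["id"]: rec for rec in personnel}
for pay in payroll:
    person = personnel_by_id.get(pay["id"])
    if person:
        print(pay["name"], pay["department"], pay["pay_rate"], person["occupation"])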

Program-data dependence is the tight relationship between data stored in files and the specific
programs required to update and maintain those files. Changes in the physical format of a file,
such as the addition of a data field, require changes in all programs that use the file. In addition,
whenever a programmer writes or maintains a program, he or she must be concerned with data
management. There is no centralized execution of the data management function; data
management is scattered among all the application programs. Think of the thousands of
computer programs that had to be altered when the Ethiopian telephone service changed from
two-digit to three-digit area codes. A centralized DBMS could have minimized the number of
places this change had to be made.

Lack of Flexibility--The information retrieval capabilities of most traditional systems are limited
to predetermined requests for data. Therefore, the system produces information in the form of
scheduled reports and queries which it has been previously programmed to handle. But it cannot
deliver ad hoc reports or respond to unanticipated information requirements in a timely fashion.
The information required by ad hoc requests is somewhere in the system but too expensive to

17
retrieve. Several programmers would have to work for weeks to put together the required data
items in a new file.

Poor Security--Because there is little control or management of data, access to and dissemination
of information are virtually out of control. What limits on access exist tend to be the result of
habit and tradition, as well as of the sheer difficulty of finding information.

Lack of Data Sharing and Availability--The lack of control over access to data in this confused
environment does not make it easy for people to obtain information. Because pieces of
information in different files and different parts of the organization cannot be related to one
another, it is virtually impossible for information to be shared or accessed in a timely manner.
Task Data Dependency--Another problem with the flat file approach is the user’s inability to
obtain additional information as his or her needs change, a problem known as task data
dependency. The user’s information set is constrained by the data that he or she possesses and
controls.

The Database Approach to File Management

A database management system is a set of programs that serve as an interface between


application programs and a set of coordinated and integrated physical files called a database. [A
physical file is the actual storage of data on storage media.] A DBMS provides the capabilities
for creating, maintaining, and changing a database. The physical files of the database are
analogous to the master files of the application programs. However, with a DBMS, the data in
the physical files are related through various pointers and keys, which not only reduce data
redundancy but also enable the unanticipated retrieval of related information.

Figure 2-2 presents a simple overview of the data base approach with the same users and data
requirements as in figure 2-1. The most obvious change from the flat file model is the pooling of
data into a common data base that is shared by all the users.

[Figure 2-2: The database approach. The marketing manager, accounts receivable clerk, and
purchasing agent all access a common pool of data (customer, economic, industry, sales,
inventory, and vendor data) through their applications and the database management system.]
Several data base models (the hierarchical, network, and relational models) exist, and each
model uses different file structures (sequential structure, indexed structure, hashing structure,
pointer structure). Because data base files must be integrated to serve the needs of many users,
their structures can be quite complex. There is nothing inherently different between the
operational tasks performed by an application that processes data base files and one that uses
traditional flat files.

TRADITIONAL PROBLEMS SOLVED

Data sharing (the absence of ownership) is the central concept of the data base approach. Let’s
see how this resolves the problems identified.
- No data redundancy- Each data element is stored only once, thereby eliminating data
redundancy and reducing storage costs.
- Single update –Because each data element exists in only one place, it requires only a
single update procedure. This reduces the time and cost of keeping the data base current.
- Current values- A change to the data base made by any user yields current data values for
all other users. For example, if the marketing manager records a customer address change, the
accounts receivable clerk has immediate access to this current information.
- Task-data independence- users have access to the full domain of data available to the
firm. As user’s information needs expand beyond their immediate domain, the new needs
can be more easily satisfied than under the flat file approach. Users are constrained only
by the limitations of the data available to the firm (the entire data base) and the
legitimacy of their need to access it.

CONTROLLING ACCESS TO THE DATA BASE

The data base approach places all the firm’s information eggs in one basket. It becomes critical,
therefore, to take very good care of the basket. The example in figure 2-3 (a) has no provision for
controlling access to the data base. Assume data X is sensitive, confidential, or secret
information that only user 3 is authorized to access. How can the organization prevent others
from gaining unauthorized access to it?

[Figure 2-3(a): The Data Base Concept. Transactions from User 1, User 2, and User 3 flow
through Programs 1, 2, and 3 directly to a shared data base containing data elements A, B, C, X,
Y, L, and M, with no provision for controlling access.]

Figure 2-3 (b) adds a new element to Figure 2-3(a). Standing between the users’ programs and
the physical data base is the data base management system (DBMS). The purpose of the DBMS
is to provide controlled access to the data base. The DBMS is a special software system that is
programmed to know which data elements each user is authorized to access. The user’s program
sends requests for data to the DBMS, which validates and authorizes access to the data base in
accordance with the user’s level of authority. If the user requests data that he or she is not
authorized to access, the request is denied. As you might imagine, the organization’s procedures
for assigning user authority are an important control issue for accountants to consider.
[Figure 2-3(b): The data base concept with a DBMS. The DBMS now stands between the users’
transaction programs and the physical data base (data elements A, B, C, X, Y, L, and M) and
controls access to it.]
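
A minimal, hypothetical sketch of the authorization idea in Figure 2-3(b): the DBMS checks each
user's request against the data elements that user is authorized to access before releasing any
data. The authority table, data values, and function below are invented for illustration.

# Hypothetical authority table: which data elements each user may access.
authorized = {"user1": {"A", "B", "C"}, "user2": {"B", "C", "Y"}, "user3": {"X", "Y", "L", "M"}}
database = {"A": 10, "B": 20, "C": 30, "X": "sensitive", "Y": 50, "L": 60, "M": 70}

def dbms_read(user, element):
    """Validate the request against the user's authority before returning data."""
    if element not in authorized.get(user, set()):
        raise PermissionError(f"{user} is not authorized to access {element}")
    return database[element]

print(dbms_read("user3", "X"))          # allowed: only user 3 may read the sensitive element X
try:
    print(dbms_read("user1", "X"))      # denied: the DBMS rejects the request
except PermissionError as err:
    print("Request denied:", err)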

ELEMENTS OF THE DATA BASE APPROACH

Figure 2-4 represents a breakdown of the data base environment into four primary elements:
Users, the DBMS, the data base administrator, and the physical data base.
[Figure 2-4: Elements of the database environment. Users submit transactions through user
programs or direct user queries; the DBMS, with its data definition language, data manipulation
language, and query language, mediates access to the physical data base under the host
operating system; the data base administrator handles system requests and oversees the system
development process.]


Users
Figure 2-4 shows how users access the data base in two ways. First, access can be achieved via
user programs prepared by systems professionals. User programs send data access requests
(calls) to the DBMS, which validates the requests and retrieves the data for processing. Under
this mode of access, the presence of the DBMS is transparent to the users.

The second method of data base access is via direct query, which requires no formal user
programs. The DBMS has a built – in query facility that allows authorized users to process data
independent of professional programmers. The query facility provides a “friendly” environment
for integrating and retrieving data to produce ad hoc management reports. This feature is a most
attractive incentive for users to adopt the data base approach. The query feature is also an
important control issue.

The Database Management System – The DBMS is a complex software package that
enables the user to communicate with the database. The DBMS interprets user commands so that
the computer system can perform the task required. For example, it might translate a command
such as GET CUSTNO, AMOUNT, and INVNO into “retrieve record 458 from disk 09.” The
components of a particular DBMS, by which it accomplishes these objectives, will vary somewhat
from one system to another, but common components found in a typical DBMS are:

A data dictionary/Directory --contains the names and descriptions of every data element in the
database. It also contains a description of how data elements relate to one another. Through the
use of its data dictionary, a DBMS stores data in a consistent manner, thus reducing redundancy.
For example, the data dictionary ensures that the data element representing the number of an
inventory item (named STOCKNUM) will be of uniform length and have other uniform
characteristics regardless of the application program that uses it. The data dictionary also
enforces consistency by preventing users and application developers from adding data elements
that have the same name but different characteristics to the database. Thus, a developer would be
prevented from creating a second inventory record and calling the data element for stock number
INVNUM. Application developers use the data dictionary to create the records they need for the
programs they are developing. The data dictionary checks records that are being developed
against the records that already exist in the database and prevents inconsistencies in data element
names and characteristics from occurring.
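
A hedged sketch of how a data dictionary can enforce this consistency: before a new data element
is accepted, its definition is checked against the element already registered under that name.
STOCKNUM is the example used in the text; the checking code itself is hypothetical.

# Data dictionary: element name -> definition of the data element.
data_dictionary = {"STOCKNUM": {"type": "numeric", "length": 6}}

def register_element(name, definition):
    """Reject a new element whose name already exists with different characteristics."""
    existing = data_dictionary.get(name)
    if existing is not None and existing != definition:
        raise ValueError(f"{name} already defined with different characteristics: {existing}")
    data_dictionary[name] = definition

register_element("STOCKNUM", {"type": "numeric", "length": 6})        # consistent, accepted
try:
    register_element("STOCKNUM", {"type": "character", "length": 10})  # inconsistent, rejected
except ValueError as err:
    print("Rejected:", err)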

NAME: AMT –PAY-BASE
FOCUS NAME: BASEPAY
PC NAME: SALARY
DESCRIPTION: EMPLOYEE’S ANNUAL SALARY
SIZE: 9 BYTES
TYPE: N (NUMERIC)
DATE CHANGED: 01/1/99
OWNERSHIP: COMPENSATION
UPDATE SECURITY: SITE PERSONNEL
ACCESS SECURITY: MANAGER, COMPENSATION PLANNING AND RESEARCH
MANAGER, JOB EVALUATION SYSTEMS
MANAGER, HUMAN RESOURCES PLANNING
MANAGER, CLAIMS PAYING SYSTEMS
MANAGER, QUALIFIED PLANS

BUSINESS FUNCTIONS USED BY: COMPENSATION


HR PLANNING
EMPLOYMENT
INSURANCE
PENSION
PROGRAMS USING: PI 01000
PI 02000
PI 03000
PI 04000

REPORTS USING: REPORT 124 (SALARY INCREASE TRACKING REPORT)


REPORT 448 (GROUP INSURANCE AUDIT REPORT)
REPORT 452 (SALARY REVIEW LISTING)
PENSION REFERENCE LISTING

Figure 2.6 illustrates a sample data dictionary report that shows the size, format, meaning, and
uses of a data element in a human resources database. A data element represents a field.

Because of the data dictionary, an application program does not have to specify the
characteristics of the data it wants from the database. It merely requests the data from the DBMS.
This may permit you to change the characteristics of a data element in the data dictionary
without having to change all the application programs that use the data element.

A DBMS uses two languages – a data definition/description language (DDL) and a data
manipulation language (DML). The DDL is the link between the logical and physical views of
the database. To place a data element in the dictionary, DDL is used to describe the
characteristics of the data element.

Many users and many application programs may utilize the same database; therefore, many
different subschemas can exist. Each user or application program uses a set of DDL statements
to construct a subschema that includes only data elements of interest. The following figure shows
statements from a DDL.

(In the sample below, EDUCATION is the name of the database; the field SNO holds the student
number, its logical name is SNO, its data type is fixed decimal, and its length is 6.)

1.  SCHEMA NAME IS EDUCATION.
2.
3.  RECORD NAME IS STUDENT;
4.     SNO     ; TYPE IS FIXED DECIMAL 6.
5.     SNAME   ; TYPE IS CHARACTER 20.
6.     MAJOR   ; TYPE IS CHARACTER 10.
7.
8.  RECORD NAME IS TEACHER;
9.     TNO     ; TYPE IS FIXED DECIMAL 4.
10.    TNAME   ; TYPE IS CHARACTER 20.
11.    SUBJECT ; TYPE IS CHARACTER 10.

The DDL is used to define the physical characteristics of each record: the fields within the record
and each field’s logical name, data type, and length. The logical name (such as SNAME for the
student name field) is used by application programs and users to refer to a field for the purpose
of retrieving or updating the data in it. The DDL is also used to specify relationships among the
records. The primary functions of the DDL are to:

- Describe the schema and subschema


- Describe the fields in each record and the record’s logical name
- Describe the data type and name of each field.
- Indicate the keys of the record.
- Provide for data security restrictions
- Provide for logical and physical data independence
- Provide a means of associating related data.
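
The DDL statements above follow an older record-oriented style; in a modern relational DBMS the
same EDUCATION schema could be declared with SQL DDL. The sketch below, using Python's
built-in sqlite3 module, is only an approximate translation of that sample, not part of the text.

import sqlite3

# Approximate SQL-DDL translation of the EDUCATION schema shown above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (
        sno    DECIMAL(6) PRIMARY KEY,   -- key of the record
        sname  CHAR(20),
        major  CHAR(10)
    );
    CREATE TABLE teacher (
        tno     DECIMAL(4) PRIMARY KEY,
        tname   CHAR(20),
        subject CHAR(10)
    );
""")
conn.close()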

To ensure uniformity in accessing data from the database, a DBMS will require that standardized
commands be used in application programs to retrieve, sort, display, and delete data or records
from the database. This language is called the data manipulation language, or DML.

The DML usually consists of a series of commands (variety of manipulation verbs), such as
FIND, GET, and INSERT, and operands, for each verb. (An operand is an entity to which an
operation is applied; that which is operated upon.) The following table contains some of these
verbs and corresponding operands.

Verbs Operands
DELETE Record key, field name, record name, or file name
SORT Field name
INSERT Record key, field name, record name, or file name
DISPLAY Record key, field name, record name, or file name
ADD Field name.

The verbs in this table are combined with operands to manipulate data. For example, a command
might be DELETE CUSTNO 5.

Most DMLs interface with high-level programming languages such as COBOL and PL/1. These
languages enable a programmer to perform unique data processing that the DBMS’s DML
cannot perform.

A key feature of a DML is that it uses logical names (such as CUSTNO for customer number)
instead of physical storage locations when referring to data. This capability is possible since the
DDL provides the link between the logical view of data and their physical storage. The functions
of a DML are to:
- Provide the techniques for data manipulation such as the deletion, replacement, retrieval,
sorting, or insertion of data or records.
- enable the user and application programs to process data by using logically meaningful data
name rather than physical storage locations.
- provide interfaces with programming languages, including several high-level languages such as
COBOL, PL/1, and FORTRAN.
- allow the user and application programs to be independent of physical data storage and
database maintenance.
- Provide for the use of logical relationships among data items.

A query language is a set of commands for creating, updating, and accessing data from a
database. Query languages allow programmers, managers and other users to ask ad hoc questions
of the database interactively without the aid of programmers. The most prominent DML query
language today is Structured Query Language, or SQL. SQL is a set of about 30 English-like
commands that has become a standard in the database industry.

The basic form of an SQL command is SELECT ... FROM ... WHERE .... After


SELECT you list the fields you want. After FROM you list the name of the file or group of
records that contains those fields. After WHERE you list any conditions for the search of the
record. For example, you might wish to SELECT all customer names from customer records
WHERE the city in which the customers live is Dire Dawa. So you would enter this:

SELECT NAME, ADDRESS, CITY, ZIP


FROM CUSTOMER
WHERE CITY = ‘DD’

The result would be a list of the names and addresses of all customers located in Dire Dawa.
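
The query above can be run end to end with Python's built-in sqlite3 module. The customer table
layout and the sample rows below are invented for illustration; only the SELECT statement comes
from the text.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, address TEXT, city TEXT, zip TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?, ?)", [
    ("Almaz Bekele",   "12 Main St",  "DD", "3000"),   # Dire Dawa customer
    ("Tesfaye Kidane", "9 Market Rd", "AA", "1000"),   # Addis Ababa customer
])

rows = conn.execute(
    "SELECT NAME, ADDRESS, CITY, ZIP FROM CUSTOMER WHERE CITY = 'DD'"
).fetchall()
for row in rows:
    print(row)    # only the Dire Dawa customers are listed
conn.close()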

Suppose, for example, that a professor of information systems wanted to know at the beginning
of the semester how students performed in the prerequisite computer introduction course
(Computer Introduction 101) and what the students’ current majors are. Using a database
supported by the registrar, the professor would need something similar to the report shown in
Figure 2.5.

Ideally, for such a simple report, the professor could sit at an office terminal connected to the
registrar’s database and write a small application program using the DML to create this report.
The professor first would develop the desired logical view of the data (figure 2.5) for the
application program. The DBMS would then assemble the requested data elements, which may
reside in several different files and disk locations.

The query used by the professor:

SELECT Stud_name, Student.Stud_id, Major, Grade
FROM Student, Course
WHERE Student.Stud_id = Course.Stud_id
AND Course_id = 'CI 101'

The report required by the professor


Student Name ID. No. Major Grade in computer
introduction 101
Lind 468 Accounting A-
Pinekus 332 Marketing B+
Williams 097 Economics C+
Laughlin 765 Accounting A
Orlando 324 Statistics B
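
The professor's join query can likewise be executed with sqlite3. The table layout below is a
hypothetical simplification; the student names, IDs, majors, and grades are taken from the report
above (abridged to two rows).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (stud_id TEXT, stud_name TEXT, major TEXT);
    CREATE TABLE course  (stud_id TEXT, course_id TEXT, grade TEXT);
    INSERT INTO student VALUES ('468', 'Lind', 'Accounting'), ('332', 'Pinekus', 'Marketing');
    INSERT INTO course  VALUES ('468', 'CI 101', 'A-'), ('332', 'CI 101', 'B+');
""")

rows = conn.execute("""
    SELECT stud_name, student.stud_id, major, grade
    FROM student, course
    WHERE student.stud_id = course.stud_id
      AND course_id = 'CI 101'
""").fetchall()
for row in rows:
    print(row)
conn.close()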

Some query languages use a natural-language set of commands. These query languages are
structured so that the commands used are as close to Standard English as possible. For example,
the following statement might be used:
PRINT THE NAMES AND ADDRESSES OF ALL CUSTOMERS WHO LIVE IN DIRE DAWA

Query languages allow users to retrieve data from databases without having detailed information
about the structure of the records and without being concerned about the processes the DBMS
uses to retrieve the data.

A teleprocessing monitor--is a communications software package that manages communications


between the database and remote terminals. Teleprocessing monitors often handle order entry
systems that have terminals located at remote sales locations.
An application development system is a set of programs designed to help programmers to
develop application programs that use the database.
A security software package provides a variety of tools to shield the database from unauthorized
access.
Archiving programs provide the database manager with tools to make copies of the database,
which can be used in case original database records are damaged. Restart /recovery systems are
tools used to restart the database and to recover lost data in the event of a failure.
A report writer allows programmers, accountants, managers and other users to design output
reports without writing an application program in a programming language, such as COBOL.

The Database Administrator

The development of database management systems has created a need for organizational changes
within firms. The focal point of this change is the position of database administrator, or DBA.
The database administrator is charged with managing the organization’s data resources, a job that
often includes database planning, design, operation, training, user support, security, and
maintenance. The role of the DBA requires a person who can relate to top management, systems
analysts, application programmers, users, and system programmers. Such a person needs not
only effective management skills but also a fair amount of technical ability. The deployment of a
database management system in a firm generates a number of changes in the way records and
files are administered. The most prominent change is that data are now shared by many users
instead of being “owned” by one or more users. Thus, payroll records are no longer the property
of the payroll department; they are part of the company’s database.

Another change occurs when data are added to a database. Pooling data in a common database
requires consensus concerning the structure of the data elements. The users of the database must
agree on the nature of each data element and its characteristics such as length, type of data, and
the like.

Another change pertains to data access. Maintaining data integrity and security is an important
role of the DBA. Thus access to data stored in the database must not only be approved, it also
must be made using standard procedures approved by the DBA. Because the data are shared, it is
important to permit only those who need the data to have access to them. The ability to add,
delete or modify existing data must be tightly controlled. The database administrator and his or
her staff perform the following functions:
- Maintain a data dictionary. The data dictionary defines the meaning of each data
item stored in the database and describes the interrelations between data items. The
trend in DBMSs is to combine the functions of the DDL and the data dictionary into an
active data dictionary. It is called “active” because the DBMS continuously refers
to it for all the physical data definitions (field lengths, data types, and so on) that a
DDL would otherwise provide. (An illustrative dictionary entry appears after this list.)
- Determine and maintain the physical structure of the database.
- Provide for updating and changing the database, including the deletion of inactive
records.
- Create, maintain, and edit controls regarding changes and additions to the
database.
- Develop retrieval methods to meet the needs of the users.
- Implement security and disaster recovery procedures
- Maintain configuration control of the database. Configuration control means that
changes requested by one user must be approved by the other users of the
database. One person cannot indiscriminately change the database to the
detriment of other users.
- Assign user access codes in order to prevent unauthorized use of data.
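
As an illustration of the first function above, a single data dictionary entry might record attributes such as the following (the item and values shown are hypothetical, not taken from the text):

Data item name : CUST_NO
Description    : Unique customer account number
Data type      : Numeric
Length         : 6 digits
Used in files  : Customer, Sales Invoice, Cash Receipts
Responsible    : Accounts receivable department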

A database administrator works very closely with users to create, maintain, and safeguard the
database. In effect, the DBA is the liaison between the database and its users, and therefore must
be familiar with users’ information requirements. The administrator must also be technically
competent in the areas of DBMS and data storage and processing.

The Physical Database

This is the lowest level of the data base and the only level that exists in physical form. The
physical data base consists of magnetic spots on magnetic disks. The other levels of the database
(the user view, conceptual view, and internal view) are abstract representations of the physical
level.

In a DBMS, the data must be stored on direct- access devices like magnetic disks. However, well
managed installations create backup copies of the database on off-line storage media such as
magnetic tape. These security measures are extremely important in a data base environment,
since many departments and application programs may be dependent on a single, centralized
database.

Database management systems are designed to optimize the use of physical storage and CPU
processing time. The logical view may contain redundant data items in order to make them more
understandable to users, but the physical implementation of the DBMS attempts to eliminate such
redundancy in physical storage. The DBMS also uses other techniques to optimize resource
utilization. Data records that are seldom used may be placed on inexpensive, slow memory
devices, where as frequently used data may be put on faster but more expensive media.

VIEWS OF THE DATABASE

Any database contains two types of data: the actual data, such as employee names, hourly wage
rates, and hours worked, and information about the data. That is, it contains (1) the definitions of
each data element and (2) how each data element relates to other data elements. The data kept
about data are called metadata (data about data; data describing the structure, data elements,
interrelationships, and other characteristics of a database).

The data in a database are organized both logically and physically. Logical organization of data
in a database takes place when you conceptually arrange data elements into a record. For
example, these data elements can be logically organized into a payroll record.

1. Employee number (primary key)


2. Employee last or family name
3. Employee first or given name
4. Employee middle initial
5. Employee zone
6. Employee woreda
7. Employee Kebelle
8. Employee telephone number

We may also logically organize all employee records into a file called the employee master file.
However, there is no assurance that the employee records in the file will be physically stored
contiguously on the storage media used. Storing data randomly with a hashing algorithm may
scatter records and parts of records that logically belong together all over a hard disk. Thus, the
data elements of a logical record may be stored in many different actual locations on disk. A
physical record is a collection of data elements grouped together on a disk. A physical record
may contain the data elements of more than one logical record. In the same way, records in a
logical file may be stored in many different actual locations on disk. A physical file is a
collection of records actually grouped together on a disk track.

In other words, the logical organization of records and files is the way these items appear to the
user. The physical organization of records and files is the way these items are actually stored on
disk or tape.

In a DBMS the data might be physically disaggregated and stored on magnetic disk according to
some complex addressing mechanism; but the DBMS assumes the responsibility of aggregating
the data into a neat, logical format whenever the application program needs it. This frees
application programmers from having to worry about tracks and cylinders, and lets them
concentrate on the business aspects of the problem to be solved.

The following Figure shows how the DBMS insulates the user from physical storage details. The
user or application programmer can refer to data items by using meaningful names, such as
CUSTOMER NAME and TOTAL- PURCHASE. He or she no longer has to worry about specifying
things like the number of bytes in a field.

Figure: application programs work with a logical view of the data; the DBMS maps that logical
view onto the physical view of the database.

The Conceptual View

The database itself may be viewed from at least three perspectives.


The conceptual view of the database is a logical view: it is how the database appears to be
organized to the people who designed it. The conceptual view, also called the schema, is a global
view of the entire database, and it is the view usually used by the DBA. The conceptual view
includes all the data elements in the database and how these data elements logically relate to each
other. The DBA defines those data elements and the relationship of each data element to the
others.

The External View
Another, less comprehensive view is the external or user view of the database. This view is also
called a subschema and is usually used by an application programmer, an application program, or
a user. The external view encompasses only a subset of the data elements in the entire database,
those data elements needed by one application program or user. Thus each application program
holds an external view of the database.

Both the conceptual and external views are logical views of the database; that is, these views are
not concerned with how data are physically organized on cylinders, tracks, and sectors. The
conceptual and external views both describe the logical organization of the data elements and
how each data element relates to another. Their difference is that the conceptual
view includes all the data elements and their relationships whereas an external view includes
only those data elements and relationships needed by an application program (see the following
figure)
Figure: record types 1 through 4 and their data elements. External view 1 comprises selected
data elements from record types 1 and 3; external view 2 comprises selected data elements from
record types 2 and 4. Each external view contains only the elements its application requires.

External views of the data provide one means of making the system secure. Users of one application
program, for example, may be restricted to their own views of the database. The DBMS may not
allow users of an application program to use data beyond their views.

The Internal View


A third view of the database is the internal or physical view. This view is usually the one taken
by the systems programmer. The system programmer is concerned with the actual physical
organization and placement of the data elements in the database. The internal view is a physical
or hardware view of the database. The system programmer designs and implements this view by
allocating cylinders, tracks, and sectors for the various segments of the database so various
programs run as smoothly and efficiently as possible.

Summary for the three views of a database

1. Conceptual View
a. Held by the database administrator
b. Involves the identification and description of all data elements and their relationships to
other data elements
c. Reflects a logical view of the entire database

2. External or User View
a. Held by the application programmer, application program, or user
b. Consists of the identification and description of each data element needed for a given
application.
c. Constitutes a logical view of one part of the database
3. Internal View
a. Held by the systems programmer
b. Involves the organization and placement of the actual data on the physical storage media.
c. Represents a physical view of the database.

Physical Design Considerations


File Organization Techniques
Record Access Methods
Data Structures

Physical Design
Provide good performance
– Fast response time
– Minimum disk accesses

Disk Assembly

– Cylinder
– Track
– Page or sector

Access time = seek time + rotational delay + transfer time

Definitions
 Seek time
– Average time to move the read-write head
 Rotational delay
– Average time for the sector to move under the read-write head
 Transfer
– Time to read a sector and transfer the data to memory
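
As a rough worked example (the timing figures below are assumed for illustration, not given in the text): with an average seek time of 8 ms, an average rotational delay of 4 ms, and a transfer time of 0.5 ms per sector, the access time for one physical record is approximately 8 + 4 + 0.5 = 12.5 ms.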

More Definitions
 Logical Record
– The data about an entity (a row in a table)
 Physical Record
– A sector, page or block on the storage medium
 Typically several logical records can be stored in one physical record.

Record Access and File Organization


Record Access

 Sequential
 Random

The record access method is dependent upon the physical medium on which the file is stored.
Magnetic tape is sequential by its very nature. To read a record you must start at the beginning of
the tape and sequentially read each record until you get to the one you want. With disks, of
course, random access is possible. It is the same as the difference between audio cassette and
audio compact disc. With audio tape you have to start at the beginning and run the tape forward
until you get to the song you want to hear. With compact disc you can play the songs in random
order or go directly to the track you want to hear.

However, not only must the medium allow for random access to records, but the file itself must
support going directly to the record you want to retrieve. This characteristic of the file is called
"file organization."

File Organization Techniques


Three techniques
– Heap (unordered)
– Sorted
 Sequential (SAM)
 Indexed Sequential (ISAM)
– Hashed or Direct

A file, even if it is stored on a magnetic disk or CD-ROM disk, may have a sequential file
organization. The records in this kind of sequential file (even though the file is on a medium that
allows for direct access) may only be retrieved sequentially.

On the other hand, a file on disk can support random access to records if its file organization is
either Direct or Indexed. With Direct and Indexed file organization random record access is
achieved by means of a "key field." In our by now well-worn example, your university records
are likely to be keyed on your Student ID Number.

With Direct file organization records are assigned relative record numbers based on a
mathematical formula known as a hashing algorithm. For example you input a Student ID
Number, a mathematical formula is applied to it, and the resulting value is the value that points
to the storage location on disk where the record can be found.

An Indexed file includes a table that relates key values (e.g., Student ID Number) to storage
locations of the corresponding records. This table is called the index. It is just like the index of a
book where the key value (topic) has a pointer to the storage location (page number) where the
information is stored.
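
A minimal sketch of these two ideas in Python (illustrative only; the hashing formula, slot count, and index entries are assumptions, not drawn from the text):

NUM_SLOTS = 101                      # assumed size of the reserved disk area

def hash_slot(student_id):
    # Toy hashing algorithm: the key modulo the number of slots gives the record's slot.
    return student_id % NUM_SLOTS

# Indexed organization: a table (the index) relates each key to its storage location.
index = {468: "cyl 3, track 10, sector 2",
         332: "cyl 3, track 10, sector 4"}   # illustrative entries only

print(hash_slot(468))   # slot where record 468 would be placed -> 64
print(index[332])       # location read from the index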

File Organization Supported Record Access

Sequential Sequential

Direct Sequential or Random

Indexed Sequential or Random

Heap File

ID Company Industry Symbl. Price Earns. Dividnd.


1767 Tony Lama Apparel TONY 45.00 1.50 0.25
1152 Lockheed Aero LCH 112.00 1.25 0.50
1175 Ford Auto F 88.00 1.70 0.20
1122 Exxon Oil XON 46.00 2.50 0.75
1231 Intel Comp. INTL 30.00 2.00 0.00
1323 GM Auto GM 158.00 2.10 0.30
1378 Texaco Oil TX 230.00 2.80 1.00
1245 Digital Comp. DEC 120.00 1.80 0.10

Heap File Characteristics


 Insertion
– Fast: New records added at the end of the file
 Retrieval
– Slow: A sequential search is required
 Update - Delete
– Slow:
 Sequential search to find the page
 Make the update or mark for deletion

 Re-write the page

Sequential (Ordered) File

ID Company Industry Symbl. Price Earns. Dividnd.


1122 Exxon Oil XON 46.00 2.50 0.75
1152 Lockheed Aero LCH 112.00 1.25 0.50
1175 Ford Auto F 88.00 1.70 0.20
1231 Intel Comp. INTL 30.00 2.00 0.00
1245 Digital Comp. DEC 120.00 1.80 0.10
1323 GM Auto GM 158.00 2.10 0.30
1378 Texaco Oil TX 230.00 2.80 1.00
1480 Conoco Oil CON 150.00 2.00 0.50
1767 Tony Lama Apparel TONY 45.00 1.50 0.25

Sequential Access
Figure: the records (1122..., 1152..., 1175..., 1231..., and so on) are read one after another in
their stored order; a record can be reached only by reading through the records that precede it.

Sequential File Characteristics

 Older media (cards, tapes)


 Records physically ordered by primary key
 Use when direct access to individual records is not required
 Accessing records
– Sequential search until record is found
 Binary search can speed up access
– Must know file size and how to determine mid-point,

Inserting Records in SAM files


 Insertion
– Slow:
 Sequential search to find where the record goes

 If sufficient space in that page, then rewrite
 If insufficient space, move some records to next page
 If no space there, keep bumping down until space is found
– May use an “overflow” file to decrease time

Deletions and Updates to SAM


 Deletion
– Slow:
 Find the record
 Either mark for deletion or free up the space
 Rewrite
 Updates
– Slow:
 Find the record
 Make the change
 Rewrite

Indexed Sequential
 Disk (usually)
 Records physically ordered by primary key
 Index gives physical location of each record
 Records accessed sequentially or directly via the index
 The index is stored in a file and read into memory when the file is opened.
 Indexes must be maintained

Indexed Sequential Access


 Given a value for the key
– search the index for the record address
– issue a read instruction for that address
– Fast: Possibly just one disk access
Indexed Sequential Access: Fast

Example: find the record with key 777-13-1212. The index relates the highest key in each sector
to its physical address:

Key             Cyl.   Trck   Sect.
279-66-7549      3      10     2
452-75-6301      3      10     3
789-12-3456      3      10     4

Because 777-13-1212 is greater than 452-75-6301 and less than 789-12-3456, the record must be
on Cyl. 3, Trck 10, Sect. 4. That sector is then read and searched sequentially for the key.

Inserting into ISAM files


 Not very efficient
– Indexes must be updated
– Must locate where the record should go
– If there is space, insert the new record and rewrite
– If no space, use an overflow area
– Periodically merge overflow records into file

Deletion and Updates for ISAM


 Fairly efficient
– Find the record
– Make the change or mark for deletion
– Rewrite
– Periodically remove records marked for deletion

Use ISAM files when:


 Both sequential and direct access is needed.
 Say we have a retail application like Foley’s.
 Customer balances are updated daily.
 Usually sequential access is more efficient for batch updates.
 But we may need direct access to answer customer questions about balances.

Direct or Hashed Access


 A portion of disk space is reserved
 A “hashing” algorithm computes record address

Figure: a key value (for example, 455-72-3566) is passed through the hashing algorithm, which
computes the disk address at which the record is stored; a key that hashes to an address already
in use (for example, 376-87-3425) is placed in an overflow area.

Hashed Access Characteristics


 No indexes to search or maintain
 Very fast direct access
 Inefficient sequential access
 Use when direct access is needed, but sequential access is not.

Secondary Indexes
 Provide access via non-key attributes
 Three data structures discussed here:
– Linked lists: embedded pointers
– Inverted lists: cross-index tables
– B-Trees

Simple Linked List: Oil Industry


Head: 1122
ID Company Industry Symbl. Price Earns. Dividnd. Next Rec.
1122 Exxon Oil XON 46.00 2.50 0.75 1378
1152 Lockheed Aero LCH 112.00 1.25 0.50
1175 Ford Auto F 88.00 1.70 0.20
1231 Intel Comp. INTL 30.00 2.00 0.00
1245 Digital Comp. DEC 120.00 1.80 0.10
1323 GM Auto GM 158.00 2.10 0.30
1378 Texaco Oil TX 230.00 2.80 1.00 1480
1480 Conoco Oil CON 150.00 2.00 0.50 -null-
1767 Tony Lama Apparel TONY 45.00 1.50 0.25

Ring: Oil Industry


Conoco points to the head of the list.
Head: 1122
ID Company Industry Symbl. Price Earns. Dividnd. Next Rec.
1122 Exxon Oil XON 46.00 2.50 0.75 1378
1152 Lockheed Aero LCH 112.00 1.25 0.50

1175 Ford Auto F 88.00 1.70 0.20
1231 Intel Comp. INTL 30.00 2.00 0.00
1245 Digital Comp. DEC 120.00 1.80 0.10
1323 GM Auto GM 158.00 2.10 0.30
1378 Texaco Oil TX 230.00 2.80 1.00 1480
1480 Conoco Oil CON 150.00 2.00 0.50 1122
1767 Tony Lama Apparel TONY 45.00 1.50 0.25

Types of List
 Non-dense: Few records (Oil stocks)
 Dense: Many or all records (Symbol)
 Two-way: Next and Prior pointers
 Multi-list: Many lists on the same record type

Processing with Lists


Assume an Oil industry list and the query:

SELECT * FROM STOCKS


WHERE INDUSTRY = “Oil” AND EARNINGS > 2.50

Fetch head of oil industry list.


Until Next Record Pointer = -null-
Fetch oil industry record.
If Earnings > 2.50, include on output list.
End Until.
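
The same traversal, sketched in Python (illustrative only; the record layout and the embedded pointer field mirror the oil industry list shown above, with None standing in for the -null- pointer):

stocks = {
    1122: {"company": "Exxon",  "earnings": 2.50, "next_oil": 1378},
    1378: {"company": "Texaco", "earnings": 2.80, "next_oil": 1480},
    1480: {"company": "Conoco", "earnings": 2.00, "next_oil": None},
}
oil_head = 1122                        # head pointer of the Oil industry list

results = []
record_id = oil_head
while record_id is not None:           # follow the chain until the -null- pointer
    record = stocks[record_id]
    if record["earnings"] > 2.50:
        results.append(record["company"])
    record_id = record["next_oil"]

print(results)                         # -> ['Texaco']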

Problems with lists


 Difficult to maintain
 Broken pointer chains
 Inefficient to search dense lists for one or a few records
Inverted List: Industry
Values Table Occurrences Table
Aero 1152
Auto 1175, 1323
Apparel 1767
Computer 1231, 1245
Oil 1122, 1378, 1480

Earnings and Dividends


Earnings Dividends
Values Occurrences Values Occurrences
1.25 1152 0.00 1231
1.50 1767 0.10 1245

1.70 1175 0.20 1175
2.00 1231, 1480 0.25 1767
2.10 1323 0.30 1323
2.50 1122 0.50 1152
2.80 1378 0.75 1122
1.00 1378

Select * From Stocks Where Earnings > 2.00 and Dividends < 0.60
Earnings Dividends
Values Occurrences Values Occurrences
1.25 1152 0.00 1231
1.50 1767 0.10 1245
1.70 1175 0.20 1175
2.00 1231, 1480 0.25 1767
2.10 1323 0.30 1323
2.50 1122 0.50 1152
2.80 1378 0.75 1122
1.00 1378
Only one record must be fetched !
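
With inverted lists, the query can be answered by intersecting the occurrence lists before touching the data file, as this illustrative Python sketch shows (values copied from the tables above):

earnings = {1.25: [1152], 1.50: [1767], 1.70: [1175], 2.00: [1231, 1480],
            2.10: [1323], 2.50: [1122], 2.80: [1378]}
dividends = {0.00: [1231], 0.10: [1245], 0.20: [1175], 0.25: [1767],
             0.30: [1323], 0.50: [1152], 0.75: [1122], 1.00: [1378]}

# Record IDs satisfying each condition, taken straight from the inverted lists.
high_earners = {rec for value, recs in earnings.items() if value > 2.00 for rec in recs}
low_payers   = {rec for value, recs in dividends.items() if value < 0.60 for rec in recs}

print(high_earners & low_payers)       # -> {1323}; only one record must be fetched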

B-Trees
 Hierarchical (tree) structure
 Balanced
– all paths from top to bottom the same length
 Consist of:
– Index Set: Indexed access for the key field
– Sequence Set: Sequential access for the key field

Database Models

From the previous discussions, we can conclude that a database system is a collection of data
items such as records and tables, and the associations among them. It also incorporates other
elements such as relationships, constraints, and indexes. Organizing a database is therefore not
easy. To simplify matters, one needs a map showing how the different elements of a database
are associated. This map is called a data model. Data modeling is a way of organizing a collection
of information pertaining to a system under investigation. A data model consists of two parts:

- A mathematical notation for describing the data and relationships.


- A set of operations used to manipulate that data.

Every database and database management system is based on a particular database model. A
database model consists of rules and standards that define how data is organized in a database. It
provides a strong theoretical foundation of the database structure. From the theory emerges the
power of analysis, the ability to extract inferences and to create deductions that emerge from the

raw data. Different models provide different conceptualizations of the database and they have
different outlooks and different perspectives. There are different types of database systems.

Conceptual Database structures (models)

Organizing a large database logically into records and identifying the relationships among these
records are complex and time-consuming tasks. Just consider the large number of different
records that are likely to be part of a corporate database and the numerous data elements
constituting those records. Even a small business is likely to have many different record types,
each of which possesses several distinct data elements. Then consider the many relationships
records may have with one another. For example, an invoice can be related to the customer who
purchased the merchandise, the salesperson who sold the merchandise to the customer, the
products included on the invoice, and the warehouse from which the merchandise was picked.

Several general types of record relationships can be represented in a database.


1. One-to-one relationships, as in a single parent record to a single child record or as in a
husband record and wife record in an ideal monogamous society.

One-to-one Husband
relationship
Wife
2. One-to-many relationships, as in a single parent record to two or more child records-for
example, a teacher who teaches three single-section courses.

One-to-many Teacher
relationship

Course 3
Course 1 Course 2

3. Many-to-many relationships, as in two or more parent records to two or more child


records-for example, when two or more students are enrolled in two or more course.

Student 1 Student 2 Student 3

Course 1 Course 2 Course 3

The relationships among the many individual records stored in databases are based on one of
several logical data structures, or models. Database management system packages are designed
to use a specific data structure to provide end users with quick, easy access to information stored
in databases. Five fundamental database structures are the hierarchical, network, relational,
object-oriented, and multidimensional models.

The hierarchical and network models are termed navigational models because they possess
explicit links or paths among data elements, whereas the relational model is based upon implicit
linkages among data elements.

The Hierarchical Database Model

In a hierarchical database, records are logically organized into a hierarchy of relationships. This
was a popular approach to data representation because it reflected, more or less faithfully, many
aspects of an organization that are hierarchical in relationship. A hierarchically structured
database is arranged logically in an inverted tree pattern. For example, an equipment database,
diagrammed below, may have building records, room records, equipment records, and repair
records. The database structure reflects the fact that repairs are made to equipment located in
rooms that are part of buildings.
Root:                 BLDG 1                           (parent of the rooms)
Children of root:     Room 1, Room 2                   (siblings; parents of the equipment)
Children of rooms:    EQUIP 1, EQUIP 2, EQUIP 3        (siblings; parents of the repairs)
Leaves:               Repair 1, Repair 2, Repair 3     (siblings; children of the equipment)

All records in a hierarchy are called nodes. Each node is related to the others in a parent-child
relationship. Each parent record may have one or more child records, but no child record may
have more than one parent record. Thus, the hierarchical data structures implements one-to-one
and one-to-many relationships.

The top parent record in the hierarchy is called the root record, and the lowest file in a particular
branch is called a leaf. Files at the same level with the same parent are called siblings.
This structure is also called a tree structure.

A NAVIGATIONAL DATA BASE

The hierarchical data model is called a navigational database because traversing (navigating) it
requires following a predefined path. This is established through explicit linkages (pointers)
between related records. The only way to access data at lower levels in the tree is by moving
progressively downward from a root and along the branches (pointers) of the tree until the
desired record. For example, consider the partial database in the following figure. To retrieve an
employee record, the DBMS must first access the department record (the root). That record
contains a pointer to project record, which points to the employee record.

Portion of a hierarchical database
Figure: a department data element (the root) is the parent of the Project A and Project B data
elements, which in turn are the parents of the Employee 1 and Employee 2 data elements.

DATA LINKAGES IN THE HIERARCHICAL MODEL. The following figure shows the data
structures and linkages for the partial customer database. The purpose of this example is to
illustrate the navigational nature of the linkages between related records.

Figure (data linkages in the hierarchical model): the customer record (cust # 1875, J. Smith,
Kebelle 18, current balance 1,820) carries pointers to the previous and next customer records and
to the head records of its sales invoice list and its cash receipts list. The sales invoice record
(invoice # 1921, $800, ship date 2/10/91) points to the next invoice for customer 1875 and to its
line item records (for example, item # 9215, quantity 10, unit price 45.00, extended price 450.00);
the cash receipts records for customer 1875 are likewise chained by next-record pointers.

Assume that we wish to retrieve, for inquiry purposes, all data pertinent to a particular sales
invoice (number 1921) for the customer John Smith (account number 1875). An access method
used for this situation is the hierarchical indexed direct access method (HIDAM). Under this
method, the root segment (the customer file) of the database is organized as an indexed
sequential file. Lower-level records (sales invoice and invoice line item records) use pointers in a
linked-list arrangement.

The pointer in the customer record (the level above) directs the access method to the head record in
the appropriate linked list. The access method then compares the key value sought (invoice
number 1921) against each record in the list until it finds a match. Pointers in these records
identify the supporting detail records (the specific items sold) in the invoice line item file. The
structure of the line item file is also a linked-list arrangement. The sales invoice and line item
records are returned via the operating system and the DBMS to the user’s application for processing.

LIMITATIONS OF THE HIERARCHICAL MODEL – The hierarchical model presents a limited view
of data relationships. Based on the proposition that all business relationships are hierarchical (or
can be represented as such), this model does not always reflect reality. The following rules,
which govern the hierarchical model, reveal its operating constraints:
1. A parent record may have one or more child records. For example, in previous figure,
customer is the parent of both sales invoice and cash receipts.
2. No child record can have more than one parent

The second rule is often restrictive and limits the usefulness of the hierarchical model. Many
firms need a view of data associations that permit multiple parents such as that represented
below.

Figure (A), the natural relationship: the sales invoice file is the child of both the salesperson file
and the customer file. Figure (B), the hierarchical representation: the sales invoice file is
duplicated under each parent, which introduces data redundancy.

Management, wishing to keep track of sales orders by customer and by salesperson, will wish to
view sales order records as the logical child of both parents. However, this relationship,
although logical, violates the single-parent rule of the hierarchical model. Figure (B) shows the
most common way of resolving this problem. By duplicating the sales invoice file, we create two
separate hierarchical representations. Unfortunately, we achieve this improved functionality at a
cost: increased data redundancy. The network model deals with this problem more efficiently.

Hierarchically structured databases are less flexible than other database structures because the
hierarchy of records must be determined and implemented before a search can be conducted. In
other words, the relationships between records are relatively fixed by the structure. Ad hoc
queries made by users that require different relationships than are already implemented in the
database may be difficult or time consuming to accomplish. For example, a manager may wish to
identify vendors of equipment with a high frequency of repair. If the equipment record contains
the name of the original vendor, such a query could be performed fairly directly. However, data
describing the original vendor may be contained in a record that is part of another hierarchy. As a
result, there may be no established relationship between the two, and creating a new relationship
in a large database is not a minor task.

A hierarchical database management system usually processes structured, day-to-day operational


data rapidly. In fact, the hierarchy of records is usually specifically organized to maximize the
speed with which large batch operations such as payroll or sales invoices are processed.
Any group of records with a natural hierarchical relationship to one another fits nicely within this
structure.

THE NETWORK DATABASE MODEL

A network database structure views all records in sets. Each set is composed of an owner record
and one or more member records. This is analogous to the hierarchy’s parent-children
relationship. Thus, the network model implements the one-to-one and the one-to-many record
structures. However, unlike the hierarchical model, the network model also permits a record to
be a member of more than one set at one time. The network model would permit the equipment
records to be the children of both the room records and the vendor records. This feature allows the
network model to implement the many-to-one and the many-to-many relationship types.

DATA LINKAGES IN A SIMPLE NETWORK - As with the hierarchical model, the network model is
a navigational database with explicit linkages between records. However, whereas the
hierarchical model allows only one path, the network model supports multiple paths to a
particular record. The structure can be accessed at either of the root level records (salesperson or
customer) by hashing their respective primary keys (SP # or cust #) or by reading their addresses
from an index. The path to the child record is explicitly defined by a pointer field in the parent
record. Notice the structure of the sales invoice file. In this example, each child now has two
parents and contains explicit links to other records that form linked-lists related to each parent.
For example, Invoice Number 1 is the child of salesperson number 1 and customer number 5.
This record has two links to related records. The first is a salesperson (SP) link to invoice
number 2. This represents a sale by salesperson number 1 to customer number 6. The second
pointer is the customer (C) link to invoice number 3. This represents the second sale to customer
number 5, which was processed this time by salesperson number 2.
Figure (linkages in a network database): the salesperson file (SP 1, SP 2) and the customer file
(Cust 5, Cust 6) are the root-level files. Sales invoice records #1 through #4 are each the child of
one salesperson and one customer (for example, invoice #1 belongs to SP 1 and Cust 5), and each
invoice record carries a salesperson (SP) link and a customer (C) link, so the invoices form one
linked list for each salesperson and one for each customer.

DATA LINKAGES IN A MANY-TO-MANY RELATIONSHIP. The M:M association is a two-way


relationship in which each file in the set is both parent and child.

Navigating an M:M association requires creating a separate link file that contains pointer records
in a linked-list structure. The following figure illustrates the link file between the inventory and
vendor files. Notice that each inventory and vendor record exists only once, but there is a
separate link record for each item a vendor supplies and for each supplier of a given inventory
item. This arrangement allows us to find all vendors of a given inventory item and all inventories
supplied by each vendor.

Link files may also contain accounting data. For example, the link file in this figure shows that
the price for inventory number 1356 from vendor number 1 ($10) is not the same price charged
by vendor number 3 ($12); similarly, the delivery time (days lead time) and the discount offered
(terms) are different. Data that are unique to the item-vendor associations are stored in the unique
link file record. Transaction characteristics such as these can vary between vendors and even
between different items from the same vendor.

Unlike hierarchical data structures that require specific entrance points to find records in a
hierarchy, network data structures can be entered and traversed more flexibly.

Figure (a link file in a many-to-many relationship): the inventory file holds one record for each
inventory number (1356, 1730, 2512) and the vendor file holds one record for each vendor (1, 2,
3). The inventory/vendor link file contains a separate record for every item-vendor association;
each link record stores the data unique to that association, together with pointers that chain all
vendors of an item and all items of a vendor. For example, item 1356 costs $10 from vendor 1
but $12 from vendor 3, and the two associations carry their own lead times (5 and 3 days) and
terms (2/12/20).

THE RELATIONAL DATABASE MODEL

Both the hierarchical and network data structures require explicit relationships, or links, between
records in the database. Both structures also require that data be processed one record at a time.
The relational database structure departs from both these requirements.

The relational model represents all data in the database as simple two dimensional tables called
relations. The tables appear similar to flat files, but the information in more than one file can be
easily extracted and combined. Sometimes the tables are referred to as files.

Each relation consists of named columns, which are attributes (data elements or fields).

The following figure shows a sample table of orders. In a relation (table) the rows are unique
records and the columns are fields. Another term for a row or record in a relation is a tuple.
Often a user needs information from a number of relations to produce a report. Here is the
strength of the relational model: It can relate data in any file or table to data in another file or
table as long as both tables share a common data element.

Table (relation) – the columns are fields or attributes; the rows are records or tuples:

Order Number   Order Date   Deliver Date   Part Number   Part Amount   Order Total
1634           02/02/05     02/22/05       152            2            144.50
1635           02/12/05     02/29/05       137            3             79.70
1636           02/10/05     03/01/05       145            1             24.30

Properly designed tables possess the following six characteristics:


1. Values are atomic.
2. Column values are of the same kind.
3. Each row is unique.
4. The sequence of columns is insignificant.
5. The sequence of rows is insignificant.
6. Each column must have a unique name.

Values Are Atomic--This property implies that columns in a relational table are not repeating
groups or arrays. Such tables are referred to as being in the "first normal form" (1NF). The atomic
value property of relational tables is important because it is one of the cornerstones of the
relational model.

The key benefit of the one value property is that it simplifies data manipulation logic.

Column Values are of the Same Kind--In relational terms this means that all values in a
column come from the same domain. A domain is a set of values which a column may have. For
example, a Monthly Salary column contains only specific monthly salaries. It never contains
other information such as comments, status flags, or even weekly salary.

This property simplifies data access because developers and users can be certain of the type of
data contained in a given column. It also simplifies data validation. Because all values are from
the same domain, the domain can be defined and enforced with the Data Definition Language
(DDL) of the database software.

Each Row is Unique--This property ensures that no two rows in a relational table are identical;
there is at least one column, or set of columns, the values of which uniquely identify each row in
the table. Such columns are called primary keys. This property guarantees that every row in a
relational table is meaningful and that a specific row can be identified by specifying the primary
key value.

The default for a primary key is always not null because if the key has no value it cannot serve
its purpose to identify an instance of an entity.

The Sequence of Columns is Insignificant--This property states that the ordering of the
columns in the relational table has no meaning. Columns can be retrieved in any order and in
various sequences. The benefit of this property is that it enables many users to share the same
table without concern of how the table is organized. It also permits the physical structure of the
database to change without affecting the relational tables.

The Sequence of Rows is Insignificant-- This property is analogous to the one above but applies
to rows instead of columns. The main benefit is that the rows of a relational table can be
retrieved in different order and sequences. Adding information to a relational table is simplified
and does not affect existing queries.

Each Column Has a Unique Name-- Because the sequence of columns is insignificant,
columns must be referenced by name and not by position. In general, a column name need not be
unique within an entire database but only within the table to which it belongs.

DATA LINKAGES IN THE RELATIONAL MODEL – The data linkages in the relational model are
implicit. The conceptual relationships among files in all three database models are the same, but
note the absence of explicit pointers in the relational tables.

Whereas the user of navigational databases sees data represented as a tree or network structure
with explicit paths for traversing the data, users of relational systems see data as a collection of
independent tables. No pointer or other explicit links connect related tables. Relations are formed
by an attribute that is common to both tables in the relation. For example, the primary key of the
customer table (cust #) is an embedded foreign key in both the sales invoice and cash receipts
tables.

Similarly, the primary key in the sales invoice table (Invoice #) is a foreign key in the line item
table. Note that the line item table uses a composite primary key comprising two fields- invoice #
and Item #. Both fields are needed to identify each record in the table uniquely, but only the
invoice number portion of the key provides the logical link to the sales invoice table.

Records in related tables are logically connected by the DBMS, which searches the specified
tables for records with a known key value. For example, if a user wants all the invoices for
customer 1875, the systems would search the sales invoice table for records with a foreign key
value of 1875. We see from the following figure that there is only one occurrence – invoice
1921. To obtain the line item details for this invoice, a search is made of the line item table for
records with a foreign key value of 1921. Two records are retrieved.
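
Expressed in SQL, the retrieval just described might look like this (the table and column names are illustrative renderings of the tables shown below, not definitions given in the text):

SELECT Sales_Invoice.Invoice_No, Item_No, Qnty, Unit_Price, Extended_Price
FROM Sales_Invoice, Line_Item
WHERE Sales_Invoice.Invoice_No = Line_Item.Invoice_No
AND Sales_Invoice.Cust_No = 1875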

The nature of the association between two tables determines the method used for assigning
foreign keys. Where the association is one-to-one, it does not matter which table’s primary key is
embedded in the other as a foreign key. In one-to-many associations, the primary key on the
“one” side is embedded as the foreign key on the “Many” side. For example, one customer may
have many invoice and cash receipts records.

Customer table (primary key: Cust #):

Cust #   Name       Address      Current balance
1875     J. Smith   18 Kebelle   1,820.00
1876     G. Adams   21 Kebelle   2,400.00
1943     J. Hobbs   16 Kebelle     549.87

Sales Invoice table (primary key: Invoice #; Cust # is an embedded foreign key):

Invoice #   Cust #   Amount   Ship date
1921        1875     800      2/10/2000

Line Item table (composite primary key: Invoice # and Item #; Invoice # serves as the foreign key):

Invoice #   Item #   Qnty   Unit price   Extended price
1918        8312      1      84.50        84.50
1921        9215     10      45.00       450.00
1921        3914      1     350.00       350.00

Cash Receipts table (primary key: Remit #; Cust # is an embedded foreign key):

Remit #   Cust #   Amount received   Date received
1362      1875     800               2/30/2000

Therefore, cust # is embedded in the records of the sales invoice and cash receipts tables.
Similarly, there is a one-to-many association between the sales invoice and line item tables.
Many-to-many associations between tables do not use embedded foreign keys. Instead, a
separate link table containing keys for the related tables must be created.

Finally, the assertion that the relational model employs no physical links refers specifically to the
user’s perception of the data base. The physical database, however, may employ any data
structure discussed so far (such as hashing, indexes, and pointers) as long as the structure does
not prevent the user from viewing data in tabular form. The most commonly used structure for
relational databases is the indexed sequential file.

RULES FOR RELATIONAL DATABASE MANAGEMENT SYSTEMS

A system is relational if it:


1. Presents data to users as tables
2. Supports the relational algebra functions of restrict, project, and join without requiring any
definitions of access paths (links) to support these operations. In a relational database, these three
basic operations are used to develop useful sets of data: select (restrict), project, and join.

Restrict: Extracts specified rows from a specified table. This operation creates a new table that is
a subset of the original table. In other words this operation creates a subset of rows that meet
stated criteria.
Project: Extracts specified attributes (columns) from a table and creates a new table. The project
operation creates a subset consisting of columns in a table, permitting the user to create new
tables that contain only the information required.
Join: Builds a new table from two tables consisting of all concatenated pairs of rows, one from
each table. In other words, this operation combines relational tables to provide the user with
more information than is available in individual tables.
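
Illustrative SQL versions of the three operations, using the stock and customer data shown earlier (the table and column names here are assumptions made for the example):

Restrict:  SELECT * FROM Stocks WHERE Industry = 'Oil'

Project:   SELECT Company, Price FROM Stocks

Join:      SELECT Name, Sales_Invoice.Invoice_No, Amount
           FROM Customer, Sales_Invoice
           WHERE Customer.Cust_No = Sales_Invoice.Cust_No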

CREATING USER VIEWS FROM NORMALIZED BASE TABLES

The process of creating a user view begins by designing the output reports, documents, and input
screens needed by the user. This usually involves an extensive analysis of user information
needs. Once the analysis is complete, the designer can derive the set of data attributes (the

conceptual user view) necessary to produce these inputs and output. As the physical
representation of the conceptual user view, these reports, documents, and computer screens, are
called physical views. They help the designer understand key relationships among the data.

Normalization
Normalization is a design technique that is widely used as a guide in designing relational
databases. Normalization is essentially a two-step process that puts data into tabular form by
removing repeating groups and then removes duplicated data from the relational tables.

Normalization theory is based on the concept of normal forms. A relational table is said to be in a
particular normal form if it satisfies a certain set of constraints. There are currently five normal
forms that have been defined. In this section, we will cover the first three normal forms that were
defined by E. F. Codd.

Basic Concepts

The goal of normalization is to create a set of relational tables that are free of redundant data and
that can be consistently and correctly modified. This means that all tables in a relational database
should be in the third normal form (3NF). A relational table is in 3NF if and only if all non-key
columns are (a) mutually independent and (b) fully dependent upon the primary key. Mutual
independence means that no non-key column is dependent upon any combination of the other
columns. The first two normal forms are intermediate steps to achieve the goal of having all
tables in 3NF. In order to better understand the 2NF and higher forms, it is necessary to
understand the concepts of functional dependencies and lossless decomposition.

Functional Dependencies

The concept of functional dependencies is the basis for the first three normal forms. A column,
Y, of the relational table R is said to be functionally dependent upon column X of R if and only
if each value of X in R is associated with precisely one value of Y at any given time. X and Y
may be composite. Saying that column Y is functionally dependent upon X is the same as saying
the values of column X identify the values of column Y. If column X is a primary key, then all
columns in the relational table R must be functionally dependent upon X.

A short-hand notation for describing a functional dependency is:

R.x —> R.y

which can be read as in the relational table named R, column x functionally determines
(identifies) column y.

Full functional dependence applies to tables with composite keys. Column Y in relational table
R is fully functionally dependent on X of R if it is functionally dependent on X and not functionally
dependent upon any subset of X. Full functional dependence means that when a primary key is

composite, made of two or more columns, then the other columns must be identified by the entire
key and not just some of the columns that make up the key.

Overview

Simply stated, normalization is the process of removing redundant data from relational tables by
decomposing (splitting) a relational table into smaller tables by projection. The goal is to have
only primary keys on the left hand side of a functional dependency. In order to be correct,
decomposition must be lossless. That is, the new tables can be recombined by a natural join to
recreate the original table without creating any spurious or redundant data.

Sample Data

Data taken from Date [Date90] is used to illustrate the process of normalization. A company
obtains parts from a number of suppliers. Each supplier is located in one city. A city can have
more than one supplier located there and each city has a status code associated with it. Each
supplier may provide many parts. The company creates a simple relational table to store this
information that can be expressed in relational notation as:

FIRST (s#, status, city, p#, qty)

where

s#      supplier identification number (part of the composite primary key)


status status code assigned to city
city name of city where supplier is located
p# part number of part supplied
qty     quantity of parts supplied to date

In order to uniquely associate quantity supplied (qty) with part (p#) and supplier (s#), a
composite primary key composed of s# and p# is used.
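
As a sketch, the FIRST relation could be declared in SQL roughly as follows (data types are assumed, and the text's column names s# and p# are written as s_no and p_no because '#' is not a legal character in standard SQL identifiers):

CREATE TABLE FIRST (
    s_no    CHAR(3)      NOT NULL,     -- supplier identification number (s#)
    status  INTEGER,                   -- status code assigned to the city
    city    VARCHAR(20),               -- city where the supplier is located
    p_no    CHAR(3)      NOT NULL,     -- part number of the part supplied (p#)
    qty     INTEGER,                   -- quantity supplied to date
    PRIMARY KEY (s_no, p_no)
)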

First Normal Form

A relational table, by definition, is in first normal form. All values of the columns are atomic.
That is, they contain no repeating values. Figure 1 shows the table FIRST in 1NF.

Figure 1: Table in 1NF

Although the table FIRST is in 1NF it contains redundant data. For example, information about
the supplier's location and the location's status have to be repeated for every part supplied.
Redundancy causes what are called update anomalies. Update anomalies are problems that arise
when information is inserted, deleted, or updated. For example, the following anomalies could
occur in FIRST:

 INSERT. The fact that a certain supplier (s5) is located in a particular city (Athens)
cannot be added until that supplier supplies a part.
 DELETE. If a row is deleted, then not only is the information about quantity and part lost
but also information about the supplier.
 UPDATE. If supplier s1 moved from London to New York, then six rows would have to
be updated with this new information.

Second Normal Form

The definition of second normal form states that only tables with composite primary keys can be
in 1NF but not in 2NF.

A relational table is in second normal form 2NF if it is in 1NF and every non-key column is fully
dependent upon the primary key.

That is, every non-key column must be dependent upon the entire primary key. FIRST is in 1NF
but not in 2NF because status and city are functionally dependent upon only the column s# of
the composite key (s#, p#). This can be illustrated by listing the functional dependencies in the
table:

s# —> city, status


city —> status
(s#,p#) —>qty

The process for transforming a 1NF table to 2NF is:

1. Identify any determinants other than the composite key, and the columns they determine.
2. Create and name a new table for each determinant and the unique columns it determines.
3. Move the determined columns from the original table to the new table. The determinant
becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the determinant,
which will serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.

To transform FIRST into 2NF we move the columns s#, status, and city to a new table called
SECOND. The column s# becomes the primary key of this new table. The results are shown
below in Figure 2.

Figure 2: Tables in 2NF
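
In SQL, the decomposition just described could be sketched roughly as follows (illustrative only; the column names follow the convention used in the earlier CREATE TABLE sketch, and the CREATE TABLE ... AS SELECT form is supported by most SQL dialects):

CREATE TABLE SECOND AS
    SELECT DISTINCT s_no, status, city FROM FIRST;     -- s_no (s#) becomes the primary key

CREATE TABLE PARTS AS
    SELECT s_no, p_no, qty FROM FIRST;                 -- composite key (s#, p#)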

Tables in 2NF but not in 3NF still contain modification anomalies. In the example of SECOND,
they are:

INSERT. The fact that a particular city has a certain status (Rome has a status of 50) cannot be
inserted until there is a supplier in the city.

DELETE. Deleting any row in SUPPLIER destroys the status information about the city as well
as the association between supplier and city.

Third Normal Form

The third normal form requires that all columns in a relational table are dependent only upon the
primary key. A more formal definition is:

A relational table is in third normal form (3NF) if it is already in 2NF and every non-key column
is non transitively dependent upon its primary key. In other words, all nonkey attributes are
functionally dependent only upon the primary key.

Table PARTS is already in 3NF. The non-key column, qty, is fully dependent upon the primary
key (s#, p#). SUPPLIER is in 2NF but not in 3NF because it contains a transitive dependency. A
transitive dependency occurs when a non-key column that is dependent upon the primary key
is itself the determinant of other columns. The concept of a transitive dependency can be illustrated by
showing the functional dependencies in SUPPLIER:

SUPPLIER.s# —> SUPPLIER.status


SUPPLIER.s# —> SUPPLIER.city
SUPPLIER.city —> SUPPLIER.status

Note that SUPPLIER.status is determined both by the primary key s# and the non-key column
city. The process of transforming a table into 3NF is:

1. Identify any determinants, other than the primary key, and the columns they determine.
2. Create and name a new table for each determinant and the unique columns it determines.
3. Move the determined columns from the original table to the new table. The determinant
becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the determinant,
which will serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.

To transform SUPPLIER into 3NF, we create a new table called CITY_STATUS and move the
columns city and status into it. Status is deleted from the original table, city is left behind to
serve as a foreign key to CITY_STATUS, and the original table is renamed to SUPPLIER_CITY
to reflect its semantic meaning. The results are shown in Figure 3 below.

Figure 3: Tables in 3NF

Putting the original table into 3NF has created three tables. These can be
represented in "pseudo-SQL" as:

PARTS (s#, p#, qty)


Primary Key (s#, p#)
Foreign Key (s#) references SUPPLIER_CITY.s#

SUPPLIER_CITY(s#, city)
Primary Key (s#)
Foreign Key (city) references CITY_STATUS.city

CITY_STATUS (city, status)


Primary Key (city)

Advantages of Third Normal Form

The advantage of having relational tables in 3NF is that it eliminates redundant data which in
turn saves space and reduces manipulation anomalies. For example, the improvements to our
sample database are:

INSERT. Facts about the status of a city, Rome has a status of 50, can be added even though
there is no supplier in that city. Likewise, facts about new suppliers can be added even though
they have not yet supplied parts.

DELETE. Information about parts supplied can be deleted without destroying information about
a supplier or a city.

UPDATE. Changing the location of a supplier or the status of a city requires modifying only one
row.

CHAPTER THREE

SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC)


A system life cycle divides the life of an information system into two stages, systems
development and systems operation and support. First you build it; then you use it, keep it
running, and support it. Eventually, you cycle back from operations and support to
redevelopment. Figure 3.1 illustrates the two life cycle stages and the two key events between
the stages.
- When a system cycles from development to operation and support, a conversion must take place.
At some point, obsolescence occurs and a system cycles from operation to redevelopment.

Figure 3.1: the two life cycle stages over the lifetime of a system. System development (carried
out using a systems development methodology) is followed, after conversion, by system
operation and support (using information technology); when obsolescence occurs, the system
cycles back into development.

A system may be in more than one stage at the same time. For example, version one may be in
operation and support while version two is in development.

What is a systems development methodology? Figure 3.1 also demonstrates that a systems
development methodology implements the development stage of the system life cycle. The
process used to develop information systems (IS) is called a methodology: all
methodologies are derived from a logical system problem-solving process that is sometimes
called a system development life cycle.
A systems development methodology is a very formal and precise system development process
that defines a set of activities, methods, best practices, deliverables, and automated tools for
system developers and project managers to use, to develop and maintain most or all information
systems and software.

Methodologies ensure that a consistent, reproducible approach is applied to all projects.


Methodologies reduce the risk associated with shortcuts and mistakes. Finally, methodologies
produce complete and consistent documentation from one project to the next. These advantages
provide one overriding benefit-as development teams and staff constantly change, the results of
prior work can be easily retrieved and understood by those who follow:
Methodologies can be homegrown; however, many businesses purchase their development
methodology.
General principles that should underlie all systems development methodologies are:
- Principle 1: Get the owners and users involved. Analysts, programmers, and other
information technology specialists frequently refer to “my system”. This attitude creates
an “us-versus-them” conflict between technical staff and the users and management.
Although analysts and programmers work hard to create technologically impressive
solutions, those solutions often backfire because they don’t address the real organization
problems or they introduce new problems. For this reason, system owner and user
involvement is necessary for successful systems development. The individuals
responsible for systems development must make time for owners and users, insist on their
participation, and seek agreement from them on all decisions that may affect them.
- Principle 2: Use a problem-solving approach. A methodology is a problem-solving
approach to building systems. The term problem includes real problems, opportunities for
improvement, and directives from management. The classic problem-solving approach is
as follows:
1. Study and understand the problem and its context.
2. Define the requirements of a suitable solution.
3. Identify candidate solutions and select the “best” solution
4. Design and/or implement the solution
5. Observe and evaluate the solution’s impact, and refine the solution accordingly.
Systems analysts should approach all projects using a problem –solving approach.
Principle 3: Establish phases and activities. All life cycle methodologies prescribe phases and
activities. The number and scope of phases and activities varies from author to author, expert to
expert, and company to company. The phases or activities are:
- Preliminary investigation or system planning
- Problem analysis or system analysis
- Requirements analysis

- Decision analysis or system selection
- Design or detailed design
- Construction
- Implementation

Each phase serves a role in the problem-solving process: some phases identify problems, while
others evaluate, design, and implement solutions.
Principle 4: Establish standards. Organizations should embrace standards for both information systems and the process used to develop those systems. In medium to large organizations, system owners, users, analysts, designers, and builders come and go. Some will be promoted, some will quit, and others will be reassigned. To promote good communication between constantly changing managers, users, and information technology professionals, you must develop standards to ensure consistent systems development. Standards should minimally encompass the
following:
- Documentation
- Quality
- Automated tools
- Information technology

These standards will be documented and embraced within the context of the chosen system
development process or methodology.
The need for documentation standards underscores a common failure of many analysts: the failure to document as an ongoing activity during the life cycle. Documentation should be a working by-product of the entire systems development effort. Documentation reveals strengths and weaknesses of the system to multiple stakeholders (system owners, system users, system designers, and system builders) before the system is built. It stimulates user involvement and reassures management about progress.
Quality standards ensure that the deliverables of any phase or activity meet business and
technology expectations. They minimize the likelihood of missed business problems and
requirements, as well as flawed designs and program errors (bugs). Frequently, quality standards
are applied to documentation produced during development, but quality standards must also be
applied to the technical end products such as databases, programs, user and system interfaces,
and networks.
Automated tool standards prescribe the technology that will be used to develop and maintain information systems and to ensure consistency, completeness, and quality. Today's developers use automated tools (such as Microsoft Access, Visual Basic, or computer-aided systems engineering (CASE) tools) to facilitate the completion of phases and activities, produce documentation, analyze quality, and generate technical solutions.

Finally, information technology standards direct technology solutions and information systems to a common technology architecture or configuration. This is similar to the automated tool standards, except that the focus is on the underlying technology of the finished product, the
information systems themselves. For example, an organization may standardize on specific
computers and peripherals, operating systems, database management systems, network
topologies, user interfaces, and software architectures. The intent is to reduce effort and cost
required to provide high quality support and maintenance of the technologies themselves.
Information technology standards also promote familiarity, ease of learning, and ease of use
across all information systems by limiting the technology choices. Information technology
standards should not inhibit the investigation or use of appropriate emerging technologies
that could benefit the business.
Principle 5: Justify systems as capital investments. Information systems are capital investments, just as a fleet of trucks or a new building is. Even if management fails to recognize an information system as an investment, you should not. When considering a capital investment, two issues must be addressed.

First, for any problem, there are likely to be several possible solutions. The analyst who fails
to look at alternatives may be doing the business a disservice.
Second, after identifying alternative solutions, the systems analyst should evaluate each possible solution for feasibility, especially for cost-effectiveness and risk management.
Cost-effectiveness is defined as the result obtained by striking a balance between the cost of
developing and operating an information system and the benefits derived from the system.
Cost-effectiveness is measured using a technique called cost-benefit analysis. Risk
management is the process of identifying, evaluating, and controlling what might go wrong
in a project before it becomes a threat to the successful completion of the project or
implementation of the information system.
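To make the arithmetic concrete, the comparison can be sketched in a few lines of code. The figures, names, and the simple net-present-value measure below are purely illustrative assumptions, not part of any prescribed methodology:

# Illustrative cost-benefit sketch (all figures are hypothetical).
# Compares the one-time and recurring costs of a proposed system
# against its expected annual benefits over a planning horizon.

def net_benefit(one_time_cost, annual_cost, annual_benefit, years, discount_rate):
    """Net present value of the system over the planning horizon."""
    npv = -one_time_cost
    for year in range(1, years + 1):
        npv += (annual_benefit - annual_cost) / (1 + discount_rate) ** year
    return npv

npv = net_benefit(one_time_cost=120_000,   # development and installation
                  annual_cost=15_000,      # operation and maintenance
                  annual_benefit=55_000,   # escapable costs and new revenue
                  years=5,
                  discount_rate=0.10)
print(f"Net present value over 5 years: {npv:,.2f}")
print("Proceed" if npv > 0 else "Reject or rework the proposal")

A positive result suggests the solution is cost-effective under the stated assumptions; a negative result is one signal that the project should be reworked or dropped.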

Principle 6: Don't be afraid to cancel or revise scope. A significant advantage of the phased approach to systems development is that it provides several opportunities to reevaluate cost-effectiveness and feasibility. There is often a temptation to continue with a project only because of the investment already made. In the long run, canceled projects are less costly than implemented disasters.
Feasibility should be measured throughout the life cycle. The scope and complexity of an
apparently feasible project can change after the initial problems and opportunities are fully
analyzed or after the system has been designed. Thus, a project that is feasible at one point
may become infeasible later.
We advocate a creeping commitment approach to systems development. Using the creeping commitment approach, multiple feasibility checkpoints are built into any systems development methodology. At each feasibility checkpoint, all costs already incurred are considered sunk (meaning not recoverable) and are therefore irrelevant to the decision. Thus, the project should be reevaluated at each checkpoint to determine whether it remains feasible to continue investing time, effort, and resources.

At each checkpoint, the analyst should consider the following options:


- Cancel the project if it is no longer feasible
- Reevaluate and adjust the costs and schedule if project scope is to be increased
- Reduce the scope if the project budget and schedule are frozen and not sufficient to cover
all project objectives.
Principle 7: Divide and conquer. Consider the old saying, "If you want to learn anything, you must not try to learn everything, at least not all at once." For this reason, we divide a system into subsystems and components to more easily conquer the problem and build the larger system. By dividing a larger problem (system) into more easily managed pieces (subsystems), the analyst can simplify the problem-solving process. This divide-and-conquer approach also complements communication and project management by allowing different pieces of the system to be delegated to different stakeholders.
Principle 8: Design systems for growth and change. Many systems analysts develop systems to meet only today's user requirements because of the pressure to develop the system as quickly as possible. Although this may seem to be a necessary short-term strategy, it frequently leads to long-term problems.
System scientists describe the natural and inevitable decay of all systems over time as entropy. When the cost of maintaining the current system exceeds the cost of developing a replacement system, the current system has reached entropy and become obsolete. But system entropy can be managed. Today's tools and techniques make it possible to design systems that can grow and change as requirements grow and change.
Flexibility and adaptability do not happen by accident; they must be built into a system.

3.1. IN-HOUSE SYSTEMS DEVELOPMENT


Organizations usually acquire information systems in two ways: (1) they develop customized systems in-house through formal systems development activities, and (2) they purchase commercial systems from software vendors. Many organizations require systems that are highly tuned to their unique operations. These firms design their own information systems through in-house systems development activities. These systems are developed through a formal process called the systems development life cycle (SDLC), shown in Fig 3.2. A systems development life cycle (SDLC) is a logical process by which systems analysts, software engineers, programmers, and end users build information systems and computer applications to solve organizational problems and needs. It is sometimes called the application development life cycle.

Fig 3.2. The systems development life cycle

- Systems Planning: do the initial investigation; deliver the project proposal and schedules.
- Systems Analysis: do the systems survey; do the feasibility study; determine information needs and system requirements; deliver the systems requirements.
- Conceptual Design: identify and evaluate design alternatives; develop design specifications; deliver the conceptual design.
- System Selection: evaluate the alternatives; deliver the system selection report.
- Detailed/Physical Design: design outputs, the database, and inputs; develop programs and procedures; design controls; deliver the developed system.
- Implementation and Conversion: develop the implementation and conversion plan; install hardware and software; train personnel; test the system; complete documentation; convert from the old to the new system; fine-tune and do the post-implementation review; deliver the operational system.
- Operation and Maintenance: operate the system; modify the system; do ongoing maintenance; deliver the improved system.
Throughout the life cycle, planning must be done and the behavioral aspects of change must be considered.
The SDLC in figure 3.2 is a seven-stage process consisting of two major phases: new systems development and maintenance.
NEW SYSTEMS DEVELOPMENT The first six stages of the SDLC describe the activities that all
new systems should undergo. Conceptually, new systems development involves five steps:
identify the problem, understand what is to be done, consider alternative solutions, select the best
solution, and finally, implement the solution. Each stage of the SDLC produces a set of required
documentation that marks the completion of the stage.
MAINTENANCE - Once a system is implemented, it enters the second phase in its life cycle: maintenance. Maintenance involves changing systems to accommodate changes in user needs.
This may be relatively trivial, such as modifying the system to produce a new report or changing
the length of a data field. Maintenance may also be more extensive, such as making major
changes to an application’s logic and user interface.

FRONT END AND BACK END – The first four stages in the SDLC are often called front-end activities. These are concerned, conceptually, with what the system should do. The last three stages are the back-end activities. These stages deal with the technical issues of how the physical system will accomplish its objectives.
The participants in systems development can be classified into three broad groups: systems professionals, end users, and stakeholders.
Why are accountants involved with the SDLC? The SDLC process is of interest to accountants for two reasons. First, the creation of an information system represents a significant financial transaction that consumes both financial and human resources. Such transactions must be planned, authorized, scheduled, accounted for, and controlled. Accountants are as concerned with the integrity of this process as they are with any manufacturing process that has financial resource implications.
The second and more pressing concern for accountants is with the products that emerge from the
SDLC. The quality of accounting information systems rests directly on the SDLC activities that
produce them. The accountant’s responsibility is to ensure that the systems apply proper
accounting conventions and rules, and possess adequate controls. Therefore, accountants are
concerned with the quality of the process that produces AIS.

3.2. IMPROVING SYSTEMS DEVELOPMENT THROUGH AUTOMATION


System development projects are not always success stories. In fact, by the time they are implemented, some systems are obsolete or defective and must be replaced. Historically, the SDLC has been plagued by three problems that account for most systems failures. These problems are:

- Poorly specified systems requirements


- Ineffective development techniques
- Lack of user involvement in systems development.
What many system developers have come to understand is that meeting the needs of the user is hard to quantify and cannot simply be stated in formal system requirements documentation by including the term "user friendly". So how does a systems analyst determine whether a system meets user needs?
 One of the methods is prototyping. As part of the system analysis phase, a model of the
system is developed and presented to users for feedback. The prototype is an example of
the system analysts' interpretation of what the system is and how it is supposed to
function. Prototyping gives the user the opportunity to make comments and suggestions
that will clarify their needs to the systems analyst. Thus, this iterative process allows the
analyst to more clearly define requirements and components of the user interface.
 CASE = Computer-Aided Systems Engineering is not a methodology or an alternative
way of creating an IS. CASE is the application of information technology to systems
development activities, techniques, and methodologies. CASE tools are software
applications that automate or support one or more phases of a systems development life
cycle. The technology is intended to accelerate the process of developing systems and to
improve the quality of the product.

CASE Tool Framework: CASE tools are classified according to which phases of the life cycle
they support. CASE tools provide some of the following:

 diagramming tools: used to draw the system models required or recommended in most methodologies
 description tools: used to record, delete, edit, and output non-graphical documentation and specifications
 prototyping tools: used to construct system components (inputs, outputs, programs)
 inquiry and reporting tools: used to extract models, descriptions, and specifications
 quality management tools: used to analyze models, descriptions, and prototypes for consistency and conformance to accepted rules
 decision support tools: used to provide information for various decisions that occur during system development
 documentation organization tools: used to assemble, organize, and report information to be reviewed by system owners, users, designers, and builders
 code generator tools: used to generate some parts of the application code
 testing tools: used to help designers test databases and applications before distribution
 version control tools
 housekeeping tools: used to establish user accounts, back up and recover data, and so on

The benefit of CASE is improved productivity. Repetitive programming tasks and certain complicated tasks can be aided by CASE tools. Most CASE products provide tools to create consistently high-quality documentation. CASE tools also enforce standardization and adherence to the design methodology.

 Joint Application Development (JAD) is a process that accelerates the design of information technology solutions. JAD uses customer involvement and group dynamics to accurately depict the user's view of the business need and to jointly develop a solution. Before the advent of JAD, requirements were identified by interviewing stakeholders individually. The ineffectiveness of this interviewing technique, which focused on individual input rather than group consensus, led to the development of the JAD approach.

JAD offers a team-oriented approach to the development of information management solutions that emphasizes a consensus-based problem-solving model. By incorporating facilitated workshops and emphasizing a spirit of partnership, JAD enables system requirements to be documented more quickly and accurately than if a traditional approach were used. JAD combines technology and business needs in a process that is consistent, repeatable, and effective.
JAD can be successfully applied to a wide range of projects, including the following:
1. New systems
2. Enhancements to existing systems
3. System conversions
4. Purchase of a system

The JAD approach provides the following benefits:

1. Accelerates design
2. Enhances quality
3. Promotes teamwork with the customer

4. Creates a design from the customer's perspective
5. Lowers development and maintenance costs

3.3. SYSTEMS PLANNING AND SYSTEMS ANALYSIS


We shall treat the SDLC subject matter conceptually. That is, the focus will be on what is done
rather than how it is done.
Systems Planning
The objective of systems planning is to link individual system projects or applications to the
strategic objectives of the firm. In fact, the basis for the systems plan is the organization’s
business plan, which specifies where the firm plans to go and how it will get there. There must
be congruence between the individual projects and the business plan, or the firm may fail to meet
its objectives. Effective systems planning provides this goal congruence.

Who should do systems planning?


Most firms that take systems planning seriously establish a systems steering committee to
provide guidance and review the status of system projects.
Typical responsibilities for a steering committee include:
1. Resolving conflicts that arise from new systems
2. Reviewing projects and assigning priorities
3. Budgeting funds for systems development
4. Reviewing the status of individual projects under development
5. Determining at various checkpoints throughout the SDLC whether to continue with the
project or terminate it.
Systems planning occurs at two levels: strategic planning and project planning.
STRATEGIC SYSTEM PLANNING
Technically, strategic system planning is not part of the SDLC because the SDLC pertains to
specific applications. The strategic plan is concerned with the allocation of such systems
resources as employees, hardware, software, and telecommunications. It is important that the
strategic plan avoid excessive detail.
PROJECT PLANNING
The purpose of project planning is to allocate resources to individual applications within the
framework of the strategic proposals. This involves identifying areas of user needs, preparing
proposals, evaluating each proposal’s feasibility and contribution to the business plan,
prioritizing individual projects, and scheduling the work to be done. The product of this phase consists of two formal documents: the project proposal and the project schedule.

Project planning includes the following steps:


Recognizing the Problem
The need for a new, improved information system may be manifested in various symptoms. The
point at which the problem is recognized is important. This is often a function of the philosophy
of a firm’s management. The reactive management philosophy characterizes an extreme position;
in contrast to this is the philosophy of proactive management.

Who reports problem symptoms? Typically, symptoms are first reported by lower-level managers and operations personnel. Occasionally, top management initiates a systems request; on rare occasions, computer specialists will identify problem symptoms and initiate a system solution.
Defining the Problem
It is tempting to take a leap in logic from symptom recognition to problem definition. The manager (user) must avoid this. It is important to keep an open mind and avoid drawing conclusions about the nature of the problem that may direct attention and resources in the wrong direction. The manager must specify the nature of the problem as he or she sees it, based on the nature of the difficulties identified.
The manager reports this problem definition to the computer systems professionals within the
firm. This begins an interactive process between the systems professionals and the user, which
results in a formal project proposal that will go before the steering committee for approval.

Specifying System Objectives


There must be harmony between the strategic objectives of the firm and the operational
objectives of the information system. Broad strategic objectives shape the narrower functional
objectives at the tactical and operations levels. These functional objectives lead to the
identification of information needs and serve to set out the operational objectives for the
information system.

Determining Project Feasibility


A preliminary project feasibility study is conducted at this early stage to determine how best to
proceed with the project. By assessing the major constraints on the proposed system,
management can evaluate the project’s feasibility, or likelihood for success, before committing
large amounts of financial and human resources. The acronym TELOS identifies five aspects of project feasibility: technical feasibility, economic feasibility, legal feasibility, operational feasibility, and schedule feasibility.

Preparing a Formal Project Proposal


The systems project proposal provides management with a basis for deciding whether or not to
proceed with the project. The formal proposal serves two purposes. First, it summarizes the
findings of the study conducted to this point into a general recommendation for a new or
modified system. This enables management to evaluate the perceived problem along with the
proposed system as a feasible solution. Second, the proposal outlines the linkage between the
objectives of the proposed system and the business objectives of the firm. It shows that the proposed new system complements the strategic direction of the firm.

Evaluating and Prioritizing Competing Proposals


After a manageable number of system proposals have been received, members of the steering
committee and systems professionals evaluate the pros and cons of each proposal. This is the first major decision point in a project's life cycle.

Assessing the strategic contribution and feasibility of the system: An important step in the evaluation process is to identify those proposals that promise the greatest potential in supporting the business objectives of the firm. Two important strategic objectives are improved operational productivity and improved decision making.

Producing a Project Schedule


The project schedule document formally presents management's commitment to the project. The project schedule is a budget of the time and costs for all the phases of the SDLC. These phases will be completed by a project team selected from systems professionals, end users, and other specialists such as accountants and internal auditors.

Announcing the New System Project


The last step of the planning process, management's formal announcement of the new system to the rest of the organization, is the most delicate aspect of the SDLC. This is an exceedingly important communiqué that, if successful, will pave the way for the new system and help to ensure its acceptance among the user community.

The Accountant’s Role in Systems Planning


During the planning process, accountants are often called on to provide expertise in evaluating
the feasibility of projects. Their skills are particularly needed in specifying aspects of economic
and legal feasibility.
Systems Analysis
Systems analysis is actually a two-step process involving first a survey of the current system and
then an analysis of the user’s needs. A business problem must be fully understood by the systems
analyst before he or she can formulate a solution. An incomplete or defective analysis will lead to an incomplete or defective solution. Therefore, systems analysis is the foundation for the rest of the SDLC.
The deliverable product of this phase is a formal systems analysis report, which presents the
findings of the analysis and recommendations for the new system.

The Survey Step


Most systems are not developed from scratch. Usually, some form of information system and
related procedures are currently in place. The analyst often begins the analysis by determining
what elements, if any, of the current system should be preserved as part of the new system. This
involves a rather detailed system survey. Facts pertaining to preliminary questions about the
system are gathered and analyzed. As the analyst obtains a greater depth of understanding of the
problem, he or she develops more specific questions for which more facts must be gathered. This
process may go on through several iterations. When all relevant facts have been gathered and
analyzed, the analyst arrives at an assessment of the current system. Surveying the current
system has both disadvantages and advantages.

Disadvantages
- The systems analyst can be "sucked in" and then "bogged down" by the task of surveying the current dinosaur system
- Current system surveys stifle new ideas

Advantages
- It is a way to identify what aspects of the old system should be kept.
- To specify the conversion procedures from the old system to the new system, the analyst must know not only what is to be done by the new system but also what was done by the old one. This requires a thorough understanding of the current system.
- It helps the analyst determine conclusively the cause of reported problem symptoms.

GATHERING FACTS - The survey of the current system is essentially a fact-gathering activity. The facts gathered by the analyst are pieces of data that describe key features, situations, and relationships of the system. System facts fall into the following broad classes:
Data sources - These include external entities, such as customers or vendors, as well as internal sources from other departments.
Users - These include both managers and operations users.
Data stores- Data stores are the files, data bases, accounts, and source documents used in the
system.
Processes-Processing tasks are manual or computer operations that represent a decision or an
action triggered by information.
Data flows- Data flows are represented by the movement of documents and reports between
data sources, data stores, processing tasks, and users.
Controls- These include both accounting and operational controls and may be manual
procedures or computer controls.
Transaction volumes - The analyst must obtain a measure of the transaction volumes for a specified period of time. Many systems are replaced because they have reached their capacity. Understanding the characteristics of a system's transaction volume and its rate of growth is an important element in assessing capacity requirements for the new system.
Error rates-Transaction errors are closely related to transaction volume. As a system
reaches capacity, error rates increase to an intolerable level. Although no system is perfect,
the analyst must determine the acceptable error tolerances for the new system.
Resource costs - The resources used by the current system include the costs of labor, computer time, materials (such as invoices), and direct overhead. Any resource costs that disappear when the current system is eliminated are called escapable costs. Later, when we perform a cost-benefit analysis, escapable costs will be treated as benefits of the new system.
Bottlenecks and redundant operations-the analyst should note points where data flows
come together to form a bottleneck. At peak-load periods, these can result in delays and
promote processing errors. Likewise, delays may be caused by redundant operations, such
as unnecessary approvals or sign offs. By identifying these problem areas during the survey
phase, the analyst can avoid making the same mistakes in the design of the new system.
FACT-GATHERING TECHNIQUES – Systems analysts use several techniques to gather the above-cited facts. Earlier, we saw how a JAD session is a valuable source of system facts. However, it is unlikely that all relevant questions will be addressed in a single JAD session. There will still be some gaps in the analyst's understanding that need to be filled. Other fact-gathering techniques include observation, task participation, personal interviews, and reviewing key documents.
Observation – involves passively watching the physical procedures of the system. This allows
the analyst to determine what gets done, who performs the task, when they do it, how they do it,
why they do it, and how long it takes.
Advantages:
- Data gathered by observation can be highly reliable.
- Through observation, the systems analyst can identify tasks that have been missed or inaccurately described by other fact-finding techniques.
- It is relatively inexpensive
Disadvantages
- People are usually uncomfortable being watched; they may perform differently when being observed.
- The work being observed may not involve the level of difficulty or volume normally
experienced during that period.
- The tasks being observed are subject to various types of interruptions.
- People may let you see what they want you to see.
Questionnaires are used to ask more specific, detailed questions and to restrict the user’s
responses. This is a good technique for gathering objective facts about the nature of specific
procedures, volumes of transactions processed, sources of data, users of reports, and control
issues.
Advantages
- They can be answered quickly
- They are relatively inexpensive means for gathering data from a large number of
individuals.
- Individuals are more likely to provide the real facts.
- Responses can be tabulated and analyzed quickly
Disadvantages
- The number of respondents is often low.
- Questionnaires tend to be inflexible
- It is impossible to observe and analyze the respondent's body language.
- There is no immediate opportunity to clarify a vague or incomplete answer to any
question.
Types of Questionnaires
Free format questionnaires: - Offer the respondent greater latitude in the answer. A question is
asked, and the respondent records the answer in the space provided after the question.
Fixed format questionnaires: - Contain questions that require selection of predefined responses.
There are three types of fixed format questions.
1. Multiple- choice questions
The respondent is given several answers, so that he can choose one or more of them
based on instruction
2. Ranking questions

The respondent is given several possible answers which are to be ranked in order of
preference or experience.
3. Rating questions
The respondent is given a statement and asked to use supplied responses to state an opinion.

Personal interview- interviewing is a method of extracting facts about the current system and
user perceptions about the requirements for the new system.
There are two roles assumed:
Interviewer (system analyst) - responsible for organizing and conducting the interview.
Interviewee- (owner & user of systems) are asked to respond to a series of questions.
Advantages
- Interviewing gives the analyst an opportunity to motivate the interviewee to respond
freely and openly to questions.
- It allows the systems analyst to probe for more feedback from the interviewee.
- Interviews permit the systems analyst to adapt or reword questions for each individual.
- Interviews give the analyst an opportunity to observe the interviewee's nonverbal communication.
Disadvantages
- It is a very time-consuming and therefore costly fact finding approach.
- The success of interviews is highly dependent on the systems analyst's human relations skills.
- Interviewing may be impractical due to the location of interviewees
Types of Interviews
There are two types:
Unstructured interviews: - are conducted with only a general goal or subject in mind and with
few, if any, specific questions. The interviewer counts on the interviewee to provide a framework
and direct the conversations.
Structured interviews: - the interviewer has a specific set of questions to ask of the interviewee.
There are two types of interview questions:
1. Open-ended questions- Allow the interviewee to respond in any way that seems
appropriate.
2. Close ended questions: - restrict answers to either specific choices or short, direct
responses.
Reviewing key Documents- the organization’s documents are another source of facts about
the system being surveyed. Examples of these include the following: organizational charts,
job descriptions, accounting records, charts of accounts, policy statements, descriptions of
procedures, financial statements, performance reports, system flowcharts, source documents,
transaction listings, budgets, forecasts, and mission statements.
Following the fact-gathering phase, the analyst formally documents his or her impressions and understanding of the system. This will take the form of notes, system flowcharts, and various levels of data flow diagrams.

The Analysis Step


System analysis is an intellectual process that is commingled with fact gathering. The analyst is
simultaneously analyzing as he or she gathers facts. The mere recognition of a problem presumes
some understanding of the norm or desired state. It is therefore difficult to identify where the
survey ends and the analysis begins.

Systems Analysis Report - The event that marks the conclusion of the systems analysis phase is the preparation of a formal systems analysis report. This report presents to management or the
steering committee the survey findings, the problems identified with the current system, the
user’s needs, and the requirements of the new system.

The Accountant’s Role in Systems Analysis


As a preliminary step to every financial audit, accountants conduct a systems survey to
understand the essential elements of the current system. The accountant’s experience and
knowledge in this area can be a valuable resource to the systems analyst. Similarly, the accountant's understanding of end-user information needs, internal control standards, audit trail requirements, and mandated procedures is of obvious importance to the task of specifying new system requirements.

CHAPTER FOUR
DATA MODELING OF THE BUSINESS PROCESS AND DESIGN CONCEPTUAL VIEWS
Systems models play an important role in systems development. As a systems analyst or user,
you will constantly deal with unstructured problems. One way to structure such problems is to
draw models. A model is a representation of reality. Models can be built for existing systems as a
way to better understand these systems or for proposed systems as a way to document business
requirements or technical designs. An important concept is the distinction between logical and
physical models.
Logical models show what a system is or does. They are implementation independent; that is,
they depict the system independent of any technical implementation. As such, logical models
illustrate the essence of the system. Popular synonyms include essential model, conceptual model,
and business model.

Physical models show not only what a system is or does, but also how the system is physically
and technically implemented. They are implementation dependent because they reflect
technology choices and the limitations of those technology choices. Synonyms include
implementation model and technical model.

Let's say we need a system that can store students' grade point averages. For any given student, the grade point average must be between 0.00 and 4.00, where 0.00 reflects an average grade of F and 4.00 reflects an average grade of A. We further know that a new student does not have a grade point average until he receives his first grade report. Every statement in this paragraph reflects a business fact. It does not matter which technology we use to implement these facts; the facts are constant; they are logical. Thus, grade point average is a logical attribute with a logical domain of 0.00-4.00 and a logical default value of null (meaning nothing).

Now, let's get physical. First we have to decide how we are going to store the above attribute. Let's say we will use a table column in a Microsoft Access database. We need a column name. Let's assume our company's programming standards require the following name: colGradePointAvg.
That is the physical field name that will be used to implement the logical attribute grade point average. Additionally, we could define the following physical properties for the database column colGradePointAvg:
Data type: NUMBER
Field size: SINGLE (as in "single precision")
Decimal places: 2
Default value: Null (as in "none")
Validation rule: BETWEEN 0 AND 4 INCLUSIVE
Required? NO
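The same logical-to-physical mapping can also be sketched in code. The sketch below uses SQLite (through Python's sqlite3 module) rather than Access, purely for illustration; the column name colGradePointAvg and the 0.00-4.00 validation rule come from the example above, while the table name and the REAL data type are assumptions:

# Sketch: implementing the logical attribute "grade point average" as a physical column.
# SQLite stands in for Access here; REAL approximates the single-precision NUMBER field.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Student (
        studentId        INTEGER PRIMARY KEY,
        colGradePointAvg REAL DEFAULT NULL      -- default value: null, not required
            CHECK (colGradePointAvg IS NULL
                   OR colGradePointAvg BETWEEN 0.00 AND 4.00)  -- validation rule
    )
""")
conn.execute("INSERT INTO Student (studentId) VALUES (1)")   # new student: GPA is null
try:
    conn.execute("UPDATE Student SET colGradePointAvg = 4.5 WHERE studentId = 1")
except sqlite3.IntegrityError as err:
    print("Rejected by the validation rule:", err)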
System analysts have long recognized the importance of separating business and technical
concerns (or the logical versus the physical concerns). That is why they use logical system

models to depict business requirements and physical system models to depict technical design.
Systems analysis activities tend to focus on the logical system models for the following reasons.
- Logical models remove biases that are the result of the way the current system is
implemented or the way any one person thinks the system might be implemented. Thus,
we overcome the “we’ve always done it that way” syndrome; consequently, logical
models encourage creativity.
- Logical models reduce the risk of missing business requirements because we are too
preoccupied with technical details. Such errors are almost always much more costly to
correct after the system is implemented. By separating what the system must do from
how the system will do it, we can better analyze the requirements for completeness,
accuracy, and consistency.
- Logical models allow us to communicate with end users in nontechnical or less technical language. Thus, we don't lose business requirements in the technical jargon of the computing discipline.
Data modeling is a technique for organizing and documenting a system’s DATA. Data
modeling is sometimes called database modeling because a data model is eventually
implemented as a database. It is also sometimes called information modeling.

Conceptual Design
The purpose of the conceptual design phase is to produce several alternative conceptual systems
that satisfy the system requirements identified during systems analysis. By presenting users with
a number of plausible alternatives, the systems professional avoids imposing preconceived
constraints on the new system. The user will evaluate these conceptual models and settle on the
alternatives that appear most plausible and appealing. By keeping systems design conceptual
throughout these phases of the SDLC, we minimize the investment of resources in alternative
designs that, ultimately, will be rejected.

An objective of the conceptual systems design phase of the SDLC is to reach consensus between
users and systems professionals on plausible alternative designs for the new system. An excellent
forum for this is prototyping within a JAD session. The use of CASE technology in such a
setting allows users to “cut and paste” together desirable features from competing designs to
construct a set of plausible alternatives. The user’s ability to envision and implement changes to
design is greatly enhanced by these media. When advanced tools and techniques are not
available (or not used), the conceptual design process becomes less interactive and less dynamic,
which may make reaching consensus more difficult. Whatever the method employed, the key to
this process is collaboration between end users and systems specialists to reach a common
understanding.
During this stage the company decides how to meet user needs. The first task is to identify and
evaluate appropriate design alternatives. There are many strategies or techniques for performing
systems design. They include modern structured analysis, information engineering, prototyping,
JAD, RAD, and object-oriented design. These strategies are often viewed as competing
alternative approaches to systems design. In reality, certain combinations complement one
another.
The Structured Design Approach
The structured design approach is a disciplined way of designing systems from the top down. It consists of starting with the big picture of the proposed system, which is gradually decomposed into more and more detail until it is fully understood. Under this approach the business process under design is usually documented by data flow and structure diagrams.

Structured design techniques help developers deal with the size and complexity of programs.

Modern structured design is a process-oriented technique for breaking up a large program into
a hierarchy of modules that result in a computer program that is easier to implement and
maintain (change). Synonyms (although technically inaccurate) are top-down program design
and structured programming.

The concept is simple. Design a program as a top-down hierarchy of modules. A module is a group of instructions: a paragraph, block, subprogram, or subroutine. The top-down structure of these modules is developed according to various design rules and guidelines. (Thus, merely drawing a hierarchy or structure chart for a program is not structured design.)
Structured design is considered a process technique because its emphasis is on the PROCESS
building blocks in our information system-specifically, software processes. Structured design
seeks to factor a program into the top-down hierarchy of modules that have the following
properties:

- Modules should be highly cohesive; that is, each module should accomplish one and only
one function. Theoretically this makes the modules reusable in future programs.
- Modules should be loosely coupled; in other words, modules should be minimally
dependent on one another. This minimizes the effect that future changes in one module will
have on other modules.
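A minimal sketch of these two properties, using invented payroll-style functions (the names and figures are hypothetical, not taken from the text):

# Each module is cohesive (one function, one job) and the modules are loosely
# coupled (they communicate only through parameters and return values).

def gross_pay(hours_worked: float, hourly_rate: float) -> float:
    """Computes gross pay only; knows nothing about taxes or formatting."""
    return hours_worked * hourly_rate

def withholding(gross: float, tax_rate: float = 0.15) -> float:
    """Computes tax withheld only; depends on gross pay, not on how it was derived."""
    return gross * tax_rate

def pay_stub(gross: float, tax: float) -> str:
    """Formats the result only; could be replaced without touching the other modules."""
    return f"Gross: {gross:.2f}  Tax: {tax:.2f}  Net: {gross - tax:.2f}"

# The "boss" module at the top of the hierarchy calls the worker modules.
g = gross_pay(40, 12.50)
print(pay_stub(g, withholding(g)))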

The software model derived from structured design is called a structure chart. The structure
chart is derived by studying the flow of data through the program. Structured design is performed
during systems design. It does not address all aspects of design; for instance, structured design
will not help you design inputs, databases, or files.
Structured design has lost some of its popularity with many of today's applications that call for
newer techniques that focus on event-driven and object-oriented programming techniques.
However, it is still a popular technique for the design of mainframe-based application software
and to address coupling and cohesion issues at the system level.

A simple process model (also called a data flow diagram). The diagram shows a club member placing a member order with a "process member orders" task, which checks the credit rating and limit held in the accounts data store, sends a response to the member, and records the order in the orders data store. Related "process automatic orders" and "process bonus orders" tasks use existing order details to generate revised automatic orders and bonus orders. Orders to be filled flow from these processes to the warehouse.
Top-down decomposition of the structured design approach. The context-level DFD contains processes 1.0 and 2.0. Process 2.0 is decomposed into an intermediate-level DFD containing processes 2.1, 2.2, and 2.3. Process 2.3 is in turn decomposed into an elementary-level DFD containing processes 2.3.1, 2.3.2, and 2.3.3. Finally, process 2.3.3 is represented by a structure diagram of program modules 1 through 8.

We can see from these diagrams how the systems designer follows a top down approach. The
designer starts with an abstract description of the system and through successive steps, redefines
this view to produce a more detailed description. In our example, process 2.0 in the context
diagram is decomposed into an intermediate level DFD.
Process 2.3 in the intermediate DFD is further decomposed into an elementary DFD. This
decomposition could involve several levels to obtain sufficient details. Let’s assume that three
levels are sufficient in this case. The final step transforms process 2.3.3 into a structure diagram that defines the program modules that will constitute the process.
HOW MUCH DESIGN DETAIL IS NEEDED? The conceptual design phase should highlight the
differences between critical features of competing systems rather than their similarities.
Therefore, system designs at this point should be general. The designs should identify all the
inputs, outputs, processes, and special features necessary to distinguish one alternative from
another. In same cases, this may be accomplished at the context diagram level. In situations
where the important distinctions between systems are subtle, designs may need to be represented
by lower-level DFDs and even with structure diagrams. However, detailed DFDs and structure
diagrams are more commonly used at the detailed design phase of the SDLC.

The figure below presents two alternative conceptual designs for a purchasing system. These designs lack the details needed to implement the system. For instance, they do not include such necessary components as:
 Data base record structures.
 Processing details.
 Specific control techniques.
 Formats for input screens and source documents.
 Output report formats.
The designs do, however, possess sufficient detail to demonstrate how the two systems are
conceptually different in their functions. To illustrate, let’s examine the general features of each
system.
Option A is a traditional batch processing system. The initial input for the process is the
purchase requisition from inventory control. When inventories reach their predetermined reorder
points, new inventories are ordered according to their economic order quantity. Transmittal of
purchase orders to suppliers takes place once a day via the mail.

In contrast, Option B employs EDI technology. The trigger to this system is a purchase
requisition from production planning. The purchases system determines the quantity and the
vendor and then transmits the order on-line via EDI software to the vendor.

Both alternatives have pros and cons. The benefits of Option A are its simplicity of design, ease of implementation, and lower demand for systems resources than Option B. A negative aspect of
Option A is that it requires the firm to carry inventories. On the other hand, Option B may allow
the firm to reduce or even eliminate inventories. This benefit comes at the cost of more
expensive and sophisticated system resources. It is premature, at this point, to attempt to evaluate
the relative merits of these alternatives. This is done formally in the next phase in the SDLC. At
this point, system designers are concerned only with identifying plausible system designs.

Alternative conceptual designs for a purchasing system

Option A, batch system: inventory control prepares batches of purchase requisitions from the inventory records; the purchasing process prepares batches of purchase orders using vendor data; the mail room sends them by daily pickup to the post office, which delivers them to the vendor.

Option B, EDI system: a customer order triggers the production planning process, which issues electronic purchase requisitions; the purchasing process, using the bill of materials and vendor data, transmits electronic purchase orders through the firm's EDI system to the vendor's EDI system, which passes an electronic sales order to the vendor's order processing system.

Information Engineering (IE) is a model-driven and DATA-centered, but process-sensitive, technique to plan, analyze, and design information systems. IE models are pictures that illustrate and synchronize the system's data and processes.
The data models in IE are called entity relationship diagrams. The process models in IE use the
same data flow diagrams that were invented for structured analysis. IE’s contribution was to
define an approach for integrating and synchronizing the data and process models.

Information engineering is said to be a DATA-centered paradigm because it emphasizes the study and analysis of DATA requirements before those of the PROCESS and INTERFACE
requirements. This is based on the belief that data is a corporate resource that should be planned
and managed. Accordingly, systems analysts draw entity relationship diagrams ( ERDs) to model
the system’s raw data before they draw the data flow diagrams that illustrate how that data will
be captured, stored, used, and maintained.

Both information engineering and structured analysis attempt to synchronize data and process
models. The two approaches differ only in which model is drawn first. IE draws the data models first, while structured analysis draws the process models first.

A simple data model (also called an entity relationship diagram). The entities are Member, Member Order, Agreement, Product, Promotion, and Club. A member places member orders and is enrolled under an agreement; a member order sells products and is generated by a promotion; a promotion features products and is sponsored by the club; the club establishes agreements.
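Because a data model is eventually implemented as a database, part of this diagram can be sketched as relational tables. The sketch below uses SQLite purely for illustration; the entity names come from the figure, while the column names and keys are assumptions:

# Illustrative mapping of part of the entity relationship diagram to tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Member (
        memberId  INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );
    CREATE TABLE Product (
        productId INTEGER PRIMARY KEY,
        title     TEXT NOT NULL
    );
    -- "Member places Member Order" and "Member Order sells Product"
    CREATE TABLE MemberOrder (
        orderId   INTEGER PRIMARY KEY,
        memberId  INTEGER NOT NULL REFERENCES Member(memberId),
        productId INTEGER NOT NULL REFERENCES Product(productId)
    );
""")
print("Tables created:", [row[0] for row in
      conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")])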

Object-Oriented Analysis and Design. Object-oriented analysis and design are the new kids on the block. Object techniques are an attempt to eliminate the deliberate separation of DATA concerns from PROCESS concerns found in such techniques as structured analysis and design and information engineering. The object-oriented design approach is to build information systems from reusable standard components or objects. This approach may be equated to the process of building an automobile: car manufacturers do not create each new model from scratch. New models are actually built from standard components that also go into other models. For example, each model of car produced by a particular manufacturer may use the same type of engine, gearbox, alternator, radio, and so on. In fact, it may be that the only component actually created from scratch for a new car model is the body.

The concept of reusability is central to the object-oriented approach to system design. Once
created, standard modules can be used in other systems with similar needs. The benefits of this
approach include reduced time and cost for development, maintenance, and testing and improved
user support and flexibility in the development process.

Object-oriented analysis and design (OOAD) attempts to merge the data and process concerns
into singular constructs called objects. Object-oriented analysis and design introduced object
diagrams that document a system in terms of its objects and their interactions.
Business objects might correspond to real things of importance in the business such as customers
and the orders they place for products. Each object consists of both the data that describes the
object and the processes that can create, read, update, and delete that object. With respect to the
information system building blocks, object-oriented analysis and design significantly changes the
paradigm. The DATA and PROCESS columns are essentially merged into a single OBJECT column.
The models then focus on identifying objects, building objects, and assembling appropriate
objects into useful information systems.

Object models are also extendable to the INTERFACE building blocks of the information system
framework. Most contemporary computer user interfaces are already based on object technology.
For example, the Microsoft Windows interface uses standard objects such as windows, frames, drop-down menus, radio buttons, check boxes, scroll bars, and the like. Object programming technologies such as C++, Java, Smalltalk, and Visual Basic are used to construct and assemble such INTERFACE objects.
Object-oriented analysis (OOA) has become so popular that a modeling standard has evolved
around it. The Unified Modeling Language (or UML) provides a graphical syntax for an entire
series of object models, such as the model illustrated in the figure below. The UML defines
several different types of diagrams that collectively model an information system or application
in terms of objects.

An object model (using the Unified Modeling Language standard)

STUDENT: attributes are Id number, Name, and Grade point average; operations are Admit(), Register for classes(), Withdraw(), Change address(), Calculate GPA(), and Graduate(). A student "has record for" the courses taken.
COURSE: attributes are Subject, Number, Title, and Credit; operations are Create a course(), Delete from course master(), and Change in course master().
TRANSCRIPT COURSE: attributes are Semester, Division, and Grade; operations are Add(), Drop(), Complete(), and Change grade().
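Read as code, the same model might look roughly like the sketch below. This is only an illustrative rendering in Python; the attribute and operation names follow the figure, but the method bodies and the GPA calculation are assumptions:

# Sketch of the STUDENT and TRANSCRIPT COURSE objects from the model above.

class TranscriptCourse:
    def __init__(self, semester: str, division: str, grade: float | None):
        self.semester = semester
        self.division = division
        self.grade = grade                       # grade points earned for the course

class Student:
    def __init__(self, id_number: int, name: str):
        self.id_number = id_number
        self.name = name
        self.grade_point_average = None          # null until the first grade report
        self.transcript: list[TranscriptCourse] = []   # "has record for" the courses taken

    def register_for_classes(self, course: TranscriptCourse) -> None:
        self.transcript.append(course)

    def calculate_gpa(self) -> float | None:
        graded = [c.grade for c in self.transcript if c.grade is not None]
        self.grade_point_average = sum(graded) / len(graded) if graded else None
        return self.grade_point_average

s = Student(1001, "A. Student")
s.register_for_classes(TranscriptCourse("Fall", "Day", 4.0))
s.register_for_classes(TranscriptCourse("Fall", "Day", 3.0))
print(s.calculate_gpa())                         # 3.5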

ELEMENTS OF OBJECT ORIENTED APPROACH


A distinctive characteristic of the object-oriented approach is that both data and programming logic, such as integrity tests, accounting rules, and updating procedures, are encapsulated in modules that represent objects. The following are the principal elements of this approach.
Objects. Objects are equivalent to nouns in the English language. For example, vendors, customers, inventory, and accounts are all objects. These objects possess two characteristics: attributes and operations. Attributes are equivalent to adjectives in the English language and serve to describe the objects. Operations are equivalent to verbs and show actions that are performed on objects and that may change their attributes. The figure below illustrates these points with an inventory accounting example: the object is inventory, and its attributes are part number, description, quantity on hand (QOH), reorder point (ROP), order quantity, and supplier number. The operations that may be performed on inventory are reduce inventory (from product sales), review available quantity on hand (QOH), reorder inventory (when QOH < ROP), and replace inventory (from inventory receipts).

Object: Inventory
Attributes: part number, description, QOH, ROP, order quantity, supplier number
Operations: reduce, review quantity, reorder, replace
Instances of the inventory object class: water pump, wheel bearing, alternator

Classes and Instances. An object class is a logical grouping of individual objects that share the same attributes and operations. An instance is a single occurrence of an object within a class. For example, the figure above shows the inventory class consisting of several instances, or specific inventory types.



Inheritance. Inheritance means that each object instance inherits the attributes and operations of
the class to which it belongs. For example, all instances within the inventory class hierarchy
share the attributes of part number, description, and quantity on hand. These attributes would be
defined once and only once for the inventory object. Thus, the object instances of wheel bearing,
water pump, and alternator will inherit these attributes. Likewise, these instances will inherit the
operations (reduce, review, reorder, and replace) defined for the class.
Object classes can also inherit from other object classes.
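The inventory example can be sketched in code as follows. The attribute and operation names are taken from the example above; everything else (values, the derived class) is hypothetical:

# Object class "Inventory" encapsulates its attributes and operations.
# Instances such as the water pump inherit them from the class.

class Inventory:
    def __init__(self, part_number, description, qoh, rop, order_quantity, supplier_number):
        self.part_number = part_number
        self.description = description
        self.qoh = qoh                          # quantity on hand
        self.rop = rop                          # reorder point
        self.order_quantity = order_quantity
        self.supplier_number = supplier_number

    def reduce(self, units):                    # e.g., from product sales
        self.qoh -= units

    def review(self):                           # review available quantity on hand
        return self.qoh

    def reorder_needed(self):                   # reorder when QOH < ROP
        return self.qoh < self.rop

    def replace(self, units):                   # e.g., from inventory receipts
        self.qoh += units

class ServicePart(Inventory):                   # an object class inheriting from another class
    pass

water_pump = Inventory("WP-100", "Water pump", qoh=12, rop=10,
                       order_quantity=25, supplier_number=7)
water_pump.reduce(5)
print(water_pump.review(), water_pump.reorder_needed())   # 7 True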

THE ROLE OF ACCOUNTANTS IN CONCEPTUAL SYSTEMS DESIGN


We established in chapter 1 that designing AIS was a joint effort between the accounting
function of the organization and systems professionals. Accountants are responsible for the
conceptual system (the logical information flows) and systems professionals are responsible for
the physical system (the technical task of building the system). If important accounting considerations are not conceptualized at this point, they may be overlooked completely, thus exposing the organization to financial loss and potential litigation. While participating in the conceptual design process, the accountant must be aware that each alternative system must be adequately controlled, that audit trails must be preserved, and that accounting conventions and legal requirements must be understood. This does not mean that these issues must be specified in detail at this point. It does mean that they should be recognized as items that must be addressed
during the detailed design phase of the system.

SYSTEM EVALUATION AND SELECTION


The next phase in the SDLC is the procedure for selecting the one system from the set of
alternative conceptual designs that will go to the detailed design phase. The systems evaluation
and selection phase is an optimization process that seeks to identify the best system. This
decision represents a critical juncture in the SDLC. At this point, there is a great deal of
uncertainty about the system, and a poor decision here can be disastrous. The purpose of a formal
evaluation and selection procedure is to structure this decision-making process and thereby
reduce both uncertainty and the risk of making a poor decision.

There is no magic formula to ensure a good decision. Ultimately, the decision comes down to
management judgment. The objective is to provide a means by which management can make an
informed judgment. This selection process involves two steps:
1. Perform a detailed feasibility study.
2. Perform a cost-benefit analysis.
The results of these evaluations are then reported formally to the steering committee for final system selection.
PERFORM A DETAILED FEASIBILITY STUDY

We begin the system selection process by reexamining the feasibility factors that were evaluated
on a preliminary basis as part of the systems proposal. Originally, the scores assigned to these
factors were based largely on the judgment and intuition of the systems professional. Now that
specific system features have been conceptualized, the designer has a clearer picture of these
factors. Also, at the proposal stage, these factors were evaluated for the entire project. Now they
are evaluated for each alternative conceptual design.

INDEPENDENT EVALUATION – The task of performing a detailed feasibility study should be carried out by informed but independent evaluators. Objectivity is essential to a fair assessment
of each design. This group should consist of the project manager, a user representative, and
systems professionals who are not part of the project but have expertise in the specific areas
covered by the feasibility study. Also, for operational audit reasons, the group should contain a
member of the internal audit staff. Below are some of the issues that evaluators should consider.
The TELOS acronym introduced in the last chapter can again be used to classify the feasibility
factors to be considered.

Technical Feasibility In evaluating technical feasibility, a well-established and understood
technology represents less risk than an unfamiliar one. If the systems design calls for established
technology, the feasibility score will be high, say 9 or 10. The use of technology that is new (first
release) and unfamiliar to systems professionals who must install and maintain it, or a hybrid of
several vendors’ products, is a more risky option. Depending on the number and combination of
risk factors, the feasibility score for such technology will be lower.
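To illustrate how the scores assigned to the individual TELOS factors might later be combined and compared across competing designs, here is a small sketch; the factor weights, the scores, and the design names are hypothetical values chosen only to show the arithmetic, not figures taken from the text.

# A sketch of combining TELOS feasibility scores into a single weighted score per design.
# The weights, scores, and design names below are hypothetical.

weights = {"technical": 0.25, "economic": 0.25, "legal": 0.15,
           "operational": 0.20, "schedule": 0.15}

designs = {
    "Design A": {"technical": 9, "economic": 7, "legal": 8, "operational": 6, "schedule": 8},
    "Design B": {"technical": 6, "economic": 9, "legal": 8, "operational": 8, "schedule": 7},
}

for name, scores in designs.items():
    weighted = sum(weights[factor] * scores[factor] for factor in weights)
    print(f"{name}: weighted feasibility score = {weighted:.2f}")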

Legal Feasibility In financial transaction processing systems, the legality of the system is always
an issue. However, legality is also an issue for non-financial systems that process sensitive data,
such as hospital patient records or personal credit ratings. Different systems designs may
represent different levels of risk when dealing with such data. The evaluator should be concerned
that the conceptual design recognizes critical control, security, and audit trail issues and that the
system does not violate laws pertaining to rights of privacy and/or the use and distribution of
information.

Operational Feasibility The availability of well-trained, motivated, and experienced users is
the key issue in evaluating the operational feasibility of a design. If users lack these attributes,
the move to a highly technical environment may be risky and will require extensive retraining.
This may also affect the economic feasibility of the system. On the other hand, a user community
that is comfortable with technology is more likely to make a smooth transition to an advanced
technology system. The operational feasibility score of each alternative design should reflect the
expected ease of this transition.

Schedule Feasibility At this point in the design, the system evaluator is in a better position to
assess the likelihood that the system will be completed on schedule. The technology platform,
the systems design, and the need for user training may influence the original schedule. The
systems development technology being used is another influence. The use of CASE, JAD, and
prototyping can significantly reduce the development time in any systems design option.

Economic Feasibility The preliminary economic feasibility study was confined to assessing
management’s financial commitment to the overall project. This is still a relevant issue. If the
economic climate has changed since the preliminary study or if one or more of the competing
designs does not have management’s support, this should now be determined.
The original feasibility study could specify the project’s costs only in general terms. Now that
each competing design has been conceptualized and expressed in terms of its unique features and
processes, designers can be more precise in their estimates of the costs of each alternative. The
economic feasibility study can now be taken a step further by performing a cost-benefit analysis.

PERFORM A COST-BENEFIT ANALYSIS


Cost-benefit analysis helps management to determine whether (and by how much) the benefits
received from a proposed system will outweigh its costs. This technique is frequently used for
estimating the expected financial value of business investments. However, in this case, the
investment is an information system, and the costs and benefits are more difficult to identify and
quantify than those of traditional capital projects. Although imperfect for this setting, cost-
benefit analysis is employed because of its simplicity and the absence of a clearly better
alternative. In spite of its limitations, cost-benefit analysis, combined with feasibility factors, is a
useful tool for comparing competing systems designs.
There are three steps in the application of cost-benefit analysis: identify costs, identify benefits,
and compare costs and benefits. We discuss each of these steps below.

IDENTIFY COSTS. One method of identifying costs is to divide them into two categories: one-
time costs and recurring costs. One-time costs include the initial investment to develop and
implement the system. Recurring costs include operating and maintenance costs that recur over
the life of the system. Table 4-1 shows a breakdown of typical one-time and recurring costs.

TABLE 4-1: ONE-TIME AND RECURRING COSTS

One-Time Costs
Hardware acquisition
Site preparation
Software acquisition
Systems design
Programming and testing
Data conversion from old system to new system
Training personnel

Recurring Costs
Hardware maintenance
Software maintenance contracts
Insurance
Supplies
Personnel

One-Time Costs
Hardware Acquisition This includes the cost of mainframes, minicomputers, microcomputers,
and peripheral equipment, such as tape drives and disk packs. The cost figures for these items can
be obtained from the vendor.
Site Preparation This involves such frequently overlooked costs as building modifications (for
example, adding air conditioning or making structural changes), equipment installation (which
may include the use of heavy equipment), and freight charges. Estimates of these costs can be
obtained from the vendor and the subcontractors who do the installation.
Software Acquisition These costs apply to all software purchased for the proposed system,
including operating system software (if not bundled with the hardware), network control
software, and commercial applications (such as accounting packages). Estimates of these costs
can be obtained from vendors.
Systems Design These are costs incurred by systems professionals performing the planning,
analysis, and design functions. Technically, such costs incurred up to this point are “sunk” and
irrelevant to the decision. The analyst should estimate only the costs needed to complete the
detailed design.
Programming and Testing Programming costs are based on estimates of the personnel-hours
required to write new programs and modify existing programs for the proposed system. System
testing costs involve bringing together all the individual program modules for testing as an entire
system. This must be a rigorous exercise if it is to be meaningful. The planning, testing, and
analysis of the results may demand many days of involvement from systems professionals, users,
and other stakeholders of the system. The experience of the firm in the past is the best basis for
estimating these costs.
Data conversion These costs arise in the transfer of data from one storage medium to another.
For example, the accounting records of a manual system must be converted to magnetic form
when the system becomes computer-based. This can represent a significant task. The basis for
estimating conversion costs is the number and size of the files to be converted.
Training These costs involve educating users to operate the new system. This could be done in an
extensive training program provided by an outside organization at a remote site or through on-
the-job training by in-house personnel. The cost of formal training can be easily obtained. The
cost of an in-house training program includes instruction time, classroom facilities, and lost
productivity.
Recurring Costs
Hardware maintenance This involves the cost of upgrading the computer (increasing the
memory), as well as preventive maintenance and repairs to the computer and peripheral
equipment. The organization may enter into a maintenance contract with the vendor to minimize
and budget these costs. Estimates for these costs can be obtained from vendors and existing
contracts.
Software Maintenance These costs include upgrading and debugging operating systems,
purchased applications, and in-house developed applications. Maintenance contracts with
software vendors can be used to specify these costs fairly and accurately. Estimate of in-house
maintenance can be derived from historical data.
Insurance This covers such hazards and disasters as fire, hardware failure, vandalism, and
destruction by disgruntled employees.
Supplies These costs are incurred through routine consumption of such items as printer ribbons
and paper, magnetic disks, magnetic tapes, and general office supplies.
Personnel Costs These are the salaries of individuals who are part of the information system.
Some employee costs are direct and easily identifiable, such as the salaries of operations
personnel exclusively employed as part of the system under analysis. Some personnel
involvement (such as the data base administrator and computer room personnel) is common to
many systems. Such personnel costs must be allocated on the basis of expected incremental
involvement with the system.

IDENTIFY BENEFITS. The next step in the cost-benefit analysis is to identify the benefits of the
system. These may be both tangible and intangible.
Tangible Benefits Tangible benefits are benefits that can be measured and expressed in financial
terms. Table 4-2 lists several types of tangible benefits.
Tangible benefits fall into two categories: those that increase revenue and those that reduce costs.
For example, assume a proposed EDI system will allow the organization to reduce inventories
and at the same time improve customer service by reducing stock outs. The reduction of
inventories is a cost-reducing benefit. The proposed system will use fewer resources
(inventories) than the current system. The value of this benefit is the dollar amount of the
carrying costs saved by the annual reduction in inventory. The estimated increase in sales due to
better customer service is a revenue-increasing benefit.

TABLE 4-2: TANGIBLE BENEFITS
Increased Revenues
Increased sales within existing markets
Expansion into other markets

Cost Reduction
Labor reduction
Operating cost reduction (such as supplies and overhead)
Reduced inventories
Less expensive equipment
Reduced equipment maintenance

When measuring cost savings, it is important to include only escapable costs in the analysis.
Escapable costs are directly related to the system, and they cease to exist when the system ceases
to exist. Some costs that appear to be escapable to the user are not truly escapable and, if
included, can lead to a flawed analysis. For example, data processing centers often “charge back”
their operating costs to their user constituency through cost allocations. The charge-back rate
they use for this includes both fixed costs (allocated to users) and direct costs created by the activities
of individual users. The figure below illustrates this technique.
Assume the management in User Area B proposes to acquire a computer system and perform its
own data processing locally. One benefit of the proposal is the cost savings derived by escaping
the charge-back from the current data processing center. Although the user may see this as a
$400,000 annual charge, only the direct cost portion ($50,000) is escapable by the organization
as a whole. Should the proposal be approved, the remaining $350,000 of the charge-back does
not go away. This cost must now be absorbed by the remaining users of the current system.
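The short sketch below works through the charge-back figures used in this example; it is illustrative only, and simply shows why only the directly traceable portion of the charge-back is a relevant (escapable) saving.

# Worked sketch of the charge-back example above. Only the directly traceable
# (escapable) portion of User Area B's charge-back is a true saving to the organization;
# the allocated fixed costs simply shift to the remaining users.

total_chargeback_user_b = 400_000   # annual charge-back to User Area B
allocated_fixed_costs   = 350_000   # allocation of the DP center's fixed costs
direct_traceable_costs  = total_chargeback_user_b - allocated_fixed_costs  # 50,000

escapable_saving = direct_traceable_costs            # relevant benefit of the proposal
reallocated_to_other_users = allocated_fixed_costs   # cost that does not go away

print(f"Escapable (relevant) saving: ${escapable_saving:,}")
print(f"Fixed costs absorbed by remaining users: ${reallocated_to_other_users:,}")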

[Figure: Charge-back of data processing costs. The DP center's operating costs, consisting of fixed
costs of $1,200,000 and directly traceable costs of $278,000, are charged back to User Areas A
through E. User Area B's total annual charge-back of $400,000 consists of allocated fixed costs of
$350,000 and direct costs of $50,000.]

Intangible Benefits Table 4-3 lists some common categories of intangible benefits. Although
intangible benefits are often of overriding importance in information system decisions, they
cannot be easily measured and quantified. For example, assume that a proposed point-of-sale
system for a department store will reduce the average time to process a customer sales
transaction from eleven minutes to three minutes. The time saved can be quantified and produces
a tangible benefit in the form of an operating cost saving. An intangible benefit is improved
customer satisfaction; no one likes to stand in long lines to pay for purchases. But what is the
true value of this intangible benefit to the organization? Increased customer satisfaction may
translate into increased sales. More customers will buy at the store, and they may be willing to pay
slightly more to avoid long checkout lines. But how do we quantify this translation? Assigning a
value is often highly subjective.
Systems professionals draw upon many sources in attempting to quantify intangible benefits and
manipulate them into financial terms. Some common techniques include customer (and
employee) opinion surveys, statistical analysis, expected value techniques, and simulation
models. Though systems professionals may succeed in quantifying some of these intangible
benefits, more often they must be content to simply state the benefits as precisely as good
judgment permits.

Because they defy precise measurement, intangible benefits are sometimes exploited for political
reasons. By overstating or understating these benefits, a system may be pushed forward by its
proponents or killed by its opponents.

TABLE 4-3: INTANGIBLE BENEFITS

Increased customer satisfaction


Improved employee satisfaction
More current information
Improved decision making
Faster response to competitor actions
More efficient operations
Better internal and external communications
Improved planning
Operational flexibility
Improved control environment

COMPARE COSTS AND BENEFITS.


The last step in the cost-benefit analysis is to compare the costs and benefits identified in the first
two steps. The two most common methods used for evaluating information systems are net present
value and payback.
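The following sketch shows, in simplified form, how the two methods named above might be applied to one candidate design; the cash-flow figures and the 10 percent discount rate are hypothetical assumptions, not values from the text.

# A minimal sketch of net present value and payback for one candidate system.
# The one-time cost, annual net benefits, and discount rate are hypothetical.

def net_present_value(cash_flows, rate):
    """cash_flows[0] is the (negative) initial outlay; later entries are net annual benefits."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(initial_outlay, annual_net_benefits):
    """Number of years needed for cumulative net benefits to recover the initial outlay."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_outlay:
            return year
    return None  # not recovered within the planning horizon

one_time_cost = 300_000
annual_net_benefit = [90_000, 95_000, 100_000, 100_000, 100_000]  # benefits less recurring costs

npv = net_present_value([-one_time_cost] + annual_net_benefit, rate=0.10)
print(f"NPV at 10%: {npv:,.0f}")
print(f"Payback period: {payback_period(one_time_cost, annual_net_benefit)} years")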

UPDATE ANOMALY – The update anomaly results from data redundancy in an unnormalized
table.
INSERTION ANOMALY – The insertion anomaly arises when new data cannot be added to the
table without also supplying values about an unrelated entity; for example, a new course cannot be
recorded in the enrollment table below until at least one student enrolls in it.

DELETION ANOMALY – The deletion anomaly involves the unintentional deletion of data from a
table.


Normalized tables meet two conditions:
1. All non-key attributes in the table are dependent on (defined by) the primary key.
2. All non-key attributes are independent of the other non-key attributes.
When these conditions are met, the table in question is in third normal form (3NF).
Unnormalized Database of Student Enrollments

Studt #  Student  Major  Course     Crse Desc.     Instr.  Off. Hrs.  Loc.  Tel. Num.  Grade
864      Salhi    Acct   Acct 315   Fin. Acct      Ray     9-11       44    8-4545     A
                         Acct 324   Mgt Acct       Paul    8-10       48    8-8945     A
                         Math 201   Math for Mgt   Jones   1-3        32    8-2345     B
867      Amara    Mgmt   Mgmt 201   Intro Mgmt     Buell   4-5        46    8-3436     C
                         Hist 201   E. Hist        Keno    9-11       34    8-2378     B
986      Mills    Acct   Acct 201   Prin. Acct     Ray     9-11       44    8-4545     B
                         Math 201   Math for Mgmt  Jones   1-3        32    8-2345     B
                         Mgmt 201   Intro Mgmt     Buell   4-5        46    8-3436     C

Beginning with an unnormalized table, the first step in the process is to identify and remove any
repeating groups. Repeating groups are multiple data values at the intersection of rows and
columns. When this is done, the table is in first normal form (1NF). The next step is to identify and
remove any partial dependencies. These are non-key attributes that are dependent on (defined by)
only part of the primary key. At this point the table is in second normal form (2NF).
The last step, which places the table in 3NF, is to remove any transitive dependencies: non-key
attributes that are dependent on (defined by) other non-key attributes.
DESIGNING THE LOGICAL DATABASE

Two key features of a DBMS are the ability to reduce data redundancy and the ability to associate
related data elements, such as related fields and records. These functions are accomplished
through the use of keys, embedded pointers, and linked lists. An embedded pointer is a field
within a record containing the physical or relative address of a related record in another part of
the database. The record referred to may also contain an embedded pointer that points to a third
record, and so on. A series of records tied together by embedded pointers is a linked list.
Data Structures – Data structures are the bricks and mortar of the database. Data structures
allow records to be located, stored, and retrieved, and enable movement from one record to
another. Data structures have two fundamental components: organization and access method.
The organization of a file refers to the way records are physically arranged on the secondary
storage device.
The access method is the technique used to locate records and to navigate through the database.
No single structure is best for all processing tasks. Selecting one, therefore, involves a trade-off
between desirable features.
Typical File Processing Operations
1. Retrieve a record from the file based on its primary key value
2. Insert a record into a file
3. Update a record in the file
4. Read a complete file of records
5. Find the next record in a file
6. Scan a file for records with common secondary keys
7. Delete a record from a file.
The sequential structure is efficient for operations 4 and 5. The principal advantage of
indexed random files is in operations involving the processing of individual records (operations 1,
2, 3, and 6). Another advantage is their efficient use of disk storage.

The ISAM structure is moderately effective for operations 1 and 3. Direct access speed is
sacrificed to achieve very efficient performance in operations 4, 5, and 6.

The hashing structure is suited to applications that require rapid access to individual records in
performing operations 1, 2, 3, and 6.

Pointer Structures
This approach stores in a field of one record the address (pointer) of a related record. The
pointers provide connections between the records. Pointers may also be used to link records
between files. The following figure presents the pointer structure, which in this example is used
to create a linked-list file.

[Figure: sequential, indexed, hashing, and pointer structures.]

Types of Pointers
There are three types of pointers:
- A physical address pointer contains the actual disk storage location (cylinder, surface, and
record number) needed by the disk controller. This physical address allows the system to access
the record directly without obtaining further information.
- A relative address pointer contains the relative position of a record in the file.
- A logical key pointer contains the primary key of the related record. This key value is
converted into the related record's physical address by a hashing algorithm or an index.
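The sketch below illustrates the linked-list idea using logical key pointers; the second invoice record (1930), the amounts, and the field names are assumptions made for illustration.

# A small sketch of a linked list built with logical key pointers. Each invoice record
# stores the key of the next invoice for the same customer; following the pointers
# traverses all of that customer's invoices. Record values are illustrative.

invoices = {  # primary key -> record; "next_invoice" is the logical key pointer
    1921: {"amount": 800, "customer": 1875, "next_invoice": 1930},
    1930: {"amount": 450, "customer": 1875, "next_invoice": None},   # end of the list
}

customer = {"cust_no": 1875, "name": "J. Smith", "first_invoice": 1921}  # head pointer

key = customer["first_invoice"]
while key is not None:                 # navigate the linked list
    record = invoices[key]
    print(key, record["amount"])
    key = record["next_invoice"]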

Hashing Structure
A hashing structure employs a hashing algorithm that converts the primary key of a record
directly into a storage address. Hashing eliminates the need for a separate index.
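A minimal sketch of the idea follows, assuming a simple division/remainder hashing algorithm and a file of 10,000 record slots; both are assumptions chosen for illustration, not the algorithm shown in the figure below.

# A sketch of a hashing structure: the primary key is converted directly into a storage
# address, so no separate index is needed. The modulo algorithm and file size are assumed.

FILE_SLOTS = 10_000

def hash_address(primary_key: int) -> int:
    """Convert a primary key into a relative record address (slot number)."""
    return primary_key % FILE_SLOTS

storage = {}                         # stands in for the direct-access file

def store(record):
    storage[hash_address(record["key"])] = record   # collisions ignored in this sketch

def retrieve(primary_key):
    return storage.get(hash_address(primary_key))   # one calculation, one access

store({"key": 15943, "description": "wheel bearing"})
print(hash_address(15943), retrieve(15943))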

[Figure: Hashing technique. The key value being sought (15943) is processed by the hashing
algorithm; the residual translates to the physical address cylinder 272, surface 15, record
number 705.]
[Figure: Types of pointers. A physical address pointer holds the actual disk address of the related
record (for example, cylinder 121, surface 05, record 750). A relative address pointer holds the
record's relative position in the file (for example, the 135th record in a sequential file), which a
conversion routine translates into a physical address. A logical key pointer holds the related
record's primary key, which a hashing algorithm converts into the record's address.]
Database Models

From the previous discussions, we can conclude that a database system is a collection of data
items, such as records and tables, and the associations among them. It also incorporates other
elements such as relationships, constraints, and indexes. Organizing a database is therefore not so
easy. To simplify matters, one needs a map showing how the different elements of the database
are associated. This map is called a data model. Data modeling is a way of organizing a collection
of information pertaining to a system under investigation. A data model consists of two parts:

- A mathematical notation for describing the data and relationships.


- A set of operations used to manipulate that data.

Every database and database management system is based on a particular database model. A
database model consists of rules and standards that define how data is organized in a database. It
provides a strong theoretical foundation for the database structure. From the theory emerges the
power of analysis, the ability to extract inferences and to create deductions from the raw data.
Different models provide different conceptualizations of the database, and they have different
outlooks and different perspectives. There are five basic types of database models: hierarchical,
network, relational, object-oriented, and multidimensional.

Conceptual Database structures (models)

Organizing a large database logically into records and identifying the relationships among these
records are complex and time-consuming tasks. Just consider the large number of different
records that are likely to be part of a corporate database and the numerous data elements
constituting those records. Even a small business is likely to have many different record types,
each of which possesses several distinct data elements. Then consider the many relationships
records may have with one another. For example, an invoice can be related to the customer who
purchased the merchandise, the salesperson who sold to the customer the merchandise, the
products included on the invoice, and the warehouse from which the merchandise was picked.

Several general types of record relationships can be represented in a database.


1. One-to-one relationships, as in a single parent record related to a single child record; for
example, a husband record related to a wife record in a monogamous society.
2. One-to-many relationships, as in a single parent record related to two or more child records;
for example, a teacher who teaches three single-section courses.
3. Many-to-one relationships, as in two or more parent records related to a single child record;
for example, when three administrators in a small kebelle (a chairman, a police chief, and a
fire chief) share one secretary.
4. Many-to-many relationships, as in two or more parent records related to two or more child
records; for example, when two or more students are enrolled in two or more courses.

The relationships among the many individual records stored in databases are based on one of
several logical data structures, or models. Database management system packages are designed
to use a specific data structure to provide end users with quick, easy access to information stored
in databases. Five fundamental database structures are the hierarchical, network, relational,
object-oriented, and multidimensional models.

The hierarchical and network models are termed navigational models because they possess
explicit links or paths among data elements, whereas the relational model is based upon implicit
linkages among data elements.

The Hierarchical Database Model

In a hierarchical database, records are logically organized into a hierarchy of relationships. This
was a popular approach to data representation because it reflected, more or less faithfully, many
aspects of an organization that are hierarchical in relationship. A hierarchically structured
database is arranged logically in an inverted tree pattern. For example, an equipment database,
diagrammed below, may have building records, room records, equipment records, and repair
records. The database structure reflects the fact that repairs are made to equipment located in
rooms that are part of buildings.

[Figure: A hierarchical (tree) structure for the equipment database. The root parent record has
room records as its children; the room records are parents of the equipment records; and the
equipment records are parents of the repair records, which are the leaves. Records at the same
level with the same parent are siblings.]

All records in a hierarchy are called nodes. Each node is related to the others in a parent-child
relationship. Each parent record may have one or more child records, but no child record may have more
than one parent record. Thus, the hierarchical data structure implements one-to-one and one-to-many
relationships.

The top parent record in the hierarchy is called the root record, and the lowest file in a particular branch is
called a leaf. Files at the same level with the same parent are called siblings. This structure is also called a
tree structure.

A NAVIGATIONAL DATA BASE


The hierarchical data model is called a navigational database because traversing it requires following a
predefined path. This is established through explicit linkages (pointers) between related records. The only
way to access data at lower levels in the tree is by moving progressively downward from a root and along
the branches (pointers) of the tree until the desired record is reached. For example, consider the partial database in
the following figure. To retrieve an employee record, the DBMS must first access the department record
(the root). That record contains a pointer to project record, which points to the employee record.

[Figure: A partial hierarchical database. The department record (the root) contains pointers to the
Project A and Project B records, and each project record contains pointers to its employee
records (Employee 1 and Employee 2).]
DATA LINKAGES IN THE HIERARCHICAL MODEL. The following figure shows the data structures
and linkages for the partial customer database. The purpose of this example is to illustrate the
navigational nature of the linkages between related records.

[Figure: Data linkages in the hierarchical model. The customer record (Account Number 1875,
J. Smith) contains pointers to the previous and next customer records and to the head records of
its sales invoice linked list and its cash receipts linked list. Each sales invoice record (for example,
Invoice Number 1921, $800, shipped 2/10/91) contains pointers to the next and last invoices for
the customer and to its line item records (for example, Item Number 9215, quantity 10, unit price
45.00, extended price 450.00).]

Assume that we wish to retrieve, for inquiry purposes, all data pertinent to a particular sales invoice
(Number 1921) for a customer, John Smith (Account Number 1875). An access method used for this
situation is the hierarchical indexed direct access method (HIDAM). Under this method, the root segment
(the customer file) of the database is organized as an indexed sequential file. Lower-level records (sales
invoice and invoice line item records) use pointers in a linked-list arrangement.

The pointer in the customer record (the level above) directs the access method to the head record in the
appropriate linked list. The access method then compares the key value sought (Invoice Number 1921)
against each record in the list until it finds a match. Pointers in these records identify the supporting detail
records (the specific items sold) in the invoice line item file. The structure of the line item file is also a
linked-list arrangement. The sales invoice and line item records are returned via the operating system and
the DBMS to the user's application for processing.

LIMITATIONS OF THE HIERARCHICAL MODEL – The hierarchical model presents a limited view of
data relationships. Based on the proposition that all business relationships are hierarchical (or can be
represented as such), this model does not always reflect reality. The following rules, which govern the
hierarchical model, reveal its operating constraints:
1. A parent record may have one or more child records. For example, in the previous figure, customer is
the parent of both sales invoice and cash receipts.
2. No child record can have more than one parent.

The second rule is often restrictive and limits the usefulness of the hierarchical model. Many firms need a
view of data associations that permit multiple parents such as that represented below.

[Figure: (a) The natural relationship, in which the sales invoice file is the child of both the salesperson
file and the customer file; (b) the hierarchical representation, in which the sales invoice file is duplicated
under each parent, introducing data redundancy.]
Management, wishing to keep track of sales orders by customer and by salesperson, will wish to
view sales order records as the logical child of both parents. However, this relationship, although
logical, violates the single-parent rule of the hierarchical model. Figure (b) shows the most
common way of resolving this problem. By duplicating the sales invoice file, we create two
separate hierarchical representations. Unfortunately, we achieve this improved functionality at the
cost of increased data redundancy. The network model deals with this problem more efficiently.

Hierarchically structured databases are less flexible than other database structures because the
hierarchy of records must be determined and implemented before a search can be conducted. In
other words, the relationships between records are relatively fixed by the structure. Ad hoc
queries made by users that require different relationships than are already implemented in the
database may be difficult or time-consuming to accomplish. For example, a manager may wish to
identify vendors of equipment with a high frequency of repair. If the equipment record contains
the name of the original vendor, such a query could be performed fairly directly. However, data
describing the original vendor may be contained in a record that is part of another hierarchy. As a
result, there may be no established relationship between these records, and establishing new
relationships in a large database is not a minor task.

A hierarchical database management system usually processes structured, day-to-day operational
data rapidly. In fact, the hierarchy of records is usually specifically organized to maximize the
speed with which large batch operations such as payroll or sales invoices are processed.
Any group of records with a natural, hierarchical relationship to one another fits nicely within
this structure.

THE NETWORK DATABASE MODEL


A network database structure views all records in sets. Each set is composed of an owner record
and one or more member records. This is analogous to the hierarchy's parent-child
relationship. Thus, the network model implements the one-to-one and the one-to-many record
structures. However, unlike the hierarchical model, the network model also permits a record to
be a member of more than one set at a time. The network model would permit the equipment
records to be the children of both the room records and the vendor records. This feature allows the
network model to implement the many-to-one and the many-to-many relationship types.

DATA LINKAGES IN A SIMPLE NETWORK- As with the hierarchical model, the network
model is a navigational database with explicit linkages between records. However, whereas the
hierarchical model allows only one path, the network model supports multiple paths to a
particular record. The structure can be accessed at either of the root level records (salesperson or
customer) by hashing their respective primary keys (SP # or cust #) or by reading their addresses
from an index. The path to the child record is explicitly defined by a pointer field in the parent
record. Notice the structure of the sales invoice file. In this example, each child now has two
parents and contains explicit links to other records that form linked lists related to each parent.
For example, Invoice Number 1 is the child of salesperson number 1 and customer number 5.
This record has two links to related records. The first is a salesperson (SP) link to invoice
number 2. This represents a sale by salesperson number 1 to customer number 6. The second
pointer is the customer (C) link to invoice number 3. This represents the second sale to customer
number 5, which was processed this time by salesperson number 2.
[Figure: Linkages in a network database. The salesperson file (SP 1, SP 2) and the customer file
(Cust 5, Cust 6) are both root-level files. Each sales invoice record is the child of one salesperson
and one customer (for example, Invoice 1 belongs to SP 1 and Cust 5) and carries an SP link to
the next invoice for the same salesperson and a C link to the next invoice for the same customer.]

DATA LINKAGES IN A MANY-TO-MANY RELATIONSHIP. The M:M association is a two-way
relationship in which each file in the set is both parent and child.

Navigating an M:M association requires creating a separate link file that contains pointer records in a
linked-list structure. The following figure illustrates the link file between the inventory and vendor files.
Notice that each inventory and vendor record exists only once, but there is a separate link record for each
item in vendor supplies and for each supplier of a given inventory item. This arrangement allows us to
find all vendors of a given inventory item and all inventories supplied by each vendor.

Link files may also contain accounting data. For example, the link file in this figure shows that the price
for inventory number 1356 from vendor number 1 ($10) is not the same as the price charged by vendor
number 3 ($12). Similarly, the delivery time (days lead time) and the discount offered (terms) are different.
Data that are unique to the item-vendor association are stored in the unique link file record. Transaction
characteristics such as these can vary between vendors and even between different items from the same
vendor.

Unlike hierarchical data structures that require specific entrance points to find records in a hierarchy,
network data structures can be entered and traversed more flexibly.

[Figure: A link file in a many-to-many relationship. Each inventory record (1356, 1730, 2512) and each
vendor record (Vendor 1, Vendor 2, Vendor 3) appears only once. A separate link record exists for each
inventory-vendor association and stores the data unique to that association, such as days lead time, terms,
and cost (for example, $10 for item 1356 from Vendor 1 and $12 from Vendor 3). Inventory links and
vendor links tie each link record into linked lists for its inventory item and its vendor.]


SYSTEMS ANALYSIS AND DESIGN
AN INTRODUCTION TO SYSTEMS MODELING
Systems models play an important role in systems development. As a systems analyst or user,
you will constantly deal with unstructured problems. One way to structure such problems is to
draw models.

A model is a representation of reality.


Models can be built for existing systems as a way to better understand those systems, or for
proposed systems as a way to document business requirements or technical designs. An
important concept is the distinction between logical and physical models.

Logical models show what a system is or does. They are implementation independent; that is,
they depict the system independent of any technical implementation. As such, logical models
illustrate the essence of the system. Popular synonyms include essential model, conceptual model,
and business model.

Physical models show not only what a system is or does, but also how the system is physically
and technically implemented. They are implementation dependent because they reflect
technology choices and the limitations of those technology choices. Synonyms include
implementation model and technical model.

Let's say we need a system that can store students' grade point averages. For any given student,
the grade point average must be between 0.00 and 4.00, where 0.00 reflects an average grade of F
and 4.00 reflects an average grade of A. We further know that a new student does not have a
grade point average until he receives his first grade report. Every statement in this paragraph
reflects a business fact. It does not matter which technology we use to implement these facts; the
facts are constant; they are logical. Thus, grade point average is a logical attribute with a logical
domain of 0.00-4.00 and a logical default value of null (meaning nothing).

Now, let's get physical. First we have to decide how we are going to store the above attribute.
Let's say we will use a table column in a Microsoft Access database. We need a column name.
Let's assume our company's programming standards require the following name:
colGradePointAvg. That is the physical field name that will be used to implement the logical
attribute grade point average. Additionally, we could define the following physical properties for
the database column colGradePointAvg.

Data type: Number
Field size: Single (as in "single precision")
Decimal places: 2
Default value: Null (as in "none")
Validation rule: Between 0 and 4 inclusive
Required?: No
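Purely as an illustration of how these logical rules travel into a physical implementation, the following sketch enforces the same domain, precision, and default in application code; Python is used only for illustration and is not the Access mechanism described above.

# A sketch of enforcing the logical rules for grade point average (domain 0.00-4.00,
# default null until the first grade report) in application code. Illustrative only.

from typing import Optional

class StudentRecord:
    def __init__(self, student_id: str):
        self.student_id = student_id
        self.grade_point_average: Optional[float] = None   # default: null (no grades yet)

    def set_gpa(self, value: float) -> None:
        if not (0.00 <= value <= 4.00):                    # validation rule
            raise ValueError("GPA must be between 0.00 and 4.00 inclusive")
        self.grade_point_average = round(value, 2)         # two decimal places

s = StudentRecord("864")
s.set_gpa(3.47)
print(s.grade_point_average)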

System analysts have long recognized the importance of separating business and technical
concerns (or the logical versus the physical concerns). That is why they use logical system
models to depict business requirements and physical system models to depict technical design.
Systems analysis activities tend to focus on the logical system models for the following reasons.
- Logical models remove biases that are the result of the way the current system is
implemented or the way any one person thinks the system might be implemented. Thus,
we overcome the "we've always done it that way" syndrome. Consequently, logical
models encourage creativity.
- Logical models reduce the risk of missing business requirements because we are too
preoccupied with technical details. Such errors are almost always much more costly to
correct after the system is implemented. By separating what the system must do from
how the system will do it, we can better analyze the requirements for completeness,
accuracy, and consistency.
- Logical models allow us to communicate with end users in nontechnical or less technical
language. Thus, we don't lose business requirements in the technical jargon of the
computing discipline.
Data modeling is a technique for organizing and documenting a system’s DATA. Data
modeling is sometimes called database modeling because a data model is eventually
implemented as a database. It is also sometimes called information modeling.
Systems analysis is a problem-solving technique that decomposes a system into its
component pieces for the purpose of studying how well those component parts work and
interact to accomplish their purpose.
Presumably, we do a systems analysis in order to subsequently perform a systems design.
Systems design (also called systems synthesis) is a complementary problem-solving technique
(to systems analysis) that reassembles a system's component pieces back into a complete
system (it is hoped, an improved system). This may involve adding, deleting, and changing
pieces relative to the original system.

Systems analysis is a term that collectively describes the early phases of systems
development. For our discussion, we offer the following definition.

Information systems analysis is defined as those development phases in a project that
primarily focus on the business problem, independent of any technology that can or will be
used to implement a solution to that problem.

Notice that emphasis is placed on business issues, not technical or implementation concerns.
Systems analysis is driven by the business concerns of system owners and system users.
Hence, it addresses the DATA, PROCESS, and INTERFACE building blocks from the system
owners' and system users' perspectives.
The documentation and deliverables produced by systems analysis tasks are typically stored
in a repository.
A repository is a location (or set of locations) where systems analysts, systems designers, and
system builders keep the documentation associated with one or more systems or projects.

Fundamentally, systems analysis is about problem solving. There are many approaches to
problem solving; therefore, there are many approaches to systems analysis. Some of the more
popular systems analysis approaches include structured analysis, information engineering,
discovery prototyping, and object-oriented analysis.
These approaches are often viewed as competing alternatives. In reality, certain combinations
can and should actually complement one another.
Structured analysis, information engineering (IE), and object-oriented analysis are examples
of model-driven approaches. Model-driven analysis emphasizes the drawing of pictorial
system models to document and validate both existing and/or proposed systems. Ultimately,
the system model becomes the blueprint for designing and constructing an improved system.
A model is a representation of either reality or vision. Just as "a picture is worth a thousand
words," most models use pictures to represent the reality or vision.
Examples of models with which you may already be familiar include flowcharts, structure or
hierarchy charts, and organization charts.
Today, model-driven approaches are almost always enhanced by the use of automated tools.
Let’s briefly examine today’s three most popular model –driven analysis and design
approaches.
Structured analysis is a process-centered technique that is used to model either an existing
system, define business requirements for a new system, or both.
The models are pictures that illustrate the system's component pieces: processes and their
associated inputs, outputs, and files. By process-centered, we mean the emphasis in this
technique is on the PROCESS building blocks in your information system framework.
Structured analysis is simple in concept. Systems analysts draw a series of process models
called data flow diagrams (DFDs) that depict the existing and/or proposed processes in a
system along with their inputs, outputs, and files. Ultimately, these process models serve as
blueprints for both business processes to be implemented and computer programs to be
purchased or constructed.

Process models, in various forms including data flow diagrams, help organizations to
visualize their processes and ultimately eliminate or reduce bureaucracy. These process models
can return value to an organization even if the goal is not to design or purchase software to
automate those processes.

Information systems design is defined as those tasks that focus on the specification of a
detailed computer-based solution. It is also called physical design. Thus, whereas systems
analysis emphasized the business problem, systems design focuses on the technical or
implementation concerns of the system.

The design models are often derived from logical models that were developed earlier in
model-driven analysis. Ultimately, the system design models become the blueprints for
constructing and implementing the new system.
Modern structured design is a process-oriented technique for breaking up a large program
into a hierarchy of modules that result in a computer program that is easier to implement and
maintain (change). Synonyms (although technically inaccurate) are top-down program design
and structured programming.
Designing a program as a top-down hierarchy of modules is a simple concept. A module is a
group of instructions: a paragraph, block, subprogram, or subroutine. The top-down structure
of these modules is developed according to various design rules and guidelines. (Thus,
merely drawing a hierarchy or structure chart for a program is not structured design.)
Structured design seeks to factor a program into a top-down hierarchy of modules that have
the following properties (a brief sketch follows the list):
- Modules should be highly cohesive; that is, each module should accomplish one and only
one function. This makes the modules reusable in future programs.
- Modules should be loosely coupled; in other words, modules should be minimally
dependent on one another. This minimizes the effect that future changes in one module
will have on other modules.
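Here is that sketch: two hypothetical payroll modules that are each cohesive (one task apiece) and loosely coupled (they share no state and communicate only through parameters and return values). The payroll example is an assumption made for illustration.

# Two cohesive, loosely coupled modules. Each function performs one and only one task,
# and they communicate only through explicit parameters and return values.

def compute_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    """One task only: compute gross pay."""
    return hours_worked * hourly_rate

def compute_withholding(gross_pay: float, tax_rate: float) -> float:
    """One task only: compute tax withholding."""
    return gross_pay * tax_rate

# The calling module assembles the pieces; changing one module's internals
# does not force changes in the other.
gross = compute_gross_pay(40, 12.50)
net = gross - compute_withholding(gross, 0.15)
print(gross, net)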
The software model derived from structured design is called a structure chart. The structure chart
is derived by studying the flow of data through the program. Structured design is performed
during systems design. It does not address all aspects of design; for instance, structured design will
not help you design inputs, outputs, or databases.

[Figure: A simple process model (also called a data flow diagram). Processes such as Process
Member Orders, Process Automatic Orders, and Process Bonus Orders receive member orders,
credit ratings and limits, and existing order details from the Club Member and Accounts data
stores, revise automatic orders, and send orders to be filled to the Warehouse.]

Information Engineering (IE) is a model-driven and DATA-centered, but process-sensitive,
technique to plan, analyze, and design information systems. IE models are pictures that illustrate
and synchronize the system's data and processes.

The data models in IE are called entity relationship diagrams. The process models in IE use the
same data flow diagrams that were invented for structured analysis. IE’s contribution was to
define an approach for integrating and synchronizing the data and process models.

Information engineering is said to be a DATA-centered paradigm because it emphasizes the
study and requirements analysis of DATA requirements before those of the process and interface
requirements. This is based on the belief that data is a corporate resource that should be planned
and managed. Accordingly, systems analysts draw entity relationship diagrams (ERDs) to
model the system's raw data before they draw the data flow diagrams that illustrate how that data
will be captured, stored, used, and maintained.

Both information engineering and structured analysis attempt to synchronize data and process
models. The two approaches differ only in which model is drawn first. IE draws the data models
first, while structured analysis draws the process models first.

[Figure: A simple data model (also called an entity relationship diagram). Entities such as Member,
Member Order, Agreement, Product, Promotion, and Club are connected by relationships such as
places/placed by, is enrolled under/established by, sells/is sold on, features/is featured in,
generates/is generated by, and sponsors/is sponsored by.]

Object-oriented analysis and design. Object-oriented analysis and design are the new kids on
the block. Object techniques are an attempt to eliminate the deliberate separation of the concerns
of DATA from those of PROCESSES found in such techniques as structured analysis and design
and information engineering.
Object-oriented analysis and design (OOAD) attempts to merge the data and process concerns
into singular constructs called objects. Object-oriented analysis and design introduced object
diagrams that document a system in terms of its objects and their interactions.
Business objects might correspond to real things of importance in the business such as customers
and the orders they place for products. Each object consists of both the data that describes the
object and the processes that can create, read, update, and delete that object.

With respect to the information system building blocks, object-oriented analysis and design
significantly changes the paradigm. The DATA and PROCESS columns are essentially merged
into a single Object column. The models then focus on identifying objects, building objects, and
assembling appropriate objects into useful information systems.

Object models are also extendable to the interface building blocks of the information system
framework. Most contemporary computer user interfaces are already based on object technology,
for example, the Microsoft Windows interfaces use standard objects such as windows, frames,
drop-down menus, radio buttons, check boxes, scroll bars, and the like. Object programming
technologies such as C++, Java, Smalltalk, and visual basic are used to construct and assemble
such interface objects.

Object-oriented analysis (OOA) has become so popular that a modeling standard has evolved
around it. The unified modeling language (or UML) provides a graphical syntax for an entire
series of object models, such as the model illustrated in the figure below. The UML defines
several different types of diagrams that collectively model an information system or application
in terms of objects.

An object model (using the Unified Modeling Language standard)

STUDENT
  Attributes: Id Number, Name, Grade Point Average
  Operations: Admit( ), Register for Classes( ), Withdraw( ), Change Address( ), Calculate GPA( ), Graduate( )

COURSE
  Attributes: Subject, Number, Title, Credit
  Operations: Create a Course( ), Delete from Course Master( ), Change in Course Master( )

TRANSCRIPT COURSE
  Attributes: Semester, Division, Grade
  Operations: Add( ), Drop( ), Complete( ), Change Grade( )

A "has records for" relationship connects the STUDENT object to its course records.
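To show how an object merges data and behavior, the sketch below renders a simplified STUDENT object in Python; the method bodies and the transcript representation are assumptions made for illustration and are not part of the UML model above.

# A sketch of the STUDENT object: the data that describes a student and the operations
# that act on that data live in one construct. Method bodies are simplified assumptions.

class Student:
    def __init__(self, id_number: str, name: str):
        self.id_number = id_number
        self.name = name
        self.grade_point_average = None
        self.transcript = []                 # list of course entries

    def register_for_class(self, course: str, credit: int) -> None:
        self.transcript.append({"course": course, "credit": credit, "grade_points": None})

    def record_grade(self, course: str, grade_points: float) -> None:
        for entry in self.transcript:
            if entry["course"] == course:
                entry["grade_points"] = grade_points

    def calculate_gpa(self):
        graded = [e for e in self.transcript if e["grade_points"] is not None]
        credits = sum(e["credit"] for e in graded)
        points = sum(e["credit"] * e["grade_points"] for e in graded)
        self.grade_point_average = round(points / credits, 2) if credits else None
        return self.grade_point_average

s = Student("864", "Salhi")
s.register_for_class("Acct 315", 3)
s.record_grade("Acct 315", 4.0)
print(s.calculate_gpa())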

DETAILED DESIGN AND SYSTEMS IMPLEMENTATION

The purpose of the detailed design phase is to produce a detailed description of the proposed
system that satisfies the system requirements identified during systems analysis and is in
accordance with the conceptual design. In this phase, all system components (user views, database
tables, processes, and controls) are meticulously specified. At the end of this phase, these
components are presented formally in a detailed design report. This report constitutes a set of
"blueprints" that specify input screen formats, output report layouts, database structures, and
process logic. These completed plans then proceed to the final phase in the SDLC, systems
implementation, where the system is physically constructed. In the final phase, database
structures are created and populated with data, applications are coded and tested, equipment is
purchased and installed, employees are trained, and the system is documented. This phase
concludes with the conversion to the new system and termination of the old system.
Detailed Systems Design
The Design Sequence
The detailed systems design phase of the SDLC follows a logical sequence of events: data model
the business process and design conceptual views, design the normalized database tables, design
the physical user views (output and input views), develop the process modules, specify the
system controls, and perform a system walkthrough.

An Iterative Approach. Typically, the design sequence listed above is not a purely linear
process. Inevitably, system requirements change during the detailed design phase, causing the
designer to revisit previous steps.

DATA MODEL THE BUSINESS PROCESS AND DESIGN CONCEPTUAL VIEWS


Data modeling is the task of formalizing the data requirements of the business process as a
conceptual model. The primary tool for data modeling is the entity relationship (ER) diagram.
This technique is used to depict the entities or data objects in the system. To do so, the ER
diagram employs three basic symbols: (1) Entities are represented as rectangles. An entity is a
resource, event, or agent involved in the business; (2) Attributes are data that describe the
characteristics or properties of entities. These are represented by circles attached to entities; and
(3) Relationships among entities are depicted by diamond symbols.

The degree of association between two entities is represented by cardinality: the number of
records in one file that are linked to a single record in another file.
Although we can deduce aspects of the business process from the ER diagram, its primary
purpose is to identify the data attributes of the key entities in a process. These attributes represent
the conceptual user views that must be supported by the data base.

DESIGN PHYSICAL USER VIEWS


The physical views are the media for conveying and presenting data. These include output
reports, documents, and input screens.

DESIGN OUTPUT VIEWS. Output is the information produced by the system to support user
tasks and decisions. At the transaction processing level, output tends to be extremely detailed.
Revenue and expenditure cycle systems produce control reports for lower-level management and
operational documents to support daily activities. Conversion cycle systems produce reports for
scheduling production, managing inventory, and cost management. These systems also produce
documents for controlling the manufacturing process.

The general ledger/financial reporting systems (GL/FRS) and the management reporting system
produce output that is more summarized. The intended users of these systems are management,
stockholders, and other interested parties outside the firm.

The Structured Design Approach


The structured design approach is a disciplined way of designing systems from the top down. It
consists of starting with the big picture of the proposed system, which is gradually decomposed into
more and more detail until it is fully understood. Under this approach, the business process under
design is usually documented by data flow and structure diagrams.

Top-down decomposition of the structured design approach

We can see from these diagrams how the systems designer follows a top-down approach.
The designer starts with an abstract description of the system and, through successive
steps, refines this view to produce a more detailed description. In our example, process
2.0 in the context diagram is decomposed into an intermediate-level DFD. Process 2.3 in
the intermediate DFD is further decomposed into an elementary DFD. This
decomposition could involve several levels to obtain sufficient detail. Let's assume that
three levels are sufficient in this case. The final step transforms process 2.3.3 into a
structure diagram that defines the program modules that will constitute the process.
How much design detail is needed? The conceptual design phase should highlight the differences
between critical features of competing systems rather than their similarities. Therefore, system
designs at this point should be general. The designs should identify all the inputs, outputs,
processes, and special features necessary to distinguish one alternative from another. In some
cases, this may be accomplished at the context diagram level. In situations where the important
distinctions between systems are subtle, designs may need to be represented by lower-level
DFDs and even with structure diagrams. However, detailed DFDs and structure diagrams are
more commonly used at the detailed design phase of the SDLC.

Let's examine the general features of two alternative conceptual designs for a purchasing system.

[Figure: Two alternative conceptual designs for a purchasing system. Option A, a batch system:
the inventory control process uses inventory records to prepare batches of purchase requisitions;
the purchasing process uses vendor data to prepare batches of purchase orders, which the mail
room sends to vendors via a daily post office pickup. Option B, an EDI system: production
planning uses customer orders and the bill of materials to issue electronic purchase requisitions;
the purchasing process transmits electronic purchase orders through the company's EDI system
directly to the vendor's EDI and sales order processing systems.]
Option A is a traditional batch processing system. The initial input for the process is the purchase
requisition from inventory control. When inventories reach their predetermined reorder points,
new inventories are ordered according to their economic order quantity. Transmission of purchase
orders to suppliers takes place once a day via the mail.

In contrast, option B employs EDI technology. The trigger to this system is a purchase
requisition from production planning. The purchasing system determines the quantity and the
vendor and then transmits the order on-line via EDI software to the vendor.

Both alternatives have pros and cons. The benefits of option A are its simplicity of design, ease of
implementation, and lower demand for systems resources than option B. A negative aspect of
option A is that it requires the firm to carry inventories. On the other hand, option B may allow
the firm to reduce or even eliminate inventories. It is premature, at this point, to attempt to
evaluate the relative merits of these alternatives. This is done formally in the next phase of the
SDLC. At this point, system designers are concerned only with identifying plausible system
designs.

The object-oriented approach


The object-oriented design approach is to build information systems from reusable standard
components or objects. This approach may be equated to the process of building an automobile.
Car manufacturers do not create each new model from scratch. New models are actually built from
standard components that also go into other models. For example, each model of car produced by
a particular manufacturer may use the same type of engine, gearbox, alternator, radio, and so on.
In fact, it may be that the only component actually created from scratch for a new car model is
the body.

The concept of reusability is central to the object-oriented approach to system design. Once
created, standard modules can be used in other systems with similar needs. The benefits of this
approach include reduced time and cost for development, maintenance, and testing and improved
user support and flexibility in the development process.

THE ROLE OF ACCOUNTANTS IN CONCEPTUAL SYSTEMS DESIGN

We established earlier that designing an AIS is a joint effort between the accounting function of the organization and systems professionals. Accountants are responsible for the conceptual system (the logical information flows) and systems professionals are responsible for the physical system (the technical task of building the system). If important accounting considerations are not conceptualized at this point, they may be overlooked completely, thus exposing the organization to financial loss and potential litigation. While participating in the conceptual design process, the accountant must be aware that each alternative system must be adequately controlled, that audit trails must be preserved, and that accounting conventions and legal requirements must be understood. This does not mean that these issues must be specified in detail at this point. It does mean that they should be recognized as items that must be addressed during the detailed design phase of the system.

SYSTEM EVALUATION AND SELECTION

The next phase in the SDLC is the procedure for selecting the one system from the set of alternative conceptual designs that will go to the detailed design phase. The systems evaluation and selection phase is an optimization process that seeks to identify the best system. This decision represents a critical juncture in the SDLC. At this point, there is a great deal of uncertainty about the system, and a poor decision here can be disastrous. The purpose of a formal evaluation and selection procedure is to structure this decision-making process and thereby reduce both uncertainty and the risk of making a poor decision.

There is no magic formula to ensure a good decision. Ultimately, the decision comes down to management judgment. The objective is to provide a means by which management can make an informed judgment. This selection process involves two steps:
1. Perform a detailed feasibility study.
2. Perform a cost-benefit analysis.
The results of these evaluations are then reported formally to the steering committee for final system selection.

Systems Planning and Systems Analysis
We shall treat the SDLC subject matter conceptually. That is, the focus will be on what is done rather than how it is done.

Systems Planning
The objective of systems planning is to link individual system projects or applications to the
strategic objectives of the firm. In fact, the basis for the systems plan is the organization’s
business plan, which specifies where the firm plans to go and how it will get there. There must
be congruence between the individual projects and the business plan, or the firm may fail to meet
its objectives. Effective systems planning provides this goal congruence.
Who should do systems planning?
Most firms that take systems planning seriously establish a systems steering committee to provide guidance and review the status of system projects.
Typical responsibilities for a steering committee include:
1. Resolving conflicts that arise from new systems
2. Reviewing projects and assigning priorities
3. Budgeting funds for systems development
4. Reviewing the status of individual projects under development
5. Determining at various checkpoints throughout the SDLC whether to continue with the project or terminate it.
Systems planning occurs at two levels: strategic planning and project planning.

STRATEGIC SYSTEM PLANNING


Technically, strategic system planning is not part of the SDLC because the SDLC pertains to
specific applications. The strategic plan is concerned with the allocation of such systems
resources as employees, hardware, software, and telecommunications. It is important that the
strategic plan avoid excessive detail.

PROJECT PLANNING
The purpose of project planning is to allocate resources to individual applications within the framework of the strategic proposals, evaluating each proposal's feasibility and contribution to the business plan, prioritizing individual projects, and scheduling the work to be done. The product of this phase consists of two formal documents: the project proposal and the project schedule.

Project planning includes the following steps:


Recognizing the Problem
The need for a new, improved information system may be manifested in various symptoms. The point at which the problem is recognized is important. This is often a function of the philosophy of a firm's management. The reactive management philosophy characterizes one extreme position; in contrast to this is the philosophy of proactive management.
Who reports problem symptoms? Typically, symptoms are first reported by lower-level managers and operations personnel. Occasionally, top management initiates a system request. On rare occasions, computer specialists will identify problem symptoms and will initiate a system solution.

DEFINING THE PROBLEM
It is tempting to take a leap in logic from symptom recognition to problem definition. The manager (user) must avoid this. It is important to keep an open mind and avoid directing attention and resources in the wrong direction. The manager must specify the nature of the problem as he or she sees it, based on the nature of the difficulties identified.

The manager reports this problem definition to the computer systems professionals within the
firm. This begins an interactive process between the systems professionals and the user, which
results in a formal project proposal that will go before the steering committee for approval.

SPECIFYING SYSTEM OBJECTIVES


There must be harmony between the strategic objectives of the firm and the operational
objectives of the information system. Broad strategic objectives shape the narrower functional
objectives at the tactical and operations levels. These functional objectives lead to the
identification of information needs and serve to set out the operational objectives for the
information system.

DETERMINING PROJECT FEASIBILITY
A preliminary project feasibility study is conducted at this early stage to determine how best to proceed with the project. By assessing the major constraints on the proposed system, management can evaluate the project's feasibility, or likelihood for success, before committing large amounts of financial and human resources. The acronym TELOS identifies the five aspects of project feasibility: technical feasibility, economic feasibility, legal feasibility, operational feasibility, and schedule feasibility.

PREPARING A FORMAL PROJECT PROPOSAL


The systems project proposal provides management with a basis for deciding whether or not to proceed with the project. The formal proposal serves two purposes. First, it summarizes the findings of the study conducted to this point into a general recommendation for a new or modified system. This enables management to evaluate the perceived problem along with the proposed system as a feasible solution. Second, the proposal outlines the linkage between the objectives of the proposed system and the business objectives of the firm. It shows that the proposed new system complements the strategic direction of the firm.
EVALUATING AND PRIORITIZING COMPETING PROPOSALS
After a manageable number of system proposals have been received, members of the steering committee and systems professionals evaluate the pros and cons of each proposal. This is the first major decision point in a project's life cycle.
Assessing the strategic contribution and feasibility of the system: an important step in the evaluation process is to identify those proposals that promise the greatest potential in supporting the business objectives of the firm. Two important strategic objectives are improved operational productivity and improved decision making.
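
One simple way to compare competing proposals against these strategic objectives is a weighted scoring model. The sketch below is illustrative only; the criteria, weights, and scores are assumptions that a steering committee would replace with its own.

# Hypothetical evaluation criteria and weights.
criteria_weights = {
    "operational productivity": 0.35,
    "decision-making support": 0.35,
    "feasibility (TELOS)": 0.30,
}

# Scores on a 1-10 scale for two competing proposals (invented values).
proposals = {
    "Proposal A": {"operational productivity": 8, "decision-making support": 5, "feasibility (TELOS)": 7},
    "Proposal B": {"operational productivity": 6, "decision-making support": 9, "feasibility (TELOS)": 6},
}

for name, scores in proposals.items():
    weighted_score = sum(criteria_weights[c] * s for c, s in scores.items())
    print(name, "weighted score =", round(weighted_score, 2))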

Producing a project schedule


The project schedule document formally presents management's commitment to the project. The project schedule is a budget of the time and costs for all the phases of the SDLC. These phases will be completed by a project team selected from systems professionals, end users, and other specialists such as accountants and internal auditors.

ANNOUNCING THE NEW SYSTEM PROJECT


The last step of the planning process, management's formal announcement of the new system to the rest of the organization, is the most delicate aspect of the SDLC. This is an exceedingly important communiqué that, if successful, will pave the way for the new system and help ensure its acceptance among the user community.

THE ACCOUNTANT’S ROLE IN SYSTEMS PLANNING


During the planning process, accountants are often called on to provide expertise in evaluating
the feasibility of projects. Their skills are particularly needed in specifying aspects of economic
and legal feasibility.

Systems Analysis
Systems analysis is actually a two-step process involving first a survey of the current system and then an analysis of the user's needs. A business problem must be fully understood by the systems analyst before he or she can formulate a solution. An incomplete or defective analysis will lead to an incomplete or defective solution. Therefore, systems analysis is the foundation for the rest of the SDLC.
The deliverable product of this phase is a formal systems analysis report, which presents the findings of the analysis and recommendations for the new system.

The survey step


Most systems are not developed from scratch. Usually, some form of information system and related procedures are currently in place. The analyst often begins the analysis by determining what elements, if any, of the current system should be preserved as part of the new system. This involves a rather detailed system survey. Facts pertaining to preliminary questions about the system are gathered and analyzed. As the analyst obtains a greater depth of understanding of the problem, he or she develops more specific questions for which more facts must be gathered. This process may go on through several iterations. When all relevant facts have been gathered and analyzed, the analyst arrives at an assessment of the current system. Surveying the current system has both disadvantages and advantages.

Disadvantages
- The systems analyst can be "sucked in" and then "bogged down" by the task of surveying the current, dinosaur system.
- Current system surveys stifle new ideas.

Advantages
- It is a way to identify what aspects of the old system should be kept.
- To specify the conversion procedures from the old system to the new system, the analyst must know not only what is to be done by the new system but also what was done by the old one. This requires a thorough understanding of the current system.
- It helps determine conclusively the cause of reported problem symptoms.

Gathering Facts - The survey of the current system is essentially a fact-gathering activity. The facts gathered by the analyst are pieces of data that describe key features, situations, and relationships of the system. System facts fall into the following broad classes:
- Data sources
- Users
- Data stores
- Processes
- Data flows
- Controls
- Transaction volumes
- Error rates
- Resource costs
- Bottlenecks and redundant operations

Fact-Gathering Techniques - Systems analysts use several techniques to gather the above-cited facts. Earlier, we saw how a JAD session is a valuable source of system facts. However, it is unlikely that all relevant questions will be addressed in a single JAD session. There will still be some gaps in the analyst's understanding that need to be filled. Other fact-gathering techniques include observation, task participation, personal interviews, and reviewing key documents.
The analysis step

System analysis is an intellectual process that is commingled with fact gathering. The analyst is
simultaneously analyzing as he or she gathers facts. The mere recognition of a problem presumes
some understanding of the norm or desired state. It is therefore difficult to identify where the
survey ends and the analysis begins.

SYSTEMS ANALYSIS REPORT - The event that marks the conclusion of the systems analysis phase is the preparation of a formal systems analysis report. This report presents to management or the steering committee the survey findings, the problems identified with the current system, the user's needs, and the requirements of the new system.

THE ACCOUNTANT’S ROLE IN SYSTEMS ANALYSIS


As a preliminary step to every financial audit, accountants conduct a systems survey to understand the essential elements of the current system. The accountant's experience and knowledge in this area can be a valuable resource to the systems analyst. Similarly, the accountant's understanding of end-user information needs, internal control standards, audit trail requirements, and mandated procedures is of obvious importance to the task of specifying new system requirements.
Conceptual Design
The purpose of the conceptual design phase is to produce several alternative conceptual systems
that satisfy the system requirements identified during systems analysis. By presenting users with
a number of plausible alternatives, the systems professional avoids imposing preconceived
constraints on the new system. The user will evaluate these conceptual models and settle on the
alternatives that appear most plausible and appealing.

By keeping systems design conceptual throughout these phases of the SDLC, we minimize the
investment of resources in alternative designs that, ultimately, will be rejected.

An objective of the conceptual systems design phase of the SDLC is to reach consensus between users and systems professionals on plausible alternative designs for the new system.
During this stage the company decides how to meet user needs. The first task is to identify and evaluate appropriate design alternatives. We describe two design approaches to conceptual systems design: the structured approach and the object-oriented approach.
The structured approach develops each new system from scratch from the top down. Object-oriented design builds systems from the bottom up through the assembly of reusable modules rather than creating each system from scratch.

The structured design approach consists of starting with the "big picture" of the proposed system, which is gradually decomposed into more and more detail until it is fully understood. Under this approach, the business process under design is usually documented by data flow and structure diagrams.

The designer starts with an abstract description of the system, and through successive steps,
redefines this view to produce a more detailed description.

The OOD approach builds information systems from reusable standard components or objects.

Elements of the object-oriented approach
- Objects, attributes, and object operations
- Classes and instances: an object class is a logical grouping of individual objects that share the same attributes and operations. An instance is a single occurrence of an object within a class.
- Inheritance: each object instance inherits the attributes and operations of the class to which it belongs; object classes can also inherit from other object classes.
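
A short sketch can make these terms concrete. The class names and attributes below are hypothetical examples, not part of any prescribed AIS design.

class Account:
    # An object class: a logical grouping of objects sharing the same attributes and operations.
    def __init__(self, number, balance=0.0):
        self.number = number      # attribute
        self.balance = balance    # attribute

    def post(self, amount):       # operation available to every instance
        self.balance += amount

class ReceivableAccount(Account):
    # Inheritance: this class reuses Account's attributes and operations.
    def age(self, days_outstanding):
        return "past due" if days_outstanding > 30 else "current"

# An instance is a single occurrence of an object within a class.
customer_1001 = ReceivableAccount(number="AR-1001")
customer_1001.post(250.00)        # inherited operation
print(customer_1001.balance, customer_1001.age(45))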

A system life cycle divides the life of an information system into two stages: systems development, and systems operation and support. First you build it; then you use it, keep it running, and support it. Eventually, you cycle back from operation and support to redevelopment. Figure 3.1 illustrates the two life cycle stages and the two key events between the stages.
- When a system cycles from development to operation and support, a conversion must take place.
- At some point, obsolescence occurs and a system cycles from operation to redevelopment.

Figure 3.1: the system life cycle. The development stage (carried out using a systems development methodology) and the operation and support stage (using information technology) make up the lifetime of a system. A conversion event moves the system from development into operation and support, and obsolescence eventually cycles it back into redevelopment.
A system may be in more than one stage at the same time. For example, version one may be in
operation and support while version two is in development.

So what is a systems development methodology? Figure 3.1 also demonstrates that a systems development methodology implements the development stage of the system life cycle.
A systems development methodology is a very formal and precise system development process that defines a set of activities, methods, best practices, deliverables, and automated tools for system developers and project managers to use to develop and maintain most or all information systems and software.

Methodologies ensure that a consistent, reproducible approach is applied to all projects.

Methodologies reduce the risk associated with shortcuts and mistakes. Finally, methodologies produce complete and consistent documentation from one project to the next. These advantages provide one overriding benefit: as development teams and staff constantly change, the results of prior work can be easily retrieved and understood by those who follow.
Methodologies can be homegrown; however, many businesses purchase their development methodology.
General principles that should underlie all systems development methodologies are:
- Principle 1: Get the owners and users involved. Analysts, programmers, and other information technology specialists frequently refer to "my system". This attitude creates an "us-versus-them" conflict between technical staff and the users and management. Although analysts and programmers work hard to create technologically impressive solutions, those solutions often backfire because they don't address the real organizational problems or they introduce new problems. For this reason, system owner and user involvement is necessary for successful systems development. The individuals responsible for systems development must make time for owners and users, insist on their participation, and seek agreement from them on all decisions that may affect them.
- Principle 2: Use a problem-solving approach. A methodology is a problem-solving approach to building systems. The term problem includes real problems, opportunities for improvement, and directives from management. The classic problem-solving approach is as follows:
1. Study and understand the problem and its context.
2. Define the requirements of a suitable solution.
3. Identify candidate solutions and select the "best" solution.
4. Design and/or implement the solution.
5. Observe and evaluate the solution's impact, and refine the solution accordingly.
Systems analysts should approach all projects using a problem-solving approach.
Principle 3: Establish phases and activities. All life cycle methodologies prescribe phases and activities. The number and scope of phases and activities varies from author to author, expert to expert, and company to company. The phases or activities are:
- Preliminary investigation or system planning
- Problem analysis or system analysis
- Requirements analysis
- Decision analysis or system selection
- Design or detailed design
- Construction
- Implementation

Each phase serves a role in the problem-solving process: some phases identify problems, while others evaluate, design, and implement solutions.
Principle 4: Establish standards. An organization should embrace standards for both information systems and the process used to develop those systems. In medium to large organizations, system owners, users, analysts, designers, and builders come and go. Some will be promoted, some will quit, and others will be reassigned. To promote good communication between constantly changing managers, users, and information technology professionals, you must develop standards to ensure consistent systems development.

Standards should minimally encompass the following:


- Documentation
- Quality
- Automated tools
- Information technology
These standards will be documented and embraced within the context of the chosen system
development process or methodology.
The need for documentation standards underscores a common failure of many analysts: the failure to document as an ongoing activity during the life cycle. Documentation should be a working by-product of the entire systems development effort. Documentation reveals strengths and weaknesses of the system to multiple stakeholders (system owners, system users, system designers, and system builders) before the system is built. It stimulates user involvement and reassures management about progress.
Quality standards ensure that the deliverables of any phase or activity meet business and technology expectations. They minimize the likelihood of missed business problems and requirements, as well as flawed designs and program errors (bugs). Frequently, quality standards are applied to documentation produced during development, but quality standards must also be applied to the technical end products such as databases, programs, user and system interfaces, and networks.
Automated tool standards prescribe technology that will be used to develop and maintain information systems and to ensure consistency, completeness, and quality. Today's developers use automated tools (such as Microsoft Access, Visual Basic, or computer-aided systems engineering (CASE) tools) to facilitate the completion of phases and activities, produce documentation, analyze quality, and generate technical solutions.

Finally, information technology standards direct technology solutions and information


systems to a common technology architecture or configuration. This is similar to automated
tools except the focus is on the underlying technology of the finished product, the
information systems themselves. For example, an organization may standardize on specific
computers and peripherals, operating systems, database management systems, network
topologies, user interfaces, and software architectures. The intent is to reduce effort and cost
required to provide high quality support and maintenance of the technologies themselves.
Information technology standards also promote familiarity, ease of learning, and ease of use
across all information systems by limiting the technology choices. Information technology
standards should not inhibit the investigation or use of appropriate emerging technologies
that could benefit the business.
Principle 5: Justify systems as capital investments. Information systems are capital investments, just as a fleet of trucks and a new building are. Even if management fails to recognize information systems as an investment, you should not. When considering a capital investment, two issues must be addressed.

First, for any problem, there are likely to be several possible solutions. The analyst who fails to look at alternatives may be doing the business a disservice.
Second, after identifying alternative solutions, the systems analyst should evaluate each possible solution for feasibility, especially for cost-effectiveness and risk management.
Cost-effectiveness is defined as the result obtained by striking a balance between the cost of developing and operating an information system and the benefits derived from the system. Cost-effectiveness is measured using a technique called cost-benefit analysis. Risk management is the process of identifying, evaluating, and controlling what might go wrong in a project before it becomes a threat to the successful completion of the project or implementation of the information system.
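
As a rough illustration of cost-benefit analysis, the sketch below discounts estimated costs and benefits to a net present value. The cash flows and discount rate are assumed figures; a real study would also weigh intangible benefits and project risk.

def net_present_value(net_cash_flows, discount_rate):
    # net_cash_flows[t] is benefits minus costs in year t (t = 0, 1, 2, ...).
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(net_cash_flows))

# Year 0: development cost; years 1-4: operating benefits net of maintenance (hypothetical).
flows = [-120000, 35000, 45000, 50000, 50000]
npv = net_present_value(flows, discount_rate=0.10)
print("NPV =", round(npv), "->", "cost-effective" if npv > 0 else "not cost-effective")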
Principle 6: Don’t be afraid to cancel or revise scope a significant advantage of the phased
approach to systems development is that it provides several opportunities to reevaluate cost-
effectiveness and feasibility. There is often a temptation to continue with a project only
because of the investment already made. In the long run canceled projects are less costly than
implemented disasters.
Feasibility should be measured throughout the life cycle. The scope and complexity of an
apparently feasible project can change after the initial problems and opportunities are fully
analyzed or after the system has been designed. Thus, a project that is feasible at one point
may become infeasible later.
We advocate a creeping commitment approach to systems development. Using a creeping commitment approach, multiple feasibility checkpoints are built into any systems development methodology. At each feasibility checkpoint, all costs are considered sunk
(meaning not recoverable). They are, therefore, irrelevant to the decision. Thus, the project
should be reevaluated at each checkpoint to determine if it remains feasible to continue
investing time, effort, and resources.
At each checkpoint, the analyst should consider the following options:
- Cancel the project if it is no longer feasible

- Reevaluate and adjust the costs and schedule if project scope is to be increased
- Reduce the scope if the project budget and schedule are frozen and not sufficient to cover
all project objectives.
Principle 7: Divide and conquer. Consider the old saying, "If you want to learn anything, you must not try to learn everything, at least not all at once." For this reason, we divide a system into subsystems and components to more easily conquer the problem and build the larger system. By dividing a larger problem (system) into more easily managed pieces (subsystems), the analyst can simplify the problem-solving process. This divide-and-conquer approach also complements communication and project management by allowing different pieces of the system to be delegated to different stakeholders.
Principle 8: Design systems for growth and change. Many systems analysts develop systems to meet only today's user requirements because of the pressure to develop the system as quickly as possible. Although this may seem to be a necessary short-term strategy, it frequently leads to long-term problems.
System scientists describe the natural and inevitable decay of all systems over time as entropy. When the cost of maintaining the current system exceeds the cost of developing a replacement system, the current system has reached entropy and become obsolete. But system entropy can be managed. Today's tools and techniques make it possible to design systems that can grow and change as requirements grow and change.
Flexibility and adaptability do not happen by accident; they must be built into a system.
IN-HOUSE SYSTEMS DEVELOPMENT
Organizations usually acquire information systems in two ways: (1) they develop customized systems in-house through formal systems development activities, and (2) they purchase commercial systems from software vendors. Many organizations require systems that are highly tuned to their unique operations. These firms design their own information systems through in-house systems development activities. These systems are developed through a formal process called the systems development life cycle (SDLC), as shown in Fig 3.1.
Fig 3.1 The systems development life cycle. The figure shows the SDLC stages in sequence: systems planning (deliverables: project proposal and schedules); systems analysis (do the initial investigation, systems survey, and feasibility study; determine information needs and system requirements; deliver the systems requirements); conceptual design (identify and evaluate design alternatives, develop design specifications, deliver the conceptual design); system selection (deliver the system selection report); detailed/physical design (design outputs, the database, and inputs; develop programs and procedures; design controls; deliver the developed system); implementation and conversion (develop the implementation and conversion plan, install hardware and software, train personnel, test the system, complete documentation, convert from the old to the new system, fine-tune and do a post-implementation review, deliver the operational system); and operation and maintenance (operate the system, modify the system, do ongoing maintenance, deliver an improved system).

Throughout the life cycle, planning must be done and the behavioral aspects of change must be considered.
Feasibility analysis and decision points address five aspects of feasibility:
- Economic feasibility
- Technical feasibility
- Legal feasibility
- Scheduling feasibility
- Operational feasibility
The SDLC in Figure 3.1 is a seven-stage process consisting of two major phases: new systems development and maintenance.
NEW SYSTEMS DEVELOPMENT - The first six stages of the SDLC describe the activities that all new systems should undergo. Conceptually, new systems development involves five steps: identify the problem, understand what is to be done, consider alternative solutions, select the best solution, and, finally, implement the solution. Each stage of the SDLC produces a set of required documentation that marks the completion of the stage.

MAINTENANCE- once a system is implemented, it enters the second phase in its life cycle-
maintenance. Maintenance involves changing systems to accommodate changes in user needs.
This may be relatively trivial, such as modifying the system to produce a new report or changing
the length of a data field. Maintenance may also be more extensive, such as making major
changes to an application’s logic and user interface.

FRONT END AND BACK END – The first four stages in the SDLC are often called front end
activities. These are concerned, conceptually, with what the system should do. The last three
stages are the back end activities. These stages deal with the technical issues of how the physical
system will accomplish its objectives.

The participants in systems development can be classified into three broad groups: systems professionals, end users, and stakeholders.

Why are accountants involved with the SDLC? The SDLC process is of interest to accountants for two reasons. First, the creation of an information system represents a significant financial transaction that consumes both financial and human resources. Such transactions must be planned, authorized, scheduled, accounted for, and controlled. Accountants are as concerned with the integrity of this process as they are with any manufacturing process that has financial resource implications.

The second and more pressing concern for accountants is with the products that emerge from the SDLC. The quality of accounting information systems rests directly on the SDLC activities that produce them. The accountant's responsibility is to ensure that the systems apply proper accounting conventions and rules, and possess adequate controls. Therefore, accountants are concerned with the quality of the process that produces AIS.

IMPROVING SYSTEMS DEVELOPMENT THROUGH AUTOMATION


System development projects are not always success stories. In fact, by the time they are implemented, some systems are obsolete or defective and must be replaced. Historically, the SDLC has been plagued by three problems that account for most systems failures. These problems are:
- Poorly specified systems requirements
- Ineffective development techniques
- Lack of user involvement in systems development.

Documentation consists of the narratives, flowcharts, diagrams, and other written material that explain how a system works. This information covers the who, what, when, why, and how of data entry, processing, storage, information output, and system controls. Since a picture is worth a thousand words, one popular means of documenting a system is to develop diagrams, flowcharts, tables, and other graphical representations of information. These are then supplemented by a narrative description of the system, a written step-by-step explanation of system components and interactions. The most common systems documentation tools and techniques are:
1. Entity relationship diagrams
2. Data flow diagrams
3. Document flowcharts
4. System flowcharts
5. Program flowcharts
6. Decision tables

An entity relationship diagram (ER diagram) is a documentation technique used to represent the relationships between entities (resources, events, and agents) in a system.
The figure shows an REA-style ER diagram: Resource A participates in a "get resource A" inflow event and Resource B participates in a "give up resource B" outflow event; each event also has a participating internal agent and external agent, and the two events are linked by an economic duality relationship.
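
To suggest how the entities in such a diagram might be represented as data, the sketch below models a resource, an event, and the agents that participate in it. The entity names and fields are hypothetical and simplified.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str                 # e.g., inventory or cash

@dataclass
class Agent:
    name: str
    internal: bool            # internal (employee) or external (customer/vendor)

@dataclass
class Event:
    name: str                 # an inflow ("get") or outflow ("give up") event
    resource: Resource
    participants: List[Agent] = field(default_factory=list)

# An inflow event and its dual outflow event, linked by economic duality.
inventory = Resource("Resource A (inventory)")
cash = Resource("Resource B (cash)")
receive_goods = Event("get resource A", inventory, [Agent("purchasing clerk", True), Agent("vendor", False)])
pay_vendor = Event("give up resource B", cash, [Agent("cashier", True), Agent("vendor", False)])
economic_duality = (receive_goods, pay_vendor)
print(economic_duality)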

Data Flow Diagrams (DFD)
A data flow diagram gives a graphical description of the sources and destinations of data, how data flows within an organization, the processes performed on the data, and how data is stored. It uses symbols to represent the processes, data sources, data flows, and entities in a system.

Symbols and their descriptions:
- A square represents data sources and destinations.
- An arrow represents a data flow.
- A circle (or rounded rectangle) represents a transformation process.
- Two horizontal lines (or an open-ended rectangle) represent a store of data, such as a transaction file, a master file, or a reference file.

DFD Rules

Rule A. No process can have only outputs. If an object has only outputs, then it must be a source; a process must have both inputs and outputs.

Rule B. No process can have only inputs. If an object has only inputs, then it must be a sink.

Rule C. A process has a verb phrase label.

Data store
Rule D. Data cannot move directly from one data store to another data store. Data must be moved by a process.

Rule E. Data cannot move directly from an outside source to a data store. Data must be moved by a process, which receives the data from the source and places the data into the data store.

Rule F. Data cannot move directly to an outside sink from a data store. Data must be moved by a process.

Rule G. A data store has a noun phrase label.

Source/sink
Rule H. Data cannot move directly from a source to a sink. If the data are of any concern to our system, they must be moved by a process; if they are of no concern, the flow is simply not shown.

Rule I. A source/sink has a noun phrase label.


Data flow
Rule J. A data flow has only one direction of flow between symbols. Where a process both retrieves from and updates a data store, it is better to use two separate data flows (one labeled retrieved, one labeled updated).

Rule K. A fork in a data flow means that exactly the same data goes from a common location to two or more different processes, data stores, sources, or sinks.

Rule L. A join in a data flow means that exactly the same data comes from two or more different locations to a common location.

Rule M. A data flow cannot go directly back to the same process it leaves.

A data flow to a data store means update (delete or change). A data flow from a data store means retrieve or use. A data flow has a noun phrase label.

DFDs show what logical tasks are being done, but not how they are done or who (or what) is
performing them.
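
Several of these rules can be checked mechanically. The sketch below represents a DFD as lists of nodes and flows and tests two of the rules above: every process needs at least one input and one output, and data may not flow directly between two data stores. The node names are hypothetical.

# Node kinds: "process", "store", or "source_sink".
nodes = {
    "1.0 Approve credit": "process",
    "Customer": "source_sink",
    "Open orders": "store",
    "Accounts receivable": "store",
}
flows = [
    ("Customer", "1.0 Approve credit"),
    ("1.0 Approve credit", "Open orders"),
    ("Open orders", "Accounts receivable"),   # violates the store-to-store rule
]

def check_dfd(nodes, flows):
    errors = []
    for name, kind in nodes.items():
        if kind == "process":
            has_input = any(dst == name for _, dst in flows)
            has_output = any(src == name for src, _ in flows)
            if not (has_input and has_output):
                errors.append("Process '%s' must have at least one input and one output." % name)
    for src, dst in flows:
        if nodes[src] == "store" and nodes[dst] == "store":
            errors.append("Flow %s -> %s: data cannot move directly between data stores." % (src, dst))
    return errors

print(check_dfd(nodes, flows))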

Subdividing the DFD


DFDs are subdivided into successively lower levels in order to provide ever-increasing amounts of detail, since few systems can be fully diagrammed on one sheet of paper. Also, since users have differing needs, a variety of levels can better satisfy these requirements.

The highest-level DFD is referred to as a context diagram. A context diagram provides the reader with a summary-level view of a system. It depicts a data processing system as well as the external entities that are the sources and destinations of the system's inputs and outputs.

Context DFD for payroll processing procedures: the figure shows a single payroll processing system bubble. Departments send time cards and human resources sends employee data into the system; the system sends employee paychecks to employees, tax reports and payments to government agencies, payroll checks to the bank, and the payroll report to management.
Intermediate-level and elementary DFDs then decompose the context diagram into successively greater detail.

Elements of DFD
Data Sources and Destinations
Data flows
Processes
Data stores
Data dictionary

Top-down decomposition of the structured design approach: the figure shows a context-level DFD (a single process with input and output data flows) decomposed into an intermediate-level DFD (processes 2.1, 2.2, and 2.3), which is decomposed further into an elementary-level DFD (processes 2.3.1, 2.3.2, and 2.3.3); the elementary processes in turn map to a structured program composed of a hierarchy of modules, with a top-level module calling subordinate modules.


Flow charts
A flowchart is an analytical technique used to describe some aspect of an information system in a clear, concise, and logical manner. Flowcharts use a standard set of symbols to describe pictorially the transaction processing procedures used by a company and the flow of data through a system. Flowcharts are commonly used by accountants to represent both the logical and physical elements of manual systems.
Flowcharting symbols can be divided into the following four categories:

1. Input/output symbols represent devices or media that provide input to or record output from processing operations.
2. Processing symbols either show what type of device is used to process data or indicate when processing is performed manually.
3. Storage symbols represent the devices used to store data that the system is not currently using.
4. Flow and miscellaneous symbols indicate the flow of data and goods. They also represent such operations as where flowcharts begin or end, where decisions are made, and when to add explanatory notes to flowcharts.
The symbols include:
- Terminal, showing the source or destination of documents and reports
- Source document or report
- Manual operation
- File for storing source documents and reports
- Accounting records (journals, registers)
- Calculated batch total
- On-page connector
- Off-page connector
- Description of a process or comments (annotation)
- Document flowline

Document flowchart example: the figure shows documents (Doc 1, Doc 2, and Doc 3) flowing among three departments (A, B, and C), passing through manual processes (1, 2, and 3), being posted to accounting records, filed, and exchanged with outside processes; an on-page connector (A) links the flows between departments.
Each symbol used to create flowcharts has a special meaning that is easily conveyed by its shape. The shape indicates and describes the operations performed and the input, output, processing, and storage media employed.

Document Flowcharts
A document flowchart is a graphical description of the flow of documents and information between departments or areas of responsibility within an organization. Document flowcharts trace a document from its cradle to its grave. They show where each document originates, its distribution, the purposes for which it is used, its ultimate disposition, and everything that happens as it flows through the system.
A document flowchart is particularly useful in analyzing the adequacy of control procedures in a system, such as internal checks and segregation of functions. Flowcharts that describe and evaluate internal controls are often referred to as internal control flowcharts. The document flowchart can reveal weaknesses or inefficiencies in a system, such as inadequate communication flows, unnecessary complexity in document flows, or procedures responsible for causing wasteful delays.

Computer System Flowcharts


A system flowchart is a graphical description of the relationships among the key elements of an information system: input sources, programs/processing, and outputs. System flowcharts also depict the type of media being used (such as paper, magnetic tape, magnetic disks, and terminals). A system flowchart provides a broad view of an entire system in a concise format. Such high-level documentation does not provide the operational details that are sometimes needed. For example, an auditor wishing to assess the correctness of the logic used in the edit program cannot do so from the system flowchart. This level of detail is provided by program flowcharts.
Program Flowchart
Every program represented in a system flowchart should have a supporting program flowchart
that describes its logic. A program flowchart illustrates the sequence of logical operations
performed by a computer in executing a program.
Program flowcharts employ a subset of the symbols shown in the previous figure. A flowline connects the symbols and indicates the sequence of operations. The processing symbol represents a data movement or arithmetic calculation. The input/output symbol represents either the reading of input or the writing of output. The decision symbol represents a comparison of one or more variables and the transfer of flow to alternative logic paths. All points where the flow begins or ends are represented by the terminal symbol. Connectors, labeled with a digit or a capital letter, represent the continuation of the logic flow at a different location. Several exit connectors may have the same label, but each label can have only one entry connector. Once designed and approved, the program flowchart serves as the blueprint for coding the computer program.

Relationship between system and program flowcharts: a program flowchart describes the specific logic used to perform a process shown on a system flowchart. The figure pairs the two: the system flowchart shows input, storage, and output symbols at a high level, while the corresponding program flowchart shows the detailed steps (read the input data, test whether a condition is met, perform a calculation, and update the record).

A simple program flowchart for processing credit orders follows this same pattern.

Decision Tables
A decision table is a tabular representation of decision logic. For any given situation, a decision table lists all the conditions (the ifs) that are possible in making a decision. It also lists the alternative actions (the thens). Each unique relationship (if this condition exists, then take this action) is referred to as a decision rule.
A decision table has four parts: the condition and action stubs and the condition and action entries. The condition stub contains the various logic conditions for which the input data are tested. For example, there are three entries in the condition stub in Table B. Note that they correspond to the three decision symbols in the program flowchart.
The condition entry consists of a set of vertical columns, each representing a decision rule, in which the entries must be either yes (Y), no (N), or a dash (-). The dash indicates an indifferent result of the condition test. For example, when credit is not approved, the quantity of inventory on hand is not relevant.

The action stub contains the actions the program should take. Table B indicates that four actions can be undertaken: reject the order, back-order, fill the order, or give a discount. The action entry columns indicate when an action is to occur. They display an X if the action is performed (if the input data meet the condition tests). A blank indicates the action is not performed.
Table A
General Form of a Decision Table

                Stub                     Entry
Condition       (Specific conditions)    Condition rules
Action          (Specific actions)       Action rules

Table B
Simple Decision Table for Processing Credit Orders

                            Rule:   a    b    c    d
Credit approved                     N    Y    Y    Y
Order inventory on hand             -    N    Y    Y
Order > 500 units                   -    -    N    Y
Reject order                        X
Back-order                               X
Fill order                                    X    X
Give 20% discount                                  X
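
The decision rules in Table B translate directly into program logic. The sketch below is a minimal rendering of those four rules; the function and parameter names are invented for illustration.

def process_credit_order(credit_approved, inventory_on_hand, quantity):
    # Apply the decision rules of Table B to one credit order.
    actions = []
    if not credit_approved:              # rule a
        actions.append("reject order")
    elif not inventory_on_hand:          # rule b
        actions.append("back-order")
    else:
        actions.append("fill order")     # rules c and d
        if quantity > 500:               # rule d only
            actions.append("give 20% discount")
    return actions

print(process_credit_order(credit_approved=True, inventory_on_hand=True, quantity=600))
# Expected output: ['fill order', 'give 20% discount']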

