Chapter 3 (Eis)
INFORMATION SYSTEMS AND ITS COMPONENTS
LEARNING OUTCOMES
[Chapter overview: Information Systems (IS) and their Computer System Components (People, Hardware, Software, Data Resources, Networking and Communication System); Classification and Nature of IS Resources; Objectives of Controls; Environmental Controls; Logical Access Controls; Managerial Controls; Application Controls; Information Systems Auditing; Audit Trail.]
3.1 INTRODUCTION
Over the past few decades, the world has moved on from connections among
individuals to connections among systems. We now have systems that constantly
exchange information about various things, and even about us, often without
human intervention. This inter-networking of physical devices, vehicles, smart
devices, embedded electronics, software, sensors and similar devices is often
referred to as the Internet of Things (IoT).
What is interesting about these emerging technologies is that at their core lie
some key elements, namely People, Computer Systems (Hardware, Operating
System and other Software), Data Resources, and Networking and Communication
Systems. In this chapter, we are going to explore each of these key elements.
[Fig. 3.2.1: Functions of Information Systems: input, processing and output, with Control (decision makers, auto control) acting on Feedback returned to the user.]
The three basic activities of an information system defined above help an
enterprise make decisions, control operations, analyze problems and create
new products or services as output, as shown in Fig. 3.2.1. Apart from these
activities, information systems also need feedback, which is returned to
appropriate members of the enterprise to help them evaluate the input stage.
Information Systems are networks of hardware and software that people and
organizations use to create, collect, filter, process and distribute data. Information
Systems are interrelated components working together to collect, process,
store and disseminate information to support decision-making, coordination,
control, analysis and visualization in an organization. An Information System
comprises People, Hardware, Software, Data and a Network for communication
support, as shown in Fig. 3.3.1.
Here, people means IT professionals (i.e. system administrators and programmers)
and end users (i.e. the persons who use hardware and software for retrieving the
desired information). Hardware means the physical components of computers,
i.e. servers or smart terminals with different configurations such as
Core i3/Core i5/Core i7 processors. Software means the system software
(different types of operating systems, e.g. UNIX, LINUX, WINDOWS), application
software (different types of computer programs designed to perform specific tasks)
and utility software (e.g. tools). Data is the raw fact, which may be held in the form
of a database; it may be alphanumeric, text, image, video, audio or other
forms. Network means the communication media (Internet, Intranet, Extranet etc.).
In the ever-changing world, innovation is the only key, which can sustain long-run
growth. More and more firms are realizing the importance of innovation to gain
competitive advantage. Accordingly, they are engaging themselves in various
innovative activities. Understanding these layers of information system helps any
enterprise grapple with the problems it is facing and innovate to perhaps reduce
total cost of production, increase income avenues and increase efficiency of
systems.
devices work together. The CPU is built on a small chip of silicon and can
contain the equivalent of several million transistors. We can think of
transistors as switches which can be “ON” or “OFF”, i.e., taking a value of 1
or 0. The processor or CPU is like the brain of the computer.
(iii) Data Storage Devices refers to the memory where data and programs are
stored. Various types of memory techniques/devices are given as follows:
(a) Internal Memory: This includes Processer Registers and Cache
Memory.
Processor Registers: Registers are internal memory within CPU,
which are very fast and very small.
Cache Memory: To bridge the huge speed differences between
Registers and Primary Memory, we have cache memory. Cache is a
smaller, faster memory, which stores copies of the data from the
most frequently used main memory locations so that
Processor/Registers can access it more rapidly than main memory.
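As a rough illustration of the behaviour described above, the sketch below models a small, fixed-capacity cache in Python that keeps copies of the most recently used main-memory locations; the capacity, addresses and values are made up for illustration, and real caches are implemented in hardware.

```python
from collections import OrderedDict

class Cache:
    """Toy model of a CPU cache: a small, fast store holding copies of
    the most recently used main-memory locations (LRU eviction)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> value
        self.hits = self.misses = 0

    def read(self, address, main_memory):
        if address in self.lines:            # cache hit: fast path
            self.hits += 1
            self.lines.move_to_end(address)  # mark as most recently used
        else:                                # cache miss: fetch from main memory
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used line
            self.lines[address] = main_memory[address]
        return self.lines[address]

main_memory = {addr: addr * 10 for addr in range(100)}
cache = Cache(capacity=4)
for addr in [1, 2, 1, 1, 3, 1, 2]:   # repeated accesses to hot addresses
    cache.read(addr, main_memory)
print(cache.hits, cache.misses)      # most accesses are served from the cache
```

Because the hot addresses recur, most reads are satisfied from the cache rather than main memory, which is exactly the speed advantage the text describes.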
(b) Primary Memory/Main Memory: These are devices in which any
location can be accessed by the computer’s processor in any order (in
contrast with sequential order). There are two types of primary memory
as discussed in Table 3.3.1:
Table 3.3.1: RAM vs ROM
(c) Secondary Memory: CPU refers to the main memory for execution of
programs, but these main memories are volatile in nature and hence
cannot be used to store data on a permanent basis in addition to being
small in storage capacity. The secondary memories are available in
bigger sizes; thus programs and data can be stored on secondary
memories.
Secondary storage differs from primary storage in that it is not directly
accessible by the CPU. The features of secondary memory devices are
non-volatility (contents are permanent in nature), greater capacity (they
are available in large sizes), greater economy (their cost is lower
compared to registers and RAM) and slower speed (slower compared to
registers or primary storage).
(d) Virtual Memory: Virtual Memory is not a separate device but an
imaginary memory area supported by some operating systems (for
example, Windows) in conjunction with the hardware. If a computer
lacks the amount of Random-Access Memory (RAM) needed to
run a program or operation, Windows uses virtual memory to
compensate. Virtual memory combines the computer’s RAM with
temporary space on the hard disk. When RAM runs low, virtual memory
moves data from RAM to a space called a paging file; moving data to
and from the paging file frees up RAM to complete its work. Thus,
virtual memory is an allocation of hard disk space to help RAM, as
depicted in Fig. 3.3.2.
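The paging behaviour described above can be sketched as a toy model; the RAM capacity and page contents below are invented, and real operating systems manage pages with hardware assistance rather than Python dictionaries.

```python
# Toy model of virtual memory: when RAM is full, the least recently
# used page is moved to a paging file on disk to free space.
RAM_CAPACITY = 3
ram = {}          # page number -> data (fast)
paging_file = {}  # page number -> data (slow, on disk)
use_order = []    # least recently used pages first

def access(page):
    if page in ram:
        use_order.remove(page)
    else:
        if page in paging_file:            # page fault: bring page back from disk
            ram[page] = paging_file.pop(page)
        else:
            ram[page] = f"data-{page}"     # first touch: allocate the page
        if len(ram) > RAM_CAPACITY:        # RAM is low: page out the LRU page
            victim = use_order.pop(0)
            paging_file[victim] = ram.pop(victim)
    use_order.append(page)                 # mark page as most recently used
    return ram[page]

for p in [1, 2, 3, 4, 1]:
    access(p)
print(sorted(ram), sorted(paging_file))
```

After the five accesses, the least recently used page has been moved to the paging file to make room in RAM, mirroring the description above.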
(iv) Output Devices: Computer systems provide output to decision makers at all
levels in an enterprise to solve business problems; the desired output may be
in visual, audio or digital form. Output devices are the devices through which a
system responds. For example, a display device visually conveys
text, graphics, and video information. Information shown on a display device
is called soft copy because the information exists electronically and is
displayed for a temporary period. Display devices include CRT monitors, LCD
monitors and displays, gas plasma monitors, and televisions. Some types of
output are textual, graphical, tactile, audio, and video.
• Textual output comprises characters that are used to create words,
sentences, and paragraphs.
• Graphical outputs are digital representations of non-text information
such as drawings, charts, photographs, and animation.
• Tactile output such as raised line drawings may be useful for some
individuals who are blind.
• Audio output is any music, speech, or any other sound.
• Video output consists of images played back at speeds to provide the
appearance of full motion.
Most common examples of output devices are Speakers, Headphones, Screen
(Monitor), Printer, Voice output communication aid, Automotive navigation
system, Video, Plotter, Wireless etc.
II. Software
Software is defined as a set of instructions that tell the hardware what to do.
Software is created through the process of programming. Without software, the
hardware would not be functional. Software can be broadly divided into two
categories: Operating Systems Software and Application Software as shown in
the Fig. 3.3.3. Operating systems manage the hardware and create the interface
between the hardware and the user. Application software is the category of
programs that do some processing/task for the user.
machine, which is more powerful and easy to use. Some prominent Operating
systems used nowadays are Windows 7, Windows 8, Linux, UNIX, etc.
All computing devices run an operating system. For personal computers, the most
popular operating systems are Microsoft’s Windows, Apple’s OS X, and different
versions of Linux. Smart phones and tablets run operating systems as well, such as
Apple’s iOS, Google Android, Microsoft’s Windows Phone OS, and Research in
Motion’s Blackberry OS.
A variety of activities are executed by Operating Systems, which include:
♦ Performing hardware functions: The Operating System acts as an intermediary
between the application program and the hardware, obtaining input from
keyboards, retrieving data from disks and displaying output on monitors.
♦ User Interfaces: Nowadays, Operating Systems are Graphical User Interface (GUI)
based, using icons and menus, as in the case of Windows.
♦ Hardware Independence: The Operating System provides Application Program
Interfaces (API), which can be used by application developers to create
application software, obviating the need to understand the inner workings
of the OS and hardware. Thus, the OS gives us hardware independence.
♦ Memory Management: The Operating System controls how memory is
accessed and maximizes available memory and storage.
♦ Task Management: This facilitates a user to work with more than one
application at a time (multitasking) and allows more than one user to use the
system (time sharing).
♦ Networking Capability: Operating Systems can provide systems with features
and capabilities to help connect computer networks, like Linux and Windows 8.
♦ Logical Access Security: Operating Systems provide logical security by
establishing a procedure for identification and authentication using a User ID
and Password.
♦ File Management: The Operating System keeps track of where each file is
stored and who can access it, based on which it provides file retrieval.
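The Logical Access Security activity above can be illustrated with a minimal identification-and-authentication sketch; the user ID and password are hypothetical, and real operating systems use considerably more elaborate mechanisms than this.

```python
import hashlib, hmac, os

# Minimal sketch of OS-style logical access control: identification
# (User ID) plus authentication (password). Only a salted hash of the
# password is stored, never the password itself.
users = {}  # user_id -> (salt, password_hash)

def register(user_id, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[user_id] = (salt, digest)

def log_in(user_id, password):
    if user_id not in users:                     # identification failed
        return False
    salt, stored = users[user_id]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)   # authentication

register("auditor1", "s3cret!")
print(log_in("auditor1", "s3cret!"))   # correct password: access granted
print(log_in("auditor1", "wrong"))     # wrong password: access denied
```

Storing only salted hashes means that even someone who reads the security database cannot recover the passwords, which is why this design is the conventional choice.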
(b) Application Software
As the personal computer proliferated inside organizations, control over the
information generated by the organization began splintering. Say the customer
service department creates a customer database to keep track of calls and problem
reports, and the sales department also creates a database to keep track of customer
information. Which one should be used as the master list of customers? As another
example, someone in sales might create a spreadsheet to calculate sales revenue,
while someone in finance creates a different one that meets the needs of their
department. However, it is likely that the two spreadsheets will come up with
different totals for revenue. Which one is correct? And who is managing all this
information? To resolve these issues, various specific purpose applications were
created.
Application software includes all computer software that causes a computer to
perform useful tasks beyond the running of the computer itself. It is a collection of
programs which address real-life problems of end users, which may be business,
scientific or any other problems. Application suites like MS Office 2010, which has
MS Word, MS Excel, MS Access, etc.; enterprise software like SAP; and content access
software like media players, Adobe Digital etc. are some examples of Application
Software.
3.3.3 Data Resources
You can think of data as a collection of facts. For example, your street address,
the city you live in, and your phone number are all pieces of data. Like software,
data is intangible. By themselves, pieces of data are not very useful; but aggregated,
indexed and organized together into a database, data can become a powerful tool
for businesses. For years, businesses have been gathering information about
customers, suppliers, business partners, markets, cost and price movements and
so on. Having collected this information for years, companies have now started
analyzing it and creating important insights out of the data.
Data is now helping companies create strategies for the future. This is precisely the
reason why we have started hearing a lot about data analytics in the past few years.
♦ Data: Data, plural of datum, are the raw bits and pieces of information with
no context, which can be either quantitative or qualitative. Quantitative data is
numeric, the result of a measurement, count, or some other mathematical
calculation. Qualitative data is descriptive: “Ruby Red,” the color of a 2013
Ford Focus, is an example of qualitative data. By itself, data is not that useful;
to be useful, it needs to be given context. For example, “15, 23, 14, and
85” mean little on their own, but if we know they are the numbers of students
that had registered for upcoming classes, that would be information. Once we
have put our data into context, aggregated and analyzed it, we can use it to
make decisions for our organization.
♦ Database: A set of logically inter-related organized collection of data is
Database. The goal of many Information Systems is to transform data into
information to generate knowledge that can be used for decision making. To do
this, the system must be able to take data, put the data into context and provide
tools for aggregation and analysis.
♦ Database Management Systems (DBMS): A DBMS may be defined as
software that aids in organizing, controlling and using the data needed by
application programmes. It provides the facility to create and maintain a
well-organized database. Personal DBMS packages are primarily used to
develop and analyze single-user databases and are not meant to be shared
across a network or the Internet; they are instead installed on a device and
work with a single user at a time. Various operations that can be performed
on database files are as follows:
• Adding new files to database,
• Deleting existing files from database,
• Inserting data in existing files,
• Modifying data in existing files,
• Deleting data in existing files, and
• Retrieving or querying data from existing files.
Commercially available Database Management Systems include Oracle, MySQL,
SQL Server and DB2. DBMS packages generally provide an interface to
view and change the design of the database, create queries, and develop
reports. Microsoft Access and OpenOffice Base are examples of personal
DBMS.
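The file operations listed above can be sketched using Python's built-in SQLite engine; the table and column names below are illustrative, not drawn from any particular system.

```python
import sqlite3

# Sketch of the DBMS operations listed above, using an in-memory
# SQLite database (table and data are invented for illustration).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Adding a new file/table to the database:
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
# Inserting data in an existing file:
cur.execute("INSERT INTO customers VALUES (1, 'Asha', 'Pune')")
cur.execute("INSERT INTO customers VALUES (2, 'Ravi', 'Delhi')")
# Modifying data in an existing file:
cur.execute("UPDATE customers SET city = 'Mumbai' WHERE id = 2")
# Deleting data in an existing file:
cur.execute("DELETE FROM customers WHERE id = 1")
# Retrieving or querying data from an existing file:
rows = cur.execute("SELECT name, city FROM customers").fetchall()
print(rows)  # [('Ravi', 'Mumbai')]
# DROP TABLE customers would delete the file/table itself.
conn.close()
```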
♦ Database Models: Databases can be organized in many ways, and thus take
many forms. A Database Model is a type of data model that determines the
logical structure of a database and fundamentally determines the manner in
which data can be stored, organized and manipulated. The hierarchy of a
database is as under:
• Database: This is a collection of Files/Tables.
• File or Table: This is a collection of Records. It is also referred to as an Entity.
• Record: This is a collection of Fields.
• Field: This is a collection of Characters, defining a relevant attribute of Table
instance.
• Characters: These are a collection of Bits.
[Figure: the database hierarchy (Database, File/Table, Record, Field, Character) and two database models: a hierarchical model, in which a Root record has children (e.g. Room), which in turn parent Equipment and Repair records; and a network model, in which Repair Invoice records are members of both a vendor-repair-invoice set and an equipment-repair-invoice set, so that Vendor and Equipment records jointly own them.]
In this, all the tables are related by one or more fields, so that it is possible to
connect all the tables in the database through the field(s) they have in
common. For each table, one of the fields is identified as a Primary Key, which
is the unique identifier for each record in the table. Keys are commonly used
to join or combine data from two or more tables. Popular examples of
relational databases are Microsoft Access, MySQL, and Oracle.
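A minimal sketch of a primary key and a join, again using SQLite; the customers and orders tables below are invented for illustration.

```python
import sqlite3

# Two tables related through a common field: orders.customer_id
# references the primary key customers.id (names are illustrative).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    amount REAL)""")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0)])
# Join the tables on the field they have in common:
result = cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.id""").fetchall()
print(result)  # [('Asha', 350.0), ('Ravi', 75.0)]
conn.close()
```

The primary key gives each record a unique identifier, and the join combines data from both tables through that shared field, exactly as described above.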
D. Object Oriented Data Base Model: It is based on the concept that the world
can be modeled in terms of objects and their interactions. An Object-
Oriented Database provides a mechanism to store complex data such as
images, audio and video, etc. An object-oriented database (also referred to
as Object-Oriented Database Management System or OODBMS) is a set of
objects. In these databases, the data is modeled and created as objects.
OODBMS helps programmers make objects which are an independently
functioning application or program, assigned with a specific task or role to
perform. In Fig. 3.3.7, the light rectangle indicates that ‘engineer’ is an
object possessing attributes like ‘date of birth’, ‘address’, etc., which
interacts with another object known as ‘civil jobs’. When a civil job is
commenced, the ‘civil job’ object sends a message to the ‘engineer’ object,
which updates the latter’s ‘current job’ attribute.
[Fig. 3.3.7: An object-oriented model: an ‘Engineer’ object with attributes Engineer ID No., Civil Jobs, Date of Birth, Address, Employment Date, Current Job and Experience; a part-of structure linking it with ‘Civil Job’ and ‘Team’ objects; and a class-of structure linking it with ‘Civil Architect Engineer’.]
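The interaction described above can be sketched as plain Python objects; the class and attribute names follow Fig. 3.3.7, while the method names are assumptions made for illustration.

```python
# Toy illustration of the object interaction above: a CivilJob object
# sends a message to an Engineer object, which updates its own
# 'current_job' attribute in response.
class Engineer:
    def __init__(self, engineer_id, date_of_birth, address):
        self.engineer_id = engineer_id
        self.date_of_birth = date_of_birth
        self.address = address
        self.current_job = None
        self.experience = []

    def receive_job(self, job):        # message received from a CivilJob object
        if self.current_job is not None:
            self.experience.append(self.current_job)
        self.current_job = job.name

class CivilJob:
    def __init__(self, name):
        self.name = name

    def commence(self, engineer):      # commencing the job sends the message
        engineer.receive_job(self)

e = Engineer("E-101", "1990-05-12", "Pune")
CivilJob("Bridge survey").commence(e)
CivilJob("Dam inspection").commence(e)
print(e.current_job, e.experience)  # Dam inspection ['Bridge survey']
```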
analyzing the data that is needed for day-to-day operations is not a good idea; we
do not want to tax the operations of the company more than we need to. Further,
organizations also want to analyze data in a historical sense: How does the data we
have today compare with the same set of data this time last month, or last year?
From these needs arose the concept of the data warehouse.
The concept of the Data Warehouse is simple: Extract data from one or more of
the organization’s databases and Load it into the data warehouse (which is itself
another database) for storage and analysis. However, the execution of this concept
is not that simple. A data warehouse should be designed so that it meets the
following criteria:
♦ It uses non-operational data. This means that the data warehouse is using a
copy of data from the active databases that the company uses in its day-to-day
operations, so the data warehouse must pull data from the existing databases
on a regular, scheduled basis.
♦ The data is time-variant. This means that whenever data is loaded into the data
warehouse, it receives a time stamp, which allows for comparisons between
different time periods.
♦ The data is standardized. Because the data in a data warehouse usually comes
from several different sources, it is possible that the data does not use the same
definitions or units. For example, our Events table in our Student Clubs database
lists the event dates using the mm/dd/yyyy format (e.g., 01/10/2013). A table in
another database might use the format yy/mm/dd (e.g., 13/01/10) for dates. For
the data warehouse to match up dates, a standard date format would have to be
agreed upon and all data loaded into the data warehouse would have to be
converted to use this standard format. This process is called Extraction-
Transformation-Load (ETL).
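A minimal ETL sketch under the assumptions of the example above: rows are extracted from two sources that use different date formats, transformed to one standard format, and loaded into the warehouse with a time stamp. The source data is invented.

```python
from datetime import datetime, timezone

# Extract: two operational sources using different date formats.
events_db = [{"event": "Orientation", "date": "01/10/2013"}]   # mm/dd/yyyy
other_db  = [{"event": "Audit talk",  "date": "13/01/10"}]     # yy/mm/dd

# Transform: convert every date to a single agreed standard format.
def transform(row, fmt):
    parsed = datetime.strptime(row["date"], fmt)
    return {"event": row["event"], "date": parsed.strftime("%Y-%m-%d")}

# Load: each record receives a time stamp (time-variant data).
warehouse = []
loaded_at = datetime.now(timezone.utc).isoformat()
for row in events_db:
    warehouse.append({**transform(row, "%m/%d/%Y"), "loaded_at": loaded_at})
for row in other_db:
    warehouse.append({**transform(row, "%y/%m/%d"), "loaded_at": loaded_at})

print([r["date"] for r in warehouse])  # both dates now match: 2013-01-10
```

Once both sources use the standard format, the warehouse can match up dates that looked different in the operational systems.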
There are two primary schools of thought when designing a data warehouse:
Bottom-Up and Top- Down.
• The Bottom-Up Approach starts by creating small data warehouses,
called data marts, to solve specific business problems. As these data marts
are created, they can be combined into a larger data warehouse.
• The Top-Down Approach suggests that we should start by creating an
enterprise-wide data warehouse and then, as specific business needs are
identified, create smaller data marts from the data warehouse.
[Figure: a central Data Warehouse fed by operational databases and serving functions such as Inventory Control, Marketing, Sales, Logistics, Accounting, Shipping, Purchasing, Management Reporting and CRM.]
[Fig. 3.3.9: The knowledge discovery process: Databases, Target Data, transformation into the Data Warehouse, Data Mining, Pattern Interpretation/Evaluation, and Business Knowledge.]
e. Data Mining: In this, various data mining techniques are applied on the data to
discover the interesting patterns. Techniques like clustering and association
analysis are among the many different techniques used for data mining.
f. Pattern Evaluation and Knowledge Presentation: This step involves
visualization, transformation, removing redundant patterns etc. from the
patterns we generated.
g. Decisions / Use of Discovered Knowledge: This step helps user to make use of
the knowledge acquired to take better decisions.
In some cases, a data-mining project is begun with a hypothetical result in mind.
For example, a grocery chain may already have some idea that buying patterns
change after it rains and want to get a deeper understanding of exactly what is
happening. In other cases, there are no presuppositions and a data-mining
program is run against large data sets to find patterns and associations. Refer Fig.
3.3.9.
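As a toy example of association analysis, the sketch below counts how often pairs of items appear together in made-up market baskets; real data-mining tools use far more sophisticated algorithms (such as Apriori) and much larger data sets.

```python
from collections import Counter
from itertools import combinations

# Toy association analysis: count item pairs bought together to surface
# patterns (baskets and threshold are invented for illustration).
baskets = [
    {"umbrella", "bread", "milk"},
    {"umbrella", "bread"},
    {"bread", "milk"},
    {"umbrella", "bread", "eggs"},
]
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

min_support = 3  # keep only pairs seen in at least 3 baskets
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)  # {('bread', 'umbrella'): 3}
```

Here the analysis surfaces one pattern, bread and umbrellas bought together, of the kind a grocery chain might then investigate further.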
3.3.4 Networking and Communication Systems
In today’s high-speed world, we cannot imagine an information system without an
effective and efficient communication system, which is a valuable resource that
aids good management. Telecommunication networks give an organization the
capability to move information rapidly between distant locations, provide the
ability for employees, customers, and suppliers to collaborate from anywhere,
and bring processing power to the point of the application. All of this offers a
firm important opportunities to restructure its business processes and to capture
high competitive ground in the marketplace. Through telecommunications, this
value may be:
(i) An increase in the efficiency of operations;
(ii) Improvements in the effectiveness of management; and
(iii) Innovations in the marketplace.
Computer Network is a collection of computers and other hardware
interconnected by communication channels that allow sharing of resources and
information. Where at least one process in one device can send/receive data
to/from at least one process residing in a remote device, then the two devices are
said to be in a network. A network is a group of devices connected to each other.
Network and Communication Systems: These consist of both physical devices and
software that link the various pieces of hardware and transfer data from one
physical location to another. Computers and communications equipment can be
connected in networks for sharing voice, data, images, sound and video. A network
links two or more computers to share data or resources, such as a printer.
Every enterprise needs to manage its information in an appropriate and desired
manner. The enterprise must do the following for this:
♦ Knowing its information needs;
♦ Acquiring that information;
♦ Organizing that information in a meaningful way;
♦ Assuring information quality; and
♦ Providing software tools so that users in the enterprise can access information
they require.
Each component in a computer network, namely each computer, is called a ‘Node’.
Computer networks are used for exchange of data among different computers and
to share resources like CPU, I/O devices, storage, etc. without much of an
impact on individual systems. In the real world, we see numerous networks, like
telephone/mobile networks, postal networks etc. If we look at these systems, we
can see that networks can be of two types:
♦ Connection-Oriented Networks: Wherein a connection is first established
between the sender and the receiver and then data is exchanged, as happens
in telephone networks.
♦ Connectionless Networks: Where no prior connection is made before data
exchange. Each piece of data exchanged carries the complete contact
information of the recipient, and at each intermediate destination it is decided
how to forward it further, as happens in postal networks.
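The two network types can be contrasted with a toy model: the connection-oriented channel refuses data until a connection is established, while the connectionless channel attaches the recipient's address to every message. The class names are invented for illustration; real networks implement these ideas in protocols such as TCP and UDP.

```python
# Toy models of the two network types described above.
class ConnectionOriented:
    """Telephone-like: a connection must exist before data flows."""
    def __init__(self):
        self.connected = False
        self.delivered = []

    def connect(self):
        self.connected = True            # handshake before any data

    def send(self, data):
        if not self.connected:
            raise RuntimeError("connection must be established first")
        self.delivered.append(data)

class Connectionless:
    """Postal-like: every message carries the recipient's address."""
    def __init__(self):
        self.delivered = []

    def send(self, recipient, data):
        self.delivered.append((recipient, data))  # address travels with the data

phone = ConnectionOriented()
phone.connect()                    # establish the connection first
phone.send("hello")

post = Connectionless()
post.send("Branch-42", "hello")    # no prior connection needed
print(phone.delivered, post.delivered)
```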
These real-world networks have helped model computer networks. Each of these
networks is modeled to address the following basic issues:
♦ Routing: It refers to the process of deciding on how to communicate the data
from source to destination in a network.
♦ Bandwidth: It refers to the amount of data which can be sent across a network
in given time.
♦ Resilience: It refers to the ability of a network to recover from any kind of error
like connection failure, loss of data etc.
♦ Contention: It refers to the situation that arises when there is a conflict for some
common resource in a network. For example, network contention could arise
when two or more computer systems try to communicate at the same time.
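Routing as defined above can be sketched as a shortest-path decision over a small made-up network, here using breadth-first search to find the fewest-hop route between two nodes; production routers use more elaborate protocols and metrics.

```python
from collections import deque

# A small invented network: node -> directly connected neighbours.
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def route(source, destination):
    """Return the fewest-hop path from source to destination (BFS)."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in network[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # destination unreachable: a resilience concern

print(route("A", "E"))  # ['A', 'B', 'D', 'E']
```

Returning None for an unreachable destination also hints at resilience: when a link fails, the routing decision must find an alternative path or report that none exists.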
The following are the important benefits of a computer network:
♦ Distributed nature of information: There are many situations where
information must be distributed geographically. E.g., in the case of a banking
company, accounting information of various customers could be distributed
across various branches, but to prepare the Consolidated Balance Sheet at the
year-end, it would need networking to access information from all its branches.
♦ Resource Sharing: Data could be stored at a central location and can be shared
across different systems. Even resource sharing could be in terms of sharing
peripherals like printers, which are normally shared by many systems. E.g. In the
case of a CBS, Bank data is stored at a Central Data Centre and could be accessed
by all branches as well as ATMs.
♦ Computational Power: The computational power of most of the applications
would increase drastically if the processing is distributed amongst computer
systems. For example: processing in an ATM machine in a bank is distributed
between ATM machine and the central Computer System in a Bank, thus
reducing load on both.
♦ Reliability: Many critical applications should be available 24x7; if such
applications run across different systems distributed across a network, then
the reliability of the application would be high. E.g., in a city, there could be
multiple ATM machines so that if one ATM fails, one could withdraw money
from another ATM.
♦ User communication: Networks allow users to communicate using e-mail,
newsgroups, video conferencing, etc.
Telecommunications may provide these values through the following impacts:
(a) Time compression: Telecommunications enable a firm to transmit raw data
and information quickly and accurately between remote sites.
(b) Overcoming geographical dispersion: Telecommunications enable an
organization with geographically remote sites to function, to a degree, as
though these sites were a single unit. The firm can then reap benefits of scale
and scope which would otherwise be unobtainable.
(c) Restructuring business relationships: Telecommunications make it possible
to create systems which restructure the interactions of people within a firm
[Figure: classification of controls: by objective (Preventive, Detective, Corrective) and by nature of IS resource (Environmental, Physical Access, Logical Access, Managerial, Application).]
also), etc., and Passwords. The above list contains both manual and
computerized preventive controls.
The following Table 3.4.1 shows how the same purpose is achieved by using
manual and computerized controls.
Table 3.4.1: Preventive Controls
Purpose: Restrict unauthorized entry into the premises.
Manual Control: Build a gate and post a security guard.
Computerized Control: Use access control software, smart card, biometrics, etc.

Purpose: Restrict unauthorized entry into the software applications.
Manual Control: Keep the computer in a secured location and allow only
authorized persons to use the applications.
Computerized Control: Use access control, viz. User ID, password, smart card, etc.
(B) Detective Controls: These controls are designed to detect errors, omissions
or malicious acts that occur and report the occurrence. In other words,
Detective Controls detect errors or incidents that elude preventive controls.
For example, a detective control may identify account numbers of inactive
accounts or accounts that have been flagged for monitoring of suspicious
activities. Detective controls can also include monitoring and analysis to
uncover activities or events that exceed authorized limits or violate known
patterns in data that may indicate improper manipulation. For sensitive
electronic communications, detective controls can indicate that a message
has been corrupted or the sender’s secure identification cannot be
authenticated. Some of the examples of Detective Controls are as follows:
• Review of payroll reports;
• Comparing transactions on reports to source documents;
• Monitoring actual expenditures against budget;
• Use of automatic expenditure profiling where management gets regular
reports of spend to date against profiled spend;
• Hash totals;
• Check points in production jobs;
• Echo control in telecommunications;
• Duplicate checking of calculations;
• Past-due accounts reports;
• The internal audit function;
• Intrusion Detection Systems; and
• Cash counts and bank reconciliation.
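The hash total mentioned above can be illustrated with a short sketch: a total of account numbers, meaningless in itself, is recomputed after processing, and a mismatch signals lost or altered records. The batch data below is invented.

```python
# Sketch of a hash total as a detective control: a control total over a
# non-financial field (account numbers) recomputed after processing.
batch = [
    {"account_no": 10234, "amount": 500.0},
    {"account_no": 20991, "amount": 120.5},
    {"account_no": 33210, "amount": 75.0},
]
hash_total_before = sum(r["account_no"] for r in batch)

# ... the batch is transmitted or processed; one record is dropped ...
received = batch[:-1]
hash_total_after = sum(r["account_no"] for r in received)

if hash_total_after != hash_total_before:
    print("Control exception: batch incomplete or altered")  # detected
```

The total of account numbers has no business meaning, which is precisely the point: it exists only so that a discrepancy reveals that records went missing or were changed.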
The main characteristics of such controls are given as follows:
• Clear understanding of lawful activities so that anything which deviates
from these is reported as unlawful, malicious, etc.;
♦ Both automatic and manual fire alarms may be placed at strategic locations
and a control panel may be installed to clearly indicate this.
♦ Besides the control panel, master switches may be installed for power and
automatic fire suppression system. Different fire suppression techniques like
Dry-pipe sprinkling systems, water based systems, halon etc., depending
upon the situation may be used.
♦ Manual fire extinguishers can be placed at strategic locations.
♦ Fireproof Walls; Floors and Ceilings surrounding the Computer Room and
Fire Resistant Office Materials such as waste-baskets, curtains, desks, and
cabinets should be used.
♦ Fire exits should be clearly marked. When a fire alarm is activated, a signal
may be sent automatically to a permanently manned station.
♦ All staff members should know how to use the system, and the procedures
to be followed during an emergency should be properly documented. These
cover fire alarms, extinguishers, sprinklers, instructions/fire brigade numbers,
smoke detectors, and carbon dioxide based fire extinguishers.
♦ Wood and plastic in computer rooms should be minimized.
♦ Use a gas based fire suppression system.
♦ To reduce the risk of fire, the location of the computer room should be
strategically planned and should not be in the basement or on the ground
floor of a multi-storey building.
♦ Regular Inspection by Fire Department should be conducted.
♦ Smoke detectors should supplement, not replace, fire suppression systems.
II. Electrical Exposures: These include the risk of damage that may be caused
due to electrical faults, such as non-availability of electricity, spikes
(temporary very high voltages), voltage fluctuations and other such risks.
Table 3.4.2(B): Controls for Electrical Exposure
♦ The risk of damage due to power spikes can be reduced using Electrical Surge
Protectors that are typically built into the Un-interruptible Power System
(UPS).
♦ Un-interruptible Power System (UPS)/Generator: In case of a power failure,
the UPS provides the back up by providing electrical power from the battery
to the computer for a certain span of time. Depending on the sophistication
of the UPS, electrical power supply could continue to flow for days or for just
a few minutes to permit an orderly computer shutdown.
♦ Voltage regulators and circuit breakers protect the hardware from temporary
increase or decrease of power.
♦ Emergency Power-Off Switch: When the need arises for an immediate power
shut down during situations like a computer room fire or an emergency
evacuation, an emergency power-off switch at the strategic locations would
serve the purpose. They should be easily accessible and yet secured from
unauthorized people.
i. Locks on Doors
• Cipher locks (Combination Door Locks) - Cipher locks are used in low
security situations or when many entrances and exits must be usable all the
time. To enter, a person presses a four-digit number, and the door will
unlock for a predetermined period, usually ten to thirty seconds.
• Bolting Door Locks – A special metal key is used to gain entry when the
lock is a bolting door lock. To avoid illegal entry, the keys should not be
duplicated.
• Electronic Door Locks – A magnetic or embedded chip-based plastic card
key or token may be inserted into a reader to gain access in these systems.
ii. Physical Identification Medium: These are discussed below:
• Personal Identification Numbers (PIN): A secret number assigned to an
individual which, in conjunction with some means of identifying the
individual, serves to verify the individual’s authenticity. The visitor will
be asked to log on by inserting a card in some device and then entering
their PIN via a PIN keypad for authentication; the entry is matched with
the PIN available in the security database.
• Plastic Cards: These cards are used for identification purposes. Customers
should safeguard their card so that it does not fall into unauthorized hands.
• Identification Badges: Special identification badges can be issued to
personnel as well as visitors. For easy identification, the color of the
badge can be varied. Sophisticated photo IDs can also be utilized as
electronic card keys.
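The PIN-based verification described above can be sketched as follows. The card IDs, PINs, and the in-memory security database are illustrative assumptions; a real system would store salted hashes of PINs, not plain values.

```python
# Illustrative PIN check against a security database (assumed in-memory).
# A real deployment would store salted hashes of PINs, not plain values.

SECURITY_DB = {"CARD-1001": "4321", "CARD-1002": "9876"}  # card id -> PIN

def authenticate(card_id: str, entered_pin: str) -> bool:
    """True only if the card exists and the entered PIN matches the record."""
    stored_pin = SECURITY_DB.get(card_id)
    return stored_pin is not None and entered_pin == stored_pin

print(authenticate("CARD-1001", "4321"))  # True: valid card, matching PIN
print(authenticate("CARD-1001", "0000"))  # False: wrong PIN
print(authenticate("CARD-9999", "4321"))  # False: unknown card
```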
iii. Logging on Facilities: These are given as under:
• Manual Logging: All visitors should be prompted to sign a visitor’s log
indicating their name, company represented, their purpose of visit, and person
to see. Logging may happen at both fronts - reception and entrance to the
computer room. A valid and acceptable identification such as a driver’s license,
business card or vendor identification tag may also be asked for before
allowing entry inside the company.
• Electronic Logging: This feature is a combination of electronic and biometric
security systems. Users' logging can be monitored, with unsuccessful
attempts highlighted.
iv. Other means of Controlling Physical Access: Other important means of
controlling physical access are given as follows:
• Video Cameras: Cameras should be placed at specific locations and
monitored by security guards. Sophisticated video cameras can be activated
by motion. The video surveillance recording must be retained for possible
future playback.
(C) Logical Access Controls: These are the controls relating to logical access to
information resources such as operating systems controls, application software
boundary controls, networking controls, access to database objects, encryption
controls, etc. Logical access controls are implemented to ensure that access to
systems, data and programs is restricted to authorized users, so as to safeguard
these information resources.
Asynchronous Attacks
They occur in many environments where data can be moved asynchronously across
telecommunication lines. Data that is waiting to be transmitted is liable to
unauthorized access, called an Asynchronous Attack. These attacks are hard to
detect because they are usually very small, pin-like insertions, and are of the
following types:
♦ Data Leakage: This involves leaking information out of the computer by means
of dumping files to paper or stealing computer reports and tape.
♦ Subversive Attacks: These can provide intruders with important information
about messages being transmitted and the intruder may attempt to violate the
integrity of some components in the sub-system.
♦ Wire- Tapping: This involves spying on information being transmitted over
communication network.
♦ Piggybacking: This is the act of following an authorized person through a secured
door, or of electronically attaching to an authorized telecommunication link to
intercept and alter transmissions. It involves intercepting communication
between the operating system and the user and modifying it or substituting new
messages.
♦ Hackers: Hackers try their best to overcome restrictions to prove their ability.
Ethical hackers most likely never try to misuse the computer intentionally;
♦ Employees (authorized or unauthorized);
♦ IS Personnel: They have the easiest access to computerized information, since
they come across it while discharging their duties. Segregation of duties
and supervision help to reduce logical access violations;
♦ Former Employees: Organizations should be cautious of former employees who
have left on unfavorable terms;
♦ End Users; Interested or Educated Outsiders; Competitors; Foreigners;
Organized Criminals; Crackers; Part-time and Temporary Personnel; Vendors
and consultants; and Accidental Ignorant – Violation done unknowingly.
Some of the Logical Access Controls are listed below:
I. User Access Management: This is an important factor that involves following:
• User Registration: Information about every user is documented. Questions
such as why the user is granted access and by whom, whether the data
owner has approved the access, and whether the user has accepted the
responsibility, are answered. The de-registration process is equally important.
• Privilege management: Access privileges are to be aligned with job
requirements and responsibilities, and are to be minimal with respect to
the user's job functions. For example, an operator at the order counter shall
have direct access to the order processing activity of the application system.
• User password management: Passwords are usually the default screening
point for access to systems. Allocation, storage, revocation, and reissue of
passwords are password management functions. Educating users about
passwords, and making them responsible for their passwords, is a critical component.
• Review of user access rights: A user’s need for accessing information
changes with time and requires a periodic review of access rights to check
anomalies in the user’s current job profile, and the privileges granted earlier.
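The registration, privilege-management, and review activities above can be sketched as follows. The role table, user records, and privilege names are illustrative assumptions.

```python
# Sketch of user access management: register a user with the minimal
# privileges of the role, then periodically review whether granted
# privileges still match the role. Role names/privileges are assumed.

ROLE_PRIVILEGES = {
    "order_clerk": {"order_entry"},
    "supervisor": {"order_entry", "order_approval"},
}

users = {}  # user_id -> {"role": ..., "privileges": set(...)}

def register_user(user_id, role):
    """User registration: document the user and grant minimal privileges."""
    users[user_id] = {"role": role, "privileges": set(ROLE_PRIVILEGES[role])}

def review_access_rights():
    """Flag users whose privileges exceed their current job profile."""
    return [uid for uid, rec in users.items()
            if rec["privileges"] - ROLE_PRIVILEGES[rec["role"]]]

register_user("u1", "order_clerk")
users["u1"]["privileges"].add("order_approval")  # privilege drift over time
print(review_access_rights())  # ['u1'] is flagged for review
```

The periodic review compares what a user *has* with what the current role *needs*, which is exactly the anomaly check the text describes.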
II. User Responsibilities: User awareness and responsibility are also important
factors and are as follows:
• Password use: Mandatory use of strong passwords to maintain confidentiality.
• Unattended user equipment: Users should ensure that none of the
equipment under their responsibility is ever left unprotected. They should also
secure their PCs with a password and should not leave it accessible to others.
the caller’s number to establish a new connection. This limits access only
from authorized terminals or telephone numbers and prevents an intruder
masquerading as a legitimate user. This also helps to avoid call
forwarding and man-in-the-middle attacks.
IV. Operating System Access Control: Operating System(O/S) is the computer
control program that allows users and their applications to share and access
common computer resources, such as processor, main memory, database and
printers. Major tasks of O/S are Job Scheduling; Managing Hardware and
Software Resources; Maintaining System Security; Enabling Multiple User
Resource Sharing; Handling Interrupts and Maintaining Usage Records.
Operating system security involves policy, procedure and controls that
determine, ‘who can access the operating system,’ ‘which resources they can
access’, and ‘what action they can take’. Hence, protecting operating system
access is extremely crucial and can be achieved using following steps.
• Automated terminal identification: This will help to ensure that a
specified session could only be initiated from a certain location or
computer terminal.
• Terminal log-in procedures: A log-in procedure is the first line of
defense against unauthorized access as it does not provide unnecessary
help or information, which could be misused by an intruder. When the
user initiates the log-on process by entering user-id and password, the
system compares the ID and password to a database of valid users and
accordingly authorizes the log-in.
• Access Token: If the log on attempt is successful, the Operating System
creates an access token that contains key information about the user
including user-id, password, user group and privileges granted to the
user. The information in the access token is used to approve all actions
attempted by the user during the session.
• Access Control List: This list contains information that defines the
access privileges for all valid users of the resource. When a user
attempts to access a resource, the system compares his or her user-id
and privileges contained in the access token with those contained in
the access control list. If there is a match, the user is granted access.
• Discretionary Access Control: The system administrator usually
determines who is granted access to specific resources and maintains
the access control list. However, in distributed systems, resources may
be controlled by the end-user. Resource owners in this setting may be
granted discretionary access control, allowing them to grant access
privileges to other users.
check if preventive controls discussed so far are working. If not, this control
will detect and report any unauthorized activities.
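The log-on, access-token, and access-control-list steps described above can be sketched as follows. The user database, group assignments, ACL entries, and the plain-text password store are illustrative assumptions only.

```python
# Sketch of O/S access control: a successful log-on yields an access token,
# and each resource request compares the token against the resource's ACL.
# All names and the plain-text password store are illustrative assumptions.

VALID_USERS = {"alice": "s3cret"}           # user-id -> password (toy store)
USER_GROUPS = {"alice": "accounts"}

ACL = {"payroll_file": {"accounts", "hr"}}  # resource -> permitted groups

def log_on(user_id, password):
    """Return an access token on success, else None."""
    if VALID_USERS.get(user_id) != password:
        return None
    return {"user_id": user_id, "group": USER_GROUPS[user_id]}

def can_access(token, resource):
    """Compare token privileges with the resource's access control list."""
    return token is not None and token["group"] in ACL.get(resource, set())

token = log_on("alice", "s3cret")
print(can_access(token, "payroll_file"))  # True: group appears on the ACL
print(can_access(token, "ledger_file"))   # False: no ACL entry for resource
print(can_access(log_on("alice", "bad"), "payroll_file"))  # False: log-on failed
```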
• Event logging: In Computer systems, it is easy and viable to maintain
extensive logs for all types of events. It is necessary to review if logging is
enabled and the logs are archived properly. An intruder may penetrate the
system by trying different passwords and user ID combinations. All incoming
and outgoing requests along with attempted access should be recorded in
a transaction log. The log should record the user ID, the time of the access,
and the terminal location from where the request originated.
• Monitor System use: Based on the risk assessment, a constant monitoring
of some critical systems is essential. Define the details of types of accesses,
operations, events and alerts that will be monitored. The extent of detail
and the frequency of the review would be based on criticality of operation
and risk factors. The log files are to be reviewed periodically and attention
should be given to any gaps in these logs.
• Clock Synchronization: Event logs maintained across an enterprise
network play a significant role in correlating events and generating
reports on them. Hence, synchronizing clock time across the
network to a standard time is mandatory.
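The transaction-logging requirement above (record the user ID, time of access, and originating terminal for every attempt) can be sketched as follows; the field names are illustrative.

```python
# Sketch of a transaction log capturing the fields named above: user ID,
# time of access, and the originating terminal, for every attempt.

import datetime

transaction_log = []

def record_access(user_id, terminal, success):
    """Append one log entry per incoming request or attempted access."""
    transaction_log.append({
        "user_id": user_id,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "terminal": terminal,
        "success": success,
    })

record_access("u42", "TERM-07", success=False)  # failed attempt is logged too
record_access("u42", "TERM-07", success=True)

failed = [e for e in transaction_log if not e["success"]]
print(len(transaction_log), len(failed))  # 2 1
```

Reviewing the `failed` entries is the kind of gap analysis the periodic log review above calls for.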
VI. Controls when mobile: In today’s organizations, computing facility is not
restricted to a certain data center alone. Ease of access on the move
provides efficiency and results in additional responsibility on the
management to maintain information security. Theft of data carried on
the disk drives of portable computers is a high-risk factor. Both physical
and logical access to these systems is critical. Information is to be
encrypted and access identifications like fingerprint, eye-iris, and smart
cards are necessary security features.
3.4.3 Classification based on “Audit Functions”
Auditors might choose to factor systems in several different ways. Auditors have
found two ways to be especially useful when conducting information systems
audits. These are discussed below:
A. Managerial Controls: In this part, we shall examine the managerial
controls that must be performed to ensure the development,
implementation, operation and maintenance of information systems in a planned
and controlled manner in an organization. The controls at this level help ensure
that progress is monitored against plan and that software released for production
use is authentic, accurate, and complete. Refer Table 3.4.5.
Table 3.4.5: Phases of Program Development Life Cycle
♦ Planning: Techniques like Work Breakdown Structures (WBS), Gantt charts and
PERT (Program Evaluation and Review Technique) charts can be used to
monitor progress against plan.
♦ Control: The Control phase has two major purposes:
• Task progress in various software life-cycle phases should be
monitored against plan and corrective action should be taken in
case of any deviations; and
• Control over software development, acquisition, and
implementation tasks should be exercised to ensure software
released for production use is authentic, accurate, and complete.
♦ Design: A systematic approach to program design, such as any of the structured
design approaches or object-oriented design, is adopted.
♦ Coding: Programmers must choose a module implementation and integration
strategy (like the Top-down, Bottom-up and Threads approaches), a coding
strategy (that follows the precepts of structured programming), and a
documentation strategy (to ensure program code is easily readable and understandable).
♦ Testing: Three types of testing can be undertaken:
• Unit Testing – which focuses on individual program modules;
• Integration Testing – which focuses on groups of program
modules; and
• Whole-of-Program Testing – which focuses on the whole program.
These tests are to ensure that a developed or acquired program
achieves its specified requirements.
♦ Operation and Maintenance: Management establishes formal mechanisms to
monitor the status of operational programs so maintenance needs can be
identified on a timely basis. Three types of maintenance that can be used are as follows:
• Repair Maintenance – in which program errors are corrected;
• Adaptive Maintenance – in which the program is modified to meet
changing user requirements; and
• Perfective Maintenance – in which the program is tuned to
decrease resource consumption.
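The three levels of testing in the table above can be illustrated with plain assertions; the invoice functions are hypothetical stand-ins for program modules.

```python
# Unit tests exercise one module, integration tests exercise a group of
# modules together, and whole-of-program tests exercise the end-to-end
# specified requirement. The invoice functions here are illustrative.

def line_total(qty, unit_price):          # one program module
    return qty * unit_price

def invoice_total(lines):                 # module that integrates line_total
    return sum(line_total(q, p) for q, p in lines)

# Unit test: the individual module in isolation.
assert line_total(3, 10.0) == 30.0

# Integration test: the group of modules working together.
assert invoice_total([(3, 10.0), (2, 5.0)]) == 40.0

# Whole-of-program test: the specified requirement (an empty invoice is 0).
assert invoice_total([]) == 0
print("all tests passed")
```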
data must be available to users when it is needed, in the location where it is needed,
and in the form in which it is needed. Further, it must be possible to modify data
easily, and the integrity of the data must be preserved. If a data repository system
is used properly, it can enhance data and application system reliability. It must be
controlled carefully, however, because the consequences are serious if the data
definition is compromised or destroyed. Careful control should be exercised over
the roles by appointing senior, trustworthy persons, separating duties to the extent
possible and maintaining and monitoring logs of the data administrator’s and
database administrator’s activities.
V. Quality Assurance Management Controls
Quality Assurance management is concerned with ensuring that the –
♦ Information systems produced by the information systems function achieve
certain quality goals; and
♦ Development, implementation, operation and maintenance of Information
systems comply with a set of quality standards.
Quality Assurance (QA) personnel should work to improve the quality of
information systems produced, implemented, operated, and maintained in an
organization. They perform a monitoring role for management to ensure that –
♦ Quality goals are established and understood clearly by all stakeholders; and
♦ Compliance occurs with the standards that are in place to attain quality
information systems.
VI. Security Management Controls
Information security administrators are responsible for ensuring that information
systems assets categorized under Personnel, Hardware, Facilities, Documentation,
Supplies, Data, Application Software and System Software are secure. Assets are
secure when the expected losses that will occur over some time are at an
acceptable level. The controls classified based on the “Nature of Information
System Resources” – Environmental Controls, Physical Controls and Logical Access
Controls – are all security measures against possible threats. However, despite the
controls in place, there is a possibility that a control might fail. Disasters
are events/incidents so critical that they have the capability to hit the business
continuity of an entity in an irreversible manner.
When disaster strikes, it still must be possible to recover operations and mitigate
losses using the last resort controls - A Disaster Recovery Plan (DRP) and Insurance.
A comprehensive DRP comprises four parts – an Emergency Plan, a Backup Plan,
a Recovery Plan and a Test Plan. The plan lays down the policies, guidelines, and
procedures for all Information System personnel. Adequate insurance must be able
to replace Information Systems assets and to cover the extra costs associated with
restoring normal operations.
BCP (Business Continuity Planning) Controls: These controls are related to
having an operational and tested IT continuity plan, which is in line with the overall
business continuity plan, and its related business requirements to make sure IT
services are available as required and to ensure a minimum impact on business in
the event of a major disruption.
VIII. Operations Management Controls
Operations management is responsible for the daily running of hardware and software
facilities. Operations management typically performs controls over the following functions:
(a) Computer Operations: The controls over computer operations govern the
activities that directly support the day-to-day execution of either test or
production systems on the hardware/software platform available.
(b) Network Operations: This includes the proper functioning of network
operations and monitoring the performance of network communication
channels, network devices, and network programs and files. Data may be lost
or corrupted through component failure.
(c) Data Preparation and Entry: Irrespective of whether the data is obtained
indirectly from source documents or directly from, say, customers, keyboard
environments and facilities should be designed to promote speed and
accuracy and to maintain the wellbeing of keyboard operators.
(d) Production Control: This includes the major functions like- receipt and
dispatch of input and output; job scheduling; management of service-level
agreements with users; transfer pricing/charge-out control; and acquisition
of computer consumables.
(e) File Library: This includes the management of an organization’s machine-
readable storage media like magnetic tapes, cartridges, and optical disks.
(f) Documentation and Program Library: This involves documentation
librarians ensuring that documentation is stored securely; that only authorized
personnel gain access to documentation; that documentation is kept up-to-
date; and that adequate backup exists for documentation. The documentation
may include reporting of responsibility and authority of each function;
definition of responsibilities and objectives of each function; reporting
characters within a set of data), substitution (replace text with a key-text) and
product cipher (combination of transposition and substitution).
♦ Passwords: User identification by an authentication mechanism with
personal characteristics like name, birth date, employee code, function,
designation or a combination of two or more of these can be used as a
password boundary access control.
♦ Personal Identification Numbers (PIN): A PIN is similar to a password; it is
either a random number assigned to the user by an institution and stored in
its database independently of the user's identification details, or a
customer-selected number. Hence, a PIN may be exposed to vulnerabilities
during issuance or delivery, validation, transmission and storage.
♦ Identification Cards: Identification cards are used to store information
required in an authentication process. These cards are to be controlled
through the application for a card, preparation of the card, issue, use and
card return or card termination phases.
♦ Biometric Devices: Biometric identification, e.g. thumb and/or finger
impression, eye retina, etc., is used as a boundary control technique.
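The transposition, substitution, and product-cipher techniques mentioned under cryptography above can be sketched as follows. The fixed shift key and column width are illustrative assumptions, not a secure design, and the routines handle uppercase letters only.

```python
# Substitution replaces each letter using a key alphabet, transposition
# permutes character positions, and a product cipher applies both.
# Toy parameters; real ciphers use far stronger keys and algorithms.

import string

KEY = string.ascii_uppercase[3:] + string.ascii_uppercase[:3]  # shifted key-text

def substitute(text):
    """Replace each uppercase letter with its counterpart in KEY."""
    table = str.maketrans(string.ascii_uppercase, KEY)
    return text.translate(table)

def transpose(text, width=4):
    """Write the text in rows of `width`, then read it out by columns."""
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    return "".join(row[c] for c in range(width) for row in rows if c < len(row))

def product_cipher(text):
    """Product cipher: substitution followed by transposition."""
    return transpose(substitute(text))

print(substitute("ATTACK"))        # -> DWWDFN
print(product_cipher("ATTACKNOW")) # -> DFZWNWQDR
```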
II. Input Controls: Data that is presented to an application as input data must
be validated for authorization, reasonableness, completeness, and integrity. These
controls are responsible for ensuring the accuracy and completeness of data and
instruction input into an application system. Input controls are important and
critical since substantial time is spent on the input of data, which involves human
intervention and is therefore error- and fraud-prone. These are of the following
types, as shown in Fig. 3.4.3:
(i) Processor Controls: These include the following:
♦ Error Detection and Correction: Occasionally, processors might malfunction
because of design errors, manufacturing defects, damage, fatigue,
electromagnetic interference, and ionizing radiation. The failure might be
transient (disappears after a short period), intermittent (reoccurs
periodically), or permanent (does not correct with time). For transient and
intermittent errors, retries and re-execution might be successful, whereas
for permanent errors, the processor must halt and report the error.
♦ Multiple Execution States: It is important to determine the number and
nature of the execution states enforced by the processor. This helps auditors
to determine which user processes will be able to carry out unauthorized
activities, such as gaining access to sensitive data maintained in memory
regions assigned to the operating system or other user processes.
♦ Timing Controls: An operating system might get stuck in an infinite loop. In
the absence of any control, the program will retain use of the processor and
prevent other programs from undertaking their work.
♦ Component Replication: In some cases, processor failure can result in
significant losses. Redundant processors allow errors to be detected and
corrected. If processor failure is permanent in multicomputer or
multiprocessor architectures, the system might reconfigure itself to isolate
the failed processor.
(ii) Real Memory Controls: This comprises the fixed amount of primary
storage in which programs or data must reside for them to be executed
or referenced by the central processor. Real memory controls seek to
detect and correct errors that occur in memory cells and to protect areas
of memory assigned to a program from illegal access by another program.
(iii) Virtual Memory Controls: Virtual Memory exists when the addressable
storage space is larger than the available real memory space. To achieve
this outcome, a control mechanism must be in place that maps virtual
memory addresses into real memory addresses.
(iv) Data Processing Controls: These perform validation checks to identify
errors during processing of data. They are required to ensure both the
completeness and the accuracy of data being processed. Normally, the
processing controls are enforced through the database management
system that stores the data. However, adequate controls should also be
enforced through the front-end application system to have
consistency in the control process.
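The completeness and accuracy checks described above can be sketched as front-end validation; the field names and the amount range are illustrative assumptions.

```python
# Sketch of processing-stage validation checks for completeness
# (mandatory fields present) and reasonableness (values in range).
# Field names and limits are assumptions for illustration.

def validate_record(record):
    """Return a list of validation errors (empty list means the record passes)."""
    errors = []
    # Completeness check: every mandatory field must be present and non-empty.
    for field in ("account_no", "amount", "date"):
        if field not in record or record[field] in (None, ""):
            errors.append(f"missing {field}")
    # Reasonableness check: amount must fall in an expected range.
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and not (0 < amount <= 1_000_000):
        errors.append("amount out of range")
    return errors

print(validate_record({"account_no": "A1", "amount": 250.0, "date": "2024-01-05"}))  # []
print(validate_record({"account_no": "A1", "amount": -5}))  # two findings
```

Running the same checks in both the front end and the database layer gives the consistency the text asks for.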
V. Database Controls
The controls that protect the integrity of a database when application software
acts as an interface between the user and the database are called Update
Controls and Report Controls.
Major Update Controls are as follows:
♦ Sequence Check between Transaction and Master Files:
Synchronization and the correct sequence of processing between the
master file and transaction file is critical to maintain the integrity of
updating, insertion or deletion of records in the master file with respect
to the transaction records. If errors at this stage are overlooked, they lead
to corruption of the critical data.
♦ Ensure All Records on Files are Processed: While processing, the
transaction file records are mapped to the respective master file records,
and reaching the end-of-file of the transaction file with respect to the
end-of-file of the master file is to be ensured.
♦ Process multiple transactions for a single record in the correct
order: Multiple transactions can occur based on a single master record
(e.g. dispatch of a product to different distribution centers). Here, the
order in which transactions are processed against the product master
record must be done based on sorted transaction codes.
♦ Maintain a suspense account: When mapping between a master
record and a transaction record results in a mismatch due to the
absence of the corresponding entry in the master file, such
transactions are maintained in a suspense account.
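The update controls above (sorted processing of multiple transactions per master record, and a suspense account for mismatches) can be sketched as follows; the record shapes and product codes are illustrative assumptions.

```python
# Transactions are sorted before posting, matched records are applied in
# order, and unmatched transactions are parked in a suspense account
# instead of being lost. Record shapes are illustrative assumptions.

master = {"P100": 50, "P200": 20}          # product -> stock on hand
suspense = []                              # unmatched transactions

def post_transactions(transactions):
    # Process multiple transactions per master record in sorted order.
    for txn in sorted(transactions, key=lambda t: (t["product"], t["code"])):
        if txn["product"] in master:       # sequence/match check
            master[txn["product"]] -= txn["qty"]
        else:
            suspense.append(txn)           # maintain a suspense account

post_transactions([
    {"product": "P100", "code": 2, "qty": 5},
    {"product": "P100", "code": 1, "qty": 10},
    {"product": "P999", "code": 1, "qty": 3},   # no master record
])
print(master["P100"], len(suspense))  # 35 1
```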
[Figure: Control and Audit of Computer-based Information Systems – factors such
as (1) organizational costs of data loss and (7) controlled evolution of computer
use drive Information Systems Auditing, whose objectives are: (a) improved
safeguarding of assets; (b) improved data integrity; (c) improved system
effectiveness; and (d) improved system efficiency.]
collected before computers but now, there is a fear that privacy has eroded
beyond acceptable levels.
7. Controlled Evolution of Computer Use: The use of technology and the
reliability of complex computer systems cannot be guaranteed, and the
consequences of using unreliable systems can be destructive.
3.5.2 Tools for IS Audit
Today, organizations produce information on a real-time, online basis. Real-time
recording needs real-time auditing to provide continuous assurance about the
quality of the data; that is, continuous auditing. Continuous auditing enables
auditors to significantly reduce, and perhaps eliminate, the time between the
occurrence of the client's events and the auditor's assurance services thereon.
Errors in a computerized system are generated at high speed and the cost to
correct and rerun programs is high. If these errors can be detected and corrected
at, or closest to, the point of their occurrence, their impact would be the least.
Continuous auditing techniques use two bases for collecting audit
evidence. One is the use of embedded modules in the system to collect, process,
and print audit evidence and the other is special audit records used to store the
audit evidence collected.
Types of Audit Tools: Different types of continuous audit techniques may be used.
Some modules for obtaining data, audit trails and evidences may be built into the
programs. Audit software is available, which could be used for selecting and testing
data. Many audit tools are also available; some of them are described below:
(i) Snapshots: Tracing a transaction in a computerized system can be performed
with the help of snapshots or extended records. The snapshot software is built
into the system at those points where material processing occurs, and takes
images of the flow of any transaction as it moves through the application.
These images can be utilized to assess the authenticity, accuracy, and
completeness of the processing carried out on the transaction. The main
areas to dwell upon while implementing such a system are locating the
snapshot points based on the materiality of transactions, deciding when the
snapshot will be captured, and designing and implementing the reporting
system to present data in a meaningful way.
(ii) Integrated Test Facility (ITF): The ITF technique involves the creation of a
dummy entity in the application system files and the processing of audit test
data against the entity as a means of verifying processing authenticity,
accuracy, and completeness. This test data would be included with the normal
production data used as input to the application system. In such cases the
auditor must decide what would be the method to be used to enter test data
and the methodology for removal of the effects of the ITF transactions.
(iii) System Control Audit Review File (SCARF): The SCARF technique involves
embedding audit software modules within a host application system to
provide continuous monitoring of the system’s transactions. The information
collected is written onto a special audit file - the SCARF master file. Auditors
then examine the information contained on this file to see if some aspect of
the application system needs follow-up. In many ways, the SCARF technique
is like the snapshot technique along with other data collection capabilities.
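A SCARF-style embedded audit module can be sketched as follows. The host application, the audit criterion (large payments), the threshold, and the in-memory stand-in for the SCARF master file are all illustrative assumptions.

```python
# Illustrative SCARF-style embedded audit module: application processing
# calls the embedded check, and transactions meeting the auditor's
# criteria are written to a separate audit file for later review.

scarf_file = []   # stands in for the SCARF master file

def embedded_audit_check(txn):
    # Auditor-defined criterion (assumed): large payments are of interest.
    if txn["amount"] > 10_000:
        scarf_file.append(txn)

def process_payment(txn):
    embedded_audit_check(txn)      # audit module embedded in the host system
    return f"paid {txn['amount']}"

process_payment({"id": 1, "amount": 500})
process_payment({"id": 2, "amount": 25_000})
print(len(scarf_file))   # 1: only the large payment was captured
```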
(iv) Continuous and Intermittent Simulation (CIS): This is a variation of the
SCARF continuous audit technique. This technique can be used to trap
exceptions whenever the application system uses a database management
system. During application system processing, CIS executes in the following
way:
• The database management system reads an application system
transaction. It is passed to CIS. CIS then determines whether it wants to
examine the transaction further. If yes, the next steps are performed or
otherwise it waits to receive further data from the database
management system.
• CIS replicates or simulates the application system processing.
• Every update to the database that arises from processing the selected
transaction will be checked by CIS to determine whether discrepancies
exist between the results it produces and those the application system
produces.
• Exceptions identified by CIS are written to an exception log file.
• The advantage of CIS is that it does not require modifications to the
application system and yet provides an online auditing capability.
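The CIS steps above can be sketched as follows. The application's update rule, the independently simulated rule, and the deliberately seeded discrepancy are illustrative assumptions.

```python
# Sketch of Continuous and Intermittent Simulation: for each selected
# transaction, the audit module replicates the intended processing and
# logs a discrepancy if its result differs from the application's.

exception_log = []

def application_update(txn):
    # The production logic (deliberately wrong for amounts over 100).
    discount = 10 if txn["amount"] > 100 else 0
    return txn["amount"] - discount

def cis_simulation(txn):
    # CIS independently replicates the intended processing rule.
    expected = txn["amount"] - (5 if txn["amount"] > 100 else 0)
    actual = application_update(txn)
    if actual != expected:
        exception_log.append({"id": txn["id"], "expected": expected,
                              "actual": actual})

for txn in ({"id": 1, "amount": 80}, {"id": 2, "amount": 200}):
    cis_simulation(txn)
print(len(exception_log))  # 1: only the discrepant update is logged
```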
(v) Audit Hooks: These are audit routines that flag suspicious transactions. For
example, internal auditors at an insurance company determined that their
policyholder system was vulnerable to fraud every time a policyholder
changed his or her name or address and then subsequently withdrew funds
from the policy. They devised a system of audit hooks to tag records with a
name or address change. The internal audit department investigates these
tagged records for detecting fraud. When audit hooks are employed, auditors
can be informed of questionable transactions as soon as they occur. This
(iii) Surveillance: The IS auditor needs to understand how video and human
surveillance are used to control and monitor access. He or she needs to
understand how (and if) video is recorded and reviewed, and if it is
effective in preventing or detecting incidents.
(iv) Guards and dogs: The IS auditor needs to understand the use and
effectiveness of security guards and guard dogs. Processes, policies,
procedures, and records should be examined to understand required
activities and how they are carried out.
(v) Key-Card Systems: The IS auditor needs to understand how key-card
systems are used to control access to the facility. Some points to
consider include: whether the facility is divided into security zones
and which persons are permitted to access which zones; whether
key-card systems record personnel movement; and what processes
and procedures are used to issue key-cards to employees.
3.6.3 Auditing Logical Access Controls
(a) Role of IS Auditor in Auditing Logical Access Controls
Auditing Logical Access Controls requires attention to several key areas that include
the following:
(i) Network Access Paths: The IS auditor should conduct an independent
review of the IT infrastructure to map out the organization’s logical access
paths. This will require considerable effort and may require the use of
investigative and technical tools, as well as specialized experts on IT network
architecture.
(ii) Documentation: The IS auditor should request network architecture and
access documentation to compare what was discovered independently
against existing documentation. The auditor will need to determine why any
discrepancies exist. Similar investigations should take place for each
application to determine all of the documented and undocumented access
paths to functions and data.
(b) Audit of Logical Access Controls
(I) User Access Controls: User access controls are often the only barrier
between unauthorized parties and sensitive or valuable information. This
makes the audit of user access controls particularly significant. Auditing user
access controls requires keen attention to several key factors and activities in
four areas:
(i) Auditing User Access Controls: These are to determine if the controls
themselves work as designed. Auditing user access controls requires
attention to several factors, including:
♦ Authentication: The auditor should examine network and system
resources to determine if they require authentication, or whether any
resources can be accessed without first authenticating.
♦ Access violations: The auditor should determine if systems,
networks, and authentication mechanisms can log access violations.
These usually exist in the form of system logs showing invalid login
attempts, which may indicate intruders who are trying to log in to
employee user accounts.
♦ User account lockout: The auditor should determine if systems and
networks can automatically lock user accounts that are the target of
attacks. A typical system configuration is one that will lock a user
account after five unsuccessful login attempts within a short period.
♦ Intrusion detection and prevention: The auditor should determine
if there are any IDSs or IPSs that would detect authentication-bypass
attempts. The auditor should examine these systems to see whether
they have up-to-date configurations and signatures, whether they
generate alerts, and whether the recipients of alerts act upon them.
♦ Dormant accounts: The IS auditor should determine if any automated
or manual process exists to identify and close dormant accounts.
Dormant accounts are user (or system) accounts that exist but are
unused. These accounts represent a risk to the environment, as they
represent an additional path between intruders and valuable or
sensitive data.
♦ Shared accounts: The IS auditor should determine if there are any
shared user accounts; these are user accounts that are routinely (or
even infrequently) used by more than one person. The principal risk
with shared accounts is the inability to determine accountability for
actions performed with the account.
♦ System accounts: The IS auditor should identify all system-level
accounts on networks, systems, and applications. The purpose of each
system account should be identified, and it should be determined if
each system account is still required (some may be artefacts of the initial
implementation or of an upgrade or migration). The IS auditor should
determine who has the password for each system account, whether
accesses by system accounts are logged, and who monitors those logs.
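The account-lockout behaviour described in the bullets above (locking an account after five failed login attempts within a short period) can be sketched as follows. This is an illustrative Python sketch, not a prescribed implementation; the threshold and window values are assumptions.

```python
import time
from collections import defaultdict, deque

# Illustrative policy values -- real systems make these configurable.
MAX_ATTEMPTS = 5        # failed logins allowed...
WINDOW_SECONDS = 300    # ...within this rolling window

failed_logins = defaultdict(deque)   # username -> timestamps of failures
locked_accounts = set()

def record_failed_login(username, now=None):
    """Log a failed attempt; lock the account if the threshold is reached."""
    now = now if now is not None else time.time()
    attempts = failed_logins[username]
    attempts.append(now)
    # Discard attempts that have aged out of the rolling window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    if len(attempts) >= MAX_ATTEMPTS:
        locked_accounts.add(username)
    return username in locked_accounts

# Four rapid failures do not lock the account; the fifth does.
for _ in range(4):
    record_failed_login("alice", now=100.0)
assert "alice" not in locked_accounts
record_failed_login("alice", now=101.0)
assert "alice" in locked_accounts
```

An auditor examining a real system would compare its configured threshold and window against the organization's security policy rather than against the illustrative values used here.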
(ii) Auditing Password Management: The IS auditor needs to examine
password configuration settings on information systems to determine
how passwords are controlled. Some of the areas requiring examination
are: how many characters a password must have and whether there is a
maximum length; how frequently passwords must be changed; whether
former passwords may be used again; and whether the password is
displayed when logging in or when creating a new password.
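The password-configuration checks listed above can be sketched as a comparison of a system's settings against an audit baseline. The shape of the `settings` dictionary and the baseline values are assumptions for illustration, not requirements from any standard.

```python
# Hypothetical password settings pulled from a system under audit.
settings = {
    "min_length": 8,
    "max_age_days": 90,     # how frequently passwords must change
    "history_depth": 5,     # former passwords that may not be reused
    "mask_on_entry": True,  # password not displayed when typed
}

# Illustrative audit baseline the auditor compares against.
baseline = {
    "min_length": lambda v: v is not None and v >= 8,
    "max_age_days": lambda v: v is not None and v <= 90,
    "history_depth": lambda v: v is not None and v >= 5,
    "mask_on_entry": lambda v: v is True,
}

def audit_password_settings(settings, baseline):
    """Return the names of settings that fail the baseline checks."""
    return [key for key, check in baseline.items()
            if not check(settings.get(key))]

exceptions = audit_password_settings(settings, baseline)
print(exceptions)   # an empty list means every check passed
```

In practice the auditor gathers such settings from each platform's own policy store (for example, a directory service or an operating-system security policy) before running a comparison of this kind.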
(iii) Auditing User Access Provisioning: Auditing the user access
provisioning process requires attention to several key activities,
including:
♦ Access request processes: The IS auditor should identify all user
access request processes and determine if these processes are
used consistently throughout the organization.
♦ Access approvals: The IS auditor needs to determine how
requests are approved and by what authority they are approved.
The auditor should determine if system or data owners approve
access requests, or if any accesses are ever denied.
♦ New employee provisioning: The IS auditor should examine the
new employee provisioning process to see how a new employee’s
user accounts are initially set up. The auditor should determine if
new employees’ managers are aware of the accesses their employees
are given and whether those accesses are excessive.
♦ Segregation of Duties (SOD): The IS auditor should determine if
the organization makes any effort to identify segregation of
duties. This may include whether there are any SOD matrices in
existence and if they are actively used to make user access request
decisions.
♦ Access reviews: The IS auditor should determine if there are any
periodic access reviews and what aspects of user accounts are
reviewed; this may include termination reviews, internal transfer
reviews, SOD reviews, and dormant account reviews.
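One way an SOD matrix can be used in access-request decisions, as described in the SOD bullet above, is to check each requested role against the roles a user already holds. The role names and conflict pairs below are invented for illustration.

```python
# Hypothetical SOD matrix: pairs of roles that must not be held together.
SOD_CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"submit_journal", "approve_journal"}),
}

def conflicts_for_request(existing_roles, requested_role):
    """Return the SOD conflicts that granting requested_role would create."""
    return [tuple(sorted(pair)) for pair in SOD_CONFLICTS
            if requested_role in pair
            and pair - {requested_role} <= set(existing_roles)]

# A user who can create vendors must not also approve payments.
print(conflicts_for_request({"create_vendor"}, "approve_payment"))
# A request that creates no conflict with existing roles passes cleanly.
print(conflicts_for_request({"submit_journal"}, "create_vendor"))
```

An auditor reviewing this control would test both directions: that conflicting requests are flagged and denied, and that the matrix itself is kept current as roles change.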
(iv) Auditing Employee Terminations: Auditing employee terminations
requires attention to several key factors, including:
♦ Queue lengths at each node;
♦ Number of errors occurring on each link or at each node;
♦ Number of retransmissions that have occurred across each link;
♦ Log of errors to identify locations and patterns of errors;
♦ Log of system restarts; and
♦ Message transit times between nodes and at nodes.
IV. Processing Controls
The audit trail maintains the chronology of events from the time data is received
from the input or communication subsystem to the time data is dispatched to the
database, communication, or output subsystems.
Accounting Audit Trail
♦ To trace and replicate the processing performed on a data item.
♦ To follow triggered transactions from end to end by monitoring input data
entry, intermediate results and output data values.
♦ To check for existence of any data flow diagrams or flowcharts that describe
data flow in the transaction, and whether such diagrams or flowcharts
correctly identify the flow of data.
♦ To check whether audit log entries recorded the changes made in the data
items at any time including who made them.
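The change-logging described in the last bullet can be sketched as an append-only trail that records who changed which data item, when, and from what value to what value. The record fields below are assumptions for illustration.

```python
import datetime

audit_trail = []   # append-only chronology of data-item changes

def record_change(user, item, old_value, new_value):
    """Append one audit-trail entry; entries are never updated in place."""
    audit_trail.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "item": item,
        "old": old_value,
        "new": new_value,
    })

def history_of(item):
    """Trace the processing performed on one data item, in order."""
    return [(e["user"], e["old"], e["new"])
            for e in audit_trail if e["item"] == item]

record_change("clerk01", "invoice_total", 1000, 1200)
record_change("mgr02", "invoice_total", 1200, 1150)
print(history_of("invoice_total"))
```

Because entries are only ever appended, the trail preserves the chronology needed to trace and replicate processing on any data item, which is exactly what the accounting audit trail is meant to support.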
Operations Audit Trail
♦ A comprehensive log on hardware consumption – CPU time used, secondary
storage space used, and communication facilities used.
♦ A comprehensive log on software consumption – compilers used, subroutine
libraries used, file management facilities used, and communication software used.
V. Database Controls
The audit trail maintains the chronology of events that occur either to the database
definition or the database itself.
Accounting Audit Trail
♦ To confirm whether an application properly accepts, processes, and stores
information.
♦ To attach a unique time stamp to all transactions.
Fig.: IT functions in a typical organization, spanning Application Development, Systems Engineering, Architecture and Design, Network and Telecom Management, Systems Management, Operations, Data Centre, Data Input, Service Desk, Technical Support, Project Management, Vulnerability Management, Incident Management, Database Management, Access Management, Compliance, Quality Assurance, and Disaster Recovery Planning.
♦ User: Users are individuals (at any level of the organization) who use assets
in the performance of their job duties. Each user is responsible for how he or
she uses the asset and for not permitting others to access the asset in his or
her name. Users are responsible for performing their duties lawfully and for
conforming to organization policies.
These generic roles and responsibilities should apply across the organization chart
to include every person in the organization.
3.7.3 Job Titles and Job Descriptions
A Job Title is a label that is assigned to a job description. It denotes a position in
the organization that has a given set of responsibilities, and which requires a certain
level and focus of education and prior experience.
An organization that has a program of career advancement may have a set of career
paths or career ladders that are models showing how employees may advance. For
each job title, a career path will show the possible avenues of advancement to other
job titles, and the experience required to reach those other job titles.
Job titles in IT have matured and are quite consistent across organizations. This
consistency helps organizations in several ways:
♦ Recruiting: When the organization needs to find someone to fill an open
position, the use of standard job titles will help prospective candidates more
easily find positions that match their criteria.
♦ Compensation base lining: Because of the chronic shortage of talented IT
workers, organizations are forced to be more competitive when trying to
attract new workers. To remain competitive, many organizations periodically
undertake a regional compensation analysis to better understand the levels
of compensation paid to IT workers in other organizations. The use of
standard job titles makes the task of comparing compensation far easier.
♦ Career advancement: When an organization uses job titles that are
consistent in the industry, IT workers have a better understanding of the
functions of positions within their own organizations and can more easily plan
how they can advance.
The remainder of this section includes many IT job titles with a short
description (not a full job description by any measure) of the function of
that position.
Virtually all organizations also include titles that denote the level of experience,
leadership, or span of control in an organization. These titles may include executive
vice president, senior vice president, vice president, senior director, director,
SUMMARY
In the contemporary world, business is a driving force behind change, and
integration is the dynamic that shapes how trade is conducted. Organizations
of the 1990s concentrated on the re-engineering and redesign of their business
processes to strengthen their competitive advantage. To endure in the 21st
century, organizations have turned to integrating enterprise-wide technology
solutions, called Business Information Systems (BIS), to improve their
business processes. Today, every organization integrates some or all of its
business functions to achieve higher effectiveness and productivity. The
thrust of the argument is that Information Technology (IT), when skilfully
employed, can in various ways differentiate an organization from its
competitors, add value to its products or services in the eyes of its
customers, and secure a competitive advantage.
Although information systems raise high hopes in companies for growth, since
they reduce processing time and help cut costs, most research studies show a
remarkable gap between their capabilities and the business demands that senior
management places on them. We learnt that any enterprise, to be effective and
efficient, must use Business Process Automation (BPA), which is largely aided
by computers and IT. Information systems, which form the backbone of any
enterprise, comprise various layers: application software, Database Management
Systems (DBMS), system software such as operating systems, hardware, network
links, and people (users).
This chapter has provided an overview of the importance of information systems
in an IT environment and of how information is generated. There has been a
detailed discussion of Information Systems Audit, the need for it, and the
method of performing it. The chapter also outlines the losses an organization
may face in case it does not get its systems audited.
with the hardware”. Explain what virtual memory is and its importance in
memory management.
(b) Resilience
(c) Contention
(d) Bandwidth
9. Under Application Controls, __________ maintains the chronology of events that
occur when a user attempts to gain access to and employ systems resources.
(a) Boundary Controls
(b) Input Controls
(c) Communication Controls
(d) Processing Controls
10. Under Application Controls, __________ maintains the chronology of events that
occur either to the database definition or the database itself.
(a) Output Controls
(b) Input Controls
(c) Database Controls
(d) Processing Controls
11. Which of these is not a mobile operating system?
(a) Android
(b) iOS
(c) Tywin
(d) Windows Phone OS
12. Which of these is not an example of Relational Database?
(a) Access
(b) MySQL
(c) Java
(d) Oracle
13. Which of the following is not a step involved in the Data Mining Process?
(a) Data Integration
(b) Data Selection
(c) Data Transformation
(d) Data distribution
14. _________ technique involves embedding audit software modules within a host
application system to provide continuous monitoring of the system’s
transactions.
(a) Audit hooks
(b) SCARF
(c) Integrated Test Facility (ITF)
(d) Continuous and Intermittent Simulation (CIS)
15. SCARF stands for ____________.
(a) System Control Audit Review File
(b) System Control Audit Report File
(c) Simulation Control Audit Review File
(d) System Control Audit Review Format
Answers
1 (d) 2 (b) 3 (a) 4 (c) 5 (d) 6 (b)
7 (d) 8 (b) 9 (a) 10 (c) 11 (c) 12 (c)
13 (d) 14 (b) 15 (a)