
Journal of Accounting and Public Policy 24 (2005) 61–75

www.elsevier.com/locate/jaccpubpol

Minimizing cost of continuous audit: Counting and time dependent strategies

Jagdish Pathak a,*, Ben Chaouch a, Ram S. Sriram b

a Odette School of Business, University of Windsor, 401 Sunset Avenue, Windsor, Ont., Canada N9B 3P4
b School of Accountancy, Robinson College of Business, Georgia State University, P.O. Box 4050, Atlanta, GA 30302-4050, USA

Abstract

Why do we need to continuously audit databases? The answer to this question depends on several factors, including the users and applications that have accessed the data and the timing and type of data modifications (for example, changes to permissions or to the schema). Many studies address the technical feasibility of continuous auditing of databases, but they do not consider the economic feasibility of such auditing. This paper helps fill this void in the literature. We examine, with major and minor modifications, certain strategies that have been suggested in the database auditing literature (see, e.g. Orman, L.V., 2001. Database audit and control strategies. Information Technology and Management 2(1), 27–51). Orman studied the counting, periodic and hybrid auditing strategies with the objective of minimizing the number of errors introduced during database access. Unlike Orman, whose focus is on assessing the number of errors entering the system (technical feasibility), we focus on the long run operating cost of running a database audit. We use results from regenerative stochastic processes to derive expressions for the long run average cost under the counting and periodic auditing strategies. Future directions for research are also proposed.
© 2004 Published by Elsevier Inc.

* Corresponding author. Tel.: +1 519 253 3131; fax: +1 519 973 7073.
E-mail addresses: jagdish@uwindsor.ca, jppathak@computer.org (J. Pathak).

0278-4254/$ - see front matter © 2004 Published by Elsevier Inc.
doi:10.1016/j.jaccpubpol.2004.12.004

Keywords: Database auditing; Renewal theory; Cost of continuous audits; Information assurance

1. Introduction

Databases and online systems have dramatically changed the way businesses transact and keep records. 1 Database-supported online systems enable businesses to conduct their operations electronically, leading to greater efficiency in information processing and in production and sales operations. With databases and online real-time processing systems, businesses can retrieve, classify, and report activities far more quickly (Amer et al., 1987; Orman, 1990; Date, 1995; Alles et al., 2002). Unlike in the past, when reports were produced several months after events and transactions took place, businesses today can provide financial statements and other activity reports almost instantaneously, enhancing their usefulness to investors.
While databases and online real-time processing systems provide several advantages, they also introduce control and security concerns about a client's transaction processing system. One-time data entry, with simultaneous entry into multiple system interfaces and input into cross-functional records, makes monitoring the data for errors and inconsistencies quite difficult (Laudon, 1986; Redman, 1992; Wang et al., 1995; Orman, 2001). While audit trails mitigate some of these concerns, they are often vague and obscured, making error detection difficult and raising questions about the quality of information produced by these systems (Fernandez et al., 1981).
Quality of information is very important to auditors. Before they express an opinion about a client's financial reports, auditors must ensure that the information generated through the client's system is reliable (Elliott, 1997; Alles et al., 2002). This requires that auditors evaluate the adequacy of the controls surrounding a client's information processing systems and test them for errors and inconsistencies. In a conventional audit, auditors perform such an evaluation and testing only after a client's reporting period has ended and the audit of the financial reports begins. Even then, auditors do not examine one hundred percent of a client's transactions; they select only a sample and examine it for errors and inconsistencies. Such limited examination, while adequate for evaluating simpler processing systems, is not sufficient to evaluate
complex processing systems. In complex processing systems, the transaction volume is very high and the transaction processing systems are generally integrated across functions—e.g. manufacturing, inventory, and record-keeping. Consequently, a single error in processing could affect multiple records, and the high transaction volume would make detection of these errors difficult. Therefore, examining these systems for errors and inconsistencies once a year or by limited inspection is unlikely to make the auditor confident about the system's reliability. These systems demand more frequent inspection and greater monitoring to be effective. They must be monitored on a continuous basis, where the auditor gathers information about a client's system electronically, through embedded monitoring tools, and evaluates the system's reliability at regular intervals during the year (Groomer and Murthy, 1989; Elliott, 1997; Rezaee et al., 2001).

1 The term large database refers to databases that are updated online and are used in industries such as transportation, insurance, banking, or government departments such as immigration or treasury.
In the past, several studies have addressed the technical feasibility of continuous auditing (Groomer and Murthy, 1989; Vasarhelyi and Halper, 1991; Vasarhelyi et al., 1991; Halper et al., 1992; Vasarhelyi et al., 2003). However, one important question that these studies have not answered is the economic feasibility of continuous audit. Will there be a demand for continuous audit, and if auditors adopt continuous monitoring, will it be cost-effective for both businesses and auditors? While supporters of continuous auditing (e.g. the Elliott committee (2002)) claim that there is great demand for such audits, businesses and auditors have not enthusiastically embraced continuous audit of clients' systems. As Alles et al. (2002) state, one of the reasons for the low adoption is the high cost of implementation. Therefore, this paper examines the economic feasibility of continuous audit and reports on its long run operating costs and on how those costs change depending on the monitoring strategy used.
This study develops an analytical methodology to identify the long run cost of continuous audit of a large database. Audit, for this purpose, denotes the execution of the integrity constraints, the identification of errors, and the correction of those errors before the affected records are reentered into the system. The analytical model is used to compare two continuous auditing strategies proposed by Orman (2001): counting and periodic monitoring of databases. The model identifies the optimal number of transactions after which an audit must begin under the counting strategy and the optimal time that must elapse before an audit must begin under the periodic strategy. The long run average cost of audit is computed for each monitoring alternative. This study uses the theory of regenerative Markov processes to analytically represent the cost of continuous monitoring under the two strategies (Cinlar, 1975). It is one of the first studies in the accounting literature to use regenerative Markov processes to examine the costs of continuous audit. We expect the results to contribute to an understanding of the cost effectiveness and economic feasibility of continuous auditing.
The remainder of the paper is organized as follows. The next section describes the role of databases in online transaction processing systems and the strategies that must be followed to detect, control, and audit the databases and maintain their integrity. Section 3 describes the audit methodologies developed by the authors to identify the optimal cost of continuous auditing; it also includes numerical examples to illustrate the cost under each audit strategy. The last section compares the cost strategies, summarizes the findings, and provides future directions for research in this area.

2. Auditing database integrity—the monitoring strategies

In a database, integrity constraints are used as the principal tools to detect errors (Date, 1995; Weber, 1988; Orman, 2001). Integrity constraints are programmed controls integrated within a DBMS. They verify the objective behind each transaction and require that the constraints be satisfied before an input is accepted for processing (Davis and Weber, 1986). Integrity constraints can be invoked in two basic ways: as automatic, continuous integrity tools that monitor data and transactions as they occur, or as intermittent tools invoked either by the DBMS or by a human agent (e.g. an auditor) to verify controls, data input, and processing.
Monitoring a database at all times through integrity constraints is the most secure approach to error detection because it monitors one hundred percent of the transactions and requires that all integrity constraints be satisfied before a transaction is accepted for processing. However, continuous execution of the integrity constraints would be prohibitively expensive for a large database that handles thousands of transactions. Transactions are not processed until the integrity constraints are satisfied, which results in significant delays in transaction processing. Additionally, the delay in transaction processing could lead to additional errors because some of the data might be lost or become outdated (Redman, 1992; Wang et al., 1995). These are some of the reasons why continuous monitoring is rarely used. An alternative to continuous monitoring of all transactions is intermittent monitoring, where integrity constraints are imposed only at certain intervals. For example, the integrity constraints could be invoked after every n transactions have taken place or after x amount of time has passed, as sketched below. Because this approach enforces integrity constraints less frequently than continuous monitoring, it avoids some of the processing delays and system shutdowns that continuous monitoring creates. At the same time, because occasional use of integrity constraints monitors less than one hundred percent of the transactions, errors introduced into the database between two scheduled monitoring periods would go undetected.
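To make the two intermittent schedules concrete, here is a minimal sketch (ours, not the authors'). It assumes a hypothetical check_integrity routine standing in for whatever integrity constraints the DBMS would execute, and contrasts a counting trigger (after every n transactions) with a time-based trigger (after every x units of elapsed time).

```python
import time

def check_integrity(batch):
    # Hypothetical placeholder for executing the DBMS integrity constraints
    # over the transactions accumulated since the last check.
    print(f"checking {len(batch)} transactions")

class CountingMonitor:
    """Counting trigger: invoke the integrity checks after every n transactions."""
    def __init__(self, n):
        self.n, self.pending = n, []

    def record(self, txn):
        self.pending.append(txn)
        if len(self.pending) >= self.n:
            check_integrity(self.pending)
            self.pending = []

class PeriodicMonitor:
    """Time trigger: invoke the integrity checks after every x units of time."""
    def __init__(self, x):
        self.x, self.pending = x, []
        self.last_check = time.monotonic()

    def record(self, txn):
        self.pending.append(txn)
        if time.monotonic() - self.last_check >= self.x:
            check_integrity(self.pending)
            self.pending = []
            self.last_check = time.monotonic()

monitor = CountingMonitor(n=5)
for i in range(12):          # integrity checks fire after the 5th and 10th transactions
    monitor.record({"id": i})
```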
There is no perfect approach to integrity monitoring, and there are cost–benefit trade-offs regardless of which approach is selected. In the current study, we examine two monitoring strategies proposed by Orman (2001): the counting strategy and the periodic monitoring strategy. In his study, Orman discusses three monitoring strategies: a counting strategy, a periodic strategy, and a hybrid strategy that combines features of both. We chose to examine only two of the strategies because the third, the hybrid strategy, requires that the integrity constraints be executed after each transaction, but not in their complete form. The hybrid strategy imposes the integrity constraints on only one transaction at a time, ignoring the impact of errors in one transaction on other transactions. Orman points out that the frequent execution of integrity constraints demanded by the hybrid strategy would increase average error rates without improving monitoring quality. Therefore, we believe the hybrid strategy proposed by Orman is unsuitable for a real-world audit situation because, while it increases the error rates and the costs of monitoring, it does not improve data quality.
We should also point out that our study goes beyond the strategies and analytical techniques developed by Orman. Orman (2001) focused only on error minimization under various monitoring strategies; he did not analyze the costs of such monitoring. We extend Orman's analytical technique by including the costs of monitoring under the two strategies and report on the twin issues of error minimization and cost minimization. We also do not use the periodic strategy in the original form discussed in Orman (2001). Unlike Orman's study, where periodic monitoring is required after a fixed length of time has passed, we use a variable length of time. We believe that the variable time represents real-world monitoring situations more closely than fixed monitoring periods. Further, our variable length of time considers both the number of transactions that arrived during the period P and the number of transactions processed during the audit time. We also use a different methodological approach. While Orman uses standard queuing models, we use Markov renewal theory (queuing models also use renewal cycles, a variant of renewal theory) because it more appropriately represents line processes (e.g. transactions entering a database). We relax many of the restrictive assumptions used in the Orman study to make them more representative of real-world monitoring situations. The less restrictive assumptions make our results more robust. In the following section, we describe in greater detail the two principal monitoring strategies used in this study.

2.1. Counting strategy

This approach requires that a database be monitored for errors and other integrity violations after every n transactions. A transaction for this purpose includes the input, deletion, or modification of one or more records within a database. One of the critical requirements of the counting strategy is the selection of n, the number of transactions after which the database must be monitored for integrity violations. If n is too large, errors would go undetected; if n is too small, the costs of monitoring would increase. The disadvantage of the counting strategy is that many errors could go undetected for as long as the required number of transactions n has not yet entered the system.

2.2. Periodic strategy

This approach requires that the database be monitored after a certain amount of time has passed. It detects errors and data violations introduced since the last monitoring period. One of the critical requirements of the periodic strategy is that the optimal monitoring period be selected appropriately. If the interval between two inspections is large, errors introduced into the database during the intervening period could go undetected and uncorrected; data, information, and reports generated during that period would contain errors and would lead to incorrect decisions. If the interval between inspections is too small, monitoring costs increase.
In our earlier discussion, we pointed out that both the counting and periodic monitoring strategies demand a certain level of error tolerance and a trade-off between data accuracy and timeliness. If data accuracy dominates the choice of monitoring strategy, timeliness is sacrificed, leading to data obsolescence. If timeliness dominates the choice of monitoring strategy, data accuracy is sacrificed, leading to more errors in the database. This study takes these factors into consideration and approaches the cost of monitoring as a joint function of erroneous data and lack of timeliness. The next section develops and presents the analytical methodology incorporating the twin issues of error rates and timeliness under the two strategies.

3. Minimizing cost of continuous audit—the methodology

One of the primary functions of an auditor during an audit is to verify the quality of the data under audit. A principal measure of data quality is the number of errors within a database. Errors are minimized when a database is monitored for errors. Orman (2001) developed an analytical methodology to observe the error rates under various monitoring strategies. The analytical process developed by Orman is subject to several assumptions: (1) all errors are equally important; (2) each error arrives independently of the others and in a random fashion; (3) each error requires T units of deterministic time to monitor, detect, and correct. 2

2 Orman tested his analytical model after relaxing some of the assumptions and found the basic results to hold even after the assumptions were relaxed.

While these simplifying assumptions produce reasonable results, they are less representative of a real audit setting. All errors are not equally important; some are more serious than others. For example, an error in the date of an internal memorandum would be considered less serious than a mistake in the pay rate of contract employees. Errors do not arrive at fixed intervals of time; they can be introduced at any point during a transaction processing sequence. Errors do not take an equal amount of time to monitor, detect, or correct: while some errors may require short clearance times, others may require much longer. Therefore, building on Orman's (2001) work, our analysis makes the following assumptions:

1. Transactions arrive in a database according to a Poisson process at the rate of r transactions per unit-time.
2. Each transaction requires T units of time to process and validate, where the length of time T depends on the transaction type and hence varies from one transaction to the next.
3. The processing and validation times for successive transactions, T1, T2, T3, . . ., Tn, are independent random variables with a probability density function f.
4. Transactions are checked and validated one at a time during the audit period.
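As a reading aid only, the short sketch below (not part of the paper) generates a transaction stream consistent with these assumptions: exponential interarrival times at rate r, i.e., a Poisson arrival process, and independent validation times T_i. The choice of an exponential density for f is purely illustrative.

```python
import random

def simulate_stream(r, mean_t, horizon, seed=0):
    """Return (arrival_time, validation_time) pairs over [0, horizon]:
    Poisson arrivals at rate r, i.i.d. validation times with mean mean_t."""
    rng = random.Random(seed)
    t, stream = 0.0, []
    while True:
        t += rng.expovariate(r)                            # exponential gap => Poisson process
        if t > horizon:
            return stream
        stream.append((t, rng.expovariate(1.0 / mean_t)))  # illustrative choice of f

stream = simulate_stream(r=1.0, mean_t=0.25, horizon=100.0)
print(len(stream), "transactions arrived in 100 time units")  # roughly r * horizon = 100
```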

This study uses Markov renewal theory as the foundation for developing the analytical model. A basic result in renewal theory is that the long run average cost per unit-time can be obtained from the following relationship (see Cinlar, 1975):

\text{long run average cost per unit-time} = \frac{E[\text{cost incurred during a renewal cycle}]}{E[\text{cycle time}]}.
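This renewal-reward identity is easy to check numerically. The sketch below (our illustration, with arbitrary cycle-length and cost distributions) simulates many independent cycles and confirms that the observed cost per unit-time approaches E[cost per cycle] / E[cycle time].

```python
import random

rng = random.Random(42)

def one_cycle():
    # One renewal cycle: a random length and the cost incurred during it.
    # Here E[length] = 2 and E[cost] = 5 + 3 * E[length] = 11 (arbitrary choices).
    length = 1.0 + rng.expovariate(1.0)
    cost = 5.0 + 3.0 * length
    return length, cost

total_time = total_cost = 0.0
for _ in range(200_000):
    length, cost = one_cycle()
    total_time += length
    total_cost += cost

print("simulated cost per unit-time:", round(total_cost / total_time, 3))
print("E[cost]/E[cycle time]       :", 11.0 / 2.0)
```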

3.1. Analytical model for counting strategy

Under the counting strategy, monitoring transactions for errors begins after a fixed number of transactions have arrived on the processing line. Let n be the number of transactions after which error monitoring must begin. The monitoring for errors continues until all transactions have been checked, including all transactions that arrived on line during the monitoring period. Fig. 1 illustrates this concept using n = 5; each mark represents the occurrence of a transaction.

Fig. 1. Counting strategy.

The time that elapses between two consecutive audit periods is called a cycle time and is denoted C. The cycle time C can be subdivided into C1, the time it takes to audit the n transactions and the transactions that arrived on line during the audit period, and C2, the time that elapses before the audit starts again, during which no audit is conducted. Fig. 2 illustrates these concepts.

Fig. 2. Counting strategy: audit cycle time.
The expected value of the cycle time, E(C), is

E(C) = E(C_1) + E(C_2).

Let r be the mean arrival rate of update transactions, so that 1/r is the mean time between arrivals. We assume that transactions arrive according to a Poisson process, i.e., the time of arrival of each transaction is independent of all previous arrivals. Under the counting strategy, the next audit begins once n transactions have arrived after the completion of the previous audit, during which time no errors are checked; C_2 is therefore the time needed for n arrivals, and its expected value is

E(C_2) = \frac{n}{r}.
Since the objective is to minimize the long run cost of auditing the database, we require one or more cost parameters. Suppose a fixed audit cost of A dollars is incurred each time an audit is performed, and an additional cost of a dollars per unit-time is incurred for each transaction that is held in queue and delayed without being processed. The expected delay cost that accumulates during C_2, when no audit is taking place, is then

a\left[\frac{n-1}{r} + \dots + \frac{1}{r}\right] = \frac{an(n-1)}{2r}.

Let X_n represent the expected time until the audit is finished, given that the audit is started after every n transactions, and let D_n denote the expected delay cost incurred during the audit itself; both are derived below. Under the counting approach, the long run average cost per unit-time is then

\frac{a(n-1)n/(2r) + A + D_n}{n/r + X_n}.
Before computing X_n and D_n, we must determine the probability distribution of the number of transactions that arrive while another transaction is being audited. Let p_i denote the probability that i transactions arrive while one transaction is being audited. Since transactions arrive at the database according to a Poisson process (Cinlar, 1975),

p_i = \int_0^{\infty} e^{-ru} \frac{(ru)^i}{i!} f(u)\,du,

where f(u) is the probability density function of the time it takes to validate a transaction. Let k be the number of new transactions that arrive while a given transaction is being checked and validated. The expected time to complete the audit is then the time taken to validate the first transaction plus the time taken to process the remaining (n - 1) + k transactions now in queue. Let E(T_1) be the expected time required to audit a transaction. Using these parameters, we can write

X_n = E(T_1) + \sum_{k=0}^{\infty} X_{n-1+k}\, p_k.   (1)

A solution to Eq. (1) is a linear function of n, i.e.,

X_n = \alpha + \beta n.

Substituting this functional form into both sides of Eq. (1) and solving for \alpha and \beta, we obtain

X_n = \frac{nE(T_1)}{1 - rE(T_1)}.

Suppose 1 - rE(T_1) > 0, i.e., rE(T_1) < 1, and let s = rE(T_1). Then

X_n = \frac{ns}{r(1 - s)}.
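The closed form for X_n can be sanity-checked by simulation. In the sketch below (ours), an audit starts with n transactions in queue, each validation time is drawn from an exponential density with mean E(T_1) (an arbitrary stand-in for f), and transactions that arrive during a validation, at Poisson rate r, join the queue; the average time to empty the queue should approach nE(T_1)/(1 - rE(T_1)) as long as rE(T_1) < 1.

```python
import random

def clearing_time(n, r, mean_t, rng):
    """Time to audit n queued transactions plus every transaction that
    arrives (Poisson, rate r) before the audit finishes."""
    queue, elapsed = n, 0.0
    while queue > 0:
        t = rng.expovariate(1.0 / mean_t)   # one validation time (illustrative exponential f)
        elapsed += t
        queue -= 1
        gap = rng.expovariate(r)            # count the Poisson arrivals during this validation
        while gap <= t:
            queue += 1
            gap += rng.expovariate(r)
    return elapsed

rng = random.Random(1)
n, r, mean_t = 12, 1.0, 0.25
runs = 20_000
sim = sum(clearing_time(n, r, mean_t, rng) for _ in range(runs)) / runs
print("simulated X_n          :", round(sim, 3))
print("n*E(T1)/(1 - r*E(T1))  :", n * mean_t / (1 - r * mean_t))   # 4.0
```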

The expected delay cost incurred during the audit of the first transaction is anE(T_1) + \frac{1}{2}arE(T_1^2), where E(T_1^2) is the second moment of T_1. Therefore,

D_n = anE(T_1) + \frac{1}{2}arE(T_1^2) + \sum_{k=0}^{\infty} D_{n-1+k}\, p_k.   (2)

A solution to Eq. (2) is a quadratic function of n, i.e.,

D_n = \alpha + \beta n + \gamma n^2.

Substituting this functional form into Eq. (2) and solving for the parameters \alpha, \beta and \gamma, we obtain

D_n = \frac{a}{1 - rE(T_1)}\left[\frac{1}{2}n(n-1)E(T_1) + nE(T_1) + \frac{rnE(T_1^2)}{2(1 - rE(T_1))}\right].

Summarizing the results so far, the expected cycle time is

\frac{n}{r} + X_n = \frac{n}{r} + \frac{nE(T_1)}{1 - rE(T_1)} = \frac{n(1 - rE(T_1)) + rnE(T_1)}{r(1 - rE(T_1))} = \frac{n}{r(1 - rE(T_1))},

and therefore

\frac{1}{n/r + X_n} = \frac{r(1 - rE(T_1))}{n}.

Thus, the long run average cost per unit-time is

\frac{r(1 - rE(T_1))}{n}\left[\frac{a(n-1)n}{2r} + A + D_n\right].

Substituting the expression for D_n, the long run average cost per unit-time becomes

\frac{r(1 - rE(T_1))A}{n} + \frac{r(1 - rE(T_1))}{n}\left\{\frac{a(n-1)n}{2r} + \frac{a}{1 - rE(T_1)}\left[\frac{1}{2}n(n-1)E(T_1) + nE(T_1) + \frac{rnE(T_1^2)}{2(1 - rE(T_1))}\right]\right\},

and after simplification, we have

\text{Long run average cost per unit-time} = \frac{r(1 - s)A}{n} + \frac{a}{2}\left[2s + (n - 1) + \frac{r^2 E(T_1^2)}{1 - s}\right],   (3)

where s = rE(T_1). The optimal value of n is

n = \sqrt{\frac{2r(1 - s)A}{a}}.   (4)
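For experimentation, Eqs. (3) and (4) translate directly into code. The sketch below is our own rendering; the function and parameter names (counting_cost, optimal_n, ET1, ET1_sq) are not from the paper.

```python
import math

def counting_cost(n, A, a, r, ET1, ET1_sq):
    """Eq. (3): long run average cost per unit-time under the counting strategy.
    A: fixed audit cost, a: delay cost per queued transaction per unit-time,
    r: arrival rate, ET1/ET1_sq: first and second moments of the validation time."""
    s = r * ET1                               # requires s < 1 for stability
    return r * (1 - s) * A / n + (a / 2) * (2 * s + (n - 1) + r**2 * ET1_sq / (1 - s))

def optimal_n(A, a, r, ET1):
    """Eq. (4): cost-minimizing (real-valued) number of transactions between audits."""
    s = r * ET1
    return math.sqrt(2 * r * (1 - s) * A / a)
```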

3.2. Numerical illustration of counting strategy

We now substitute numbers for the symbols in Eqs. (3) and (4) to illustrate the choice of the optimal number of transactions before an audit begins and the long run dollar cost of the audit, including the cost of delay.
Suppose the fixed cost of an audit is A = $100; the delay cost for each transaction held in queue during the audit is a = $1 per unit-time; the arrival rate of transactions is r = 1 per unit-time; the average time to validate a transaction is E(T_1) = 0.25 time units; and the standard deviation of the validation time is \sigma = 0.125 time units. Substituting these values into Eq. (4), we obtain the optimum number of transactions after which the integrity constraints must be executed again:

\frac{\sigma}{E(T_1)} = \frac{0.125}{0.25} = 0.50 \ (50\%),

E(T_1^2) = \sigma^2 + E(T_1)^2 = (0.125)^2 + (0.25)^2 = 0.0156 + 0.0625 = 0.0781,

s = rE(T_1) = 1 \times 0.25 = 0.25 \quad \text{and} \quad (1 - s) = 1 - 0.25 = 0.75,

\Rightarrow n = \sqrt{\frac{2 \times 1 \times 0.75 \times 100}{1}} = 12.2474.

That is, after about 12 transactions have entered the system, the integrity constraints will be invoked to do error checking. Substituting the values into Eq. (3) gives the long run average cost per unit-time of auditing after every 12 transactions:

\frac{1 \times 0.75 \times 100}{12} + \frac{1}{2}\left[2 \times 0.25 + 11 + \frac{1^2 \times 0.0781}{0.75}\right] = 6.25 + 5.8021 = \$12.0521 \text{ per unit-time}.

For n = 13,

\text{cost} = \frac{1 \times 0.75 \times 100}{13} + \frac{1}{2}\left[2 \times 0.25 + 12 + \frac{1^2 \times 0.0781}{0.75}\right] = 5.7692 + 6.3021 = \$12.0713 \text{ per unit-time}.

Of the two candidate integer values, n = 12 gives the lower cost; that is, the long run average cost per unit of audit time under the counting strategy is approximately $12.05.
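A few lines of code reproduce the figures above; this is our own check, written to be self-contained rather than reusing the helper functions sketched after Eq. (4).

```python
import math

# Parameters of the illustration: A = $100, a = $1, r = 1, E(T1) = 0.25, sigma = 0.125
A, a, r, ET1, sigma = 100.0, 1.0, 1.0, 0.25, 0.125
s = r * ET1
ET1_sq = sigma**2 + ET1**2                          # second moment = 0.0781

def cost(n):                                        # Eq. (3)
    return r * (1 - s) * A / n + (a / 2) * (2 * s + (n - 1) + r**2 * ET1_sq / (1 - s))

n_star = math.sqrt(2 * r * (1 - s) * A / a)         # Eq. (4)
print(round(n_star, 4))                             # 12.2474
print(round(cost(12), 4), round(cost(13), 4))       # 12.0521 12.0713
```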

3.3. Model to evaluate cost of periodic strategy

The periodic strategy requires that the audit for errors in transactions begin after fixed intervals of time. That is, an audit starts after every P units of time have elapsed. The audit covers all transactions that entered the system during the period P, when no audit was conducted, and all transactions that entered the system while the audit was being conducted. Fig. 3 illustrates these concepts.

Fig. 3. Periodic audit strategy.

As with the counting strategy, the objective is to minimize the long run cost of auditing the database, so we need one or more cost parameters. Assume that a fixed audit cost of A dollars is incurred each time an audit is performed and an additional cost of a dollars per unit-time is incurred for each transaction that is held in queue and delayed without being processed. Before determining the long run average cost per unit of time, it is essential to determine the length of time P that elapses between the end of the last audit and the start of the new audit. This model is developed under the same assumptions used for the counting strategy.
The cycle time consists of two parts: P, the time that elapses between the completion of the previous audit and the start of the new audit, and the audit time itself, which is a random quantity. The expected cycle length E(C) is therefore

E(C) = P + E[\text{audit time}].
Suppose that n transactions occur during the time period P. Then

E(C \mid n) = P + X_n,

where X_n is the expected time until the audit is finished, given that the audit starts with n transactions in queue. X_n is the same quantity defined earlier for the counting approach, that is,

X_n = \frac{nE(T_1)}{1 - rE(T_1)}.

Similarly, let p_n be the probability that n transactions arrive during the waiting period P. Then

p_n = e^{-rP} \frac{(rP)^n}{n!}.

Therefore,

E(\text{cycle length}) = P + \sum_{n=0}^{\infty} p_n \frac{nE(T_1)}{1 - rE(T_1)} = P + \sum_{n=0}^{\infty} e^{-rP} \frac{(rP)^n}{n!} \cdot \frac{nE(T_1)}{1 - rE(T_1)} = P + \frac{rP\,E(T_1)}{1 - rE(T_1)}.

Letting s = rE(T_1), this becomes

E(\text{cycle length}) = P + \frac{sP}{1 - s} = \frac{P}{1 - s}.

It can be shown that the average delay cost incurred during the period P equals \frac{1}{2}arP^2. Given that n transactions occurred during P, the expected delay cost incurred during the audit itself is D_n. 3

3 The derivation of D_n was given under the counting strategy, following Eq. (2).

Substituting the various parameters, the expected total cost incurred during a periodic cycle is

\frac{1}{2}arP^2 + \sum_{n=0}^{\infty} D_n\, e^{-rP} \frac{(rP)^n}{n!} + A,

where

D_n = \frac{a}{1 - rE(T_1)}\left[\frac{1}{2}n(n-1)E(T_1) + nE(T_1) + \frac{rnE(T_1^2)}{2(1 - rE(T_1))}\right].

Carrying out the summation, the expected cost per cycle reduces to

A + \frac{a}{1 - rE(T_1)}\left[\frac{1}{2}rP^2 + rP\,E(T_1) + \frac{r^2 P\,E(T_1^2)}{2(1 - rE(T_1))}\right],

and the long run average cost per unit-time is

\frac{A\{1 - rE(T_1)\}}{P} + \frac{1}{2}arP + arE(T_1) + \frac{ar^2 E(T_1^2)}{2\{1 - rE(T_1)\}}.   (5)

The optimal period P to wait between audits is

P = \left[\frac{2A\{1 - rE(T_1)\}}{ar}\right]^{1/2} = \left[\frac{2A(1 - s)}{ar}\right]^{1/2}.   (6)
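Eqs. (5) and (6) can likewise be coded up; as before, this is our sketch and the names periodic_cost and optimal_period are ours.

```python
import math

def periodic_cost(P, A, a, r, ET1, ET1_sq):
    """Eq. (5): long run average cost per unit-time under the periodic strategy."""
    s = r * ET1                                    # requires s < 1
    return (A * (1 - s) / P + 0.5 * a * r * P
            + a * r * ET1 + a * r**2 * ET1_sq / (2 * (1 - s)))

def optimal_period(A, a, r, ET1):
    """Eq. (6): cost-minimizing waiting time P between audits."""
    s = r * ET1
    return math.sqrt(2 * A * (1 - s) / (a * r))
```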

3.4. Numerical illustration of periodic strategy

We now substitute numbers for the symbols in Eqs. (5) and (6) to illustrate the optimal amount of time that must be allowed before the start of a new audit and the long run dollar cost of the audit, including the cost of delay.
Suppose the fixed cost of each monitoring audit is A = $100; the delay cost of each transaction held in queue during the audit is a = $1 per unit-time; the arrival rate of transactions is r = 1 per unit-time; and the average time to validate a transaction is E(T_1) = 0.25 time units. Since s = rE(T_1), s = 1 \times 0.25 = 0.25. Substituting these numbers into Eq. (6),

P = \left[\frac{2 \times 100 \times 0.75}{1 \times 1}\right]^{1/2} = 12.24 \text{ time-units}.

To compute the long run average cost per unit-time, we substitute the above values into Eq. (5):

\frac{100(1 - 0.25)}{12.24} + \frac{12.24}{2} + 0.25 + \frac{0.0781}{2 \times 0.75} = \$12.55 \text{ per unit-time}.

The numerical example shows that, under the periodic strategy, a new audit should begin after approximately 12 units of time have elapsed since the conclusion of the previous audit, and that auditing all transactions that arrived during this waiting period, together with all transactions that arrived while the audit was in progress, would cost an average of $12.55 per unit of time.
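The sketch below (ours, self-contained) reproduces this illustration and places it next to the counting-strategy figure from Section 3.2.

```python
import math

A, a, r, ET1, sigma = 100.0, 1.0, 1.0, 0.25, 0.125
s = r * ET1
ET1_sq = sigma**2 + ET1**2                               # 0.0781, as in Section 3.2

P_star = math.sqrt(2 * A * (1 - s) / (a * r))            # Eq. (6): about 12.25 time units
periodic = (A * (1 - s) / P_star + 0.5 * a * r * P_star
            + a * r * ET1 + a * r**2 * ET1_sq / (2 * (1 - s)))   # Eq. (5)
print(round(P_star, 2), round(periodic, 2))              # 12.25 12.55

# For comparison, the counting strategy with n = 12 (Section 3.2) costs about $12.05,
# so under these parameters the counting strategy is slightly cheaper.
```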

4. Summary and conclusions

Auditors are required to assure the quality of data and information contained in corporate databases as part of their audit process. The use of online and real-time transaction processing has made auditing for data quality difficult and requires auditors to monitor databases on a near-continuous basis. Although several studies have examined the technical feasibility of continuous audit of databases, the economic feasibility of such audits remains an important issue.
This study examined continuous audit of databases from a cost minimization perspective. Using two important audit strategies proposed by Orman, the counting strategy and the periodic strategy, this study developed an analytical methodology and relaxed some of the assumptions used by Orman. The optimum timing of audits and the cost of such audits per unit of time were compared under each strategy. The counting strategy, where an audit is triggered after a certain number of transactions have entered the system, was more cost-effective than the periodic strategy, where an audit is triggered after a certain amount of time has elapsed. However, a caveat is in order. As discussed earlier, while cost is an important attribute in the choice of audit timing, other attributes, such as tolerance for errors and the impact on financial reports, must also be taken into account when deciding how to audit a database.
The cost comparisons of the various audit strategies reported in this study are important because this is one of the first studies to examine continuous auditing from a cost minimization focus. The analytical methodology developed here would be useful to auditors in determining the cost of auditing under various scenarios. When the cost minimization methodology reported in this study is used in conjunction with the Orman methodology, in which audit strategies are examined from an error minimization focus, auditors will have a greater understanding of the various facets of continuous auditing.
Future research can develop methodologies that combine our methodology with the Orman methodology to find a balance between the cost of continuous audit and tolerable error rates. These issues are important for both businesses and auditors in developing monitoring strategies that achieve technical and economic feasibility at the same time. Other studies can examine the design and development of intelligent modules for continuous auditing and use the cost minimization strategy presented in this paper as a criterion for evaluating the effectiveness of those modules.

Acknowledgment

We express our gratitude to the anonymous reviewers and the editors for
their contribution to the refinement of this paper.

References

Alles, M.G., Kogan, A., Vasarhelyi, M.A., 2002. Feasibility and economics of continuous
assurance. Auditing, A Journal of Practice and Theory 21 (1), 123–138.
Amer, T.A., Bailey, D., De, P., 1987. A review of the computer information systems research
related to accounting and auditing. Journal of Information Systems 1, 3–28.
Cinlar, E., 1975. Introduction to Stochastic Processes. Prentice Hall, New Jersey.
Date, C.J., 1995. An Introduction to Database Systems. Addison-Wesley, Reading, MA.
Davis, G.B., Weber, R., 1986. The impact of advanced computer systems on controls and audit
procedures: A theory and empirical test. Auditing, A Journal of Practice and Theory 46, 1–28.
Elliott, R., 1997. Assurance service opportunities: Implications for academia. Accounting
Horizons 11 (4), 61–74.
Elliott, R., 2002. Twenty-first century assurance. Auditing, A Journal of Practice and Theory
(March), 139–146.
Fernandez, E.B., Summers, R.C., Wood, C., 1981. Database Security and Integrity. Addison-
Wesley, Reading, MA.
Groomer, S.M., Murthy, U.S., 1989. Continuous auditing of database applications: An embedded
audit module approach. Journal of Information Systems 3, 53–69.
Halper, F.B., Snively, J., Vasarhelyi, M.A., 1992. The continuous audit process audit system:
Knowledge engineering and representation. EDPACS (4), 15–22.
Laudon, K.C., 1986. Data quality and due process in large interorganizational record systems.
Communications of the ACM 29 (1), 4–11.
Orman, L.V., 1990. Functional semantics of accounting data. Journal of Accounting Systems 14
(4), 480–502.
Orman, L.V., 2001. Database audit and control strategies. Information Technology and
Management 2 (1), 27–51.
Redman, T.L., 1992. Data Quality: Management and Technology. Bantam Books.
Rezaee, A., Elam, R., Sharbatoghlie, A., 2001. Continuous auditing: Building automated auditing
capability. Auditing, A Journal of Practice and Theory 21 (1), 147–163.
Vasarhelyi, M., Halper, F.B., 1991. The continuous audit of online systems. Auditing, A Journal of
Practice and Theory 19 (1), 110–125.
Vasarhelyi, M.A., Halper, F.B., Ezawa, K.J., 1991. The continuous process audit system: A UNIX-
based auditing tool. The EDP Auditors Journal 3, 85–91.
Vasarhelyi, M.A., Alles, M.G., Kogan, A., 2003. Principles of analytic monitoring for continuous
assurance, Working Paper presented at Continuous Auditing Conference, Texas A&M
University, College Station, TX. pp. 1–30.
Wang, R.Y., Storey, V.C., Firth, C.P., 1995. A framework for the analysis of data quality research.
IEEE Transactions on Knowledge and Data Engineering 7 (4), 623–639.
Weber, R., 1988. EDP Auditing: Conceptual Foundations and Practices. Prentice-Hall, New
Jersey.
