Canada Measuring ESD
In e-government, performance measures should define progress and success, giving the
public reasonable assurance that the system is accountable to its customers. The
measures should be developed with stakeholders’ input and should be clearly
documented and well promoted. Without finite measurements and stakeholder input and
consensus of what those measurements will be, government risks having the project
assessed by arbitrary measurement standards.
NECCC report e-Government Strategic Planning: A White Paper1
As Lac Carling V approaches, the public profile of e-government and electronic service
delivery (ESD) has never been higher. Long the domain of technology-oriented policy
wonks, ESD today finds public officials from around the world racing to interact with
citizens and businesses through electronic channels.
How are we in the Canadian public sector doing in this race? Are we doing as well as or
better than others – as Accenture suggests2 – or are we behind? Are we close to our
goals? What are those goals – and what should they be? Based on what we have learned
so far about the relationship between our actions and their results, what should we do
next?
These are important questions. To answer them, we need measures from points
throughout the complex value chain of inputs, processes, outputs, and outcomes that
governments – interacting with other institutions – use to produce and deliver electronic
services. We need measures of the social values affected by the ESD value chain –
values such as service effectiveness and efficiency, personal privacy, systems security,
community cohesiveness and equity, our capacity for democratic governance, and the
legitimacy of our governmental systems.
In short, we need to make good decisions about what to measure, how to measure, and
how to interpret those measures.
Given Canada’s status as one of the most connected nations in the world (six in ten
Canadians report being online3 and almost one in two report Internet access in the
home4), the demand for ESD in Canada is not surprising. Research on e-government in
Canada from Ekos Research Associates5 suggests that more than half of Canada’s
Internet users have visited at least one government web site in the past three months.
Moreover, six in ten Canadian Internet users believe the Internet is an effective way for
governments to communicate with citizens about programs and services. Public opinion
is especially favorable toward the portal or “one-window” concept, where almost seven
in ten Canadian Internet users believe citizens should be able to apply for programs and
services from different levels of government through a single web site. Research from
Ipsos-Reid reveals a similar story of support and demand for interacting with government
online. According to a poll conducted in the spring of 2001,6 almost four in ten Canadian
Internet users believe that in the next five years the majority of their dealings with
government will be conducted over the Internet – and this number rises to almost sixty
percent among young Canadians.
In the face of growing demand, Canadian governments have been busy working to offer
services through a variety of electronic channels, including interactive voice response
telephony (IVR), the Web, and electronic kiosks. At the federal level, the Government of
Canada is actively working toward a goal of having all government information and
services online by 2004, and has established the Government Online (GOL) initiative to
help drive toward this goal. Building on a strong ESD foundation that saw 5.7 million
tax returns filed using EFILE or TELEFILE in 2000 and 350,000 jobs posted on the
government’s online Job Bank10, early GOL activities include a user-focused redesign of
the main Government of Canada portal, building of a common IM/IT infrastructure, and
funding of a series of “Pathfinder” projects to accelerate learning.
At the provincial and territorial level, a new focus on ESD is also becoming evident.
Every province and territory in Canada now has an office or agency responsible for
delivering services through new electronic channels.11 Appendix A highlights some of
the most interesting ESD activities of the past year: Newfoundland’s motor vehicle
payment transaction engine, which is integrated with both the government’s backend and
the Royal Bank’s; Prince Edward Island’s Telehospice project, which connects the homes
of terminally ill patients to a nursing station at a local hospital, monitoring vital data so
that patients can spend more time at home during their last days; Manitoba’s day care
calculator, which will soon link parents to a database of available day care spaces; and
British Columbia’s OneStop business registration, which is integrating municipal and
provincial efforts.
Less visible than these web-based applications, Canadian governments at all levels have
been actively working to establish the policies, legal frameworks, and technology
infrastructures that will make electronic service delivery possible. For example,
information policies that cover privacy, access to information, and security, legislation to
enable digital signatures and protect personal information, and technology infrastructures
to support secure transactions are all on the agenda.
With all this apparent activity, it is appropriate for us to ask the questions: Are we
succeeding in advancing an ESD agenda, and if we are, how do we know? Are we
delivering the right services at the right pace? What measures are important in charting
progress? These and a myriad of related questions are floating around without any
definitive answers.
How government pursues the goal of ESD is, at its heart, a public policy decision. In this
context, it is useful to recall Stokey and Zeckhauser’s12 classic five-step analytic
approach to public decision making: establish the context, lay out the alternatives, predict
the consequences, value the outcomes, and make a choice. Sound public decision-making
requires the policy maker to understand the consequences of different alternatives
in a given situation and make a choice based on a set of social preferences.
Inherent in this analytic approach is a definition of success. Success is achieved when the
policy choice taken produces an outcome that maximizes the welfare of society, where
the welfare of society is defined broadly as the sum of each citizen’s individual welfare.
While conceptually straightforward, the maximization of social welfare in a complex
system is much more difficult to define or achieve in practice. First, how do we define
social welfare and what factors affect social welfare? For example, we know that in
pursuing ESD we want to improve the quality of service, ensure the privacy and security
of data, increase levels of customer satisfaction, and raise the public perception of
government. But, we also want to provide equitable access to services, integrate service
across delivery channels, improve the accuracy of data collection, and stimulate
e-commerce. Do these objectives represent all of the factors that affect social welfare?
Clearly no; even a cursory review of statements made by governments when launching
ESD initiatives reveals a myriad of perceived benefits – from accessibility to economic
development, the list is very long. Furthermore, the complex and often competing nature
of our values necessitates trade-offs among them. Aggregating measures of these
individual values into an overall measure of social value that would help in prioritizing
actions is not easy when individual preferences differ and are in conflict. Yet, if we are
to assess the success of ESD initiatives we must have some idea of what success means.
While a number of tools have evolved over the years to help with the challenge of
identifying social values, the practical exercise is necessarily an incomplete
approximation.
Figure 1: The measurement cycle: measure progress toward goals, analyze possible
actions, decide on the best course of action, and act.
While the complexity of the system makes measuring the impact of ESD challenging,
government cannot shy away from the measurement exercise. As the saying goes, “You
only get what you measure.” If government ignores factors that are difficult to measure,
it will not get an accurate assessment of the progress made.
How do we define success in ESD? What is the goal we are seeking to attain? Looking
at the ESD landscape in Canada and elsewhere it is clear that the overriding focus is on
making services available through new electronic channels in a timely manner so as to
encourage citizens to migrate from traditional delivery channels. What was once a mix
of pilot projects and experiments has taken a more organized form as governments from
Australia to Europe to North America are setting public targets in an attempt to get from
here (limited or first generation ESD) to there (full or at least robust second generation
ESD).
The earliest example of public targets came from the federal government in Australia. As
early as 1997, Prime Minister John Howard announced that the government “is
committed to… delivering all appropriate Commonwealth services electronically on the
Internet by 2001.”13
Since then, governments from the United Kingdom to the Netherlands to the United
States have set their own ESD targets (see Box 1 for a sample of ESD targets).
Governments in Canada have also begun setting public targets. In 1999, the Government
of Canada announced that, “By 2004, our goal is to be known around the world as the
government most connected to its citizens, with Canadians able to access all government
information and services online at the time and place of their choosing."14 Shortly
Figure 2: Commonly cited ESD objectives
Better Service: Citizens have come to expect efficient services delivered when and where
they want them. A study by Momentum Research Group found that citizens perceive
convenience, speed, and time saving as the primary benefits of ESD.17 The majority of
Canadian Internet users (69%) view the Internet as simplifying access to government
programs and services, and seven in ten believe Canadians should be able to apply for
programs and services from different government departments through a single site.18
Most of all, however, citizens expect choice – choice in how and when they interact with
government, and governments are trying to respond.19 As Erin Research notes, “citizens
expect government services to be as good, if not better, than what they can get from the
private sector.”20
More Efficient Government: While rarely measured, the biggest driver for many
governments pursuing ESD is cost savings. With governments being asked to deliver
Benefits to Society: Aside from the direct benefits of delivering services through new
electronic channels, many governments also recognize that ESD can contribute toward
other government objectives. Examples cited by governments include improved
economic development, improved perception of and relevance of government, and more
citizens participating in the knowledge society.
Two things are important to note about the objectives listed in Figure 2. First, while there
is overlap in the objectives cited by different governments, the goals of any policy maker
must be consistent with the social values of the governed. Therefore, this paper does not
attempt to define ESD success in universal terms, but rather it suggests some criteria for
success based on the experiences of leading jurisdictions, leaving specific choices to
those who must tailor their decisions to the conditions of particular jurisdictions and
constituencies. Second, and most important, any definition of success must clearly be
broader than simply achieving ESD targets and milestones; while targets and milestones
serve as important drivers, they do not wholly define success. Success is achieved when
the right services are available through the right channels in a way that maximizes social
welfare.
With a broad understanding of what success might look like, how can we find out if
government is on the right track? This question lies at the root of why measurement is so
important. In order to reach its goals, an organization must understand where it is today
and what effect its current actions have on the pursuit of these goals. Measurement helps
provide this understanding.
1) Stages of E-government
Probably the most popular measure of ESD success is to chart government’s position or
status against a defined timeline, a set of milestones, or a theoretical model charting the
path to ESD maturity. These targets represent a natural starting point to motivate action
and are a way to galvanize support.
For example, the Government of the United Kingdom produces regular Electronic
Service Delivery Reports that track the number of services to citizens or businesses that
will be ready for electronic delivery by the government deadline of 2005.21 Similarly, the
Government Online reporting framework in Australia requires all agencies to report on
their progress toward meeting the government’s target of having all services online by the
end of 2001.22 In both cases, the quantitative number is derived from reports submitted
by each ministry or agency.
Also pursuing a fixed date, the state government in Victoria, Australia, has set eight
intermediate targets to track progress en route to achieving its goal of having 100 percent
of services available online by 2001 (see Box 2). The accompanying reporting system is
a “traffic light” system in which agencies indicate whether they are “green” (on target),
“yellow” (falling behind), or “red” (behind schedule) for each target date and each
service. Included with each rating is a brief summary outlining recent activities and
Box 2: Victoria’s eight intermediate ESD targets
By 31 December 1998:
2) All Government tenders available on the Internet
3) All public forms electronically accessible
4) High-volume printed public information available on the Internet
By 31 December 1999:
5) All Government publications on the Internet
By 30 April 2000:
6) High-volume public transactions online
By 31 December 2001:
7) All Government purchasing online
8) All transactions online
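A minimal sketch of how such a traffic-light report might be represented and summarized, assuming a simple status-per-service-per-target data model; the service names and ratings below are invented for illustration:

```python
# Hypothetical sketch of a "traffic light" progress report in the style of
# Victoria's system: each service is rated against each target date.
# Service names and ratings below are invented for illustration.
from collections import Counter

# green = on target, yellow = falling behind, red = behind schedule
reports = [
    {"service": "tenders online", "target": "1998-12-31", "status": "green"},
    {"service": "forms accessible", "target": "1998-12-31", "status": "yellow"},
    {"service": "all transactions online", "target": "2001-12-31", "status": "red"},
]

summary = Counter(r["status"] for r in reports)
for status in ("green", "yellow", "red"):
    print(f"{status:>6}: {summary[status]} service(s)")
```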
More recently, the Government of the United Kingdom has also started working with
local governments to help set targets and milestones and to measure progress in ESD. In
helping kick-start ESD at the local level, the UK government has proposed a series of
internal milestones. These milestones include designating an authority, giving staff
access to email, gaining shared access to data, and developing marketing plans. While
encouraging local government to develop more specific targets for ESD in each locality,
the UK government has thus far left those targets to local authorities.24 The one area
where the government is measuring progress is with an addition to the Best Value
performance framework – a performance framework that requires local authorities to
report on how they are continuously improving their services. Best Value performance
indicator 157 is focused on “the number and types of interactions that are enabled for
electronic delivery as a percentage of the types of interactions that are legally permissible
for electronic delivery.”25
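Indicator 157 reduces to a simple ratio. The sketch below shows the calculation under hypothetical interaction types; the names are illustrative, not drawn from the Best Value framework itself:

```python
# Sketch of Best Value performance indicator 157: interaction types enabled
# for electronic delivery as a percentage of the interaction types that are
# legally permissible for electronic delivery. The types below are hypothetical.
interaction_types = {
    "providing information":  {"permissible": True,  "e_enabled": True},
    "collecting revenue":     {"permissible": True,  "e_enabled": True},
    "applying for a service": {"permissible": True,  "e_enabled": False},
    "statutory paper notice": {"permissible": False, "e_enabled": False},  # excluded
}

permissible = [t for t in interaction_types.values() if t["permissible"]]
enabled = [t for t in permissible if t["e_enabled"]]
print(f"BVPI 157: {100 * len(enabled) / len(permissible):.0f}%")  # 67%
```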
The Australian National Audit Office (ANAO) is using a similar model in assessing
whether government agencies will be ready to deliver all appropriate services by the end
of 2001. As Box 3 illustrates, the ANAO’s Stages of Internet Service Delivery give
overseers and the public a better sense of how advanced agencies are in delivering
services online.
Box 3: Australian National Audit Office Stages of Internet Service Delivery
The ANAO identified four stages of Internet service delivery. Of the services due to be in
place by 2001, the ANAO determined that:
• 52 per cent would be at Stage 1, at which an agency had a Website that published
information about itself and its services;
• 25 per cent would be at Stage 2, at which an agency allows Internet users to access the
agency database(s), and to browse, explore, and interact with that data;
• 21 per cent would be at Stage 3, at which an agency allows users access as in Stages 1
and 2 and also permits them to enter secure information, and engage in transactions with
the agency; and
• 2 per cent would be at Stage 4, at which, in addition to the level of access permitted at
Stage 3, the agency, with the user’s prior approval, shares with other government
agencies relevant information provided by that user, with a view to providing a
whole-of-government integrated service.
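The stage shares in Box 3 can be condensed into a single summary statistic, a weighted average stage. This is our own illustrative calculation, not a measure the ANAO reports:

```python
# Weighted average maturity of the services the ANAO expected by 2001,
# using the stage shares from Box 3. This single-number summary is our own
# illustration, not a measure the ANAO itself reports.
stage_share = {1: 0.52, 2: 0.25, 3: 0.21, 4: 0.02}
avg_stage = sum(stage * share for stage, share in stage_share.items())
print(f"average stage of Internet service delivery: {avg_stage:.2f} of 4")  # 1.73
```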
As Accenture’s study Rhetoric vs. Reality points out, governments are a long way from
achieving this vision. “Single sites, or portals, which allow citizens and businesses to
seamlessly interact with several government departments at one visit, have not yet
emerged as the dominant model.”32 While the breadth of service offerings is growing,
there is still not much depth.
Clearly these maps are simplified representations of the road from passive web presence
to a future transformed by fully integrated ESD. In fact, many would argue that this road
is anything but linear and that the move to later stages of maturity requires a distinctive
break with the past. While far from a precise measure of success, these tracking
frameworks are useful indicators of where governments stand in relation to their
high-level goals and with respect to private sector thought leadership. Without undertaking a
thorough examination of where different Canadian jurisdictions might lie on a map of
ESD maturity, it seems clear that different governments are at different stages of
development across the country. While governments are beginning to adopt a broader
perspective with respect to ESD, efforts to date have been largely centered on individual
applications and not on the kind of cross-boundary collaborative work that will be
required to seamlessly integrate service delivery and transform government.
2) Benchmarking
The United Kingdom has been a leader in benchmarking its electronic service delivery
efforts. In March 2000, it released an interim report that outlined its strategy for
measuring progress.33 In this document the UK explicitly rejects using a rigid scorecard
approach, noting that different political, cultural, and legislative environments make such
comparisons useless. Instead, it adopted a relatively qualitative approach.
The framework organizes measures into four quadrants:

Demand (consultation with citizens and businesses):
- propensity to use ESD
- availability
- desired services
- adoption rates

Supply (electronic government services):
- service availability
- linking services enterprise-wide and across agency borders

Change (commitment and drivers of change):
- route maps
- monitoring progress
- organization
- targets

Capability (enabling government infrastructure):
- authentication
- data standards
- security
- privacy / records management
Manitoba is in the early planning stages of a similar benchmarking effort. Like the UK,
Manitoba intends to focus its efforts on peer jurisdictions as well as “best of breed”
jurisdictions. Early drafts of Manitoba’s benchmarking scorecard consider global or
enterprise measures such as existing infrastructure and security measures, as well as
service-specific measures such as accessibility and functionality.
Member states of the European Union have also decided to use a benchmarking effort to
assess their ESD progress. Agreeing to a list of 20 basic public services (see Box 4), the
EU is measuring progress toward delivery of these services using a four-stage maturity
model: posting of information, one-way interaction, two-way interaction, and full online
transactions including delivery and payment.
Source: Commission of the European Communities, eEurope 2002 Impact and Priorities:
Communication from the Commission to the Council and the European
Parliament, March 2001.
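A sketch of how the EU’s four-stage scoring could be applied, assuming a hypothetical subset of the 20 basic services (the service names and stage assignments are invented; the stage labels follow the eEurope model):

```python
# Sketch of the EU four-stage maturity scoring applied to a hypothetical
# subset of the 20 basic public services; stage names follow the eEurope model.
STAGES = {
    1: "posting of information",
    2: "one-way interaction",
    3: "two-way interaction",
    4: "full online transaction (including delivery and payment)",
}

# Hypothetical current stage for a few services.
services = {"income taxes": 3, "car registration": 2, "building permits": 1}

for name, stage in sorted(services.items(), key=lambda kv: -kv[1]):
    print(f"{name}: stage {stage} ({STAGES[stage]})")
print(f"mean maturity: {sum(services.values()) / len(services):.1f} of 4")
```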
3) Strategic Objectives
Another common framework for assessing progress and measuring ESD success is to
report against a set of strategic objectives. Jurisdictions adopting such strategies identify
a set of high-level objectives and then track proxy measures for each objective.
4) Legislative Accountability
The most traditional framework for measuring and tracking success is to report progress
through an annual (or periodic) legislative process designed to maintain public
accountability. Such processes are often incorporated into the budget process so as to tie
strategic objectives with financial allocations.
In the United States, for example, agencies are required to submit an annual performance
plan as part of their budget submission.35 These plans contain performance goals and
indicators for the agency and are reviewed both by the executive Office of Management
and Budget (OMB) and by Congress. Performance indicators include both outcome and
output measures.
In addition to the annual performance plans, in October 2000 U.S. federal agencies also
submitted plans to the OMB outlining how they will meet specific ESD requirements set
out in the Government Paperwork Elimination Act (GPEA).36 In its guidance for the
implementation of GPEA, the OMB requires agencies to report progress against their
plan on an annual basis.37
5) Evaluation
A classic program evaluation effort often begins by drawing a logic model or program
theory model that visually connects the activities or inputs of a program with its outputs
and outcomes. The logic model explicitly details how actions lead to the realization of
objectives. Using this visual model, the organization can isolate actions, outputs, and
outcomes, choosing which points in the model would be most valuable to measure in
assessing the progress and/or success of the program. While this methodology is very
useful for assessing program success, the challenge is to figure out what points in the
chain are most important to measure. This challenge is especially difficult when
outcomes depend on collaborative efforts.
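A logic model can be represented very simply; the sketch below, built around a hypothetical program, shows how measurement points can be flagged along the chain from activities to outcomes:

```python
# Minimal logic-model sketch: a chain from activities to outcomes, with a
# flag marking where measurement is planned. All entries are hypothetical.
logic_model = [
    {"type": "activity", "name": "build secure payment channel", "measure": False},
    {"type": "output",   "name": "licence renewals available online", "measure": True},
    {"type": "outcome",  "name": "online take-up rate", "measure": True},
    {"type": "outcome",  "name": "client satisfaction", "measure": True},
]

measured = [step["name"] for step in logic_model if step["measure"]]
print("planned measurement points:", "; ".join(measured))
```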
With the help of KPMG, the Government of Canada is in the process of establishing an
evaluation framework. Based on a logic model of the Government Online (GOL)
program, the federal government has isolated the activities, outputs, and outcomes to be
measured.
In reporting these measures, a high-level scorecard will be used that focuses on whether
outputs or outcomes are being achieved in the context of six client-oriented objectives:
responsiveness, convenience, ease-of-use, interactivity, security, and privacy.
Figure: Government Online measurement scorecard (structure). Rows give the level of
measurement (outputs, department-level outcomes, client-level outcomes); columns give
the six results categories (responsiveness, convenience, ease-of-use, interactivity,
security, privacy). Check marks in the original indicate which categories are measured at
each level: five of the six categories for outputs, and four each for department-level and
client-level outcomes.
Each of the frameworks above focuses on measuring the progress and success of ESD from
an enterprise or corporate perspective, addressing the question “Is the government
succeeding with ESD?” A more common approach to measuring success is to focus on
the results achieved by individual applications.
Usage (Take-up or Adoption Rates): The most common way of measuring usage is to
calculate the number of transactions happening through electronic channels as a
percentage of all transactions – the take-up rate. These numbers can then be tracked over
time to see how quickly applications are adopted. British Columbia even breaks its
adoption rate numbers down by community to better understand where to focus its
marketing attention.
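As a formula, take-up is simply electronic transactions divided by all transactions. A minimal sketch of the community-level breakdown described above, with invented counts and community names:

```python
# Take-up rate sketch: electronic transactions as a share of all transactions,
# broken down by community in the spirit of BC's tracking. Counts are invented.
counts = {
    # community: (electronic transactions, total transactions)
    "Community A": (4_200, 15_000),
    "Community B": (900, 6_000),
}

for community, (electronic, total) in counts.items():
    print(f"{community}: take-up rate {100 * electronic / total:.1f}%")
```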
Productivity: While much less common, some jurisdictions are attempting to measure
productivity improvements. For example, in Ontario it is estimated that by using the
Cost Savings: Cost savings are probably the most elusive (and some would argue
illusory) aspect of ESD to measure. Getting a clear understanding of the costs
associated with delivering services through different channels is very difficult. In Texas,
however, the state government recently experimented with activity-based costing (ABC)
with some moderate success. ABC is a methodology that traces the full range of direct
and indirect resources associated with delivering a service. While many of the savings
identified in the Texas pilot were savings in productivity – and were not easily
translatable into hard dollars – by accounting for all resources, including items such as
fringe benefits, overhead, and depreciation of equipment, ABC enabled the state to
identify hard dollar savings within the five pilot agencies. For example, the study
suggested that replacing the Department of Transportation’s call center with a web-based
system could save taxpayers more than $240,000 per year.
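A minimal activity-based costing sketch, assuming hypothetical dollar figures (not the Texas numbers), that loads fringe benefits, overhead, and depreciation onto direct labour before comparing per-transaction costs across channels:

```python
# Activity-based costing sketch: full cost per transaction for two delivery
# channels, loading fringe benefits, overhead, and depreciation onto direct
# labour. All dollar figures are hypothetical, not the Texas numbers.
def full_cost_per_txn(direct_labour, fringe_rate, overhead, depreciation, txns):
    """Loaded labour plus allocated overhead and depreciation, per transaction."""
    loaded_labour = direct_labour * (1 + fringe_rate)
    return (loaded_labour + overhead + depreciation) / txns

call_centre = full_cost_per_txn(400_000, 0.25, 150_000, 30_000, 100_000)
web_channel = full_cost_per_txn(80_000, 0.25, 60_000, 50_000, 100_000)

print(f"call centre: ${call_centre:.2f}/txn, web: ${web_channel:.2f}/txn")
print(f"saving at 100,000 txns/year: ${(call_centre - web_channel) * 100_000:,.0f}")
```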
“Total Value” Return on Investment: In Iowa, ESD projects are assessed in terms of the
benefits to both government and citizens. In addition to “hard” costs and benefits (e.g.
hardware and staff time), Iowa’s ROI analysis estimates costs and benefits associated
with factors that are more difficult to quantify (and often ignored), including risks to
citizen health, impacts on security and safety, and the time and energy required by
citizens in fulfilling their roles in the process. By calculating ROI in this way, Iowa gets
a more complete picture of the value of each project.
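A sketch of a “total value” ROI calculation in the spirit of Iowa’s approach; the cost and benefit line items and dollar values are invented for illustration:

```python
# "Total value" ROI sketch in the spirit of Iowa's approach: hard costs and
# benefits plus monetized estimates of softer factors. All figures hypothetical.
hard_costs = {"hardware": 120_000, "staff_time": 80_000}
hard_benefits = {"postage_and_printing_saved": 60_000}
soft_benefits = {
    "citizen_hours_saved": 25_000 * 20.0,  # hours saved * assumed $/hour
    "reduced_safety_risk": 40_000,
}

total_costs = sum(hard_costs.values())
total_benefits = sum(hard_benefits.values()) + sum(soft_benefits.values())
print(f"total-value ROI: {(total_benefits - total_costs) / total_costs:.0%}")
```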
Unfortunately, efforts to measure aspects other than customer satisfaction and usage
are not very common.
From our review of how jurisdictions around the world are defining and measuring
success in ESD, it is clear that the diverse activities involved in delivering services
through electronic channels constitute an extremely large and complex system of
interrelationships. Many different types of inputs are involved including labor,
information infrastructure, and stakeholder inputs to a variety of government planning
processes. These inputs interact through complex processes – some run by the
government and some by contractors – to create the immediate outputs of the electronic
services themselves. These services, in turn, interact with a variety of forces in society to
generate outcomes such as service take-up and penetration rates as well as the end results
that most directly affect social values – productivity, fairness in how different individuals
and groups are treated, privacy, and security.
Figure 3: Simplified ESD Model. Inputs and activities feed processes, which produce
outputs, which in turn generate outcomes and values.
To manage complex systems we need to make estimates of the relative value to society of
various outcomes that can be created from the activities or resource-consuming options
we might pursue. The important judgment is to find the appropriate relationship between
benefits and costs, dedicating resources to produce desired outputs and outcomes.
A classic measurement problem in complex systems derives from the uncertain
cause-and-effect relationship between actions (costs) and outcomes (benefits). If we measure
how we are doing in achieving downstream values (outcome measures), we reduce
uncertainty about where we stand on achieving desired benefits, but we may not be able
to estimate the contribution of specific actions to those outcomes.
In that regard, how well balanced are our efforts to measure Canadian ESD?
What we have not seen yet is much measurement of ESD inputs. How much is being
spent on developing and delivering ESD? We have also not seen much measurement of
processes, process integration, or process change. For example, are electronic services
requiring a greater reliance on outsourcing or greater-than-normal investments in
innovation? In terms of outcomes we have not seen much measurement of the impacts of
ESD on productivity, internal rate of return (IRR), equity, privacy, or security.
Perhaps more important, we have not seen extensive efforts to measure the perceptions of
stakeholders.
Are we moving toward public targets and milestones at the expense of other success
criteria? You only get what you measure, so are we only getting what we measure?
While little systemic measurement is available for many inputs, processes, outcomes, and
human perceptions, there is disparate research that yields evidence of significant
shortcomings in ESD performance overall. For example:
What are the primary options for Canadian governments to consider for measuring and
assessing progress on electronic service delivery? In specifying these options, it is useful
to vary both what we measure and how we measure. There are a number of outcomes
and values we clearly care about as well as the causal chain that leads from input to
outcome. We can use different measurement instruments and different scales of
interpretation as we seek to understand the relevant cause-effect relationships. In varying
how we measure and interpret, however, we must remember that our resources are finite.
Measurement Framework: What is the optimal “balanced scorecard”?
Varying these factors in different ways to arrive at a measurement framework, the above
typology could produce hundreds of permutations. For the purpose of discussion and
debate, let us for now consider the following ten opportunities. These options are not
mutually exclusive and, in fact, often overlap. Furthermore, they are not set out as a
progression of options, but rather as a series of measurement opportunities to structure
the debate and discussion at Lac Carling V.
By answering these questions, we can describe the status quo or baseline against which
other measurement and assessment options can be analyzed. As we think about changes
in what we are doing, we can make comparisons against our measurement status quo. Is
what we are doing now good enough for the future, or are there significant measurement
opportunities out there to be addressed?
Beyond access and efficiency, there are other important values at stake that could (and
should) be measured and assessed. We should consider more thoughtful and precise
measures of privacy, focusing both on citizen perceptions and also on measures such as
legislation proposed, lawsuits filed, judgments awarded, etc. Similar efforts could be
made to measure security. Would it be valuable to try to measure the "multiplier effect"
of more efficient government on the overall performance of the Canadian economy and
its capacity to create high-value jobs? What about measuring the impacts of ESD on
equity? Broader yet, what evidence can we assemble that links ESD to citizen perceptions
of governmental competence, legitimacy, and trust?
If we are motivated to invest in ESD for many reasons other than making services
available electronically – and we clearly are – then we need to gain a balanced scorecard
of the results being generated by our ESD activities. That will require us to collect more
results-oriented measures than we presently collect.
It may be particularly important to get more precise about the estimated benefit/cost
(B/C) ratios for electronic services, where the benefits include all the measurable savings
to clients, to taxpayers, and to production units. We need to measure outcomes and
values, but we also need to understand the relationship between ESD activities and those
outcomes. To do this kind of assessment we will need better estimates than we presently
have about the dollars invested in electronic services, going beyond the typical
contracting and out-of-pocket costs to include the full costs of resources allocated for
training, operations, maintenance, etc. Measuring costs may not have been a priority
when we were first getting started with ESD since the dollar figures involved were small.
But the costs are growing now and if the overall economy continues to weaken
governments will be pressured to cut back on all new programs that cannot prove their
worth.
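In its simplest form, the B/C ratio divides the sum of measurable benefits to clients, taxpayers, and production units by the full cost of the service. A sketch with hypothetical annual figures:

```python
# Benefit/cost ratio sketch: measurable benefits to clients, taxpayers, and
# production units against the full cost of an electronic service, not just
# contracting costs. All figures are hypothetical annualized dollars.
benefits = {
    "client_time_saved": 300_000,
    "taxpayer_savings": 150_000,
    "production_unit_savings": 90_000,
}
costs = {
    "contracting": 200_000,
    "training": 40_000,
    "operations": 80_000,
    "maintenance": 30_000,
}

bc_ratio = sum(benefits.values()) / sum(costs.values())
print(f"B/C ratio: {bc_ratio:.2f}")  # a ratio above 1.0 suggests net benefit
```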
How should we be measuring these developments? What information and feedback will
help assess our progress on the “integration agenda”? At one level, we should track the
level of resources we are investing in multi-agency and in public-private collaborations.
What are the trends? Are we moving towards more outsourcing, leaving government with
the work of “steering, not rowing”? How many partners do we have? Even more
important than the number of partners involved, however, is the quality of the
partnerships.
Furthermore, when systems are being radically reconfigured and reinvented the
measurement and control process is extremely important. Parties within the system need
The broad movement towards electronic service delivery will undoubtedly be more
important to long-term social welfare than many of the issues that are currently the
subject of intensive polling and surveys. What would it mean to the ESD movement to
take survey work seriously and to systematically probe the views of producers (front-line
workers, supervisors, senior management), consumers (citizens and the general
public/taxpayers), and overseers (elected officials and the press)?
Surveys can be an extremely cost-effective measurement tool because they can conserve
costs through sampling techniques. They can also be structured to leverage their costs
over time if a core structure of questions allows the database to be deepened and
extended through subsequent surveys. Stakeholder surveys can provide critical
information on the perceived value of results. In addition, they can provide equally
critical information about implementation as they help to measure perceptions of the
feasibility of particular projects. Drawing on the UK’s “People’s Panel” example, would
it be valuable to create a "citizen panel" of substantial size to serve as focus group, test
market, and outcome evaluator of our ESD work? Would it be possible to talk a
Stakeholder surveys, of course, are not the only instruments for collecting feedback on
our ESD systems. We can use other instruments such as accounting systems and
performance measurement systems. In looking at the various data collection instruments
at our disposal, however, one of the most cost-effective instruments seems to be
stakeholder surveys. This instrument should be used to collect views from front-line
workers, managers, and overseers as well as from individual citizens.
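The cost-effectiveness of sampling rests on familiar arithmetic: for a simple random sample from a large population, the margin of error depends on the sample size rather than the population size. A small illustration of the standard approximation:

```python
# Why sampling conserves costs: for a simple random sample from a large
# population, the margin of error depends on sample size, not population size.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sampled proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1_000):
    print(f"n = {n:>5}: +/- {100 * margin_of_error(n):.1f} percentage points")
```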
7. More measurement against goals and time frames. A critical element of all measuring
systems is not just the instrument that takes the measures, but the scale against which the
measures are reported and judgments made. A “15” on a test means something different
if it is considered an “F” rather than an “A.”
The most obvious and readily available scale is comparing current performance against
performance of the same organization at an earlier period of time. For ESD this
comparison can be difficult because many programs do not have much of a history
against which to report. Nevertheless, planning for such reports should be a priority.
These comparisons become more important when the “expected” pattern of performance
is not “what you did last time,” but some “normal growth” above what you did last time.
There are many “maturity models” for judging performance when growth is expected,
and many of the consulting firms have developed such models for ESD.
In all of these comparisons there is a search for meaning, which is usually a search for
ways to improve performance in the future. The most explicit and straightforward of
these comparisons occurs when specific targets have been set for performance. While
many governments have set such targets (100% E by 2003!), the goal-setting process has
often failed to penetrate very deeply. While it is possible to have too many goals as well
8. More competitive benchmarking. How are others doing this work and – in particular –
how are the “best performers” in the world doing it? Governments do a certain amount
of benchmarking, but might learn a great deal from more aggressive benchmarking on
ESD issues and opportunities. Is the EU’s benchmarking model in which they are
tracking the availability of services a useful one to copy? Can we expand our
benchmarking effort to look at process or value measures as well? Developing broad
scorecards on electronic service delivery and publishing the results will certainly raise the
competitive stakes. While the need for collaboration suggests that too much competition
could be detrimental, we should not overlook the power of competition to push us
forward.
9. More analysis – More effort to use the feedback we get from measures and assessments
in the politics and management of government services. Feedback is not valuable in and
of itself, but rather to the extent that it informs analysis and choices of future activities.
To be useful in this way, it must be inserted into powerful decision-making processes.
In this context, how well is the feedback available to the ESD movement being used
today and how could it be used better? The primary efforts to use performance measures
in governmental decision making tend to make them a requirement of the budgeting or
strategic planning process. We are beginning to see such requirements in governmental
ESD work – for example the Government Performance and Results Act and its (less than
stellar) implementation within the U.S. What could be done to insert feedback measures
more forcefully into government decision making and/or make such measures more
visible and influential with the public?
10. Significant Increase in Resources and Level of Effort – Perhaps a doubling or tripling
of present levels of investment in electronic services measurement. This option would
involve a combination of all the options above. But what kind of expanded effort would
11. Some other option. After all, there is always SOME other option.
The above is a starter list to encourage thinking about measurement and assessment of
government ESD in Canada. We need to balance resources invested in direct activities
against those invested in measurement for feedback and control. We need to measure the
right processes and outcomes while using the right instruments and evaluation scales. So
far we have not had to worry too much about measurement problems. Everyone has been
readily convinced that ESD offers a good set of opportunities. Now, however, as the
movement gets bigger and more costly, and as economic growth becomes more
problematic, we are sailing in more difficult waters. We clearly need better tools for
navigation and control.
As we think about these and other options for improving ESD measurement, what are the
primary values at stake and the criteria to be used in making decisions? How can we
evaluate the above ten (or eleven) options and make practical decisions about our
investment in measurement?
Efficiency – or the degree to which control will be improved relative to the costs required
for improvement. Efficiency can be improved by establishing standards so that
measurements taken independently can be aggregated into more comprehensive
measures. Standards also allow measures to be compared across time and jurisdictional
boundaries. Efficiency, from the perspective of the government, may also be improved
by giving incentives to other institutions to do the measurement (as when various “good
government” groups do the surveys or other measurement work). We are normally
interested not only in the immediate cost of getting a measurement program underway,
but also in the long-term costs of keeping it going.
Using the above criteria, we have tried to assess each of the ten measurement
opportunities on a quantitative scale of 1 (low) to 5 (high). Such a scale is obviously not
“objective” in the sense that reasonable people will disagree with our numbers. That
said, we think that the results may be useful to provoke important dialog and learning. If
nothing else, the framework provides a systematic way of organizing the judgment calls
that need to be made in designing investments in ESD measurement and serves as the
basis of our conclusions and recommendations. Our results are based on our own
research and perspectives and can be summarized as follows.
Figure 6.
Preliminary Evaluation of the Value, Efficiency, and Difficulty
of Implementing Eleven Options for Measuring and Assessing
Canadian Electronic Service Delivery Activities
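Figure 6’s table is not reproduced here. As an illustration only, the sketch below shows how such a 1-to-5 assessment might be tabulated and screened; the scores are hypothetical placeholders, not the authors’ actual ratings:

```python
# Illustration only: how a 1 (low) to 5 (high) assessment like Figure 6
# might be tabulated and screened. Option names follow the paper's list;
# the scores are hypothetical placeholders, not the authors' ratings.
options = {
    "more stakeholder surveys":       {"value": 4, "efficiency": 4, "difficulty": 2},
    "more competitive benchmarking":  {"value": 4, "efficiency": 3, "difficulty": 3},
    "significant increase in effort": {"value": 5, "efficiency": 2, "difficulty": 5},
}

for name, s in options.items():
    # Simple screening heuristic: favour high value and efficiency, low difficulty.
    net = s["value"] + s["efficiency"] - s["difficulty"]
    print(f"{name}: net score {net}")
```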
Using the evaluations presented above, how can we characterize present patterns of ESD
measurement and assessment? What sorts of discussion and issues should we focus on
most closely at Lac Carling V? Consider the following.
1. ESD measurement so far has been modest but probably adequate – what we have
done has focused heavily on the availability of electronic services. In getting the ESD
movement started, measurement has frankly not been all that important. We have enjoyed
a broad consensus that getting started has been a good thing. We have used targets to
build support and momentum, with most of the measurement activity focused on the
breadth of services and not on the depth or richness of the services.
Some of this measurement should be done as stakeholder surveys (see above). But other
measurement will also be important. If carried out on a sample rather than a system-wide
basis, new measurements could greatly improve the feedback needed for guidance yet not
be very costly. These efforts would not collect measures from all agencies, but from a
well-chosen sample. As we discussed in the beginning of this paper, the values or
objectives for each jurisdiction will necessarily vary according to the values and
objectives of society. Nevertheless, some possible measures could include:
• Measure access. Use samples of different functions and transactions to measure the
volume and percentage of transactions actually taking place through the new
electronic channels in comparison with traditional face-to-face and other offerings.
Are changes in access changing the total penetration rates of government services?
Are they changing the demographic patterns of service use? As a simple but possibly
telling number, do they significantly change the portion of government transactions
that occur outside of normal “government hours”?
• Measure service quality and efficiency. Design traditional time and motion studies
and/or participant observation studies to compare electronic versus other channels
with respect to: the unit cost of services to the government, the internal rate of return
(IRR), the cost to the citizen, the response time to citizen requests, and error rates.
Other indicators of cost and quality such as the migration of users from one channel
to another or the integration of service delivery channels might also prove valuable.
Taking both quality and cost into consideration, and using rigorous activity-based
Note that we are now playing a rather new game that relies much more on innovation
than was true for governments in the past. If this is true, we need to measure more
assiduously than before the size of the investments we are making in innovation. Our
innovations portfolio may be the most important part of our budget, even if it is a
relatively small portion overall. We need to develop the knowledge bases that will let the
“fast followers” of government move faster and more effectively in diffusing
authentically better new practices.
None of this is readily condensed into a single measure of success. This exercise is NOT
like saying “100%E by 2003!” Rather, we need to assemble evidence to make a reasoned
assessment of how the new investments are faring. It is a measurement task, but of a
complex and often non-quantitative sort.
5. Don’t avoid the unpleasant stuff: go for more benchmarking. While the measures
described above should be extremely valuable to the public sector community as a whole,
they are often based on the kind of samples and studies that avoid uncomfortable
comparisons of one program or ministry against another. If so, they will not be enough.
We need feedback for individual programs and even individual managers as well as the
public sector community overall. To get those measures, we should focus on the kind of
comparative data that is required for a rational assessment of performance and
accountability. Benchmarking is necessary to help drive progress. While much of what
we will do together in deploying ESD can and will be done collaboratively, we also need
to make the “race” a bit more visible. Comparative data and benchmarking for ESD are
required to help overcome complacency and resistance to change. There are, of course, a
number of different ways such benchmarking could be structured. We could focus on
6. Beyond the obvious next steps, maybe we need a very substantial increase in
measurement and assessment efforts? Certainly, after a few more years of improving
measurement through high-priority steps like those above, we will perhaps be at the point
where a much larger effort is warranted. After all, measurements are cheap, especially
if they buy better control.
* * * *
As we mentioned in the beginning, the purpose of this paper has not been to assess whether
Canada is succeeding with ESD, to gauge how different jurisdictions in Canada compare
with one another, or to judge how Canada compares to other jurisdictions. Instead, we
have offered a starting point for discussing how we can and should define and measure
progress on the ESD agenda. Our goal has been to encourage debate and discussion at
Lac Carling V and even to assist the group in developing specific actions to follow up
after the conference. It is with this thought that we turn our attention to Lac Carling V.
British Columbia:
- Launched BC Connects – first phase of ESD portal
- Integrating municipal governments in OneStop business registration
- Soon to deliver ability to accept credit card payments online
Alberta:
- Launched One-Window Initiative to coordinate service delivery across boundaries
- Extended Alberta Wellnet to link the health care community together
Saskatchewan:
- Launched citizen-centered portal re-organized by functionality
- Finalized government online strategy with goals to have all forms available
electronically by 2002 and 90% of transactions by 2004
- Implemented “shopping cart” functionality and credit card transactions
Manitoba:
- Launched online personal property registry
- Enabled students to calculate eligibility for financial assistance and apply online
- Enabled parents to calculate day care subsidy; developing a database to search for
available day care spaces
Ontario:
- Set a goal of 2003 to be a world leader in delivering services online
- Issued an RFP for an Integrated Service Delivery platform for individuals
- Launched E-Laws to provide online access to up-to-date provincial laws
Quebec:
- Passed a law mandating the modernization of public services to improve service
quality
- Dedicated resources to connect families and businesses to the information economy
- Focusing on security (especially PKI) and looking toward a "smart" health card
New Brunswick:
- Launched land title registry online
- Launched motor vehicle registrations online
- Implemented an e-commerce suite that includes “shopping cart” style functionality
Nova Scotia:
- Launched online motor vehicle renewal; includes real-time access to survey feedback
- Launched business registration and online procurement information
- Modeled Service Nova Scotia site to offer seamless citizen-centered access to
services
Newfoundland:
- Launched motor vehicle registration and traffic payment transaction engine that is
integrated with government backend as well as the Royal Bank backend
- Enabled students to apply for financial aid online
Nunavut:
- Completed Phase 1 of IT strategy in which IT functions were centralized and
enterprise applications launched in a stable environment. Phase 2 will review
inventory and programs in order to prioritize future development
- Established a task force to examine broadband access in Nunavut
- Provided Inuktitut syllabics online as a first step toward making Inuktitut the working
language of government
Northwest Territories:
- Focused on developing telemedicine and distance education capacity
- Using Wire North Initiative to bring stakeholders together
Yukon:
- Redesigned web site with citizen-centered functional index
- Posted single change of address form (not yet interactive)
- Working with federal, municipal, and First Nations governments to integrate telephone
listings as a first step toward stronger citizen-centered integration
Enhanced services.
- timeliness, reliability, accuracy
- client-centric streamlined processing harmonized across programs/jurisdictions
- universal access, choice of channels
- keeping in tune with what clients want and need
Enhanced accountability.
- clients know who to hold accountable for what
- service providers and government officials are accountable for their actions
- compliance targets and service level agreements are met or exceeded
- decisions are made quickly and transparently, with full information
Enhanced relevance.
- lead in third-party benchmarks
- brand recognition
- meet or exceed take-up rate targets
- be sought after to form alliances/partnerships
Cohen, Steven and William Eimicke. (2001). The Use of the Internet in Government
Service Delivery. Prepared for the PricewaterhouseCoopers Endowment for the Business
of Government.
Council of the European Union. (2001). eEurope 2002: An Information Society for All:
Impact and Priorities. Communication from the Commission to the Council and the
European Parliament.
Council of the European Union. (2000). eEurope 2002: An Information Society for All:
Action Plan. Prepared for the Council and the European Commission for the Feira
European Council.
Heath, William. (2000). Europe’s readiness for e-government. Kable and Government
Computing.
Netherlands, Government of. (2001). 25% electronic public service delivery in the
Netherlands.
Schmidt, Faye Nella, with Teresa Strickland. (1998). Client Satisfaction Surveying:
Common Measurements Tool.
Schmidt, Faye Nella, with Teresa Strickland. (1998). Client Satisfaction Surveying: A
Manager’s Guide.
Service New Brunswick. (2000). Service New Brunswick Annual Report 1999-2000.
Stanley & Milford. (2000). maxi Review: Attitudes and Behaviour of maxi Users.
Report prepared for Multimedia Victoria.
Stokey, Edith, and Richard Zeckhauser. (1978). A Primer for Policy Analysis.
United Kingdom Cabinet Office. (2000). e.gov: Electronic Government Services for the
21st Century.
United Kingdom Local Government Association and the Department of the Environment,
Transport, and the Regions. (2001). e-Government Local Targets for Electronic Service
Delivery.
United Kingdom Local Government Association and the Department of the Environment,
Transport, and the Regions. (2001). e-Government Delivering Local Government
Online: Milestones and Resources for the 2005 Target.
Victoria, Government of, with Simsion Bowles and Associates. (1998). Online Service
Delivery: Lessons of Experience.
West, Darrell. (2000). Assessing E-Government: The Internet, Democracy, and Service
Delivery by State and Federal Governments.
Presentations
Australia, National Office for the Information Economy. (2000). TIGERS: Electronic
Framework (Signpost) Consultancy Report.
d’Auray, Michelle. (2000). Government On-Line: Serving Canadians in the Digital Age.
Presentation to North America Day delegates.
Erin Research. (2000). Citizens First 2000. Presentation to the Treasury Board of
Canada Secretariat.
Milrad, Lewis H. (2001). Dealing with Security and Privacy: A Special Obligation for
Governments Online? Presentation to MuneGov 2001.
The authors would like to thank the following people for taking the time to offer their
insights and ideas.