Werner Bellingan
BY
WERNER BELLINGAN
SEPTEMBER 2007
15 September 2007
The name of the organisation under review is kept confidential because the
work presents potentially sensitive information. It is referred to as Company X.
Yours faithfully,
WERNER BELLINGAN
I, Werner Bellingan, hereby declare that:
____________________ __________________
Werner Bellingan Date
ACKNOWLEDGEMENTS
The successful completion of this study would not have been possible without
the support and assistance of the following parties, whom I would like to thank
in particular:
• My Lord and Saviour, Jesus Christ, for giving me my abilities and the
opportunities to fulfil my destiny in life.
• My wife Zelda and son Danté, for their invaluable support, patience and
love.
• My parents, Otto and Susan, for their encouragement and love.
• My promoter, Prof Koot Pieterse, for his guidance and assistance.
• Dr Annelie Pretorius for checking the technical correctness of this paper.
• Jacques Pietersen from the Nelson Mandela Metropolitan University for
doing the statistical analysis.
• Debbie Box for proofreading this dissertation.
• Andrew Theron, my employer at the time of the research, for giving me
his full support.
• All the respondents who completed the questionnaire.
• Christine Gross, who tirelessly sent out hundreds of pieces of
correspondence to potential respondents.
• Kevin Jackson who assisted with the computer aided design (CAD)
draughting of graphs.
SUMMARY
The civil engineering industry in South Africa has seen a steady decline in the
number of professionals over the last few decades, yet the government and
private sectors are expected to spend over R200 billion on infrastructure in the
next few years. This increases the demand on civil consulting engineering firms
to achieve greater productivity with reduced time and human resources, which
has had a profound effect on the quality of service delivered to clients. These
firms need to gain a competitive advantage by consistently providing Service
Excellence that is superior to that of their competitors. One way of achieving
this is by benchmarking firms against their competitors.
In this research paper the Service Quality and Service Recovery procedures of
Company X in Port Elizabeth were benchmarked against those of its
competitors using a customised form of the recognised SERVQUAL research
instrument, the SERVPERF questionnaire. The results proved invaluable
because the survey revealed insightful information which the firm can use to its
strategic benefit. Civil consulting engineering firms need to be aware that
Service Excellence is an imperative in the service industry, but they do not
necessarily have to be perfect. Firms simply need to outperform their
competitors to be rated as market leaders.
Strategies to improve the Service Quality and Service Recovery of the firm
under review are suggested, and this work concludes with proposals for future
research projects which may benefit the researcher, the civil engineering
industry and the economy of South Africa.
TABLE OF CONTENTS
Page
CHAPTER 1 : THE BACKGROUND AND METHODS OF STUDY
1.1 INTRODUCTION 1
1.2 PROBLEM STATEMENTS AND OBJECTIVES 2
1.2.1 MAIN PROBLEM STATEMENT AND OBJECTIVES 2
1.2.2 SUB-PROBLEM STATEMENT AND OBJECTIVES 3
1.3 OVERVIEW OF RELATED LITERATURE 3
1.4 DEFINITION OF KEY CONCEPTS 4
1.4.1 SERVICE QUALITY 4
1.4.2 CLIENT SATISFACTION VERSUS SERVICE QUALITY 5
1.4.3 TOTAL QUALITY MANAGEMENT 6
1.5 SCOPE AND DELIMITATION OF THE RESEARCH 8
1.6 ASSUMPTIONS 8
1.7 LIMITATIONS 9
1.8 SUMMARY 9
2.1 INTRODUCTION 11
2.2 SERVICE QUALITY 11
2.2.1 A SERVICE QUALITY MODEL 13
2.2.2 SERVICE LEADERSHIP 15
2.3 CLIENT SATISFACTION 16
2.3.1 SERVICE SATISFACTION FRAMEWORK 16
2.3.2 MEASURING CLIENT SATISFACTION 17
2.4 MEASURING SERVICE QUALITY 19
2.5 PRIOR RESEARCH 22
2.6 THE IMPORTANCE–PERFORMANCE MATRIX 23
2.6.1 THE “APPROPRIATE” ZONE 24
2.6.2 THE “IMPROVE” ZONE 24
2.6.3 THE “URGENT ACTION” ZONE 24
2.6.4 THE “EXCESS?” ZONE 24
3.1 INTRODUCTION 35
3.2 RESEARCH METHODOLOGY 35
3.2.1 TRIANGULATION 36
3.3 RESEARCH QUESTIONS AND HYPOTHESES 36
3.3.1 RESEARCH QUESTIONS 36
3.3.2 RESEARCH HYPOTHESES 37
3.4 THE RESEARCH INSTRUMENT 39
3.5 CHOICE OF SAMPLE 40
3.6 SUMMARY 40
4.1 INTRODUCTION 41
4.2 DATA COLLECTION 41
4.3 DATA ANALYSIS 42
4.3.1 RESPONSE RATE 43
4.3.2 CRONBACH’S ALPHA RELIABILITY ANALYSIS 45
4.4 SERVICE QUALITY DIMENSIONS 47
4.4.1 RELIABILITY 47
i. Question 1 – “When they promise to do something
by a certain time, they do so.” 48
5.1 INTRODUCTION 73
5.2 THE SERVQUAL/SERVPERF RESEARCH INSTRUMENT 73
5.2.1 COMPARING EXPECTATIONS AND PERCEPTIONS
OF CLIENTS OVER TIME 73
5.2.2 COMPARING SERVPERF SCORES OF COMPANY X
AGAINST COMPETITORS OVER TIME 74
5.2.3 CATEGORISE CLIENTS INTO SEGMENTS OF
DIFFERENT QUALITY PERCEPTIONS 75
5.2.4 ASSESSING SERVICE QUALITY PERCEPTIONS OF
INTERNAL CLIENTS 75
5.2.5 ASCERTAIN THE SERVICE QUALITY PERCEPTIONS
OF CONTRACTORS 76
5.2.6 ASCERTAIN THE SERVICE QUALITY PERCEPTIONS
OF OTHER PROFESSIONALS 76
5.2.7 ASSESSING THE SERVICE QUALITY OF CONTRACT
WORKERS 76
5.3 RECOMMENDATIONS 77
5.3.1 IMPROVING SERVICE QUALITY 77
i. Closing the Service Quality Model Gaps 79
5.3.2 IMPROVING SERVICE RECOVERY 81
5.3.3 IMPROVING TOTAL QUALITY MANAGEMENT 83
5.3.4 IMPROVEMENTS SUGGESTED BY CLIENTS FOR
COMPANY X 84
5.4 SUGGESTED IMPLEMENTATION PLAN 85
5.5 SUGGESTIONS FOR FUTURE RESEARCH PROJECTS 86
5.6 CONCLUSION 88
REFERENCE LIST 90
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1
1.1 INTRODUCTION
The civil engineering industry in South Africa has seen a steady decline in the
number of professionals during the last few decades. This can be attributed to
factors such as a reduction in the industry demand, a reduction in the number of
graduates, an increase in the number of emigrations and poor financial and
other rewards. The result is that personnel have left the industry at a higher
rate than those professionals entering it through tertiary institutions and
immigration (Lawless, 2006).
South Africa will need, according to Lawless (2006), between 3 000 and 6 000
additional civil engineers, technologists and technicians, depending on whether
projects are run concurrently. According to Van Zyl (2006), the Eastern Cape
construction industry is in a growth phase and is expected to gain further
momentum. The total number of commercial building plans approved in the first
four months of 2006 grew by more than 300 per cent over the previous year and
this figure is the highest of all the provinces in South Africa (Van Zyl, 2006).
This increased demand for Civil Consulting Engineering Firms (CCEF) to be
more productive, with reduced time and human resources, has a profound
effect on the quality of service delivered to clients.
The two Service Quality dimensions, according to Gardiner (2004: 56), that
require the most action in the civil consulting engineering industry in Port
Elizabeth (PE) are Reliability and Responsiveness. Client perceptions of these
two dimensions were benchmarked against those of the competitors of
Company X in PE, providing a relative ranking of Company X in terms of its
Service Quality. The results of this survey may prove invaluable to Company X.
Civil consulting engineering firms are unaware of the perceptions of their clients
with regard to their Service Quality. Most do not have the measures in place to
gain valuable feedback from their clients to provide better Service Quality. It is
undesirable for a CCEF to hold an unrealistic belief about how its clients
perceive its Service Quality.
The main problem statement of this research is: “How can Company X improve
its Service Quality to gain a competitive advantage?”
The first step in the research was to conduct a comprehensive literature search
on Service Quality and Service Recovery. Online databases such as SABINET,
EBSCOHOST, Emerald and Google were used to obtain relevant information.
Various books, journals and other relevant media such as newspapers were
also consulted.
The majority of clients of CCEF focus on the quality of service rather than the
quality of work. It is difficult for clients to appraise technical excellence,
therefore the personal relationship between the client and the firm is important
(Maister, 2003: 71). Maister (2003: 76) observed that few professional services
firms give attention to improving Service Quality.
Service Quality is different from product quality because goods are consumed
and services are experienced (Maister, 2003: 71). The following are the main
differences between goods and services (Zeithaml, Parasuraman & Berry,
1990: 15-16):
• Services are predominantly intangible. Services, unlike goods,
usually cannot be measured, tested and verified in advance of sale to
ensure quality. The selling of a service is purely a performance,
therefore the criteria clients use to evaluate it are complex and hard to
capture accurately;
• Services are heterogeneous. The performance of services varies from
one service provider to another, from client to client and over time. The
interactions between the staff of CCEF and the clients cannot be
standardised to ensure uniformity in the way that the quality of goods
produced is;
• Production and consumption of many services are inseparable.
Service Quality often occurs during service delivery, rather than being
delivered to the client as manufactured goods. Service providers do not
have the advantage of factories serving as buffers between production
and consumption. Service clients are said to be in the service factory.
Leading service firms have identified total client satisfaction both as a goal and
as an imperative. Client satisfaction should not be viewed as identical to
Service Quality: Service Quality does not necessarily lead to client satisfaction,
and client satisfaction is not necessarily an antecedent of Service Quality
(Gardiner, 2004).
Two approaches assist with the achievement of total client satisfaction, namely
Service Recovery and Service Guarantees.
The reputation of a CCEF is built by its quality, reliability, delivery and price.
Quality is the most important of these. Quality is meeting the requirements of
the client and is not restricted to the functional characteristics of the services
(Oakland, 2003: 16).
Planning, People and Processes are the keys to ensuring Service Quality,
which improves overall Performance. The four Ps of Planning, People,
Processes and Performance form a structure of management necessities,
which form the TQM model as illustrated in Figure 1.1 (Oakland, 2003: 26).
[Figure 1.1: The TQM model: the four Ps (Planning, People, Processes,
Performance) linked by Culture and Communication (Oakland, 2003: 26)]
1.6 ASSUMPTIONS
1.7 LIMITATIONS
Only the dominant CCEF in PE were included in the research instrument
(questionnaire), and provision was made for clients to specify and rate other
firms with which they had dealings. Company X was benchmarked against its
ten main competitors in PE and against the civil consulting engineering industry.
The latter comprised the average rating of all the firms in PE.
The questionnaire included nine questions covering the two Service Quality
dimensions, Reliability and Responsiveness, and one question pertaining to
Service Recovery. Its results are limited to the Gap Scores of the
aforementioned and no conclusions or inferences could be made on any other
issue relating to Service Quality or Service Recovery. The research was limited
to current and previous clients, as suggested by Parasuraman, Zeithaml and
Berry (1988: 31), because meaningful responses to the perception statements
of the SERVQUAL questionnaire require respondents to have some knowledge
of or experience of Company X.
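To make the notion of a Gap Score concrete, the sketch below shows how such scores might be computed from questionnaire ratings. All dimension groupings and ratings here are hypothetical illustrations, not data from the survey:

```python
# Illustrative sketch of SERVQUAL-style Gap Score calculation.
# All ratings below are hypothetical examples, not survey data.

# Each item is rated twice on a 7-point Likert scale:
# once for expectations (E) and once for perceptions (P).
expectations = {"reliability": [7, 6, 7, 6], "responsiveness": [6, 7, 6]}
perceptions  = {"reliability": [5, 5, 6, 4], "responsiveness": [5, 6, 5]}

def mean(xs):
    return sum(xs) / len(xs)

def gap_scores(expected, perceived):
    """Gap Score per dimension = mean perception - mean expectation.
    A negative score means the service fell short of expectations."""
    return {dim: round(mean(perceived[dim]) - mean(expected[dim]), 2)
            for dim in expected}

print(gap_scores(expectations, perceptions))
# e.g. {'reliability': -1.5, 'responsiveness': -1.0}
```

(A SERVPERF variant would use only the perception ratings, but the Gap Score form above matches the Gap Scores the limitations refer to.)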
1.8 SUMMARY
The problem statements and objectives of this research were identified in this
introductory chapter. The overview of the related literature and the definition of
key concepts and the scope and delimitation of the research have set the
parameters of this dissertation, which is structured as follows:
CHAPTER 2
2.1 INTRODUCTION
The aim of this chapter is to analyse and present the findings of a literature
review that determines the factors influencing the Service Quality and Service
Recovery of CCEF. It includes prior research and the application of Service
Quality and Service Recovery in practice.
Excellent service is beneficial in the short and long term because it creates true
clients. These are clients who are pleased that they have chosen a firm after
the service experience and clients who will come back for repeat business. The
positive relationship between perceived quality and profitability has been
documented in the database of the Profit Impact of Market Strategy (PIMS)
programme. Figure 2.1 (from the PIMS database) illustrates the relationship
between relative perceived quality and Return On Investment (ROI) (Zeithaml et
al, 1990: 10).
[Figure 2.1: Percentage ROI (0% to 35%) plotted against relative perceived
quality (20% to 100%), from the PIMS database]
The Service Quality Model (Zeithaml et al, 1990: 46) focuses on deficiencies
within firms that contribute to poor Service Quality perceptions by clients.
Organisations do not always meet the expectations of their clients and the
differences between the expected and perceived service are called “Gaps”. It is
noted that should firms fail to meet the expectations of clients, it does not
necessarily result in dissatisfied clients. Figure 2.2 depicts a Conceptual Model
of Service Quality as developed by Zeithaml et al (1990: 46).
[Figure 2.2: Conceptual Model of Service Quality (Zeithaml et al, 1990: 46),
showing Gap 5 between Expected Service and Perceived Service on the client
side, and Gaps 1 to 4 on the provider side linking Management Perceptions of
Client Expectations, Service Delivery and External Communications]
The Gaps as illustrated in Figure 2.2 are narrowed through closing the provider
gaps (Zeithaml et al, 1990: 49):
• Gap 1 – The discrepancy between the expectations of the clients and the
perceptions of management of those expectations. This translates into
not knowing what the client wants;
• Gap 2 – The discrepancy between perceptions by management of client
expectations and Service Quality specifications. This translates into not
selecting the right service designs and standards;
• Gap 3 – The discrepancy between actual Service Quality specifications
and the actual service delivery. This translates into not delivering to
service standards;
• Gap 4 – The discrepancy between service delivery and what is
communicated to clients. This translates into not matching performance
to promises;
• Gap 5 – The assessment of the client about Service Quality.
The factors that contribute to Gap 1 through Gap 4 are as follows (Zeithaml et
al, 1990: 35):
• Gap 1
o Insufficient market research;
o Inadequate use of market research findings;
o Lack of interaction between management and the clients;
o Insufficient upward communication from contact personnel to
management;
o Too many staff levels between contact personnel and management.
• Gap 2
o Inadequate management commitment to Service Quality;
o Perception of infeasibility;
o Inadequate standardisation of tasks;
o Absence of goal setting.
• Gap 3
o Employee role ambiguity;
o Role conflict;
o Poor employee-job fit;
o Poor technology-job fit;
The main reason that Service Quality is not at the desired level is a lack of
sufficient Service Leadership. Zeithaml et al (1990: 5) state that “Too many
workers are over-managed and under-led”. Service Leadership means profit,
and is an integral part of any business. The following are important
characteristics of Service Leadership (Zeithaml et al, 1990: 5-8):
• Service Vision. Service leaders view Service Quality as the basis for
competing. Service Excellence is a central part of the vision. Service
leaders realise that Service Excellence requires constant attention;
• High Standards. Service leaders strive to achieve the right service the
first time;
• Hands-on Leadership style. Service leaders lead from the field, as
opposed to from their desks;
• Integrity. Personal integrity is vital to successful Service Leadership.
There is a strong connection between Service Excellence and employee
pride. This pride is partly shaped by their perceptions of management
fairness.
The difference between Service Quality and client satisfaction has been
introduced and briefly discussed. Client satisfaction is a subjective concept,
because expectations differ from client to client. Any firm that wants to assess
its performance needs to distinguish between measuring the following (Van
Looy, Gemmel & Van Dierdonck, 2003: 125):
• Perceived Service Quality;
• Client satisfaction;
• Technical Quality.
[Figure 2.3: The Service Satisfaction Framework, tracing an existing client
through the satisfied, complaining, recovered and delighted states (Van Looy et
al, 2003)]
It can be deduced from this framework that only a certain percentage of clients
who are dissatisfied make their complaints heard. Client satisfaction
management and complaint management are crucial parts of a strategy to
increase client loyalty, and ultimately increase profits. This is achieved when
firms minimise client defections, have effective Service Recovery strategies and
maximise repeat business. Service satisfaction is achieved through client
satisfaction measurement and complaint management (Van Looy et al, 2003:
125-126).
Benchmarking that is done over time serves as a good indicator of whether the
client focus efforts of the organisation are successful. Van Looy et al (2003:
128) state that outperforming competitors may yield more than merely achieving
the highest possible performance. Hence, it is not uncommon to benchmark
client satisfaction scores against those of competitors, both in terms of actual
performance and the rate of improvement.
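As an illustration of what such benchmarking over time might look like in practice (a minimal sketch; the firm names, years and scores below are hypothetical, not survey results):

```python
# Illustrative benchmarking sketch: a firm's client-satisfaction score is
# compared with the average of its competitors, both for the actual level
# and for the rate of improvement over time. All firm names and scores
# below are hypothetical, not survey results.

scores = {
    "Company X":    {2005: 5.1, 2006: 5.6},
    "Competitor A": {2005: 5.4, 2006: 5.5},
    "Competitor B": {2005: 4.9, 2006: 5.2},
}

def competitor_average(firm, year):
    """Mean score of all firms other than `firm` for the given year."""
    others = [f for f in scores if f != firm]
    return sum(scores[f][year] for f in others) / len(others)

for year in (2005, 2006):
    print(f"{year}: Company X {scores['Company X'][year]:.2f} "
          f"vs market {competitor_average('Company X', year):.2f}")

# Rate of improvement relative to the competitors
own = scores["Company X"][2006] - scores["Company X"][2005]
market = (competitor_average("Company X", 2006)
          - competitor_average("Company X", 2005))
print(f"Improvement: {own:+.2f} vs market {market:+.2f}")
```

In this hypothetical data the firm trails the market in the first year but improves faster than its competitors, which is exactly the kind of pattern a one-off survey cannot reveal.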
One way to measure specific aspects of a service is to divide it into its quality
dimensions. These dimensions are related to the needs of the clients. A
perfect measurement model, according to Van Looy et al (2003: 132), needs to
have the following characteristics:
• The various dimensions are valid across a wide range of services (that
is, universal);
• The dimensions are independent, each measuring a different aspect of
service quality perceptions;
• The dimensions comprise a comprehensive set;
• The dimensions are homogenous;
• The dimensions are unambiguous;
• The number of dimensions is limited.
A perfect model does not currently exist, but one model which is widely
accepted and possesses most of the abovementioned characteristics is the
SERVQUAL research instrument.
This research uses some of the findings by Robin Gardiner (2004) in his
dissertation “Evaluating the Service Delivery of a Consulting Engineering Firm”.
His research was aimed mainly at the same clients in the Eastern Cape as
those of Company X and used the SERVQUAL questionnaire. Gardiner
discovered that Gaps existed between client expectations and perceptions.
Some of the findings and recommendations by Gardiner (2004: 59-60) have
been incorporated in this research and include:
• The use of the SERVPERF instrument as opposed to the SERVQUAL
research instrument;
• The adoption of a variety of measures with the aim of improving the
response rate:
o Distribution of the questionnaire by a senior member of the firm
such as the Managing Director;
o Prior notification by the firm that the questionnaire is to be sent to
clients;
o Implementing a follow-up procedure to remind the respondents.
• The relative importance of the Service Quality dimensions to clients of
CCEF in PE, which are:
o Reliability – 37 per cent;
o Assurance – 21 per cent;
The “lower bound of acceptability” is shown as line AB. It is below this line that
managers would typically have a dire need for improvement, whereas above
this line there is no pressing urgency for improvement. Not all factors falling
below this line AB have the same degree of improvement priority. A boundary
represented by line CD represents the distinction between “Urgent Action” and
“Improve”. Similarly, factors falling above the line AB have been classified as
either “Appropriate” or “Excess?”. The four zones imply different approaches as
described below (Slack, 1994: 67-69).
The lower limit of the “Appropriate” zone is the “lower bound of acceptability”,
which is the level of performance that the firm would not want to fall below. The
objective of any improvement programme is to move performance up to or
above this boundary. Factors falling in the Appropriate zone can be considered
satisfactory in the short and medium term. The long-term objective will be to
continuously improve and to strive towards the upper boundary of this zone.
Factors lying below the lower edge of the Appropriate zone need to be
improved. The “Improve” zone is depicted as the area below the AB line, and
above the CD line in Figure 2.4. Factors lying in the bottom left-hand corner of
the Matrix are likely to be classified as non-urgent, lower-priority cases where
performance is poor, but is less important.
Any factor that lies within the “Urgent Action” zone, depicted as the area below
line CD in Figure 2.4, is classified as crucial. The short-term objectives are to
raise the performance to the Improve zone. The medium-term goal is to
improve performance to above the lower boundary of the Appropriate zone.
The “Excess?” zone is depicted as the area above the EF line in Figure 2.4. Its
punctuation mark, the “?”, is of particular importance. Any factor that lies in this
zone achieves a performance better than would seem to be warranted, which
can mean that too many resources are being used to achieve it.
The adjusted use of the Matrix needs clarification. The version of the
Importance–Performance Matrix by Slack (1994: 67) was based on a Likert-type
scale of one to nine. Gardiner (2004: 57) suggested that a changed Matrix be
used, with the difference being that the Y-axis is on a scale from -4.5 to 0, at
intervals of 0.5.
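The zone logic of the modified Matrix can be sketched as follows. Note that the dissertation does not define the boundary lines AB, CD and EF numerically, so the linear boundaries in this sketch are purely hypothetical placeholders:

```python
# Sketch of zone classification in the modified Importance-Performance
# Matrix. Following Gardiner (2004), the Y-axis is the Gap Score (-4.5 to
# 0, where 0 means perceptions meet expectations) and the X-axis is the
# relative importance of the factor (here expressed as a fraction, 0 to 1).
# The boundary lines AB, CD and EF are NOT defined numerically in the
# text; the linear boundaries below are hypothetical placeholders.

def zone(importance, gap_score):
    """Classify a factor into one of Slack's four zones.

    More important factors are required to achieve a smaller (less
    negative) Gap Score before they are considered acceptable.
    """
    ab = -2.5 + 2.0 * importance   # hypothetical "lower bound of acceptability"
    cd = ab - 1.0                  # hypothetical "Urgent Action" boundary
    ef = ab + 1.5                  # hypothetical "Excess?" boundary
    if gap_score < cd:
        return "Urgent Action"
    if gap_score < ab:
        return "Improve"
    if gap_score > ef:
        return "Excess?"
    return "Appropriate"

# A very important factor (e.g. Reliability at 37%) with a large gap
# falls in the Urgent Action zone:
print(zone(importance=0.37, gap_score=-3.2))   # Urgent Action
print(zone(importance=0.19, gap_score=-0.9))   # Appropriate
```

The sloping boundaries capture the Matrix's key idea: the same Gap Score that is acceptable for a low-importance factor can demand urgent action for a high-importance one.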
The relative importance, according to Gardiner (2004: 43), of each of the five
SERVQUAL dimensions in the civil engineering industry in PE is illustrated in
Figure 2.5.
[Figure 2.5: Relative importance of the SERVQUAL dimensions in PE, including
Reliability 37%, Assurance 21% and Responsiveness 19%]
Stiefbold (2003: 44) discovered that between 85 and 95 per cent of disgruntled
clients will never complain to the firm about poor service, but will simply take
their business elsewhere. More than 90 per cent of these dissatisfied clients
will never use the same organisation again. This strengthens the reason for
organisations to have Service Recovery processes in place. An effective
Service Recovery process needs to be able to convert at least 80 per cent of
dissatisfied clients into satisfied ones.
Stiefbold (2003: 44) summarises the following about why Service Recovery
needs to be done:
• The client may not use the firm again, but should the recovery attempt
be perceived by the client as satisfactory, that individual is not likely to
embark on a market damaging campaign against the firm;
• More than 70 per cent of dissatisfied clients will re-engage in business
activities with the firm should the resolution of the problem be perceived
as satisfactory. Some 90 per cent of clients will return if the Service
Recovery was perceived as fair and prompt. Firms need to try to get
more of their disaffected and dissatisfied clients to complain, since this
would create a unique economic opportunity;
• Clients who have experienced effective Service Recovery will engage in
more business with the firm and will promote the firm to a greater degree
than loyal clients;
• Effective Service Recovery can differentiate a firm from its competitors.
It is perceived as part of the overall Service Quality.
Research has indicated that there are four types of activities necessary for
Service Recovery (Bowen & Johnston, 1999: 120):
• Response. The acknowledgement that a problem occurred, together
with an apology, empathy, quick response and management
involvement;
• Information. The explanation of the failure, listening to suggested
solutions, agreeing on a solution, giving assurance that it would not
happen again and a written apology;
• Action. The correction of the failure, taking the necessary action to
avoid failures in the future, follow-up action to ascertain the after-effects;
• Compensation. Token compensation, equivalent compensation or
refund.
According to Stiefbold (2003: 44-46), even firms with otherwise excellent
Service Recovery efforts make a number of mistakes. These include the
following:
• Most managers do not believe that Service Recovery is worth the time
and effort. It is believed that Service Quality is taken for granted by all
clients and that it is costly to excel continuously. These managers think
that poor Service Quality has nearly become the norm and that the
clients, themselves, are very cynical;
• Many managers disregard evidence that Service Recovery has a
substantial financial pay-off. Most firms have turned their focus towards
cost reduction and pay lip-service to client retention strategies aimed at
the most profitable clients. The need to respect all clients appears
forgotten;
• Firms fail to take advantage of free client data on critical incidents. There
is little coordination of information from the various client segments that
will allow the firm to plan and respond effectively to Service Recovery
opportunities;
• Firms fail to invest sufficiently in actions which prevent poor service.
Preventative measures may not eliminate the need for excellent Service
Recovery systems, but greatly reduce the load. Inadequate preventative
actions are typically found in the following three areas:
o The measurement of dissatisfaction both qualitatively and
empirically. This leads to an ignorance of the activities in the
various client segments;
o The implementation of regular tracking polls on significant service
indicators and the analysis of the resulting data by applicable
market segments;
o The design and implementation of user-friendly client complaint
systems to collect feedback.
• The most important aspect of Service Recovery is attitude. The Service
Recovery effort will not succeed without the proverbial “smile-in-the-
voice”;
• Firms make it difficult for clients to complain or give feedback. Most firms
do not have a mechanism whereby clients can get problems solved;
• Firms do not train and empower employees to convert disgruntled clients
into satisfied clients. Employees often quote the organisational policies,
rather than asking the clients for solutions to amicably resolve the
problem;
• Some firms collect data on client problems, but fail to communicate this
information as a preventative measure for future problems.
Firms that implement effective Service Recovery strategies realise a variety of
benefits: the financial consequences of market damage are avoided, the
revenue potential from clients that have been successfully recovered is
increased, and the loyalty of the client base is strengthened. The quality of
customer support processes throughout a firm can reduce the need for, and
cost of, Service Recovery.
Stiefbold (2003: 46) aptly summarises it by stating that “Service Recovery is
smart and profitable business”.
[Figure 2.6: Value-driven model of Service Recovery (Zhu et al, 2004): the type
and magnitude of a service failure produce a perceived value loss, while the
recovery strategy (magnitude of, and expenditure on, outcome and process
recovery) produces a perceived value gain; both are moderated by the client's
sensitivity to the outcome and process dimensions, and together they determine
the cumulative value of the service after recovery]
The service failure, as perceived by the client, is divided into two components,
namely Failure Type (outcome versus process) and Failure Magnitude. The
client experiences a value loss when a failure occurs. This loss is moderated
by the sensitivity of the client to each type of failure and the perceived
importance of the outcome and process dimensions. The client experiences a
value gain, depending on the effectiveness of the Service Recovery efforts.
This perceived value gain is moderated by the sensitivity of the client to each
type of recovery and the perceived importance of the two dimensions. The
perceived value by the client of the service is collectively determined by the
previous perceived value, perceived value loss from failure, and perceived
value gain from the Service Recovery by the individual.
The Service Recovery efforts, from the perspective of the service firm, have a
two-fold objective. First, the firm needs to aim to re-establish the cumulative
perceived value by the client to the desired target level that is necessary to
retain the client. This level is indicated as the “Value Recovery Target” in
Figure 2.6. Second, the firm needs to try to minimise the overall recovery cost
required to realise the Value Recovery Target.
Zhu et al (2004: 497-498) suggest that the type and magnitude of Service
Recovery need not depend on the severity of the failure and the principle of
matching mental accounting alone. Their model conceptualises a value-driven
approach. The firm determines the optimum recovery strategy by firstly
deciding on the target for its recovery efforts. This target is based on the
existing and potential profitability of the client and other criteria that influence
the importance of the client to the firm.
Level of dissatisfaction    Percentage of Respondents
Slightly dissatisfied        8%
Annoyed                     14%
Very annoyed                30%
Extremely annoyed           22%
Absolutely furious          26%
Total                      100%
Johnston (1998: 74) states that “[Figure 2.7] confirms … that the number and
types of responses made by a dissatisfied client will be proportional to the
intensity of the dissatisfaction, with the exception of complaining, which appears
not to be significantly proportional to the intensity of dissatisfaction”. He notes
that the chances of not re-using the services of a firm or actively dissuading
other people rises sharply with the intensity of dissatisfaction experienced.
[Figure 2.7: Percentage of clients taking each type of action (told friends,
complained, made a fuss, did not use again, dissuaded others, campaigned
against) at each level of dissatisfaction, from Slightly Dissatisfied to Absolutely
Furious]
Eighty-five per cent of the Absolutely Furious clients told other people about the
incident and 90 per cent made a formal complaint, whilst 55 per cent voiced
their dissatisfaction during the service. Seventy per cent took action against the
firm by actively dissuading other people from using its services.
Only 10 per cent were prepared to take further actions and actively campaign
against the firm. This includes acts such as legal action or petitioning. The fact
that the vast majority of clients made themselves available for Service Recovery
by either making a fuss or formally complaining, or both, is noteworthy. Fifty per
cent of the Slightly Dissatisfied clients made themselves available for Service
Recovery by voicing their dissatisfaction. This increased to 100 per cent of the
clients who were Absolutely Furious. Therefore, the majority of clients complain
and so make themselves available for Service Recovery (Johnston, 1998: 75-
76).
The research by Johnston (1998) was not specifically aimed at the CCEF under
review in this dissertation; however, it stresses the importance of Service
Quality and affirms that firms must engage in effective Service Recovery.
ISO 9001:2000 requires that “… top management shall ensure that quality
objectives, including those needed to meet requirements for product are
established at relevant functions and levels within the organisation. The quality
objectives shall be measurable and consistent with the quality policy” (BVQI,
2000: 3.15).
The modern concept of quality focuses on how firms meet or exceed all the
requirements and expectations of their clients. This broadened understanding
of quality led to the concept of TQM, which is based on the following three
elements (BVQI, 2000: 2.12):
• No one in the firm is excluded. Everyone in the firm is responsible for
implementing quality, which has an impact on the client's perception of
quality;
• Internal and external clients must be satisfied. Firms are viewed as a
series of client-supplier relationships;
• Appreciation of the firm by society. This plays a crucial role in securing
the success of the business.
2.10 SUMMARY
This chapter presented the findings of a literature study which determined the
factors influencing the Service Quality and Service Recovery of CCEF and the
suggested research instrument to be used to benchmark the firm under review
against its competitors. The next chapter describes the design of the research.
CHAPTER 3
RESEARCH DESIGN
3.1 INTRODUCTION
The objective of this chapter is to outline the research method followed, the
research questions and hypotheses, and the selection and appropriateness of
the research instrument used, the questionnaire.
The main research paradigm used in this study can be labelled as positivistic.
Other recognised terms for this research paradigm are quantitative, objective,
scientific or experimental (Collis & Hussey, 2003: 47). The positivistic paradigm
deals with the facts or causes of social phenomena, with little regard to the
subjective aspects of human activity (Collis & Hussey, 2003: 52).
The positivistic paradigm has the following attributes (Collis & Hussey,
2003: 55):
• It uses large samples;
• It is concerned with hypotheses testing;
• The data is specific and precise;
• The location is artificial;
• The reliability is high;
• The validity is low;
• It generalises from the sample to the population.
The methodology of this research is concerned with the following main issues
(Collis & Hussey, 2003: 55):
• Why certain data was collected;
• What data was collected;
• Where the data was collected from;
• When the data was collected.
3.2.1 TRIANGULATION
Jick (1979, as quoted by Collis & Hussey, 2003: 78), states that triangulation
has important strengths. It encourages productive research, enhances
qualitative methods and allows the complementary use of quantitative methods.
There are questions which arise when taking cognisance of the purpose of the
research and its objectives.
The first set of hypotheses, based on the first theory, tests the effect of
the two Service Quality dimensions, Reliability and Responsiveness, on
perceived Service Quality:
The second hypothesis, based on the second theory, tests the effect of
Service Recovery on perceived Service Quality:
3.6 SUMMARY
This chapter discussed the research methodology, the research questions and
hypotheses. The approach of triangulation was introduced and the SERVPERF
research instrument was expounded. The results of the research are presented
in Chapter 4.
CHAPTER 4
4.1 INTRODUCTION
The previous chapter introduced the SERVPERF research instrument and the
design of the research. This chapter presents the method of data collection, the
response rate and the analysis and interpretation of the data collected.
The actual questionnaires were sent out one week later, accompanied by
official, personalised letters. These letters contained pertinent information
including the expected due dates for the return of the completed questionnaires.
The data analysis was conducted in three phases. First, the response rate and
overall validity of the results were analysed. According to Collis and Hussey
(2003: 58-59), “Validity is the extent to which the research findings accurately
represent what is really happening in the situation”. Errors, such as the
research procedures, inaccurate measurement, and the like can undermine
validity.
Third, the average Gap Score for each dimension was calculated by totalling
the average Gap Scores per respondent and dividing by the total number of
respondents. The Gap Scores for the two individual Service Quality
dimensions, Reliability and Responsiveness, were plotted onto graphs and used
to benchmark Company X against its competitors.
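The averaging described above can be sketched as follows, using hypothetical perception ratings (not the actual survey data) and the assumed perfect-expectation score of seven:

```python
# Hypothetical perception ratings (1-7 scale) for one firm:
# one list per respondent, covering the five Reliability questions.
ratings = [
    [6, 5, 5, 6, 5],
    [5, 5, 4, 6, 5],
    [6, 6, 5, 5, 4],
]

EXPECTATION = 7  # assumed "perfect" expectation for every question

# Gap per respondent: average of (perception - expectation) over the questions.
per_respondent_gaps = [
    sum(r - EXPECTATION for r in resp) / len(resp) for resp in ratings
]

# Average Gap Score for the firm: mean of the per-respondent gaps.
gap_score = sum(per_respondent_gaps) / len(per_respondent_gaps)
print(round(gap_score, 2))
```

A negative Gap Score therefore always indicates that perceptions fall short of the assumed perfect expectations.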
[Figure: Response rate – Returned and comprehensively completed: 18; returned but not comprehensively completed: 8]
The following are potential reasons why the response rate was lower than
expected:
• A few clients indicated their unwillingness to complete the questionnaire.
The following are typical reasons:
o “I am unfortunately not in a position to complete the questionnaire
because it may compromise the position of trust and independency
that we maintain with all consultants.”
o “As a client we cannot divulge our opinion of other consultants to
yourself. I believe it is not ethical…”
o “I have completed the questionnaire, only as far as [Company X] are
concerned. I don’t believe it to be ethical to provide individual
companies comments on the quality of service received from their …”
Clients often voice their criticisms of ‘other consultants’ but, given the
opportunity to respond formally, most opted not to. This was
understandable given the professional environment within which the
services are rendered and the relationships formed between individuals.
The lack of anonymity is believed to be the biggest contributing factor
towards the low response rate.
• The duration of the response period may have been too short. Every
effort was made to stress the urgency of returning the questionnaires by
means of numerous items of correspondence and a relatively short
timeframe. Some municipalities have lengthy administrative processes in
which all correspondence has to pass in and out of the records divisions.
This may have had a time-delaying effect, in that some respondents may
have received the questionnaires only a few days prior to their return
date and may then have chosen not to complete them, given the relatively
short response period remaining.
α = N · r / (1 + (N − 1) · r)

Where α = Cronbach’s Alpha;
N = the number of items;
r = the average of all (Pearson) correlation coefficients between the
items.
Cronbach’s Alphas are consistently above 0.90, which indicates that the data is
of high reliability.
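As an illustration of the formula above, the following minimal sketch computes Cronbach's Alpha from the number of items and the average inter-item correlation; the input values are hypothetical, not the survey data:

```python
# Standardised Cronbach's Alpha, as in the formula above:
# alpha = N*r / (1 + (N - 1)*r), where N is the number of items and
# r is the average inter-item (Pearson) correlation.
def cronbach_alpha(n_items: int, avg_correlation: float) -> float:
    return (n_items * avg_correlation) / (1 + (n_items - 1) * avg_correlation)

# Hypothetical example: ten questionnaire items with an average
# inter-item correlation of 0.5 already yield an alpha above 0.90.
print(round(cronbach_alpha(10, 0.5), 3))
```

Note how alpha grows with both the number of items and their average correlation, which is why a ten-item scale with moderately correlated items can still exceed the 0.90 threshold reported above.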
[Table: Cronbach’s Alpha values for Competitors 1 to 10 and Company X]
* Question 3 has been omitted since all five respondents gave the same
rating; with no variance, it could not be used in the analysis.
The results of the research and the Gap Scores for each of the nine questions
comprising the Service Quality dimensions, Reliability and Responsiveness, are
depicted in Figures 4.2 to 4.12 and Tables 4.3 to 4.13. The Gap Scores
indicate the average of the numeric differences between the assumed
expectations of the clients, as a perfect score of seven out of seven for all firms,
and the actual perceptions as rated by the respondents. Company X is
therefore benchmarked against its competitors and the local Industry Average
for each of the questions and for the overall Service Quality dimensions
Reliability (five questions) and Responsiveness (four questions).
4.4.1 RELIABILITY
The following five questions comprise the Service Quality dimension
Reliability, as adapted in the SERVPERF research instrument:
The findings and responses to the Service Quality dimensions are discussed.
Figure 4.2 illustrates the first set of results of the questionnaire. The
competitors are listed on the X-axis and the Gap Scores on the Y-axis.
Figure 4.2 - Gaps per question for Service Quality dimension Reliability –
Question 1, Company X versus Competitors and Industry
Table 4.3 summarises, for Question 1, the Gap Scores, the Gaps from the
highest-ranked firm, the benchmarked Gap Scores and the Gap Scores rated
against “expected perfection”.
Rank | For Question 1 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 8 -1.3 0 100% 81%
2 Competitor 4 & 5 -1.4 -0.1 98% 80%
3 Company X -1.5 -0.2 96% 79%
4 Competitor 1 & 9 -1.7 -0.4 93% 76%
5 Competitor 10 -1.8 -0.5 91% 74%
6 Competitor 7 -2.0 -0.7 88% 71%
7 Industry Average & Competitor 3 -2.2 -0.9 84% 69%
8 Competitor 2 -2.5 -1.2 79% 64%
9 Competitor 6 -3.3 -2.0 65% 53%
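The two percentage columns of Table 4.3 can be reproduced from the Gap Scores. The sketch below assumes the relationships implied by the table, which are not stated explicitly in the text: the score rated against expected perfection is the perceived score (seven plus the Gap Score) divided by the perfect score of seven, while the benchmarked score is the perceived score relative to that of the highest-ranked firm.

```python
# Reproduce the percentage columns of Table 4.3 from Gap Scores.
# Assumed relationships (inferred from the table):
#   rated vs perfection = (7 + gap) / 7
#   benchmarked         = (7 + gap) / (7 + best_gap)
PERFECT = 7.0

def percentages(gap: float, best_gap: float) -> tuple:
    perceived = PERFECT + gap
    benchmarked = round(100 * perceived / (PERFECT + best_gap))
    vs_perfection = round(100 * perceived / PERFECT)
    return benchmarked, vs_perfection

# Company X in Table 4.3: Gap Score -1.5 against the leader's -1.3.
print(percentages(-1.5, -1.3))  # (96, 79)
```

These reproduce the 96 per cent benchmarked and 79 per cent "against expected perfection" values reported for Company X in the table.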
Figure 4.3 depicts the Gap Scores of the second of five questions that comprise
the Service Quality dimension Reliability.
Figure 4.3 - Gaps per question for Service Quality dimension Reliability –
Question 3, Company X versus Competitors and Industry
Table 4.4 summarises the results of Figure 4.3. It is interesting to note that the
Gap Score of Competitors 4 and 5 is -1.0, which is the smallest Gap Score for
all five questions comprising the Service Quality dimension Reliability.
Rank | For Question 3 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 4 & 5 -1.0 0 100% 86%
2 Competitor 8 -1.3 -0.3 95% 81%
3 Company X -1.4 -0.4 93% 80%
4 Competitor 1 -1.5 -0.5 92% 79%
5 Competitor 9 -1.7 -0.7 88% 76%
6 Industry Average & Competitor 3 & 10 -1.8 -0.8 87% 74%
7 Competitor 2 -1.9 -0.9 85% 73%
8 Competitor 7 -2.0 -1.0 83% 71%
9 Competitor 6 -2.3 -1.3 78% 67%
iii. Question 5 – “The firm performs the service right the first time.”
Figure 4.4 illustrates the Gap Scores of the third of five questions that comprise
the Service Quality dimension Reliability. It is evident from the results that
Company X has not fared as well in this question as with Questions 1 and 3.
The perceptions by clients of Company X are that the firm is on par with the
“average firm” and that Company X performed the service right the first time 69
per cent of the time. Company X is lagging behind the leading firm by 8 per
cent (overall) and 11 per cent (benchmarked).
The results show that there are seven firms ranked higher than Company X and
that the firm does not perform the service right the first time 31 per cent
of the time.
Figure 4.4 - Gaps per question for Service Quality dimension Reliability –
Question 5, Company X versus Competitors and Industry
The summary of the Gap Scores and the ranking of the firms are presented in
Table 4.5.
Rank | For Question 5 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 10 -1.6 0 100% 77%
2 Competitor 4 -1.8 -0.2 96% 74%
3 Competitor 1, 2, 5 & 8 -2.0 -0.4 93% 71%
4 Competitor 9 -2.1 -0.5 91% 70%
5 Company X & Industry Average -2.2 -0.6 89% 69%
6 Competitor 7 -2.3 -0.7 87% 67%
7 Competitor 3 -2.6 -1.0 81% 63%
8 Competitor 6 -2.8 -1.2 78% 60%
Figure 4.5 illustrates the results of Question 7. The apparent similarity
between Question 1 and Question 7 is noted, and it was expected that the
ratings would be similar. However, Company X was rated third in Question 1
(4 per cent behind the top competitor), whilst in Question 7 it was rated
equal best, together with Competitors 4, 5 and 10. Clients perceive Company X
to provide its service at the time promised 74 per cent of the time.
Although Company X was ranked first, its Gap Score is the largest among the
top-ranked scores for the questions comprising the Service Quality dimension
Reliability.
Figure 4.5 - Gaps per question for Service Quality dimension Reliability –
Question 7, Company X versus Competitors and Industry
Rank | For Question 7 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Company X, Competitor 4, 5 & 10 -1.8 0 100% 74%
2 Competitor 1, 7 & 8 -2.0 -0.2 96% 71%
3 Competitor 2 -2.1 -0.3 94% 70%
4 Competitor 3 & 9 -2.2 -0.4 92% 69%
5 Industry Average -2.3 -0.5 90% 67%
6 Competitor 6 -3.0 -1.2 77% 57%
Figure 4.6 illustrates the results of Question 9 of the adapted SERVPERF
questionnaire. This was the only question of the Service Quality dimension
Reliability in which Company X was rated below the Industry Average. It is
clear that the firm needs to insist more strongly on error-free records, and
the importance of this to clients needs to be communicated to all staff.
Figure 4.6 - Gaps per question for Service Quality dimension Reliability –
Question 9, Company X versus Competitors and Industry
Rank | For Question 9 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 1 -1.5 0 100% 79%
2 Competitor 5 -1.6 -0.1 98% 77%
3 Competitor 2, 4, 7, 8 & 10 -2.0 -0.5 91% 71%
4 Industry Average -2.2 -0.7 87% 69%
5 Company X -2.3 -0.8 85% 67%
6 Competitor 3 -2.4 -0.9 84% 66%
7 Competitor 6 -2.8 -1.3 76% 60%
The averages of the Gap Scores for Questions 1, 3, 5, 7 and 9 comprise the
Gap Scores for the Service Quality dimension Reliability. Company X is rated
third, behind a total of four competitors.
Figure 4.7 and Table 4.8 summarise the overall results of the Service Quality
dimension Reliability.
Figure 4.7 - Gaps for Service Quality dimension Reliability – Company X versus
Competitors and Industry
The overall Reliability of the firm is 4 per cent higher than the Industry Average
in PE. The Gap Score of Company X is -1.8, which indicates that the firm is 26
per cent short of being “totally reliable”. Clients in PE can expect the average
CCEF to be 70 per cent reliable.
Rank | Overall - Reliability | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 4 & 5 -1.6 0 100% 77%
2 Competitor 1 & 8 -1.7 -0.1 98% 76%
3 Company X & Competitor 10 -1.8 -0.2 96% 74%
4 Competitor 9 -2.0 -0.4 93% 71%
5 Industry Average, Competitor 2 & 7 -2.1 -0.5 91% 70%
6 Competitor 3 -2.2 -0.6 89% 69%
7 Competitor 6 -2.8 -1.2 78% 60%
4.4.2 RESPONSIVENESS
The following four questions comprise the Service Quality dimension
Responsiveness, as adapted in the SERVPERF research instrument.
The Gap Scores for these questions are illustrated in Figures 4.8 to 4.12
respectively.
Figure 4.8 graphically illustrates the results of the first question of the Service
Quality dimension, Responsiveness.
Figure 4.8 - Gaps per question for Service Quality dimension Responsiveness –
Question 2, Company X versus Competitors and Industry
Clients have rated Company X with a Gap Score of -1.9, which signifies a
shortcoming of 27 per cent: employees tell clients exactly when the services
will be performed on average 73 per cent of the time. It is evident from the
data that Company X is rated fourth on this question, as presented in Table
4.9.
Rank | For Question 2 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 4 & 5 -1.4 0 100% 80%
2 Competitor 1, 8 & 9 -1.7 -0.3 95% 76%
3 Competitor 10 -1.8 -0.4 93% 74%
4 Company X -1.9 -0.5 91% 73%
5 Competitor 7 -2.0 -0.6 89% 71%
6 Industry Average -2.2 -0.8 86% 69%
7 Competitor 3 -2.4 -1.0 82% 66%
8 Competitor 2 -2.5 -1.1 80% 64%
9 Competitor 6 -3.5 -2.1 63% 50%
The results of the second question pertaining to the Service Quality dimension
Responsiveness are presented in Figure 4.9.
The Gap Score of Company X is -1.7, a shortfall of 24 per cent from the 100
per cent that would represent always giving prompt service to clients.
Company X is ranked fifth by the respondents and lags 5 per cent behind its
top competitor. Table 4.10 provides a comprehensive summary of these results.
Figure 4.9 - Gaps per question for Service Quality dimension Responsiveness –
Question 4, Company X versus Competitors and Industry
Rank | For Question 4 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 8 -1.3 0 100% 81%
2 Competitor 4 & 10 -1.4 -0.1 98% 80%
3 Competitor 1 -1.5 -0.2 96% 79%
4 Competitor 9 -1.6 -0.3 95% 77%
5 Company X -1.7 -0.4 93% 76%
6 Competitor 5 & 7 -1.8 -0.5 91% 74%
7 Industry Average -1.9 -0.6 89% 73%
8 Competitor 3 -2.0 -0.7 88% 71%
9 Competitor 2 -2.4 -1.1 81% 66%
10 Competitor 6 -3.0 -1.7 70% 57%
iii. Question 6 – “Employees at the firm are never too busy to respond
to my requests.”
Respondents had to rate whether employees at the firm are never too busy to
respond to the requests of clients. Figure 4.10 illustrates the results of
this question. The perception exists that Company X responds to requests 77
per cent of the time.
Figure 4.10 - Gaps per question for Service Quality dimension Responsiveness –
Question 6, Company X versus Competitors and Industry
Table 4.11 summarises and ranks the scores for Question 6. Company X is
ranked third and the Industry Average is 71 per cent (Gap Score -2.0).
Rank | For Question 6 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 5 & 10 -1.4 0 100% 80%
2 Competitor 8 -1.5 -0.1 98% 79%
3 Company X & Competitor 9 -1.6 -0.2 96% 77%
4 Competitor 1 -1.7 -0.3 95% 76%
5 Industry Average & Competitor 3 & 7 -2.0 -0.6 89% 71%
6 Competitor 2 -2.1 -0.7 88% 70%
7 Competitor 4 -2.4 -1.0 82% 66%
8 Competitor 6 -2.5 -1.1 80% 64%
iv. Question 8 – “Employees at the firm are always willing to help me.”
Company X scored the best on the question about whether the employees at
the firm are always willing to help the clients. Figure 4.11 graphically illustrates
that the Gap Scores of most of the competitors are smaller than for the other
questions. This indicates that the perception exists that employees in general
are willing to help clients. The Industry Average Gap Score of -1.5, smaller
than those of the other questions, confirms this. The Gap Score of Company X
is -1.2, which represents a percentage Gap of only 17 per cent.
Figure 4.11 - Gaps per question for Service Quality dimension Responsiveness –
Question 8, Company X versus Competitors and Industry
The data which was analysed for Question 8 is benchmarked and summarised
in Table 4.12.
Rank | For Question 8 | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Company X & Competitor 4, 5 & 8 -1.2 0 100% 83%
2 Competitor 1 -1.3 -0.1 98% 81%
3 Competitor 3, 9 & 10 -1.4 -0.2 97% 80%
4 Industry Average -1.5 -0.3 95% 79%
5 Competitor 2 & 7 -2.0 -0.8 86% 71%
6 Competitor 6 -3.0 -1.8 69% 57%
The averages of the Gap Scores for Questions 2, 4, 6 and 8 result in the Gap
Scores for the Service Quality dimension Responsiveness. These are depicted
in Figure 4.12. Company X rated third behind a total of four competitors with a
Gap Score of -1.6.
Figure 4.12 - Gaps for Service Quality dimension Responsiveness –
Company X versus Competitors and Industry
Rank | Overall Responsiveness | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 8 -1.4 0 100% 80%
2 Competitor 1, 5 & 10 -1.5 -0.1 98% 79%
3 Company X & Competitor 4 & 9 -1.6 -0.2 96% 77%
4 Industry Average & Competitor 7 -1.9 -0.5 91% 73%
5 Competitor 3 -2.0 -0.6 89% 71%
6 Competitor 2 -2.3 -0.9 84% 67%
7 Competitor 6 -3.0 -1.6 71% 57%
The tenth and final quantitative question of the questionnaire does not form part
of the recognised SERVPERF research instrument. Hence, it is not part of a
Service Quality dimension. However, the question was formulated in the same
manner as the other nine questions to address the secondary research
hypothesis and the sub-problem statement and objectives.
Question 10 (Q10) – “When someone at the firm makes a mistake, they take
corrective action.”
The results of this question are illustrated in Figure 4.13. The majority of
firms scored between -1.5 and -2.0, with an average score of -1.8. The rating
of Company X is -1.7, which denotes that corrective action is perceived to be
taken 76 per cent of the time.
Figure 4.13 - Gaps per question for Service Recovery – Question 10, Company X
versus Competitors and Industry
Rank | Service Recovery | Gap Score | Gap from Highest | Gap Score benchmarked | Gap Score rated against expected perfection
1 Competitor 5 -1.0 0 100% 86%
2 Competitor 1 -1.5 -0.5 92% 79%
3 Competitor 3, 4 & 10 -1.6 -0.6 90% 77%
4 Company X & Competitor 8 & 9 -1.7 -0.7 88% 76%
5 Industry Average & Competitor 2 -1.8 -0.8 87% 74%
6 Competitor 7 -2.0 -1.0 83% 71%
7 Competitor 6 -2.8 -1.8 70% 60%
[Figure 4.14 - Gap Scores per question, Questions 1 to 10 – Company X versus Industry]
Company X was rated with bigger Gap Scores than the Industry on Questions 5
and 9, but better than the Industry Average on the other eight questions.
The biggest Gap Score was found in the perceptions of clients on Question 9 –
“The firm insists on error free records”.
Figure 4.15 illustrates the overall results of the benchmarking of the Service
Quality dimensions, Reliability and Responsiveness, and the Service Recovery
and the Overall Service Quality between Company X and the Industry.
[Figure 4.15 - Gap Scores for Reliability, Responsiveness, Service Recovery and Overall Service Quality – Company X versus Industry]
The responses indicate that clients need firms to improve their Service Quality
and technical skills and to be innovative.
The responses confirm the lack of staff in the civil consulting engineering
industry and the effect it has on Service Quality.
4.8 TRIANGULATION
Two types of triangulation are used in this research, namely data
triangulation and methodological triangulation, to verify the validity and
reliability of the results.
Data is triangulated, in this research, with that obtained from the research done
by Gardiner (2004). Gardiner (2004: 52) indicates that each of the five
questions of the SERVQUAL dimension Reliability received a negative Gap
Score, the perceptions never exceeded the expectations, and three of these
comprise the biggest negative Gaps identified.
The average Gap Score of the research by Gardiner (2004: 50-51) for the
Service Quality dimension Responsiveness was negative, reflecting the
perceptions by clients that firms need to be more responsive. The findings of
this research indicate that a relatively large Gap exists between the
expectations and perceptions of the clients and that the Industry Average Gap
Score is -1.9.
Both qualitative and quantitative methods of data collection are used in this
research. Section A of the questionnaire is qualitative and Section B is
quantitative. The Gap Scores of the clients (quantitative data) confirm their
comments (qualitative feedback) on the Service Quality of CCEF.
Figure 4.16 illustrates the Gap Scores for the Service Quality dimension
Reliability. Gardiner (2004: 43) states that its importance to clients is at 37 per
cent.
[Figure 4.16 - Importance-Performance Matrix: Reliability versus Gap Scores – Company X, Competitors and Industry]
Competitors 6 and 9 are the only firms that need to take “urgent action”. The
Reliability of Company X and the Service Quality of the other competitors all
need to “improve” because Reliability is an order-winner (Davis & Heineke,
2005: 278). The performances of these firms are perceived to be the same as
those for everyone else.
Figure 4.17 illustrates the Gap Scores for the Service Quality dimension
Responsiveness. Gardiner (2004: 43) observed that its importance to clients is
at 19 per cent.
[Figure 4.17 - Importance-Performance Matrix: Responsiveness versus Gap Scores – Company X, Competitors and Industry]
Two firms need to improve their service for the Service Quality dimension
Responsiveness. Company X and the other firms are in the “appropriate” zone.
However, every effort needs to be made to outperform competitors in this
regard should a firm wish to distinguish itself as the market leader.
It was found that only a few of the larger and/or more established CCEF assess
or benchmark their Service Quality in relation to other firms. Some of these
firms have their Service Quality benchmarked by independent firms on an
annual basis. The firms participating in such surveys are benchmarked
anonymously and the ratings of each particular firm are revealed to that firm
only. These benchmarking surveys include the percentages spent on
marketing, administration, technical staff salaries and the like.
Research indicated that none of the CCEF in PE has formally identified Service
Recovery as a factor that contributes to the overall experience of the quality of
the service by the client. None appear to have any procedures established.
The three hypotheses of this research have been confirmed by the results of the
data. There are positive relationships between the three independent variables
Reliability, Responsiveness and Service Recovery and the dependent variable
perceived Service Quality.
4.11 SUMMARY
CHAPTER 5
5.1 INTRODUCTION
[Figure: Gap Scores per firm plotted over time periods]
Linear trend lines can be used for each firm, which will reveal whether Gaps are
changing, converging or remaining constant over time.
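A minimal sketch of fitting such a trend line by ordinary least squares, using hypothetical Gap Scores over five survey periods (not actual survey data):

```python
# Fit a linear trend to hypothetical Gap Scores over five survey periods.
# A positive slope means the Gap Score is shrinking towards zero
# (i.e. the firm's perceived Service Quality is improving over time).
periods = [1, 2, 3, 4, 5]
gap_scores = [-2.0, -1.9, -1.7, -1.6, -1.4]  # hypothetical values

n = len(periods)
mean_x = sum(periods) / n
mean_y = sum(gap_scores) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(periods, gap_scores)) \
        / sum((x - mean_x) ** 2 for x in periods)
intercept = mean_y - slope * mean_x
print(round(slope, 3))
```

Comparing the fitted slopes of several firms shows whether their Gap Scores are converging, diverging or remaining constant.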
[Figure: Linear trend lines of Gap Scores over time – Company X, Competitor A and Competitor B]
These steps will ensure that clients are satisfied and that direct responsibility is
taken for specific expenditures and time-consuming activities.
The following are practical examples of things that can be done to create the
experience of client satisfaction (Maister, 2003: 76-80):
• Client meetings to be followed up with brief notes/minutes summarising
the discussion, points agreed to and an action-responsibility plan;
• Explain in advance the format of complex invoices so that the client is
aware of what to expect;
• Follow up referrals with letters of thanks even if the referral has not
resulted in business;
• Find out what the real deadlines of the clients are and ensure that these
are met.
The Conceptual Model of Service Quality has five Gaps. The factors
contributing to the first four Gaps and the suggested methods of closing them
are summarised in Table 5.1; Gap 5 is the client's assessment of Service
Quality.
The sooner the firm can recover poor service and improve Service Quality, the
sooner it will gain a competitive advantage. Therefore, the most important
factor to manage as part of the implementation plan is time. The suggested
tasks and timeframes, which form part of the implementation plan of Company
X for 2008, are presented in Figure 5.5. It is suggested that Company X
compile a longer term (two to three year) strategic plan in a similar manner
which can be used as a management tool.
Figure 5.5 – Suggested tasks and timeframes for the implementation plan
The main factors affecting Service Quality need to form part of the
implementation plan and include the following (as discussed in previous
chapters):
• Service Leadership;
• TQM;
• The Service Satisfaction Framework;
• The Conceptual Model of service failure and recovery strategies;
• The Process Model for continuous measurement and improvement of
Service Quality.
[Figure 5.6 - Perceptual Mapping: Attribute A Gap Scores versus Attribute B Gap Scores – Company X and Competitors]
Figure 5.6 illustrates how one attribute is mapped in relation to another;
the direction of each arrow indicates whether a firm's Gap Scores increased
or decreased over time.
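The arrow construction can be sketched as follows, using hypothetical positions for two firms over two survey periods (the firm names, years and values are illustrative only):

```python
# Sketch of the arrow vectors in a perceptual map: each firm is plotted at
# (attribute A Gap Score, attribute B Gap Score), and its arrow is the
# change in that position between two survey periods. Values hypothetical.
positions_2007 = {"Company X": (-1.8, -1.6), "Competitor 8": (-1.7, -1.4)}
positions_2008 = {"Company X": (-1.6, -1.5), "Competitor 8": (-1.8, -1.5)}

arrows = {
    firm: (positions_2008[firm][0] - a, positions_2008[firm][1] - b)
    for firm, (a, b) in positions_2007.items()
}
# A positive component means the Gap Score shrank (moved towards zero).
print(arrows["Company X"])
```

Plotting these vectors from each firm's earlier position gives the arrows of Figure 5.6, showing at a glance which firms are improving on which attribute.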
5.6 CONCLUSION
This research has confirmed that the steady decline in the number of
professionals during the last few decades and the increase in the demand for
firms to produce more, in a shorter space of time, and with fewer human
resources have had a negative effect on Service Quality. Firms currently need
to focus on gaining a competitive advantage by consistently providing Service
Excellence. One way of achieving this is continuous improvement through
benchmarking.
Bowen, D.E. & Johnston, R. 1999. Internal service recovery: Developing a new
construct. International Journal of Service Industry Management, 10(2),
118-131.
Cronbach, L.J. 1951. Coefficient Alpha and the internal structure of tests.
Psychometrika, 16(3), 297-334.
Davis, M.M. & Heineke, J. 2005. Operations management (5th ed.). New
York: Irwin McGraw-Hill.
Hitt, M.A., Ireland, R.D. & Hoskisson, R.E. 2005. Strategic management:
Competitiveness and globalization (Concepts and cases) (6th ed.).
Versailles: Thomson.
Maister, D.H. 2003. Managing the professional service firm. London: Bath
Press.
Oakland, J.S. 2003. Total quality management (3rd ed.). Oxford: Butterworth-
Heinemann.
Parasuraman, A., Zeithaml, V.A. & Berry, L.L. 1988. SERVQUAL: A multiple-
item scale for measuring consumer perceptions of service quality.
Journal of Retailing, 64(1), 12-40.
Van Looy, B., Gemmel, P. & Van Dierdonck, R. 2003. Services management:
An integrated approach (2nd ed.). London: Prentice-Hall.
Van Zyl, R. 2006. Boom in building as plans passed are 300% higher [Online].
Available from: http://www.coega.co.za/ (accessed: 28 June 2006).
Zeithaml, V.A. & Bitner, M.J. 2000. Services marketing: Integrating customer
focus across the firm (2nd ed.). New York: Irwin McGraw-Hill.
Zeithaml, V.A., Parasuraman, A. & Berry, L.L. 1990. Delivering quality service:
Balancing customer perceptions and expectations. New York: The Free
Press.
Annexure A
1. What improvements would you like to see in the service quality of
consulting engineers?
………………………………………………………………………….
………………………………………………………………………….
………………………………………………………………………….
………………………………………………………………………….
………………………………………………………………………….
………………………………………………………………………….
Please return the completed questionnaire by Fri 10 Nov 06.
………………………………………………………………………….
………………………………………………………………………….
………………………………………………………………………….
This section deals with your experience with the various consultants
you have been dealing with. Please provide a rating for each of the
consultants according to the following scale:
1 2 3 4 5 6 7
(Strongly Disagree) (Strongly Agree)

Company X
Competitor 1
Competitor 2
Competitor 3
Competitor 4
Competitor 5
Competitor 6
Competitor 7
Competitor 8
Competitor 9
Competitor 10
Competitor 11
Competitor 12
Competitor 13
Competitor 14
Competitor 15
Competitor 16
Competitor 17
Competitor 18
Q5. The firm performs the service right the first time.