Benchmarking Report
How organisations obtain information that will help the organisation identify and implement improvements ...
Submitted by: SHARAD NAGARAJUNA (GO4125), NIKHIL PRABHUDEVA (GO4126), NITESH SASIDHARAN (GO4127)
NICMAR-GOA
Example:
If a company needs six days to fill a customer's order and a competitor in the same industry needs only five days, five days does not become the standard if a firm in an unrelated industry can fill orders in four days. The four-day criterion becomes the benchmark, even when at first this seems an unachievable goal. The process involved in filling the order is then carefully analysed, and creative ways to achieve the benchmark are encouraged.
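The selection logic of this example can be sketched in a few lines of Python: the benchmark is the best observed performance across all industries, not merely the best within one's own market segment. The figures below are the hypothetical ones from the example above, not real data.

```python
# Hypothetical order-fill times (days) from the example above.
# The benchmark is the best performance across ALL industries,
# not just the best performance within one's own segment.
order_fill_days = {
    "our company": 6,
    "direct competitor": 5,        # same industry
    "unrelated-industry firm": 4,  # best in class overall
}

benchmark = min(order_fill_days.values())
gap = order_fill_days["our company"] - benchmark
print(f"Benchmark: {benchmark} days; our shortfall: {gap} days")
```

The point of the sketch is the `min` over the whole dictionary: restricting the comparison to the same industry would wrongly yield five days as the standard.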
Benchmarking Objectives:
- Accelerate the process of business change.
- Lead to breakthrough and continuous improvement.
- Result in customer satisfaction and competitive advantage.
- Adapt best practices.
Stages in Benchmarking:
I. Internal study and preliminary competitive analysis.
II. Developing long-term commitment to the benchmarking projects and uniting the benchmarking team.
III. Identifying benchmarking partners.
IV. Information gathering and sharing methods.
V. Taking action to meet or exceed the benchmark.
NICMAR-GOA
Page 2
PERFORMANCE MEASUREMENT FOR BENCHMARKING IN CONSTRUCTION
The main interest of construction companies that get involved in benchmarking initiatives is to compare their performance with that of other companies, especially those in the same market segment. However, it was observed in the four PMS-for-benchmarking initiatives that many companies find it difficult to become involved in such initiatives on a permanent basis. Holloway et al. (1997) pointed out some common difficulties in carrying out benchmarking: (a) the lack of suitable partners for comparing information; (b) resource constraints, including time, money and expertise; (c) lack of data access transparency; (d) staff resistance; and (e) confidentiality of data. These difficulties were observed in all four initiatives. The lack of resources is particularly critical in small construction companies.

According to Hudson et al. (2001), a strategic performance measurement development process for small and medium companies must be very resource effective and produce notable short-term as well as long-term benefits, to help maintain the momentum and enthusiasm of the development team. In addition, it must be dynamic and flexible enough to accommodate strategic changes, which tend to be frequent in companies with emerging strategies. For those authors, in practical terms, this means the process should be iterative, in order to maintain the strategic relevance of performance measurement.

Owing to the difficulties and problems raised above, construction companies should design their own performance measurement systems according to their strategy and capabilities, inserting some benchmarking measures into the measurement system. Such companies should see benchmarking as a source of new ideas, or a route to improvement based on observed best practices. The information provided by benchmarking initiatives should therefore enable a better understanding of the workings of the business (their own or their competitors'), leading to improvement actions, instead of being used only for data comparison. This is an interesting way to share good practices concerning lean construction, for instance.

Based on the experiences of benchmarking initiatives in the UK, Chile and the USA, some key issues for the design and implementation of benchmarking performance measurement systems in the construction industry deserve emphasis. First, the set of measures for benchmarking should be simple and well designed in order to support improvement initiatives.
The set of measures must give a holistic, company-wide view, including a mixture of leading and lagging indicators (Beatham et al., 2004). The KPI and CDT programmes mostly involve lagging measures, based on outcomes. Such measures are important for assessing the success of strategies, but do not support improvement opportunities during the period for which the measure is taken (Beatham et al., 2004). By contrast, the design of the CII benchmarking system includes a set of performance measures that can be used during the whole life of the project. The procedures for data collection should also be simple, to facilitate the creation of the database and make it easy to evaluate project performance relative to other projects in real time. Three of the initiatives (KPI, CDT and CII) offer an online tool for the collection and evaluation of benchmarking measures. For this reason, it is useful to design an interactive online tool that allows the user to access an assortment of documents and provides feedback. Beatham et al. (2004) suggest that the online tool must also be used throughout the life of a project, offering companies the opportunity to analyse results and promote improvements. Another key issue in the implementation of an online benchmarking process is data security. Finally, the benchmarking system must be fully understood by all the people involved. It is therefore important to promote training courses for the companies involved, covering the communication of results, analysis of the evolution of the set of indicators, and the exchange of practices between practitioners, such as those promoted by the KPI and CII initiatives.
Problems of Benchmarking in Construction
The benchmarking process involves, first of all, an audit of the current performance of an organisation, sector or market. While carrying out this audit, the performance of the subject being studied is measured against an established benchmark (KPI). The scope of the KPI criteria is set to reflect the aspects being benchmarked. A Likert-type scale can be used for scoring KPIs (Roest, 1997), as can 'balanced scorecards' (Sommerville and Robertson, 2000) and point-scores (DETR, 2000). Subjective assessment, in terms of low, medium or high, may also be used. Ranking and descriptive score scales have been discussed by Longbottom (2000). Graves et al. (1998) suggested that universally accepted methods of measuring project performance are few.

In benchmarking, a 'best practice' that will be the target of comparison is identified. The search for best practice is difficult and consumes considerable resources (Kouzmin et al., 1999). The best practice that has been identified is then assessed on the same criteria used for the company's own assessment, and a comparison between the two sets of measurements is made. Deviations between the two (positive and negative) are expressed as percentages. More significantly, shortfalls in the performance of the assessor-company are identified, and the reasons for such shortfalls are established. Steps to raise the standards of the assessor-company, in order to match those of the firm being benchmarked, are then sought and implemented. Employees need to understand the concept and implementation of benchmarking, so the new standards established from the exercise have to be explained to them.

The literature groups the foregoing process into four phases (e.g., Longbottom, 2000): planning, analysis, implementation (set goals, communicate), and review (iterate the previous steps). The benchmarking process is maintained on a continuous basis, and is carried out for various aspects.
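The comparison step described above can be sketched in a few lines. The KPI names and scores below are invented for illustration, and expressing each deviation as a percentage of the assessor-company's own score is one plausible convention; the sources cited do not fix a single formula.

```python
# Hypothetical KPI scores on a 1-5 Likert-type scale, as in Roest (1997).
own_scores  = {"safety": 3, "cost predictability": 4, "defects": 2}
best_scores = {"safety": 5, "cost predictability": 4, "defects": 4}  # 'best practice' firm

# Deviation of best practice from own performance, as a percentage of
# the assessor-company's own score (assumed convention).
deviations = {
    kpi: round((best_scores[kpi] - own) / own * 100, 1)
    for kpi, own in own_scores.items()
}

# Positive deviations are shortfalls: the areas where improvement steps
# would be sought and implemented.
shortfalls = {kpi: d for kpi, d in deviations.items() if d > 0}
print(deviations)
print("Act on:", sorted(shortfalls))
```

Here 'safety' shows a 66.7% shortfall and 'defects' a 100% shortfall, while 'cost predictability' already matches best practice, so only the first two would trigger improvement actions.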
In an industry where products or processes change very frequently, frequent iteration of the benchmarking process is crucial, as the 'best practice' keeps changing. If a bench-marker is to remain competitive, frequent review of benchmarking must remain paramount.

Brah et al. (2000) identified four dimensions of benchmarking:
(1) Internal, where one project is benchmarked against another within the portfolio of an organisation;
(2) Competitive, where an organisation benchmarks its products or services against those of a competitor;
(3) Functional, based on non-competitors carrying out the same functional activity; and
(4) Generic, where an organisation benchmarks its services or products against those of others irrespective of industry or country.

Adopting Brah et al.'s (2000) categorisation, this report looks at the generic benchmark.
As a concept, benchmarking is simple, but as a practice, it can be difficult to implement. Empirical and theoretical evidence suggests practical downsides of benchmarking, especially pertaining to construction engineering. Notable downsides of benchmarking in construction include those identified by McHugh et al. (1995), (1997), Hinton et al. (2000) and Sommerville and Robertson (2000):
- Identification of suitable partners.
- Obtaining comparable data can be quite difficult, especially as construction products tend to be unique at the micro level.
- Resistance to change by the staff who would effect the new standards.
- Resources for benchmarking (qualified staff, time, etc.) may be limited.
- Eliciting full co-operation from 'best practice' firms may be futile, especially where confidential information is involved.
- Intangibles may not be measured accurately.
- Organisational instability or collapse due to workload variations can erode the basis of a comparison.
- The creation of new networks (relationships) re-invents the wheel, and fails to build on experience gained from previous projects.
Earlier benchmarking studies have involved the steel making, grocery retailing and offshore engineering industries/sectors. Against this background, construction industry firms were provided with a list of nine industries and asked to identify the one they consider world-class in terms of performance, against which construction should be benchmarked. The list, which can be expanded, comprises the healthcare, automobile, manufacturing, aviation, electronics, retailing, services, major construction and petrochemical industries. The table below shows the three industries that firms consider to be 'world class' and against which construction should be benchmarked: the aviation industry (42.4%), the electronics industry (24.2%) and the automobile industry (18.2%).
Table: World-class industries in terms of quality of products and services

    Industry                Frequency    Percent
    Aviation industry           14        42.4
    Electronics industry         8        24.2
    Automobile industry          6        18.2
    Others                       5        15.1
    Total                       33       100.0
'Others' comprises major construction firms (9.1%), the retail industry (3.0%) and pharmaceuticals (3.0%).
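The table's percentages can be recomputed from the raw frequencies as a quick consistency check. Note that 5/33 is 15.15...%, so standard rounding gives 15.2 for 'Others' where the source table reports 15.1 (the original appears to have truncated, and the listed percentages sum to 99.9 rather than 100.0).

```python
# Recompute the survey percentages (n = 33) from the frequencies
# given in the table above.
frequencies = {
    "Aviation industry": 14,
    "Electronics industry": 8,
    "Automobile industry": 6,
    "Others": 5,
}

total = sum(frequencies.values())
percentages = {k: round(v / total * 100, 1) for k, v in frequencies.items()}
print(total, percentages)
```

The recomputed values match the table for aviation (42.4), electronics (24.2) and automobile (18.2); only the 'Others' row differs by the rounding artefact noted above.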
CONCLUSION
Benchmarking parameters include the measurement of processes and other intermediate factors present in projects. The practical use of benchmarking would inform the industry about the causes of results and allow a better understanding of the reasons that lead to better or worse performance. The application of the model would allow identification of the processes with the greatest impact on project performance, and of the best practices required in those key processes. In addition, the implementation of a database with information on project performance can provide a very important information source for future research in different areas of construction.
BIBLIOGRAPHY:
www.scribd.com
www.leanconstruction.dk
E-book: Construction Innovation & Global Competitiveness, chapter "Construction Industry Benchmark for Key Performance Indicators".