UC 3
LEARNING OUTCOMES:
Introduction
This module covers the knowledge, attitudes and skills required to conduct technical consultations, provide recommendations and solutions for technical problems and operating procedures, improve the performance of operation and maintenance services, and propose guidelines and a systematic approach to maintenance practices within the organization, so as to enhance productivity and the smooth operation of the industry.
The aims of this unit of competency are:
• How to identify a workplace hazard
• How to assess the risk of the hazard occurring
• How to implement measures for controlling hazards
• The role and responsibilities of health and safety representatives and committees
• The importance of consultation in the workplace.
LO1: Conduct Inspection
1.1 Conducting Consultation and Discussion
Consultation is the action or process of formally consulting or discussing.
There are different types of consultation:
1. Ethical consultation
2. Technical consultation
3. Professional consultation
Inspection is, most generally, an organized examination or formal evaluation exercise. In engineering activities, inspection involves the measurements, tests, and gauges applied to certain characteristics of an object or activity.
It is a critical appraisal involving examination, measurement, testing, gauging, and comparison of materials or items. An inspection determines whether the material or item is in the proper quantity and condition, and whether it conforms to the applicable or specified requirements.
Safety Representatives and Consultation
Electrical contractors.
From electrical breakdowns to periodic inspections, our packages offer a professional and flexible solution to
suit your requirements.
I. Qualified and accredited electricians
II. Electrical installations
III. Installations can include:
• Fuse board changes
• Customized and efficient installations: we design new ring or radial circuits around what your installation will be powering
• The best security lighting solutions for your specific requirements
• Save up to 80% on your lighting energy costs and reduce your carbon footprint with high-efficiency LED lighting. These lighting systems offer brilliant luminaire reliability combined with style and, of course, efficiency. You can also take advantage of tax saving benefits (all works)
• All electrical plant installation found in a boiler room
• Electric showers
• Controls wiring, i.e. heating and hot water controls.
Electrical breakdowns
Our efficient and expert electricians are committed to solving emergency breakdowns rapidly and avoiding any major inconvenience.
We offer a 24-hour call-out service for contract customers, whether it's a broken light fitting or you have completely lost all your power, so you never have to be without electricity.
Did you know that if you are a home owner you have a responsibility to have your property checked for
electrical vulnerability? Are you aware that if you have a shock or a fire caused by an electrical fault and you
don't hold an inspection certificate, your insurer may refuse to cover any damage?
Electrical testing and periodic inspections will highlight any potential problem, helping you to reduce costs
and give you complete peace of mind.
Our electricians can check your electrical installations, from extensive periodic inspection reports to minor
testing and certifications, depending on your needs.
Employees may select and appoint a safety representative or, by agreement with their employer, more than one safety representative to represent them in consultations with the employer on matters of safety, health and welfare at the place of work.
Special consideration may need to be given to those situations where the employees spend most of their
working time away from the nominal place of work, e.g. care workers, goods delivery depots and local
authority service yards.
To gain most benefit from knowledge acquired and training received during the period, a term of office of
about three years seems appropriate. There should, however, be provision for review by the employees,
perhaps on an annual basis.
A safety representative may consult with, and make representations to, the employer on safety, health and
welfare matters relating to the employees in the place of work. The employer must consider these
representations, and act on them if necessary. Consultations would be particularly important when changes
are taking place, for example when drawing up a safety plan, or introducing new technology or work
processes, including new substances. They also have a part to play in long established work practices and
hazards.
A safety representative, after giving reasonable notice to the employer, has the right to inspect the whole or
part of a workplace he or she represents, at a frequency or on a schedule agreed between him/her and the
employer, based on the nature and extent of the hazards in the place of work. Factors that should be
considered when deciding the frequency of inspections include:
• size of workplace
• nature and range of work activities and work locations
• nature and range of hazards and risks
• changing hazards and risks
Inspections can take various forms, which can be used either separately or in any combination. Common types of inspection are described later in this unit.
A safety representative may investigate accidents and dangerous occurrences in the place of work to find out the causes and help identify any remedial or preventive measures necessary. However, a safety representative must not interfere with anything at the scene of the accident.
What kind of information must the employer and/or inspector give to the safety representative?
There is a duty on an employer to provide "information, instruction, training and supervision necessary to
ensure, so far as is reasonably practicable, the safety, health and welfare at work of his or her employees"
(including safety representatives). The type of information provided will vary according to the hazards and
risks involved.
Safety representatives must have access to:
• information on Risk Assessments prepared under Section 19 of the 2005 Act
• information on reportable accidents, occupational illnesses and dangerous occurrences
• information resulting from experience of applying protective and preventive measures
• whenever an employer writes to a Health and Safety Authority inspector to confirm compliance with an Improvement or Prohibition Notice served upon him or her, the employer must copy this confirmation to the safety representative
• relevant technical information about hazards, risks and precautions connected with articles or substances used in the workplace they represent. Such information would include Safety Data Sheets, relevant instruction manuals, or information, including revisions, supplied by a designer, manufacturer, importer or supplier about any article or substance which is under review from a safety and health perspective
• adequate information about the workplace, the systems of work, and any changes in either, that would affect existing risks or precautions
• reports relating to occupational safety, health and welfare commissioned by the employer relating to the workplace
• information on occupational accidents and ill health at the place of work and collective data on the results of any relevant health assessments carried out
• any necessary information about appropriate precautions, safeguards and measures to be taken in emergencies, including the names of employees with designated emergency duties, which are currently in place or which should be provided to minimize the risks to safety and health arising from hazards at work.
Are there any limitations/exemptions to the information that the employer must give to the safety
representative?
It is in the employer's interest to ensure that safety representatives are supplied with all the relevant information. However, there are limited exceptions under which the employer need not provide certain information.
The confidentiality rules that apply to any workplace will apply to any information provided to safety
representatives under the 2005 Act.
What is the difference between the information provided by the employer and information supplied by
an Inspector?
The employer has a duty to provide the kind of information necessary for safety and health at work, whereas
the inspector would be expected to supply information that the employer would not be in a position to
supply, e.g. results of measurements, sampling or assessment carried out by the inspector.
Must safety representatives receive training?
Yes. It is essential that safety representatives have the knowledge and skills necessary to perform their
function effectively. Training courses for safety representatives are provided by trade unions and other
organizations.
The Safety Representatives and Safety Consultation Guidelines detail the course content for training safety representatives and safety committee members; ten elements are to be included in this training.
Regarding consultation, employee participation and safety committees, must employers make
employee consultation and participation arrangements?
Employers, for the purpose of promoting and developing measures to ensure safety, health and welfare, must
consult their employees in establishing arrangements for securing co-operation in the workplace on safety,
health and welfare. These arrangements will allow employees to be consulted on the steps taken to safeguard
their safety, health and welfare and on measures to check how effective the safeguards have been.
Consultation must be made in advance and in good time so as to allow employees time to consider, discuss
and give an opinion on the matters before managerial decisions are made. The difference between the
provision of information and consultation should be noted. Consultation with employees involves listening to
their views and taking them into account as part of the decision making process.
What should health and safety consultation cover?
Employers must consult in advance and in good time on anything carried out in the workplace, which can
have a substantial effect on safety and health. Any type of work activity already covered by safety and health
law is valid for discussion. Consultation must cover:
• any risk protection and prevention measures
• the appointment and the duties of staff with safety and health responsibilities
The employer must also provide persons in its organization who have training and information
responsibilities with all available information necessary to enable them to fulfill those responsibilities.
Regardless of the type of consultation arrangement introduced in an organisation, it must be agreed upon by
both the employees and management.
Occupational health and safety committees
An OHS Committee can be formed where:
o there are 20 or more employees and the majority request it
o a WorkCover inspector directs it
o the business decides it would be useful to have one.
The Committee membership must contain a balance of employer and employee representatives, and the number of employer representatives must not exceed the number of employee representatives. The term for a committee is two years.
Occupational health and safety representatives
OHS representatives could be useful for small businesses, or where there are several work locations, and can be appointed when:
o at least one employee requests it
o WorkCover directs it
o the business believes it would be appropriate.
A representative's term is two years.
OHS committees and representatives have the following responsibilities:
1. Keep under review the measures taken to ensure the health, safety and welfare of persons at the place
of work
2. Investigate any matter that poses a risk
Comprehensive. A substantially complete inspection of the potentially high-hazard areas of the establishment. An inspection may be deemed comprehensive even though, as a result of the exercise of professional judgment, not all potentially hazardous conditions, operations and practices within those areas are inspected.
Conduct of the Inspection
a. Time of Inspection. Inspections shall be made during regular working hours of the establishment except when special circumstances indicate otherwise. The Assistant Area Director and CSHO shall confer with regard to entry during other than normal working hours.
b. Presenting Credentials.
(1) At the beginning of the inspection the CSHO shall locate the owner representative, operator or agent in charge at the workplace and present credentials. On construction sites this will most often be the representative of the general contractor.
(2) When neither the person in charge nor a management official is present, contact may be made with the employer to request the presence of the owner, operator or management official. The inspection shall not be delayed unreasonably to await the arrival of the employer representative.
Follow-up Files. The follow-up inspection reports shall be included with the original (parent) case file.
Employer Worksite
General. Inspections of employers in the construction industry are not easily separable into distinct worksites. The worksite is generally the site where the construction is being performed (e.g., the building site, the dam site). Where the construction site extends over a large geographical area (e.g., road building), the entire job will be considered a single worksite. In cases where such large geographical areas overlap between Area Offices, generally only operations of the employer within the jurisdiction of an Area Office will be considered as the worksite of the employer.
Workplace inspections are an essential component of your prevention program. The process involves carefully examining work stations on a regular basis with a view to:
• identifying and recording actual and potential hazards posed by buildings, equipment, the environment, processes and practices;
• recording any hazards requiring immediate attention;
• determining whether existing hazard controls are adequate and operational;
• recommending corrective action where appropriate.
• Spot inspections are carried out on occasion in order to meet a range of responsibilities with respect to workplace health and safety. They focus on a specific hazard associated with a specific work station or work area, for example, noise made by a shredder, operation of a pump, pressure from a boiler or exposure to a solvent.
• Pre-operation inspections of special equipment and processes are often required before starting the work itself, such as equipment checks before working under water or entering a confined space.
• Critical parts inspections are regular inspections of the critical parts of a machine, piece of equipment or a system that have a high potential for serious accidents. These inspections are often part of a preventive maintenance program or hazard control program. Checklists can be used for forklifts, tractor semi-trailers and aircraft, for example.
• New equipment inspections involve a series of specific tests and checks that are carried out before starting up any new piece of equipment. This means that prior to starting to operate a recently acquired air compressor, the manufacturer or installer checks to ensure that all the parts are in the right place and are working properly.
• Routine inspections are inspections carried out on a regular basis in a given work area. They cover all working conditions, including work hazards, processes and practices.
This module covers mainly routine, regular and planned inspections. However, the principles that apply in these types of inspections can easily be adapted to other types of inspections.
1.4 Identifying and Analyzing Technical Problems
To create a systematic solution for a technical problem, consider the following steps (a code sketch of this workflow follows the list):
1. Inspect the device
2. Identify the problem
3. Select the technical problem to address
4. Analyze the problem
5. Get an appropriate solution
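As a rough illustration only (this sketch is not part of the original module, and every device field, fault and remedy in it is a hypothetical placeholder), the five steps can be expressed as a simple troubleshooting routine:

```python
# Sketch of the five-step troubleshooting workflow above.
# All device fields, faults and remedies are invented placeholders.

def diagnose(device):
    """Steps 1-2: inspect the device and identify candidate problems."""
    problems = []
    if not device.get("power_ok", True):
        problems.append("no power at input terminals")
    if device.get("fuse_blown", False):
        problems.append("blown fuse")
    if device.get("insulation_mohm", 999) < 1:
        problems.append("low insulation resistance")
    return problems

# Step 4's analysis is reduced here to a lookup from symptom to remedy.
REMEDIES = {
    "no power at input terminals": "check the supply and isolator",
    "blown fuse": "replace the fuse and find the overload that caused it",
    "low insulation resistance": "locate and repair the damaged insulation",
}

def solve(device):
    problems = diagnose(device)   # steps 1-2: inspect and identify
    if not problems:
        return "no fault found"
    problem = problems[0]         # step 3: select one problem to address
    return REMEDIES[problem]      # steps 4-5: analyze and get a solution

print(solve({"power_ok": True, "fuse_blown": True}))
# -> replace the fuse and find the overload that caused it
```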
1.5 Evaluation and Work Plans
The first step is to clarify the objectives and goals of your initiative. What are the main things you want to
accomplish, and how have you set out to accomplish them? Clarifying these will help you identify which
major program components should be evaluated. One way to do this is to make a table of program
components and elements.
For our purposes, there are four main categories of evaluation questions. Let's look at some examples of possible questions and suggested methods to answer those questions. Later on, we'll tell you a bit more about what these methods are and how they work.
• Planning and implementation issues: How well was the program or initiative planned out, and how well was that plan put into practice?
o Possible questions: Who participates? Is there diversity among participants? Why do
participants enter and leave your programs? Are there a variety of services and alternative
activities generated? Do those most in need of help receive services? Are community
members satisfied that the program meets local needs?
o Possible methods to answer those questions: monitoring system that tracks actions and
accomplishments related to bringing about the mission of the initiative, member survey of
satisfaction with goals, member survey of satisfaction with outcomes.
• Assessing attainment of objectives: How well has the program or initiative met its stated objectives?
o Possible questions: How many people participate? How many hours are participants
involved?
o Possible methods to answer those questions: monitoring system (see above), member survey
of satisfaction with outcomes, goal attainment scaling.
• Impact on participants: How much and what kind of a difference has the program or initiative made for its targets of change?
o Possible questions: How has behavior changed as a result of participation in the program? Are
participants satisfied with the experience? Were there any negative results from participation
in the program?
o Possible methods to answer those questions: member survey of satisfaction with goals,
member survey of satisfaction with outcomes, behavioral surveys, interviews with key
participants.
• Impact on the community: How much and what kind of a difference has the program or initiative made on the community as a whole?
o Possible questions: What resulted from the program? Were there any negative results from the
program? Do the benefits of the program outweigh the costs?
o Possible methods to answer those questions: Behavioral surveys, interviews with key
informants, community-level indicators.
Once you've come up with the questions you want to answer in your evaluation, the next step is to decide
which methods will best address those questions. Here is a brief overview of some common evaluation
methods and what they work best for.
• Process measures: these tell you about what you did to implement your initiative;
• Outcome measures: these tell you about what the results were; and
• Observational system: this is whatever you do to keep track of the initiative while it's happening.
When should you start thinking about evaluation? Right now! Or at least at the beginning of the initiative! Evaluation isn't something you should wait to think
about until after everything else has been done. To get an accurate, clear picture of what your group has been
doing and how well you've been doing it, it's important to start paying attention to evaluation from the very
start. If you're already part of the way into your initiative, however, don't scrap the idea of evaluation
altogether--even if you start late, you can still gather information that could prove very useful to you in
improving your initiative.
An evaluation plan can be laid out as a table with a column for each of the following:
• Key evaluation questions (the four categories listed above, with more specific questions within each category)
• Type of evaluation measures to be used to answer them (i.e., what kind of data you will need to answer the question)
• Type of data collection (i.e., what evaluation methods you will use to collect this data)
• Experimental design (a way of ruling out threats to the validity - e.g., believability - of your data. This would include comparing the information you collect to a similar group that is not doing things exactly the way you are doing things.)
With this table, you can get a good overview of what sort of things you'll have to do in order to get the
information you need.
Evaluation: the making of a judgment about the amount, number, or value of something; assessment.
It is a systematic determination of a subject's merit, worth and significance, using criteria governed by a set
of standards. It can assist an organization, program, project or any other intervention or initiative to assess
any aim, realizable concept/proposal, or any alternative, to help in decision-making; or to ascertain the
degree of achievement or value in regard to the aim and objectives and results of any such action that has
been completed.
Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project,
program or policy, both the intended ones, as well as ideally the unintended ones. In contrast to outcome
monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer
the question: how would outcomes such as participants’ well-being have changed if the intervention had not
been undertaken? This involves counterfactual analysis, that is, “a comparison between what actually
happened and what would have happened in the absence of the intervention.” Impact evaluations seek to
answer cause-and-effect questions. In other words, they look for the changes in outcome that are directly
attributable to a program.
The primary goals of a performance evaluation system are to provide an equitable measurement of an
employee’s contribution to the workforce, produce accurate appraisal documentation to protect both the
employee and employer, and obtain a high level of quality and quantity in the work produced. To create a
performance evaluation system in your practice, follow these five steps:
1. Develop an evaluation form.
2. Identify performance measures.
3. Set guidelines for feedback.
4. Create disciplinary and termination procedures.
5. Set an evaluation schedule.
Performance evaluations should be conducted fairly, consistently and objectively to protect your employees’
interests and to protect your practice from legal liability. One way to ensure consistency is to use a standard
evaluation form for each evaluation. The form you use should focus only on the essential job performance
areas. Limiting these areas of focus makes the assessment more meaningful and relevant and allows you and
the employee to address the issues that matter most. You don’t need to cover every detail of an employee’s
performance in an evaluation.
For most staff positions, the job performance areas that should be included on a performance evaluation form
are job knowledge and skills, quality of work, quantity of work, work habits and attitude. In each area, the
appraiser should have a range of descriptors to choose from (e.g., far below requirements, below
requirements, meets requirements, exceeds requirements, far exceeds requirements). Depending on how
specific the descriptors are, it’s often important that the appraiser also have space on the form to provide the
reasoning behind his or her rating.
Performance evaluations for those in management positions should assess more than just the essential job
performance areas mentioned above. They should also assess the employee’s people skills, ability to
motivate and provide direction, overall communication skills and ability to build teams and solve problems.
You should have either a separate evaluation form for managers or a special managerial section added to your standard evaluation form.
Standard performance measures, which allow you to evaluate an employee’s job performance objectively,
can cut down on the amount of time and stress involved in filling out the evaluation form. Although
developing these measures can be one of the more time-consuming parts of creating a performance
evaluation system, it’s also one of the most powerful.
If you have current job descriptions for each position in your practice, you’ve already taken the first step
toward creating standard performance measures, which are essentially specific quantity and quality goals
attached to the tasks listed in a job description. A job description alone can serve as a measurement tool
during an evaluation if, for example, you’re assessing whether an employee’s skills match the requirements
of the position. But standard performance measures take the job description one step further. For example,
one task listed in a receptionist’s job description might be entering new and updated patient registrations into
the computer. The standard performance measure for that task might be to enter 6 to 12 registrations per day
(quantity) with an error rate of less than 2 percent (quality).
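As a small illustration (the thresholds come straight from the receptionist example above; the function itself is our own sketch, not part of the original text), such a measure can be written as an explicit pass/fail check:

```python
# Hypothetical check of the receptionist standard described above:
# 6-12 registrations per day (quantity), error rate below 2% (quality).

def meets_standard(registrations_per_day, errors, total_entries):
    quantity_ok = 6 <= registrations_per_day <= 12
    quality_ok = (errors / total_entries) < 0.02
    return quantity_ok and quality_ok

print(meets_standard(registrations_per_day=9, errors=1, total_entries=180))  # True
print(meets_standard(registrations_per_day=4, errors=1, total_entries=80))   # False: too few entered
```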
Standard performance measures can even objectively measure some of the more subjective job performance
areas, such as work habits. For example, you can establish an objective measure for attendance by defining
the acceptable number of times an employee can be tardy or absent during a specific time frame.
However, standard performance measures don’t always work for other subjective areas, such as attitude. In
these cases, it’s still important to be as objective as possible in your evaluation. Don’t attempt to describe
attitude, for instance; instead, describe the employee’s behavior, which is what conveys the attitude, and the
consequences of that behavior for the practice. For example: “This employee has failed to support her co-workers. When another member of her department is absent, she refuses to take on the additional tasks
required to process patients in a timely manner. This behavior causes patient backlog, places a burden on
staff and compromises effective teamwork.”
To begin developing standard performance measures in your practice, review the job descriptions for each
position and select the key components of the job that can be specifically measured. Then, work with the
employees in each position to gather quantitative data, examine historical patterns of volume and determine
qualitative measurements that reflect the practice’s mission and goals. Depending on how large your practice
is and how many positions need standard performance measures, you may want to select a committee to
develop them. Then, with help from the employees in each position, the supervisors should maintain them.
It’s important to keep job descriptions and standard performance measures as current as possible. Otherwise,
when an employee doesn’t measure up to the standards you’ve set, you can’t be sure whether he or she has a
performance problem or whether your expectations of the position have become unrealistic based on
increased volume or a change in circumstances.
If your practice’s pay increases are based on merit, it may be appropriate and efficient to review an
employee’s salary at the time of the performance evaluation. Such a direct link between performance and pay
could make you and your employees take the performance evaluations even more seriously than you might
have otherwise. However, if your pay increases are based only partially on merit and partially on annual
changes in the Consumer Price Index, it may not be quite as easy to review and change individual salaries at
various times during the year.
Whether you plan to include a review of the employee’s salary during each performance evaluation should be
communicated to all employees verbally and in writing when they are hired. It is important that employees
understand this so that their expectations are realistic and they are not disappointed.
Feedback is what performance evaluations are all about. So before you implement your performance
evaluation system, make sure that everyone who will be conducting evaluations knows what kind of
feedback to give, how to give it and how to get it from the employee in return.
Encourage feedback from the employee. After you’ve discussed the results of the evaluation with the
employee, encourage him or her to give you some nondefensive feedback. Ask the employee whether he or
she agrees with your assessment, and/or invite suggestions for improvement. For example: “You seem to
become impatient and short with patients when the physician is running late. Since there are times when
running late cannot be avoided, how do you suggest we handle this to avoid such a reaction?” This should
lead to an open exchange of information that will allow you and the employee to better understand each
other’s perspective.
In some cases, even after a thorough performance evaluation and a discussion of expected improvements, an
employee will continue to perform poorly. You need to be prepared to handle such a situation by having
well-defined, written disciplinary and termination procedures in place. These procedures should outline the
actions that will be taken when performance deteriorates – a verbal warning, a written warning if there is no
improvement or a recurrence, and termination if the situation is not ultimately resolved.
Verbal warning. This should be given in private, with the behavior or reason for the discipline clearly stated.
For example: “I observed you talking disrespectfully to another employee at the front desk. You said she was
brain-dead and tossed a chart at her. We will not tolerate disrespect in the workplace. Furthermore, this
outburst could be overheard from the reception room. If this occurs again, a report will be written up and
placed in your file. Do you understand the importance of this?” After the verbal warning is given, allow the
employee to respond, but keep the exchange brief.
Written warning. How you handle the written warning plays a critical role in the success of your disciplinary
and termination procedures. This is the time to make it clear to the employee just how serious his or her
performance problem is. Unfortunately, many practices fail to do this and/or to follow through with
termination if necessary. Once the written warning is mishandled in this way, it no longer has any merit. A standard written warning form should be used.
Termination. Explain the reason for the termination but do so briefly and objectively to avoid getting into an
elaborate discussion that puts you in a defensive position. Validate the employee as a person, perhaps by
giving a positive slant to the employee’s potential in the job market. For example, although an employee
might have been a poor file clerk for you because he or she didn’t pay attention to detail, the employee may
have a friendly personality that would make him or her a good telephone operator. Also, let the employee
know what will become of any accrued vacation or sick leave, pension benefits, etc. Know your state’s laws
on these issues. Finally, ask if the employee has any further questions and then assist the employee in
retrieving all of his or her belongings and leaving with as much dignity as possible. If you handle termination
well, you are less likely to have an employee who wants to “get even” by badmouthing you in the
community or seeking legal revenge.
Once you’ve built your performance evaluation system – the evaluation form, the performance measures, the
feedback guidelines and the disciplinary procedures – you just need to decide when to conduct the
performance evaluations. Some practices do all employee evaluations at the same time of year, while others
conduct them within 30 days of each employee’s anniversary of employment (the latter may work better
since it spreads the work of the evaluations out for employer and employee). However you decide to
schedule the evaluations, ensure that each appraiser consistently meets the deadline. Ignoring employees’
overdue evaluations will make them feel devalued and may hurt morale and performance.
2.2 Evaluating and Creating a Systematic Solution/Remedy
• Does the systematic review include a description of how the validity of individual studies was assessed?
• Were the results consistent from study to study?
• Were individual patient data or aggregate data used in the analysis?
• Is my organization so different from those in the study that the results don't apply?
• Is the treatment feasible in our setting?
• Were all important outcomes (harms as well as benefits) considered?
• What are my organization's values and preferences for both the outcome we are trying to prevent and the side effects that may arise?
1. Lack of integration
Performance management has to be approached from an integrated perspective. Synergy has to be created
between the performance management system and strategic planning, human resource management
processes, organizational culture, structure and all other major organisational systems and processes.
Individual, team and organizational strategic objectives must be harmonized. Without integration, no
performance management system can succeed on its own, no matter how good the performance management
system may be.
2. Design challenges
The performance management system and tools must be designed to address the particular needs of
organizations. The design process should involve thorough consultation with major stakeholders and
especially with future users of the system. Consultation and interaction are necessary to build trust and
relationships with employees and relevant stakeholders. Trust is an absolute requirement for the success of
the performance management system. The new performance management system should be piloted and
thoroughly tested before it is applied in the organisation. Applying an incomplete system leads to loss of
credibility, time, financial and human resources, and increases resistance to change and low acceptance of
the new performance management system.
People involved in the design of the system must have expertise in performance management and an
understanding of the institution’s context. Overreliance on external consultants might be an expensive way of
developing the system, which often has additional negative consequences of dependency and lack of
ownership of the new performance management system.
3. Lack of leadership commitment
The implementation of the performance management system has to be supported and driven by top leadership and management. Leadership has to be committed to implementing the performance management
system. Leaders should be encouraged to develop the capacity to create a shared vision, inspire staff and
build a performance management system that drives the entire organization towards a common purpose.
Organizations with the best performance management results have strong value and vision-driven leaders at
the top who inspire people, communicate the vision, take risks, and provide support and rewards.
4. Implementation failure
The change management aspect of performance management should be managed strategically. The
organization’s top leadership must drive the change process. Resistance to change should be managed
proactively. A communication process should be put in place which will explain the benefits of the
performance management system, communicate progress with the implementation and reduce uncertainties,
fears and anxieties. Managers must be encouraged to engage in careful, systematic and professional planning
and implementation of the performance management system. Implementation time frames must be respected.
All documentation and forms must be completed properly and professionally, especially performance
agreements and personal development plans. Mechanisms must be put in place to ensure the objectivity of
performance ratings and judgments, and to reduce favoritism and bias. Performance management should be
a continuous process and not an activity conducted once or twice a year. Performance feedback should be
timely and continuous. A rewards system, comprising both monetary and nonmonetary rewards, should be
developed to reward high performers. Mechanisms must be put in place to deal with nonperformers.
5. Incompetence
All those involved in the performance management system must possess appropriate knowledge, attitudes
and skills to utilize the system. The following are major skills required:
x Development of performance indicators, key results areas, core management competencies and
performance agreements
x Measurement of performance indicators
x Communication of results and feedback
x Monitoring and evaluation of the performance management system.
Proactive training and development interventions should be implemented to ensure that the users of the
performance management system are continuously developed. Special emphasis should be given to soft skills
and the behavioural aspects of performance.
6. Lack of rewards
A reward system that rewards high performance and discourages low and mediocre performance must be put
in place. A comprehensive and holistic reward system, which includes various rewards such as financial
rewards, public acknowledgments, merit awards, promotions, greater work responsibilities, learning and
study opportunities, should be developed and communicated to staff. Much greater emphasis must be given
to non-monetary rewards. Mechanisms must be put in place to take corrective action against low performers.
7. Communication challenges
A proactive communication strategy and process must be followed throughout the implementation of the
performance management system. In the planning and design phases, good communication will enable buy-
in from the major stakeholders. In the implementation phase, good communication will assist with managing
resistance to change and building positive momentum. In the monitoring and evaluation phase, good
communication will assist with learning and reinforcing achievements gained. Users of the system must be
trained to communicate professionally and developmentally during the process of conducting performance
appraisals and when communicating outcomes and feedback. Communication is one of the most critical
success factors of the entire performance management system. Effective communication requires the
provision of relevant information, ensures buy-in from the users of the system, reduces fears and anxieties,
reduces resistance to change, and generates commitment to the system.
8. Inspiration challenges
The organizations must ensure high levels of staff inspiration. This requires a systematic approach to
addressing the challenges of staff inspiration. It requires continuous investment in human resources. Staff
motivation should not be left unmanaged. If it is left unmanaged, staff motivation naturally deteriorates.
Programmes are required to ensure high levels of staff motivation and commitment to the organisational
vision, which may include a variety of activities such as team building, strategic planning, family picnics,
internal competitions and awards, learning and development opportunities, behavioral change exercises,
attitude change activities, sport activities, and similar. These programmes must be proactive, continuous and
have a long-term focus on ensuring sustainable levels of staff motivation.
In addition to direct staff motivation programmes, organizations must build an enabling organizational
environment for staff motivation. Organizational development interventions must be implemented
continuously in order to ensure high levels of staff motivation in a sustainable manner. Special emphasis
must be given to culture change programmes to ensure that the organizational culture is progressive and
developmental. Issues of the objectivity of performance ratings, fairness and equity should be addressed –
otherwise staff motivation is compromised.
Organizational processes should be streamlined, simplified and made user-friendly to motivate staff and not
to demotivate them with red-tape and bureaucratic procedures. Proactive communication processes must be
put in place to ensure that information is continuously communicated to the right people. Effective
communication reduces fear and uncertainties and prevents wrong assumptions, gossip, and politics.
Performance feedback should be given timeously and continuously and not once or twice a year following
the performance appraisal process.
Human resource management and development policies, strategies and activities should be proactive and
developmental. They should be designed and implemented to attract, nurture, develop and retain the best
staff. In addition to the development of intellectual capabilities and technical skills, training and
development interventions should emphasize the development of emotional and spiritual intelligence. A
comprehensive reward system should be implemented, comprising monetary and nonmonetary rewards, to
ensure high levels of staff motivation on a sustainable basis. A reward system should be designed in such a
way that it encourages excellence, discourages mediocrity and addresses non-performance.
Leadership plays a crucial role with regard to staff motivation. It is the main responsibility of a leader to
inspire staff, to ensure that obstacles to staff motivation are removed and to generate their passion and
commitment to the organisational mission. High motivation generally leads to high performance. Without
motivated staff, no performance management system can be successful, irrespective of how well the system
is developed and how sophisticated performance documents, forms and agreements are.
9. Lack of monitoring
The evaluation process must be conducted at regular intervals to enable the detection of problems at an early
stage. The problems identified should be fed back to the design phase. This will ensure that prompt
corrective action is taken to address the identified problems. In order to ensure the integrity of the evaluation
process, it is advisable that an independent party conducts the evaluation process. In order to be successful,
the performance management system must be continuously evaluated and improved.
The system below on Integrated Performance Management has been developed based on the identification of
major performance management problems, weaknesses and challenges. The system addresses these problems
in an integrated manner and provides long-term solutions. The solutions are based on practical
recommendations from performance management practitioners. They are underpinned by strong theoretical
foundations informed by leading local and international performance management scholars, experts and
consultants.
Based on the above, an integrated performance management system is presented in the following
diagram.
Recommendation: a suggestion or proposal as to the best course of action, especially one put forward by an authoritative body.
Technical recommendation: a recommendation concerning the best course of action on a technical matter.
Recognition: the action or process of recognizing or being recognized.
Contact us today for a free consultation and our qualified team will discuss your specific needs with you as
soon as possible.
3.1 Established OH&S and Risk Control Measures and Procedures
This procedure is made under the Occupational Health and Safety Policy.
SCOPE
This procedure applies to all staff, students, contractors and other personnel at workplaces under the
management or control of the University of Melbourne.
PROCEDURE
The University must:
• identify OHS hazards, including public safety hazards, that are associated with the activities, processes, products and services under the management and control of the University
• assess the OHS risks involved
• implement suitable control measures to ensure OHS risks are eliminated, or else controlled and monitored, in accordance with the hierarchy of risk control and legal requirements.
Risk assessments
OHS risk assessments must be carried out:
• before new or altered systems of work are established
• before new plant and equipment or regulated plant is acquired
• before new chemicals and substances are acquired
• before plant and equipment and regulated plant are manufactured
• before buildings are acquired or leased
• before businesses or operational entities are established or acquired
• when hazards are identified in the workplace, including when incidents have occurred
• when the work environment is altered (for instance, refurbishment or a new building)
• when new information about workplace risks becomes available
• when responding to concerns raised by workers, health and safety representatives (HSRs) or others at the workplace
• when required by legislation for specific hazards.
OHS risk assessments and risk control plans are documented in OHS risk registers.
The Director, OHS and Injury Management is responsible for maintaining and publishing the University-wide OHS risk register.
Heads of budget division must ensure that a budget division OHS risk register is developed and
maintained.
Budget division OHS risk registers must incorporate risks identified from budget divisional risk
assessments and relevant risks from the organization-wide OHS risk register. Budget division OHS risk
registers must record:
• activity
• associated hazards/risks
• raw risk score (the risk assessed before risk treatment)
• legislation, standards and guidance
• organization policies/procedures
• controls
• residual risk score (the risk remaining after implementation of risk treatment).
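Raw and residual scores in a register like this are commonly derived from a likelihood × severity matrix. The sketch below is illustrative only; the 5-point scales, the rating bands and the example entry are assumptions, not part of the University procedure:

```python
# Hypothetical raw/residual risk scoring for one OHS risk register entry,
# using an assumed 5x5 likelihood x severity matrix (scores 1-25).

def risk_score(likelihood, severity):
    """Both inputs on a 1 (lowest) to 5 (highest) scale."""
    return likelihood * severity

def rating(score):
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

entry = {
    "activity": "manual handling of transformer cores",
    "hazard": "back injury",
    "raw": risk_score(likelihood=4, severity=4),       # before risk treatment
    "residual": risk_score(likelihood=2, severity=3),  # after lifting aids and training
}

print(entry["raw"], rating(entry["raw"]))            # 16 high
print(entry["residual"], rating(entry["residual"]))  # 6 low
```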
Supervisors and managers must encourage all personnel, including University employees, students,
contractors and visitors, to report any hazards they identify.
Where reasonably practicable, supervisors and managers must consult with HSRs and University employees
on hazard identification, risk assessment and control processes. Supervisors/managers must proactively
identify hazards in their workplace, including through the use of:
• workplace inspections
• hazard reports and Incident Report (S3) forms
• audit reports (internal or external)
• formal risk assessment reports.
When a hazard is identified, the supervisor/ manager must assess the risk and implement a control plan using
the OHS risk assessment methodology. Where reasonably practicable, this should be completed in
consultation with HSRs and affected University employees.
The supervisor or manager must ensure that the controls implemented are reviewed and their effectiveness
monitored.
The supervisor or manager must ensure that a record of the identification, assessment and control process is
maintained.
Policies and procedures go hand-in-hand to clarify what your organization wants to do and how to do it.
Policies
Policies are clear, simple statements of how your organization intends to conduct its services, actions or
business. They provide a set of guiding principles to help with decision making.
Policies don't need to be long or complicated – a couple of sentences may be all you need for each policy
area.
Procedures
Procedures describe how each policy will be put into action in your organization. Each procedure should outline who will do what, the steps they need to take, and which forms or documents to use.
Procedures might just be a few bullet points or instructions. Sometimes they work well as forms, checklists, instructions or flowcharts.
Policies and their accompanying procedures will vary between workplaces because they reflect the values,
approaches and commitments of a specific organization and its culture. But they share the same role in
guiding your organisation.
Before you place a vacancy with a recruitment consultancy, it is a good idea to spend some time evaluating
exactly what you require from a candidate. A job specification is a detailed description of the role, including
all responsibilities, objectives, and requirements. A person specification is a profile of your ideal new
employee, including skills, experience, and personality type.
Writing a detailed specification forces you to think about exactly what skills and experience are required for
your role and the type of person you want for the team. Giving your recruitment consultant a comprehensive
brief will allow them to work more effectively and quickly in finding you the perfect candidate.
Specifications also give candidates a better idea of exactly what you are looking for. This can help to weed
out inappropriate applications from people who might be suitable on paper, but not actually that interested in
the role. They also help to manage the expectations of successful new employees and to avoid situations
where they feel they have been misled about the exact nature of the role.
You can use the specifications as a checklist for evaluating CVs and in interviews, which will save you
preparation time and make sure you don’t miss anything.
Writing a specification can make you think about how your department works and provide you with an
opportunity to shift responsibilities around to maximize efficiency.
Specifications are also useful after the vacancy has been filled, as they can help to assess a new recruit's
performance and to determine their future training needs.
Be as specific as possible about the responsibilities of the job, including any deadlines for delivery and
measurements of success.
Leave room for flexibility within the job specification, and make it obvious if the role is likely to change or
grow in the near future. This helps to avoid employees resenting taking on responsibilities not in their
original job description.
Be careful with your wording, e.g. is a qualification really required, or would someone who is Qualified by Experience (QBE) still be suitable?
It is essential not to discriminate on grounds of gender, age, ethnicity, sexuality, or health, so avoid any
inappropriate requirements, eg “must have x years’ experience” or words such as “dynamic” or “mature”.
If the role is involved with service delivery, you may want to ask a selection of your clients their opinions on
the type of person they would prefer to work with.
Taken together, the job and person specifications should include:
• the job title and the position in the company, including their line manager and any other members of staff reporting to them
• a summary of the general nature, main purpose, and objectives of the job
• the technical, organizational, communicative, and creative skills and abilities you expect from an ideal candidate
• the kind of personality that would fit in with your team, and with your organization's ethos
• character traits that are likely to help them to do the job effectively
In previous sections of this chapter, we’ve discussed studying the issue, deciding on a research design, and
creating an observational system for gathering information for your evaluation. Now it’s time to collect your
data and analyze it – figuring out what it means – so that you can use it to draw some conclusions about your
work. In this section, we’ll examine how to do just that.
Essentially, collecting data means putting your design for collecting information into operation. You’ve
decided how you’re going to get information – whether by direct observation, interviews, surveys,
experiments and testing, or other methods – and now you and/or other observers have to implement your
plan. There’s a bit more to collecting data, however. If you are conducting observations, for example, you’ll
have to define what you’re observing and arrange to make observations at the right times, so you actually
observe what you need to. You’ll have to record the observations in appropriate ways and organize them so
they’re optimally useful.
Recording and organizing data may take different forms, depending on the kind of information you’re
collecting. The way you collect your data should relate to how you’re planning to analyze and use
it. Regardless of what method you decide to use, recording should be done concurrent with data collection if
possible, or soon afterwards, so that nothing gets lost and memory doesn’t fade.
Some of the things you might do with the information you collect are described below.
There are two kinds of variables in research. An independent variable (the intervention) is a condition
implemented by the researcher or community to see if it will create change and improvement. This could be a
program, method, system, or other action. A dependent variable is what may change as a result of the
independent variable or intervention. A dependent variable could be a behavior, outcome, or other
condition. A smoking cessation program, for example, is an independent variable that may change group
members’ smoking behavior, the primary dependent variable.
Analyzing information involves examining it in ways that reveal the relationships, patterns, trends, etc. that
can be found within it. That may mean subjecting it to statistical operations that can tell you not only what
kinds of relationships seem to exist among variables, but also to what level you can trust the answers you’re
getting. It may mean comparing your information to that from other groups (a control or comparison group,
statewide figures, etc.), to help draw some conclusions from the data. The point, in terms of your evaluation,
is to get an accurate assessment in order to better understand your work and its effects on those you’re
concerned with, or in order to better understand the overall situation.
There are two kinds of data you’re apt to be working with, although not all evaluations will necessarily
include both. Quantitative data refer to the information that is collected as, or can be translated into,
numbers, which can then be displayed and analyzed mathematically. Qualitative data are collected as
descriptions, anecdotes, opinions, quotes, interpretations, etc., and are generally either not able to be reduced
to numbers, or are considered more valuable or informative if left as narratives. As you might expect,
quantitative and qualitative information needs to be analyzed differently.
Quantitative data
Quantitative data are typically collected directly as numbers. Some examples include:
Data can also be collected in forms other than numbers, and turned into quantitative data for
analysis. Researchers can count the number of times an event is documented in interviews or records, for
instance, or assign numbers to the levels of intensity of an observed event or behavior. For instance,
community initiatives often want to document the amount and intensity of environmental changes they bring
about – the new programs and policies that result from their efforts. Whether or not this kind of translation is
necessary or useful depends on the nature of what you’re observing and on the kinds of questions your
evaluation is meant to answer.
Qualitative data
Unlike numbers or “hard data,” qualitative information tends to be “soft,” meaning it can’t always be reduced
to something definite. That is in some ways a weakness, but it’s also a strength. A number may tell you how
well a student did on a test; the look on her face after seeing her grade, however, may tell you even more
about the effect of that result on her. That look can’t be translated to a number, nor can a teacher’s
knowledge of that student’s history, progress, and experience, all of which go into the teacher’s interpretation
of that look. And that interpretation may be far more valuable in helping that student succeed than knowing
her grade or numerical score on the test.
Qualitative data can sometimes be changed into numbers, usually by counting the number of times specific
things occur in the course of observations or interviews, or by assigning numbers or ratings to dimensions
(e.g., importance, satisfaction, ease of use).
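As a brief illustration (the 1-5 scale and the responses are invented for this sketch), assigning numbers to a rating dimension such as satisfaction lets qualitative responses be summarized numerically:

```python
# Hypothetical conversion of qualitative satisfaction ratings to numbers
# on an assumed 1-5 scale, followed by a simple numeric summary.

SCALE = {"very dissatisfied": 1, "dissatisfied": 2, "neutral": 3,
         "satisfied": 4, "very satisfied": 5}

responses = ["satisfied", "very satisfied", "neutral", "satisfied", "dissatisfied"]

scores = [SCALE[r] for r in responses]
mean = sum(scores) / len(scores)

print(scores)          # [4, 5, 3, 4, 2]
print(round(mean, 2))  # 3.6
```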
Qualitative data can sometimes tell you things that quantitative data can’t. It may reveal why certain
methods are working or not working, whether part of what you’re doing conflicts with participants’ culture,
what participants see as important, etc. It may also show you patterns – in behavior, physical or social
environment, or other factors – that the numbers in your quantitative data don’t, and occasionally even
identify variables that researchers weren’t aware of.
Quantitative analysis is considered to be objective – without any human bias attached to it – because it
depends on the comparison of numbers according to mathematical computations. Analysis of qualitative
data is generally accomplished by methods more subjective – dependent on people’s opinions, knowledge,
assumptions, and inferences (and therefore biases) – than that of quantitative data. The identification of
patterns, the interpretation of people’s statements or other communication, the spotting of trends – all of
these can be influenced by the way the researcher sees the world. Be aware, however, that quantitative
analysis is influenced by a number of subjective factors as well. What the researcher chooses to measure, the
accuracy of the observations, and the way the research is structured to ask only particular questions can all
influence the results, as can the researcher’s understanding and interpretation of the subsequent analyses.
Why should you collect and analyze data for your evaluation?
Part of the answer here is that not every organization – particularly small community-based or non-
governmental ones – will necessarily have extensive resources to conduct a formal evaluation. They may
have to be content with less formal evaluations, which can still be extremely helpful in providing direction for a program or intervention. Even an informal evaluation, however, involves some data gathering and analysis. This data collection and sense-making is critical to an initiative and its future success, and has a number of advantages.
x The data can show whether there was any significant change in the dependent variable(s) you hoped to influence. Collecting and analyzing data helps you see whether your intervention brought about the desired results.
The term “significance” has a specific meaning when you’re discussing statistics. The level of significance
of a statistical result is the level of confidence you can have in the answer you get. Generally, researchers
don’t consider a result significant unless it shows at least a 95% certainty that it’s correct (called the .05 level
of significance, since there’s a 5% chance that it’s wrong). The level of significance is built into the
statistical formulas: once you get a mathematical result, a table (or the software you’re using) will tell you
the level of significance.
Thus, if data analysis finds that the independent variable (the intervention) influenced the dependent variable at the .05 level of significance, it means you can be 95% confident that the change was produced by your program or intervention rather than by chance. The .05 level is generally considered a reasonable result, and the .01
level (99% probability) is considered about as close to certainty as you are likely to get. A 95% level of
certainty doesn’t mean that the program works on 95% of participants, or that it will work 95% of the time.
It means that there’s only a 5% possibility that it isn’t actually what’s influencing the dependent variable(s)
and causing the changes that it seems to be associated with.
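To make this concrete, here is a minimal sketch of how a researcher might test for a significant difference between an intervention group and a comparison group, using the standard two-sample t-test from SciPy. The scores and group names are made up for illustration:

# Two-sample t-test on hypothetical outcome scores; p < .05 corresponds
# to "significant at the .05 level" as described above.
from scipy import stats

intervention = [78, 85, 90, 74, 88, 92, 81, 86]  # made-up scores
comparison = [70, 72, 79, 68, 75, 71, 74, 77]    # made-up scores

t_stat, p_value = stats.ttest_ind(intervention, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the .05 level.")

Which test is appropriate depends on your design and data; the t-test here simply stands in for whatever procedure your evaluation calls for.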
x They can uncover factors that may be associated with changes in the dependent variable(s). Data analyses may help discover unexpected influences; for instance, that the effect was twice as large for those participants who were also part of a support group. This can be used to identify key aspects of implementation.
x They can show connections between or among various factors that may have an effect on the
results of your evaluation. Some types of statistical procedures look for connections (“correlations”
is the research term) among variables. Certain dependent variables may change when others do.
These changes may be similar – i.e., both variables increase or decrease (e.g., as children's proficiency at reading increases, the amount of reading they do also increases). Or the opposite may be observed – i.e., the two variables change in opposite directions (as the amount of exercise people engage in increases, their weight decreases). Correlations don't mean that one variable causes
another, or that they both have the same cause, but they can provide valuable information about
associations to expect in an evaluation.
x They can help shed light on the reasons that your work was effective or, perhaps, less effective
than you’d hoped. By combining quantitative and qualitative analysis, you can often determine not
only what worked or didn’t, but why. The effect of cultural issues, how well methods are used, the
appropriateness of your approach for the population – these as well as other factors that influence
success can be highlighted by careful data collection and analysis. This knowledge gives you a basis
for adapting and changing what you do to make it more likely you’ll achieve the desired outcomes in
the future.
x They can provide you with credible evidence to show stakeholders that your program is
successful, or that you've uncovered, and are addressing, limitations. Stakeholders, such as
funders and community boards, want to know their investments are well spent. Showing evidence of
intermediate outcomes (e.g. new programs and policies) and longer-term outcomes (e.g.,
improvements in education or health indicators) is becoming increasingly important to receiving –
and retaining – funding.
x Their use shows that you’re serious about evaluation and about improving your work. Being a
good trustee or steward of community investment includes regular review of data regarding progress
and improvement.
x They can show the field what you’re learning, and thus pave the way for others to implement
successful methods and approaches. In that way, you’ll be helping to improve community efforts
and, ultimately, quality of life for people who benefit.
As far as data collection goes, the question of when to collect is relatively simple: data collection should start no later than when you begin your work – or before you begin, in order to establish a baseline or starting
point – and continue throughout. Ideally, you should collect data for a period of time before you start your
program or intervention in order to determine if there are any trends in the data before the onset of the
intervention. Additionally, in order to gauge your program’s longer-term effects, you should collect follow-
up data for a period of time following the conclusion of the program.
If you don't have the staff, resources, or statistical expertise to conduct a fully formal evaluation, there are several options:
x You can hire or find a volunteer outside evaluator, such as from a nearby college or university, to take care of data collection and/or analysis for you.
x You can conduct a less formal evaluation. Your results may not be as sophisticated as if you
subjected them to rigorous statistical procedures, but they can still tell you a lot about your program.
Just the numbers – the number of dropouts (and when most dropped out), for instance, or the
characteristics of the people you serve – can give you important and usable information.
x You can try to learn enough about statistics and statistical software to conduct a formal evaluation
yourself. (Take a course, for example.)
x You can collect the data and then send it off to someone – a university program, a friendly statistician
or researcher, or someone you hire – to process it for you.
x You can collect and rely largely on qualitative data. Whether this is an option depends to a large
extent on what your program is about. You wouldn’t want to conduct a formal evaluation of
effectiveness of a new medication using only qualitative data, but you might be able to draw some
reasonable conclusions about use or compliance patterns from qualitative information.
x If possible, use a randomized or closely matched control group for comparison. If your control is properly structured, you can draw some fairly reliable conclusions simply by comparing its results to those of your intervention group. Again, these results won't be as reliable as if the comparison were made using statistical procedures, but they can point you in the right direction. It's fairly easy to tell whether or not there's a major difference between the numbers for the two or more groups.
Who should actually collect and analyze data also depends on the form of your evaluation. If you're doing a participatory evaluation, much of the data collection – and analysis – will be done by community members or program participants themselves. If you're conducting an evaluation in which the observation is specialized, the data collectors may be staff members, professionals, highly trained volunteers, or others with specific skills or training (graduate students, for example). Analysis can also be accomplished through a participatory process: even where complicated statistical procedures are necessary, participants and/or community members might be involved in sorting out what those results actually mean once the math is done and the results are in. Alternatively, analysis can be carried out by professionals or other trained individuals, depending upon the nature of the data to be analyzed, the methods of analysis, and the level of sophistication aimed at in the conclusions.
Whether your evaluation includes formal or informal research procedures, you’ll still have to collect and
analyze data, and there are some basic steps you can take to do so.
We've previously discussed designing an observational system to gather information. Now it’s time to
put that system in place.
x Clearly define and describe what measurements or observations are needed. The definition and
description should be clear enough to enable observers to agree on what they’re observing and
reliably record data in the same way.
x Select and train observers. Particularly if this is part of a participatory process, observers need training to know what to record; to recognize key behaviors, events, and conditions; and to reach an acceptable level of inter-rater reliability (agreement among observers; a minimal sketch of checking agreement between two observers appears after this list).
x Conduct observations at the appropriate times for the appropriate period of time. This may include
reviewing archival material; conducting interviews, surveys, or focus groups; engaging in direct
observation; etc.
x Record data in the agreed-upon ways. These may include pencil and paper, computer (using a laptop
or handheld device in the field, entering numbers into a program, etc.), audio or video, journals, etc.
How you do this depends on what you’re planning to do with it, and on what you’re interested in.
x Enter any necessary data into the computer. This may mean simply typing comments, descriptions,
etc., into a word processing program, or entering various kinds of information (possibly including
audio and video) into a database, spreadsheet, a GIS (Geographic Information Systems) program, or
some other type of software or file.
x Transcribe any audio- or videotapes. This makes them easier to work with and copy, and allows the
opportunity to clarify any hard-to-understand passages of speech.
x Score any tests and record the scores appropriately.
x Sort your information in ways appropriate to your interest. This may include sorting by category of
observation, by event, by place, by individual, by group, by the time of observation, or by a
combination or some other standard.
x When possible, necessary, and appropriate, transform qualitative into quantitative data. This might
involve, for example, counting the number of times specific issues were mentioned in interviews, or
how often certain behaviors were observed.
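As promised above, here is a minimal sketch of checking agreement between two observers. It computes simple percentage agreement and Cohen's kappa (which corrects for chance agreement) in plain Python; the two raters' codes are hypothetical:

# Inter-rater agreement for two observers coding the same ten observations.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal proportions.
categories = set(rater_a) | set(rater_b)
expected = sum(
    (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
)

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")

A kappa well below your target (conventions vary, but values above roughly 0.6 to 0.8 are often sought) suggests the observers need more training or clearer definitions.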
Conduct data graphing, visual inspection, statistical analysis, or other operations on the data as
appropriate
We’ve referred several times to statistical procedures that you can apply to quantitative data. If you have the
right numbers, you can find out a great deal about whether your program is causing or contributing to change
and improvement, what that change is, whether there are any expected or unexpected connections among
variables, how your group compares to another you’re measuring, etc.
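As one concrete example of visual inspection, the sketch below plots a hypothetical weekly outcome series and marks the week the intervention began, so a shift in level or trend is easy to see at a glance. It assumes the matplotlib plotting library is available; the data are invented:

# Plot a hypothetical weekly outcome series and mark the intervention start.
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
incidents = [9, 8, 10, 9, 11, 10, 6, 5, 4, 5, 3, 4]  # made-up counts
intervention_week = 7

plt.plot(weeks, incidents, marker="o")
plt.axvline(intervention_week, linestyle="--", label="Intervention begins")
plt.xlabel("Week")
plt.ylabel("Reported incidents")
plt.title("Weekly incidents before and after the intervention")
plt.legend()
plt.show()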
Depending on the nature of your research, results may be statistically significant (the 95% or better certainty
that we discussed earlier), or simply important or unusual. They may or may not be socially significant (i.e.,
large enough to solve the problem).
There are a number of different kinds of results you might be looking for.
x Differences within people or groups. If you have repeated measurements for individuals or groups over time, you can see whether there are marked increases or decreases in the frequency or rate of the behavior or events following introduction of the program or intervention. When the effects are seen when and only when the intervention is introduced – and if the intervention is staggered (delayed) across people or groups – this increases your confidence that the intervention, and not something else, is producing the observed effects.
x Differences between or among two or more groups. If you have one or more randomized control
groups in a formal study (groups that are drawn at random from the same population as the group in
your program, but are not getting the same program or intervention, or are getting none at all), then
the statistical significance of differences between or among the groups should tell you whether your
program has any more influence on the dependent variable(s) than what’s experienced by the other
groups.
x Results that show statistically significant changes. With or without a control or comparison group,
many statistical procedures can tell you whether changes in dependent variables are truly significant
(or not likely due to chance). These results may say nothing about the causes of the change (or they
may, depending on how you’ve structured your evaluation), but they do tell you what’s happening,
and give you a place to start.
x Correlations. Correlation means that there are connections between or among two or more
variables. Correlations can sometimes point to important relationships you might not have predicted.
Sometimes they can shed light on the issue itself, and sometimes on the effects of a group’s cultural
practices. In some cases, they can highlight potential causes of an issue or condition, and thus pave
the way for future interventions.
Correlation between variables doesn’t tell you that one necessarily causes the other, but simply that changes
in one have a relationship to changes in the other. Among American teenagers, for instance, there is
probably a fairly high correlation between an increase in body size and an understanding of algebra. This is
not because one causes the other, but rather the result of the fact that American schools tend to begin
teaching algebra in the seventh, eighth, or ninth grades, a time when many 12-, 13-, and 14-year-olds are
naturally experiencing a growth spurt.
On the other hand, correlations can reveal important connections. A very high correlation between, for
instance, the use of a particular medication and the onset of depression might lead to the withdrawal of that
medication, or at least a study of its side effects, and increased awareness and caution among doctors who
prescribe it. A very high correlation between gang membership and having a parent with a substance abuse
problem may not reveal a direct cause-and-effect relationship, but may tell you something important about
who is more at risk for substance abuse.
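A minimal sketch of actually computing a correlation, using made-up paired measurements and SciPy's Pearson correlation, looks like this; as the examples above stress, even a strong coefficient says nothing by itself about causation:

# Pearson correlation between two hypothetical paired measurements.
from scipy import stats

reading_proficiency = [2.1, 2.8, 3.0, 3.6, 4.2, 4.9]  # made-up scores
books_read_per_month = [1, 2, 2, 4, 5, 6]             # made-up counts

r, p_value = stats.pearsonr(reading_proficiency, books_read_per_month)
print(f"r = {r:.2f}, p = {p_value:.4f}")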
Once you’ve organized your results and run them through whatever statistical or other analysis you’ve
planned for, it’s time to figure out what they mean for your evaluation. Probably the most common question
that evaluation research is directed toward is whether the program being evaluated works or makes a
difference. In research terms, that often translates to “What were the effects of the independent variable (the
program, intervention, etc.) on the dependent variable(s) (the behavior, conditions, or other factors it was
meant to change)?” There are a number of possible answers to this question:
x Your program had exactly the effects on the dependent variable(s) you expected and hoped it would.
Statistics or other analysis showed clear positive effects at a high level of significance for the people
in your program and – if you used a multiple-group design – none, or far fewer, of the same effects
for a similar control group and/or for a group that received a different intervention with the same
purpose. Your early childhood education program, for instance, greatly increased development
outcomes for children in the community, and also contributed to an increase in the percentage of
children succeeding in school.
x Your program had no effect. Your program produced no significant results on the dependent
variable, whether alone or compared to other groups. This would mean no change as a result of your
program or intervention.
x Your program had a negative effect. For instance, intimate partner violence increased (or at least
appeared to) as a result of your intervention. (It is relatively common for reported events, such as
violence or injury, to increase when the intervention results in improved surveillance and ease of
reporting).
x Your program had the effects you hoped for and other effects as well.
o These effects might be positive. Your youth violence prevention program, for instance, might
have resulted in greatly reduced violence among teens, and might also have resulted in
significantly improved academic performance for the kids involved.
o These effects might be neutral. The same youth violence prevention program might somehow
result in youth watching TV more often after school.
o These effects might be negative. (These effects are usually called unintended consequences.)
Youth violence might decrease significantly, but the incidence of teen pregnancies or alcohol
consumption among youth in the program might increase significantly at the same time.
o These effects might be multiple, or mixed. For instance, a program to reduce HIV/AIDS might lower rates of unprotected sex but might also increase conflict and instances of partner violence.
x Your program had no effect or a negative effect, and other effects as well. As with programs with positive effects, these might be positive, neutral, or negative; single or multiple; consistent or mixed.
Analyzing and interpreting the data you’ve collected brings you, in a sense, back to the beginning. You can
use the information you’ve gained to adjust and improve your program or intervention, evaluate it again, and
use that information to adjust and improve it further, for as long as it runs. You have to keep up the process
to ensure that you’re doing the best work you can and encouraging changes in individuals, systems, and
policies that make for a better and healthier community.
You have to become a cultural detective to understand your initiative, and, in some ways, every evaluation is
an anthropological study.
In Summary
The heart of evaluation research is gathering information about the program or intervention you’re evaluating
and analyzing it to determine what it tells you about the effectiveness of what you’re doing, as well as about
how you can maintain and improve that effectiveness.
Collecting quantitative data – information expressed in numbers – and subjecting it to a visual inspection or
formal statistical analysis can tell you whether your work is having the desired effect, and may be able to tell
you why or why not as well. It can also highlight connections (correlations) among variables, and call
attention to factors you may not have considered.
Collecting and analyzing qualitative data – interviews, descriptions of environmental factors, events, and circumstances – can provide insight into how participants experience the issue you're addressing, what barriers and advantages they experience, and why your methods are or aren't working.
3.5 Preparing and Presenting Findings and Recommendations
Reporting Survey Results
When your survey and analysis have been completed, the final step in the survey process is to present your
findings, which involves the creation of a research report. This report should include a background of why
you conducted the survey, a breakdown of the results, and conclusions and recommendations supported by
this material. This is one of the most important aspects of your survey research, as it is the key to communicating your findings to those who can make decisions and take action on those results.
Surveys Pro results can be displayed right from the software, or your data and graphics can easily be
exported to a variety of applications like Excel, Word and PowerPoint. For a more powerful report, include descriptive text along with your charts, tables, and graphs to explain what the visuals show and give them added impact.
Provide a background
Before you start working on the details of your report, you need to explain the general background of your
survey research. If you will be presenting the findings to your audience (the decision-makers), you will need
to make the basis for your research clear, including what objectives were established, and the conclusions
drawn from your findings.
List the factors that motivated you to conduct this research in the first place. By stating the reasons behind
the research, your audience will have a better understanding of why the survey was conducted and the
importance of the findings.
Itemize the goals and objectives you set out to achieve. Before you constructed your survey, you had a plan
as to the information you needed to get from your respondents. Once you had those goals in mind, your
online survey questions were chosen. Did your respondents' answers give you the information you sought when you designed the survey? Make a list of the objectives you set out when you started, those
objectives that were met and those that were not, and any other information relating to the planning process.
Specify how your data was captured. For the purposes of this article, we are referring to a survey for
collecting the data. But be specific as to what type of survey you used – online, telephone, or paper-based. Also consider who it was sent to, how many were sent, and how the analysis was conducted.
Explain findings discovered in your research, especially facts that were important, unusual, or surprising.
Briefly highlight some of the key points that were uncovered in your results. More detail will be revealed
later in the presentation.
Summarize findings in concise statements so that an action plan can be created. Your conclusions and
recommendations should be based on the data that you have gathered. It is from these final statements that
management will make their decisions on how to take action on a given situation.
The background information of your survey research may need to be fine-tuned into a structured report
format for a polished presentation. Survey research reports typically have the following components: title
page, table of contents, executive summary, methodology, findings, survey conclusions, and
recommendations.
Title Page
State the focus of your research. The title should state what the report is about, for example, "Customer
Satisfaction in the European Market." Also include the names of who prepared the report, to whom it will be
presented, and the date the report is to be presented.
Table of Contents
List the sections in your report. Here is where you give a high-level overview of the topics to be discussed, in
the order they are presented in the report. Depending on the length of your report, you should consider
including a listing of all charts and graphs so that your audience can quickly locate them.
Executive Summary
Summarize the major findings up front. Listed at the beginning of your report, this short list of survey
findings, conclusions, and recommendations is helpful. The key word here is "short" so no more than a few
complete sentences, which may be bulleted if you wish. This summary can also be used as a reference when your reader has finished the report and wants to glance back over the major points.
Methodology
Describe how you got your data. Whether you conducted an online, paper or telephone survey, or perhaps
you talked to people face to face, make sure you list how your research was conducted. Also make note of
how many people participated, response rates, and the time it took to conduct this research.
Findings
Present your research results in detail. You want to be detailed with this section of the report. Display your
results in the form of tables, charts and graphs, and incorporate descriptive text to explain what these visuals
mean, and to emphasize important points. eSurveysPro's charts are fully customizable so you can display
your data in a variety of ways, such as bar or pie charts, or even tables. The chart legends can also be
adjusted to suit your needs. This flexibility allows you to be creative when displaying your results. However you arrange your results, it is helpful to have a close correlation between the text and visuals so that your audience will understand how they are related. For example, a bar chart of satisfaction ratings might be paired with a sentence stating its key takeaway; a minimal sketch of producing such a chart follows.
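The sketch below shows one way to produce such a chart with the matplotlib plotting library; the question and response counts are hypothetical, and any survey tool's built-in charting would serve equally well:

# Bar chart of hypothetical responses to one satisfaction question.
import matplotlib.pyplot as plt

ratings = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]
responses = [42, 67, 21, 10]  # made-up response counts

plt.bar(ratings, responses)
plt.ylabel("Number of respondents")
plt.title("How satisfied are you with our support staff?")
plt.tight_layout()
plt.show()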
Survey Conclusions
Summarize the key points. This concise collection of findings is similar to the Executive Summary. These
conclusions should be strong statements that establish a relationship between the data and the visuals.
Remember that thoughts expressed here must be supported by data. You may also mention anything that may
be related to this survey research, such as previous studies or survey results that may prove useful if
included.
Recommendations
Suggest a course of action. Based on your conclusions, make suggestions at a high level as to what actions
could be taken to help the survey project meet the research objectives. For example, if you concluded that
customers are not satisfied with customer service from the support staff, you may recommend that
management should monitor support staff calls to assure quality customer service standards are met.
3.6 Schedule of work activities
Scheduling is the art of planning your activities so that you can achieve your goals and priorities in the time you have available. When it's done effectively, it helps you make the best use of that time.
Time is the one resource that we can't buy, but we often waste it or use it ineffectively. Scheduling helps you
think about what you want to achieve in a day, week or month, and it keeps you on track to accomplish your
goals.
Set a regular time to do your scheduling – at the start of every week or month, for example.
There are a number of different tools to choose from. A simple and easy way to keep a schedule is to use a pen and paper, organizing your time using a weekly planner.
The most important thing when choosing your planner is that it lets you enter data easily, and allows you to
view an appropriate span of time (day/week/month) in the level of detail that you need.
Once you have decided which tool you want to use, prepare your schedule in the following way:
Start by establishing the time you want to make available for your work.
How much time you spend at work should reflect the design of your job and your personal goals in life.
For example, if you're pushing for promotion, it might be prudent to work beyond normal hours each day to
show your dedication. If, on the other hand, you want to have plenty of time for out-of-work activities, you
might decide to do your allocated hours and no more.
Next, block in the actions you absolutely must take to do a good job. These will often be the things you are
assessed against.
For example, if you manage people, make sure that you have enough time available to deal with team
members' personal issues, coaching, and supervision needs. Also, allow time to communicate with your boss
and key people around you.
Review your To-Do List, and schedule in high-priority and urgent activities, as well as essential
maintenance tasks that cannot be delegated or avoided.
Try to arrange these for the times of day when you are most productive – for example, some people are at
their most energized and efficient in the morning, while others focus more effectively in the afternoon or
evening. (Our article "Is This a Morning Task?" can help you identify your best times of day.)
Next, schedule some extra time to cope with contingencies and emergencies. Experience will tell you how
much to allow – in general, the more unpredictable your job, the more contingency time you'll need. (If you
don't schedule this time in, emergencies will still happen and you'll end up working late.)
Planning & Scheduling Guide
Planning Elements
Planning Definition
Scope of Work (WBS)
Method of Execution (Logic Network)
Scheduling Definition
CPM Scheduling
Schedule Hierarchy
Schedule Baseline and Process
Key Concepts
Float & Types
Resource Scheduling
Schedule Analysis
1) Definition of Planning:
A simple definition of planning is "making decisions with the objective of influencing the future."
Resource Scheduling:
Resource scheduling defines which resource should be used on specific tasks, and between which dates. It analytically manages and uses schedule float, analyzes staffing requirements, and evaluates the effects of limited staffing, thus avoiding wide fluctuations in the daily need for various resources. Resource scheduling helps significantly in producing more realistic schedules, better progress curves, and staffing curves. Good resource scheduling is the basis for maximizing the productivity of people and equipment while minimizing their cost.
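Since CPM scheduling and float are listed among the key concepts above, the sketch below shows the basic forward and backward pass on a tiny, invented activity network, computing each activity's total float; activities with zero float form the critical path. It illustrates the arithmetic only, not any particular scheduling tool, and it assumes the activities are listed in precedence order:

# Forward/backward pass on a small hypothetical activity network.
# Each activity maps to (duration, list of predecessors); entries are
# listed in precedence order, which the loops below rely on.
activities = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for name in reversed(list(activities)):
    dur = activities[name][0]
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_end)
    ls[name] = lf[name] - dur

for name in activities:
    print(f"{name}: ES={es[name]} EF={ef[name]} "
          f"LS={ls[name]} LF={lf[name]} float={ls[name] - es[name]}")

Here activities A, B, and D come out with zero float (the critical path), while C can slip three time units without delaying the project.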
A bucket is another term for "time period." In Primavera applications, you can view, enter, and edit activity
and assignment data in various buckets, such as daily, weekly, monthly, quarterly, yearly, and financial
period. The term "bucket" is generally used in future period bucket planning, which enables you to manually enter and edit assignment data for activities in future periods.
While assigning a resource curve to the resource/role assignment will yield more accurate results than
spreading units evenly across the duration of an activity, the work you plan to perform per period on an
activity may not be fully reflected by the curve. As a result, performance against the project plan cannot be
accurately measured.
To achieve the most precise resource/role distribution plan, you can manually enter the budgeted
resource/role allocation per assignment in the timescale unit you choose (days, weeks, months, quarters,
years, or financial periods). For example, assume an activity has an original duration of 28 days and
budgeted units of 80 hours. For this activity, you know that the actual work will not be spread evenly across
the duration of the activity; rather, the budgeted units will be spread as follows: Week 1 – 10hours, Week 2 -
30 hours, Week 3 – 15 hours, Week 4 – 25 hours. By manually entering the planned resource/role
distribution in future period assignment buckets, you can create an accurate baseline plan to measure against
current project progress. As the current project schedule progresses and you apply actuals, you can track how
the project is performing against plan by comparing the project’s budgeted future periods to the current
project’s actuals.
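To make the worked example above concrete outside of any particular scheduling tool, a small sketch can check that the manually bucketed plan totals the budgeted units and compare it week by week against actuals; the planned figures come from the text, and the actuals are invented:

# Manually bucketed plan for the 28-day, 80-hour activity described above,
# compared against hypothetical actuals as the activity progresses.
planned = {"Week 1": 10, "Week 2": 30, "Week 3": 15, "Week 4": 25}
actuals = {"Week 1": 12, "Week 2": 26}  # made-up progress so far

assert sum(planned.values()) == 80  # buckets must total the budgeted units

for week, plan in planned.items():
    actual = actuals.get(week)
    if actual is None:
        print(f"{week}: planned {plan}h, not yet worked")
    else:
        print(f"{week}: planned {plan}h, actual {actual}h, "
              f"variance {actual - plan:+}h")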
If work on an activity is not proceeding according to plan, you can manually update the remaining units for
an assignment's future periods, enabling you to measure the remaining work for an assignment without
changing the original plan. Alternatively, if you choose to re-estimate future work based on changes to the
project schedule, you can edit an assignment's future period budgeted units while the activity is in progress;
if many assignments require re-estimation, you can establish a new baseline plan based on your changes.
Tips
• You can compare the planned future period resource distribution to actual units and costs in the Resource
Usage Profile, Resource Usage Spreadsheet, Activity Usage Profile, Activity Usage Spreadsheet, time-
distributed reports, and the Tracking window. If you plan your project work in defined financial periods,
after you store period performance, you can compare the resource distribution you planned to the project's
past period actuals.
• Activity costs, including earned value and planned value, are calculated using the planned future period
resource distribution you define for activity assignments.
Note: You must have the 'Edit Future Periods' project privilege to manually enter future period data.
Key Terms -
Level 1 Schedule – Reflects management summary for overall/total time frame & scope of project
highlighting contractual and Project milestones.
Summary Schedule – The level of schedule organized by facility, discipline, and area for Engineering, highlighting long lead items and critical items for Procurement, and summarized by work package for Construction to depict the relationship between facilities/phases and establish facility criticality.
Baseline Schedule – Originated from Contract Schedule and updated with agreed changes over the period of
time for effective project progress monitoring/tracking.
Current Schedule – Originated from Baseline Schedule and updated with periodic progress and changes to
reflect the current/actual status of the project.
Contract Schedule - Contractually agreed schedule for the defined scope and time offering reference points
against which actual progress/status can be measured.
Original (Control) Budget - Contractually agreed man-hours/cost for the defined scope and time.