
Feature: Training Evaluation Part 3 – When and how to evaluate


In the third of four articles in this series, Martin Schmalenbach looks at when, and when not, to evaluate. Read part one and part two.

Setting aside any personal views about always evaluating training and performance-enhancing interventions, there are times when you perhaps should NOT evaluate. The following covers most of the situations in which you should and shouldn't evaluate.


WHEN TO EVALUATE

Political Necessity

The training in question is under such a bright political spotlight that it needs to be seen to be adding value. The method for evaluating this "value-add" therefore needs credibility and robustness to stand up to closer scrutiny.

Having a 'pre-training' baseline will be very important in demonstrating robustly that the 'after' is better than the 'before'. This baseline will need to be firmly anchored in a solid root cause analysis to ensure credibility.
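By way of illustration, once a baseline exists the before/after comparison itself is simple arithmetic. The sketch below is a minimal Python illustration assuming a single hypothetical key indicator; the metric and all figures are invented.

```python
# Minimal sketch: comparing post-training performance against a
# pre-training baseline. The indicator and figures are hypothetical.

def uplift(baseline: float, after: float) -> float:
    """Percentage improvement of the 'after' figure over the baseline."""
    return (after - baseline) / baseline * 100

# Hypothetical key indicator: orders processed per person per day.
baseline_rate = 42.0  # measured before the intervention
post_rate = 48.5      # measured over a comparable period afterwards

print(f"Uplift over baseline: {uplift(baseline_rate, post_rate):.1f}%")
```

In practice the baseline would cover the handful of indicators tied to the root cause analysis, not a single convenient number.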

A trainer or training department should associate itself clearly with value-adding activities if it is to avoid being 'downsized' the next time resources become even more scarce.

Client Requirement

Obviously, if a client (internal or external) asks for an evaluation, then do it – or, more appropriately, help them to do it themselves: it's their resources being spent!

Again, having a robust and rigorous process really helps. The client may already have one, but it won't hurt to challenge it in the sense of making sure it can actually answer the key questions the client is likely to have.

Decisions Surrounding Limited Resources

Any manager responsible for deciding where, when and how to deploy limited resources would like to know in advance the likely benefits of each option, so that he or she can more easily make the decisions that lead to success. This should apply to training: after all, employees are diverted from their 'day jobs' to attend or participate, and money and other resources are deployed to make sure the training takes place and is exploited. This is one case where evaluating in advance is going to be a huge help to managers.
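As a toy illustration of that advance comparison, the sketch below ranks some hypothetical training options by projected ROI; the options and all figures are invented.

```python
# Toy sketch: ranking hypothetical training options in advance by
# projected ROI. All names and figures are invented.

options = [
    # (option, projected benefit, cost) -- currency units are arbitrary
    ("Sales negotiation course", 120_000, 40_000),
    ("New CRM system training",   90_000, 25_000),
    ("Leadership away-day",       20_000, 15_000),
]

# Highest projected return per unit of limited resource first.
for name, benefit, cost in sorted(
    options, key=lambda o: (o[1] - o[2]) / o[2], reverse=True
):
    print(f"{name}: projected ROI {(benefit - cost) / cost:.0%}")
```

The projections themselves would have to come from the baseline and root cause analysis discussed above; the ranking is only as credible as those inputs.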

Client Relations

A training function should ensure that its clients are completely clear about the aims, outcomes and expectations arising from the intervention. In doing so it has a better chance of actually meeting the client's requirements and effectively managing any expectations. This is part of the art of good customer or client relations, and can do wonders for bottom-line performance for both client and training function, as well as encourage future repeat business. Such a situation is unlikely to lead to future downsizing on the grounds of limited resources!

Buy-In From Staff & Line Managers

When buy-in is needed from line managers and the staff who will be attending any training or participating in associated interventions, it helps if they know what is expected, why, the part they have to play in it, and how the training and interventions will help them do their jobs better, quicker or cheaper – which, to some (great!) extent, answers the "what's in it for me?" question as well.

After The Fact – i.e. After Training

If you are asked to conduct an evaluation, there are generally two things that can come out of it: lessons to learn for the future, and political activity designed to show the worth (or otherwise) of a training programme or function that is either falling out of favour or is a target for losing resources – because all the other departments have been able to defend their resource allocations and, historically, the training function has not been good at this.

If you can develop a baseline with supporting root cause analysis, even after the fact, then you can do a reasonable evaluation. Either way, you can state what has and hasn't happened in the past, and how things will be in the future, starting immediately. It's a chance to demonstrate a reliable, robust and credible process, your part in it, and how the combination will contribute positively to the bottom line in the future. It may get you a reprieve!

WHEN NOT TO EVALUATE

Regulatory Or 'Must Have' Training

The training is required either as part of a statutory requirement or other legal instrument (e.g. health and safety related), or the organisation believes within its core values that the training is the right thing to do (e.g. induction).

The training can still be validated, though: did the training do what "it says on the tin"? And perhaps also: was this the most efficient way of delivering it?

"Luxury" Training

The training is not required to add to the bottom line or otherwise move forward the performance of the individual, team or organisation – i.e. it's a 'luxury' – so deploying limited resources to evaluate, or even validate, the training is not a good use of them. An obvious example is the non-work-related training some organisations offer employees in the form of, say, £100 per year towards pottery classes or sports/arts/crafts at night school.

When There Is No Time To Do It Properly

Training may need to take place in a hurry, for whatever good reasons, to the extent that the elapsed time needed to do the evaluation properly is too long to be able to directly influence the decision as to whether to commit resources to the training or not.

There is one good reason to do the evaluation anyway, and that is to develop some data that can help to validate the decision taken, and so support organisational learning about how to respond to such scenarios in the future.

When Not Allowed Or Able To Develop A Baseline

If you can't develop (for any reason) a credible baseline of performance for key indicators, including those relating to any root cause analysis, you really have nothing to compare the new performance against. Any evaluation you do cannot be judged objective and is likely to lose credibility as a result – so why waste the effort and heartache of a full evaluation? You can certainly develop a subjective view from those involved, basically by asking "in what way was it worth it?"

When The Reasons For Doing The Intervention Cannot Be Expressed In Terms Of Strategic Objectives And/Or Key Performance Measures

If you can't measure the performance issue and/or explicitly and credibly link the activity to the strategic objectives, not only should you consider NOT evaluating, you should also consider NOT doing the intervention at all!

Who Is Responsible For Which Bits

Who does the evaluation is almost immaterial, so long as it is done competently; arguably those involved should be suitably trained and credible. What is more important is who takes the key decisions about direction, tasking and resource allocation as a result of any evaluation. That person is actually making the ultimate evaluation, and presumably needs to know in advance of allocating resources so as to be more sure of making a suitably informed decision.

In practice, the training function will oversee the process, but staff and front-line managers are most likely to be involved in gathering the data, and probably analysing it too.

Who reports on the results, and to whom, is a question of structure and politics.

When “Quick And Dirty” Is Enough – And When It Isn’t

I guess the simple answer is “ask the client”. After all, they have to justify to higher authority why they allocate their limited resources the way they do for the results they deliver. So, ask them if they require an in-depth evaluation. If they do, go through an agreed process, using an agreed model.

If they don’t, get this in writing and tell them that evaluating at a later date will be at best a guess, and will be on their heads. Why should you be responsible for somebody else’s bad judgement?

Training/Training Department

Kirkpatrick Level 1 – useful feedback on environmental issues, pace of delivery etc, and specifics for the actual trainer. ALWAYS do this.

Kirkpatrick Level 2 – this will indicate whether the training is being learned; if there are problems later, it will help to eliminate, or pinpoint, training methods and objectives as the fault. Ideally ALWAYS do this.

ROI-focused approach – this will demonstrate the value-adding contribution of the training function. ALWAYS do this. If you can't, put it in writing WHY, and share this with the client and higher authority.

Manager/Client

ROI-focused approach – this is the business case the manager or client needs in order to get behind the intervention and fully support it with resources and priorities. ALWAYS do this at the programme level, unless the manager/client waives it in writing – in which case ensure they know the consequences, and that changing their minds later will not make for a credible evaluation. DO NOT do this at the individual level – let line managers make that judgement, with your "guidance".

Kirkpatrick Level 3 – behaviours need to change if performance is to change, so ALWAYS do this at the programme level and, if possible, for each course and individual too.

Shareholder/Owner

ROI-focused approach – this is the business case for having and keeping the training function. ALWAYS do this: not necessarily for each programme, but certainly for the function as a whole in any major reporting cycle, at least quarterly.

Delegate/Employee

CIRO/CIPP – these models look at process, outcomes or product, inputs etc, and so sit much closer to the operational or 'shop floor' end of the organisation. ALWAYS help delegates to do this for themselves, ideally with involvement from their line managers.

Academic and other research

Any approach you want, as required by the research!

What Is The Bottom Line Or ROI Contribution?

The bottom line or ROI question is almost always going to be set in context: "Is/was this improvement project worth doing?" That means everything, from the overtime needed to install new equipment to the training needed to ensure it is used effectively and efficiently – the training on its own is usually meaningless and worthless; it needs context. Take an example where increased profits of $1M result from a $40K spend on training and $300K spent bringing in new equipment. The ROI for the training alone is meaningless – just as meaningless as the ROI on bringing in the equipment alone. The actual ROI is ($1M less $340K)/$340K, or about 194%. By the way, the $40K could be formal training, or it could be the cost in time and waste of trial and error on the job – either way there is a cost! If trial and error is cheaper and no more dangerous than formal training, it's clear which way to go (assuming the employees don't take it to heart!).
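For concreteness, here is that arithmetic as a short Python sketch; the figures are those from the example above, and the function itself is purely illustrative.

```python
# The ROI arithmetic from the example above. The figures come from the
# text; the function itself is purely illustrative.

def project_roi(benefit: float, total_cost: float) -> float:
    """ROI as a percentage: net benefit divided by total project cost."""
    return (benefit - total_cost) / total_cost * 100

training_cost = 40_000    # $40K spent on training
equipment_cost = 300_000  # $300K spent bringing in new equipment
benefit = 1_000_000       # $1M increase in profits

roi = project_roi(benefit, training_cost + equipment_cost)
print(f"Project ROI: {roi:.0f}%")  # about 194%
```

Note that the training and equipment costs only make sense pooled together: dividing the $1M benefit between them would be arbitrary, which is exactly why the ROI of the training "on its own" is meaningless.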

About The Author
Martin Schmalenbach has been enhancing performance through change and training & development for more than 10 years, in organisations ranging from the RAF and local government through to manufacturing and financial services. He has degrees in engineering and in management training and development. For the past three years he has focused on developing and implementing a rigorous, robust and repeatable process for ensuring interventions contribute to the bottom line. You can find out more at 5boxes.com.
