Design streamlined online final selection stages that deliver better results
Assessment centers are typically designed to assess multiple competencies (e.g. Team Working, Relationship Building, Achieving Results) through multiple assessment exercises (e.g. role plays, group exercises, inbox exercises).
Given how complex it is to design assessment centers in this way, there is a significant associated expense; we estimate they cost, on average, £2,500 per candidate.
Are assessment centers of this kind effective?
When we evaluate the effectiveness of an assessment center, we consider its validity – its ability to forecast performance in the role the participants are competing for. Figure 1 shows the validity scores for several different assessment methods.
Figure 1. Validity of different assessment methods
You can see in Figure 1 how increasing validity can have a significant impact in terms of reducing the risk of making a bad hire.
Assessment centers have a validity of over 0.3, meaning they reduce the risk of a bad hire to less than 1 in 10; we are not saying this method is terrible. However, given the resources invested in designing and delivering them, they should perform better.
Why are assessment centers not the most valid method?
Assessment centers typically attempt to assess competencies. Anyone who has designed one will be familiar with the craft required to ensure each competency is assessed twice, and only in exercises where specific behavioral indicators can be written so that assessors can score that competency accurately. In practice, this craft and effort creates a situation where less valid assessment methods (oral presentations, case studies, inbox exercises, group exercises, role plays, etc.) contribute just as much to the final selection decision as more valid methods such as structured interviews.
The fact that the less valid exercises contribute the same weight to the competency score, and therefore overall score, is problematic. If you look at Figure 2 you will see a classic competency matrix for an assessment center.
Figure 2. Assessment matrix from a typical assessment center
If the British Psychological Society’s (2015) standards for the design and delivery of assessment centers are followed, a strictly arithmetic approach would be taken in this example, without a ‘wash-up’ session. The candidate’s total score would be made up of the average score in each competency. For the competency of Communicating Information, equal weight is placed on the candidate’s score in the role play exercise and the structured interview. In the Figure 2 example, the candidate ends up with a solid overall score (perhaps passing the benchmark for the center) despite performing poorly in the most predictive exercise – the structured interview.
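To make the arithmetic concrete, here is a minimal sketch of competency-based scoring. The competency names, exercise mix, and scores are illustrative assumptions, not taken from Figure 2; the point is only to show how equal-weight averaging lets strong scores in less valid exercises mask a weak structured-interview performance.

```python
# Illustrative sketch: competency-based scoring averages each competency
# across the exercises that assess it, weighting all exercises equally.
# The matrix layout and 1-5 scores below are hypothetical.

# Which exercises assess which competency (each assessed twice)
matrix = {
    "Communicating Information": ["role_play", "structured_interview"],
    "Achieving Results": ["inbox_exercise", "group_exercise"],
    "Relationship Building": ["role_play", "group_exercise"],
}

# A candidate who performs poorly in the most predictive method
scores = {
    "role_play": 4,
    "structured_interview": 2,  # weakest score, most valid method
    "inbox_exercise": 4,
    "group_exercise": 4,
}

# Each competency score is the mean of its exercise scores
competency_scores = {
    comp: sum(scores[ex] for ex in exercises) / len(exercises)
    for comp, exercises in matrix.items()
}

# Overall score is the mean across competencies
overall = sum(competency_scores.values()) / len(competency_scores)
print(competency_scores)
print(overall)  # ~3.67 out of 5: a solid total despite the weak interview
```

With equal weighting, the interview score of 2 is diluted to a single competency average of 3.0, and the candidate still clears a plausible benchmark overall.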
How can we increase the validity of assessment centers?
We recommend two simple steps to improve the performance of assessment centers:
1) Use the most valid and relevant assessments. Typically, this would be:
– A highly structured interview: Structured interviews can be made even more powerful by being informed by a high-quality behavioral questionnaire such as the Wave Personality Questionnaire.
– Aptitude testing: Cognitive ability tests have consistently been shown to be the single biggest predictor of workplace performance (particularly in cognitively demanding roles). They should feed into your candidate’s overall assessment center score.
– Highly relevant exercise(s): We are not saying job-relevant exercises should not be used. If you are hiring for a sales role you would want to see how potential recruits present in a sales environment. In this situation we would recommend a role play exercise simulating a client business development meeting, or a presentation exercise simulating a pitch for client work. Free from the design restriction of assessing only certain competencies in an exercise, you can measure exactly what is needed for success in that role. In our experience, one highly relevant exercise is enough; on occasion we have used two.
2) Switch to validity-optimized scoring. Rather than the matrix in Figure 2, your matrix will look like Figure 3.
Figure 3. Assessment matrix using validity-optimized scoring
You will gain scores for key behavioral criteria through the structured interview. Whilst you still score multiple behavioral indicators in the additional, work-relevant exercise(s), these should be averaged into one overall exercise score, which goes into the matrix and contributes to the total. The total score is calculated by adding the scores in each row. If we put the candidate we considered in Figure 2 through this scoring method, you will see they are less likely to progress. This is what you would want, as they performed less well in the most predictive exercises.
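The contrast with competency-based scoring can be sketched as follows. Again, the criterion names and scores are hypothetical, not taken from Figure 3; the sketch only shows the mechanics of summing rows where the interview contributes several criteria and the exercise contributes one averaged score.

```python
# Illustrative sketch of validity-optimized scoring.
# All names and 1-5 scores below are hypothetical.

# Key behavioral criteria scored directly from the structured interview:
# each is its own row in the matrix.
interview_criteria = {
    "communicating_information": 2,
    "achieving_results": 2,
    "relationship_building": 3,
}

# Multiple behavioral indicators from the job-relevant exercise are
# averaged into a single overall exercise score (one row in the matrix).
exercise_indicators = [4, 4, 3]
exercise_score = sum(exercise_indicators) / len(exercise_indicators)

# The total is the sum of the rows: the interview criteria now carry
# three rows while the exercise carries one, so a weak interview
# performance weighs heavily on the total.
total = sum(interview_criteria.values()) + exercise_score
print(total)
```

Here the same weak interview performance drags down three of the four rows, so the candidate is far less likely to clear a benchmark than under equal-weight competency averaging.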
What are the benefits of this approach?
In addition to the validity and cost benefits already discussed, this approach has specific benefits for recruiters and assessors:
– Reduces the risk of a bad hire.
– Less demanding (on the day itself and in terms of pre-event training/reading).
– No more complex scoring matrices, removing the need for wash-up meetings.
– Greater flexibility to add or remove exercises without having a significant impact on how the center is scored and delivered.
It also has specific benefits for participants:
– Shorter, less intense day.
– Not disproportionately punished for failing to demonstrate a specific competency in a specific exercise.
– A fairer process as they are assessed by a smaller group of assessors meaning quality can be better managed.
– Assessment focuses on meaningful and job-relevant elements of the role they’ve applied for. This means they can demonstrate the qualities they will bring to the role and get a more realistic job preview at the same time.
Delivering this approach virtually
We have been using this approach successfully with our clients for 18 months, so we now have the experience to share it. Given global events, there is a current focus on delivering face-to-face assessments virtually. You have probably concluded yourself that our recommended method is far easier to administer virtually than a traditional, multi-assessment center. We recommend avoiding group exercises: from experience, these are particularly complicated to deliver virtually, and the behaviors they target can be assessed in other job-relevant exercises that can be delivered online.
If you are currently struggling with moving your existing multi-assessment, multi-competency assessment process online, perhaps now is the right time to take on board our recommendations and move to a validity-optimized approach. Please do get in touch if you would like any support with this process.
Article written by Martin Kavanagh & Maya Mistry