How to hire the best candidate?

The goal of filling any position, whether internally or externally, must be to hire the best candidate. Thus, the main question of any recruitment process should be: how do we define who the best candidate is?

In the recruitment process we want to look into the crystal ball and see how well the candidates will perform in the future. Thus, best recruitment practice must consist of tools that predict future job performance. The way we select these tools can have a high impact on the outcome of the recruitment process. Yet most often the decision of which tools to use is based on previous experience, intuition, and tradition, not on data-driven research (Fisher et al., 2020).

Research agrees on the recommendations

To determine whether an assessment tool is capable of repeatedly predicting job performance, researchers look for high correlations between the tool and job performance. This is measured as the criterion validity of a tool, typically reported as a number between 0 and 1. If the number is 0, there is no correlation at all between the tool used to assess a candidate and the future job performance of that candidate; in other words, it is pure coincidence whether the candidate turns out to be a good employee or not. On the other hand, if the criterion validity is 1, there is an exact match between what we see from the assessment tool and future performance. In reality, no assessment tool can predict job performance with 100% certainty, but the aim should be to use the assessment tool with the highest validity.
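For readers curious how such a coefficient is computed: a criterion validity of this kind is a Pearson correlation between assessment scores and later performance ratings. The sketch below computes it from scratch; the names and the six data points are hypothetical, chosen purely for illustration.

```python
from math import sqrt

def criterion_validity(test_scores, performance):
    """Pearson correlation between assessment scores and later job performance."""
    n = len(test_scores)
    mean_x = sum(test_scores) / n
    mean_y = sum(performance) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(test_scores, performance))
    sx = sqrt(sum((x - mean_x) ** 2 for x in test_scores))
    sy = sqrt(sum((y - mean_y) ** 2 for y in performance))
    return cov / (sx * sy)

# Hypothetical GMA scores and later performance ratings for six hires
gma_scores = [95, 110, 120, 100, 130, 105]
ratings = [3.5, 3.2, 4.4, 3.9, 4.1, 3.0]
print(round(criterion_validity(gma_scores, ratings), 2))  # a validity around 0.55
```

A value near 0 would mean the test tells us nothing about later performance. The meta-analytic validities cited below (0.39 to 0.73) are correlations of exactly this kind, aggregated over many studies rather than computed from a single sample.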

When looking at different studies in the area, there are small variations in the findings, but these variations are estimated to have very little practical importance. The overall picture is quite clear.

GMA is best at predicting job performance

The reason we can say that GMA is the most powerful single predictor of job performance is that the link between the two has been the subject of research for more than 100 years (Schmidt, Oh & Shaffer, 2016). Read more about how you choose the best test to measure GMA here (link).

Much of this research has been summarized using a meta-analytic approach. The renowned meta-study by Schmidt and Hunter (1998) compared GMA to 18 other methods of assessing candidates and found that different methods and combinations of methods have very different validities for predicting future job performance. Some, such as the amount of education, have very low validity. Others, such as graphology (the study of handwriting), have virtually no validity; in other words, selecting a candidate based on the candidate’s handwriting would be equivalent to hiring randomly. Other methods, such as GMA tests, have very high validity. Schmidt and Hunter (1998) also studied combinations of methods and found that a GMA test combined with a structured interview showed a validity of 0.63.

Ten years later, Schmidt, Shaffer & Oh (2008) concluded that GMA has even greater value in predicting job performance than the earlier studies showed. And in 2016 they updated the meta-analysis from 1998, collecting data from 100 years of research. Their findings show a compelling strength in GMA tests (Schmidt, Oh & Shaffer, 2016).

Salgado et al. (2003) conducted a meta-analysis of the criterion validity of GMA in six European countries and found the validity to be in the range 0.56-0.68, findings similar to those of the American meta-studies, which shows the consistency of GMA’s validity across countries.

Le & Schmidt (2006) conducted a different meta-analysis to shed light on GMA validity across job complexity levels. They found that even for the least complex professions the validity of GMA is 0.39, rising to an overwhelming 0.73 for the most complex professions. For professions of average complexity (where the largest number of workers are active), the predictive validity is estimated at 0.66 (Le & Schmidt, 2006).

With a multitude of studies showing such strong consensus on the high criterion validity of GMA for predicting future job performance, GMA is difficult to overlook when recruiting.


Fisher, P., Risavy, S., Robie, C., König, C., Christiansen, N.D., Tett, R.P., and Simonet, D.V. (2020) Selection Myths: A Conceptual Replication of HR Professionals’ Beliefs About Effective Human Resource Practices in the US and Canada. Journal of Personnel Psychology, Volume 20(1): 1-28 (in press). https://doi.org/10.1027/1866-5888/a000263

Le, H. and Schmidt, F. (2006) Correcting for Indirect range restriction in meta-analysis: Testing a new meta-analytic procedure. Psychological Methods, Volume 11: 416–438. https://psycnet.apa.org/doi/10.1037/1082-989X.11.4.416

Salgado, J.F., Anderson, N., Moscoso, S., Bertua, C., and De Fruyt, F. (2003) International Validity Generalization of GMA and Cognitive Abilities: A European Community Meta‐Analysis. Personnel Psychology, Volume 56: 573-605. https://doi.org/10.1111/j.1744-6570.2003.tb00751.x

Schmidt, F. and Hunter, J. (1998) The Validity and Utility of Selection Methods in Personnel Psychology. Psychological Bulletin, Volume 124(2): 262-274. https://doi.org/10.1037/0033-2909.124.2.262

Schmidt, F. L., Oh, I. S., and Shaffer, J. A. (2016) Validity and utility of Selection Methods in Personnel Psychology: Practical and theoretical implications of 100 years of research findings. Working paper

Date: 26.03.2021