The science of recruiting isn’t years behind our peers in other disciplines; it has simply gone down the wrong path. In fact, science applied to work and hiring in a systematic and disciplined fashion is more than 100 years old; its first serious introduction is credited to F.W. Taylor’s controversial 1911 tome, The Principles of Scientific Management.
Taylor was an engineer who saw the worker as a cog in the industrial machine. He systematically examined what could be done to maximize efficiency, with no consideration for the long-term impact on the ‘human’. Short-sighted, yes, but in the short term he gets credit for proving that periodic breaks actually improve overall performance. Thank him every day. He described in detail how what was learned about the worker could then be applied in the selection process. Lucky us. Taylor’s work was disparaged for its mechanistic view, but he gave us the foundation for the data-driven approach we long for today: an approach that allows discussion to lead to continuous improvement. (And yet we still seem to be teaching practices as if the candidate’s only non-manipulative involvement comes at the end. But that is another story.)
Today, unfortunately, we have no scarcity of vendors and suppliers dressing up content marketing in poorly collected, poorly analyzed, and poorly reported data, solely to support their commercial objectives and without any notion of the basic principles of the scientific method.
My latest personal favorite is ‘…our proprietary, predictive machine-learning algorithm has been scientifically validated to…’ At its very best, nearly all of these vendors can now predict with unerring and increasingly automated accuracy the person that hiring managers would choose if left to their own devices, i.e., untrained, unaudited, unconscious bias. What a thrill to know that technology can now scrape enough data from internet sources to institutionalize our least reliable selection tool (validity, by the way, is a function of reliability) and call it ‘predictive’. Not a single approach to my knowledge has demonstrated value longitudinally against any performance criterion aligned to the business (let me know when that happens, please; happy to chat).
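That parenthetical deserves a number. In classical test theory, an observed validity coefficient is capped by the square root of the product of the predictor’s and criterion’s reliabilities, so an unreliable selection tool cannot be highly predictive no matter how it is marketed. A minimal sketch in Python; the 0.4 reliability figure is purely illustrative, not drawn from any vendor or study:

```python
import math

def max_validity(reliability_predictor: float,
                 reliability_criterion: float = 1.0) -> float:
    """Classical test theory ceiling: an observed validity coefficient
    cannot exceed sqrt(r_xx * r_yy), the square root of the product of
    the predictor's and criterion's reliability coefficients."""
    return math.sqrt(reliability_predictor * reliability_criterion)

# Illustrative only: a predictor with reliability 0.4 (a figure often
# associated with unstructured interviews) against a perfectly reliable
# criterion can never show validity above ~0.63.
print(round(max_validity(0.4), 2))  # prints 0.63
```

In other words, before asking whether a tool predicts performance, ask how consistently it measures anything at all; the ceiling follows from the arithmetic.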
There is always a little light at the end of this tunnel, as when we see research like the journal article “Paying More to Get Less: The Effects of External Hiring versus Internal Mobility”. The study was published in the Administrative Science Quarterly in September 2011 and was also described in detail by Peter Cappelli in his monthly column for HRExecutive Magazine, “Do Outside Hires Perform Better?”. In the original research, the author describes how he dug into the data of one financial services firm to identify and track jobs filled by both internally and externally sourced candidates over a protracted period of time. He compared subsequent performance ratings of the incumbents over years and found statistically significant evidence that:

– Internal candidates performed better than those hired from the outside.
– External candidates took as long as three years to achieve the performance levels of their internally promoted peers.
– External candidates, however, were paid 15% more on average.
– Performance of externally sourced individuals was higher if they were NOT brought in through search.
Now this is science! Not because it is proven correct, but because it is done by transparently describing the methodology in a way that we could replicate within our own firm. It offers descriptions and conclusions we can debate, discuss, and test, rather than ‘opinions’, which are merely untested hypotheses. No ‘proprietary’ algorithm can lead to effective learning, machine or otherwise. It fails as science. Vendors may need to protect their investment, but until I can independently understand and test their hypotheses over time, their prediction is no more valid than my crystal ball. #landminewaitingtogooff #hros
When evaluating machine-learning tools and applications for sourcing leads and, especially, screening candidates, make absolutely sure that what they are ‘predicting’ is related to performance and not to opinion. Insist that they unbundle their algorithm, if necessary to a trusted third-party consultant who can attest that the weights do not inadvertently and systematically advance a non-performance-related bias in the name of ‘fit’.
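As one concrete example of the kind of audit such a third party might run, here is a sketch of the EEOC ‘four-fifths rule’ check for adverse impact on a screening tool’s pass rates. The function name and the pass counts are hypothetical, for illustration only; a real audit would go well beyond this single ratio:

```python
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Compare the selection rates of two applicant groups. Under the
    EEOC four-fifths rule of thumb, a ratio below 0.8 flags potential
    adverse impact and warrants a closer look at the tool."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening output: 30 of 100 applicants from one group
# pass the automated screen, versus 18 of 100 from another group.
ratio = adverse_impact_ratio(30, 100, 18, 100)
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")  # prints: 0.6 flag
```

A check like this says nothing about whether the algorithm predicts performance; it only surfaces whether the weights are systematically favoring one group, which is exactly the question ‘fit’ tends to bury.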