In the last 12 months, interest in Machine Learning has doubled, according to Google Trends. Look further and it’s roughly a 4,000% increase over the last five years. AI’s wunderkind seems to be appearing everywhere lately as ‘proof’ of quality for a new generation of Talent Acquisition applications.
"Our app ‘learns’ from what the prospects say about themselves (and everything else we can find about them) and predicts which ones will be more likely to succeed."
"We’ll consume all your old applications and apply our ‘Machine Learning’ algorithm, update them, compare them to all your open positions and essentially ‘renew’, stack rank and…"
Really? I’m the first to admit my knowledge of AI/Machine Learning fits in a very small container - or it did, until I noted the increasing claims touted by salespeople and business development pros, many of them recent converts to the recruiting industry promoting their latest and greatest shiny object. None of them were really helpful in educating me on what lay under the hood, although the ‘solutions’ were always intriguing. The black box explanation always had to wait for the ‘scientist’. I was beginning to feel a bit behind - unable to join the ‘advanced’ class and always wondering why the ‘how’ seemed so complicated.
Eventually, I decided I simply had to learn a bit on my own. (I won’t recommend a long list of tomes or online courses because a simple Google search of the phrase, ‘Machine Learning’ will return 30 million results you can explore till you drop).
My educational journey was helped along last week by an opportunity to attend a lecture at my college, Stevens Institute of Technology, where Google’s head of Research, Dr. Peter Norvig, spoke on the Challenges and Promise of Machine Learning. About 500 professors, students and alums filled SIT’s latest Distinguished Lecture Series event, with an overflow room also jammed to capacity.
It was eye-opening on several levels.
First, Peter is an outstanding teacher. At the end of his lecture I even asked him which challenges most concern him given Google’s recent interest in ‘jobs’, so I feel like I’m on a first-name basis - not really. His analogies describing what Machine Learning ‘is’ were easy to follow, understandable and repeatable - especially the “bag of words” methodology. (And no, I’m not going to create a long post by repeating his 90-minute lecture.)
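For readers who want a concrete picture, the “bag of words” idea can be sketched in a few lines of Python: a document is reduced to its word counts, discarding order and grammar entirely. (This is my own minimal illustration, not code from the lecture, and the sample résumé text is invented.)

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Reduce a document to word counts, discarding word order and grammar."""
    return Counter(text.lower().split())

# Two hypothetical résumé snippets with the same words in a different order:
resume_a = "python developer with machine learning experience"
resume_b = "machine learning developer with python experience"

# The word order differs, but the two "bags" are identical:
print(bag_of_words(resume_a) == bag_of_words(resume_b))  # True
```

That loss of word order is exactly what makes the representation simple, fast, and easy to reason about - and also what it throws away.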
Dr. Norvig was one of the first professors to demonstrate the potential for web-based learning (Stanford, 2011: 160,000 signed up online for his course on Artificial Intelligence). His book on Artificial Intelligence - which I’ll readily admit I’ve not fully read (it weighs as much as a small oak tree) - is considered the most important textbook on the subject worldwide. I have, however, consumed all the early chapters and can almost understand them.
Dr. Norvig is also self-effacing and considers himself as well-known for a snarky spoof of the uselessness of PowerPoint (he published a six-slide PPT deck of Lincoln’s Gettysburg Address in 2000) as for his AI expertise.
Second, and perhaps the most important part of my learning, was his detailed description of the challenges and obstacles machine learning algorithms must manage: Adversaries, Debuggability & Explainability, Privacy/Security/Fairness, and Change.
How does machine learning impact Talent Acquisition?
- Machine Learning applications aren’t programmed. They are ‘trained’ by humans supplying large amounts of what [these humans] consider to be relevant data for the machines to consume. As a result, unless specific care is taken, it is just as plausible as any other outcome that the ML applications will create algorithms replicating the very biases embedded in that data. All training is not the same.
- The algorithms created via machines become more and more difficult to reverse engineer as the amount of data increases. And, of course, their ability to predict an outcome - like which candidate for a given job would be most likely to compete successfully - also improves with more and more data about those who have succeeded in the past. Another way of stating this is that all successful Machine Learning applications are ‘black boxes’ that are essentially undiscoverable. Whether or not they institutionally contribute to the decisions they were trained to examine without unintended consequences cannot be proven. If you must prove your approach is ‘fair’, for example, you may not be able to. You may not even be able to explain what that black box does at all.
- Prediction of a given outcome is not the same as predictive validation of a scientific hypothesis. In the short run, massive amounts of data might help me determine which candidate out of a pool is the one most likely to be chosen by an equally determined team of recruiters and hiring managers attempting to select the ‘best’ candidate. Faster, cheaper. However, that isn’t the same as knowing whether the selected candidate is, in fact, the one who would perform best over time, or even significantly better than any ‘lesser’ candidate.
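To make the first point above concrete, here is a deliberately tiny Python sketch - my own hypothetical data and naive ‘learner’, not any vendor’s method - showing how a model trained on biased historical hiring decisions simply reproduces that bias:

```python
# Hypothetical training data: past hiring decisions that favored
# School A over School B, regardless of candidate merit.
historical = [
    ("School A", True), ("School A", True), ("School A", True),
    ("School B", False), ("School B", False), ("School B", True),
]

def train(data):
    """Naive 'learner': predict a hire whenever a school's past
    hire rate exceeds 50%. It learns the pattern in the data,
    including any bias the humans who made those decisions had."""
    rates = {}
    for school, hired in data:
        wins, total = rates.get(school, (0, 0))
        rates[school] = (wins + hired, total + 1)
    return {s: wins / total > 0.5 for s, (wins, total) in rates.items()}

model = train(historical)
print(model)  # {'School A': True, 'School B': False}
```

The ‘model’ never saw a résumé or a job description; it faithfully learned that School B candidates don’t get hired, because that is what the training data said. Real ML systems are vastly more sophisticated, but the underlying risk is the same.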
We would encourage employers with TA analytics capabilities to prepare and use a comprehensive series of questions when evaluating any product claiming machine learning capability that will influence how prospects and candidates are screened, assessed and selected. Document the suppliers’ answers carefully.
As far as we can determine, neither SHRM, the Society for Industrial and Organizational Psychology, nor any other independent body has offered suggestions for assessing ML suppliers of TA applications to employers (possibly a committee put together by ATAP could offer a sensible template). There is no question that Machine Learning capability is both extraordinary in its promise and chilling in its challenge.
Chris and I would be interested in how machine learning applications are vetted in your company and whether this area of interest is deemed important. Give us a call. Send us a note. Discuss this article & related topics on the CXR eXchange.