On average, a corporate job opening attracts a whopping 250 applicants, yet only a handful will make it through to an interview. At two to three pages per résumé, that's up to 750 pages to sift through to fill just one job, and who's got time for that? Enter: the algorithms.
At their core, algorithms are designed to spot patterns in existing data and make predictions about future data, based on a definition of which variables (skills, for example) constitute a match. And the more data you feed an algorithm, so the thinking goes, the more accurate its predictions become. This certainly holds for online shopping: analysis of past purchases indicates that if you buy a certain kind of phone, you'll probably want a particular type of case, ring or dashboard mount to go with it. And that works fine, as long as you're buying phones.
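To make that pattern-matching idea concrete, here is a deliberately minimal sketch in Python of the "customers who bought this also bought that" logic. The product names and purchase history are invented purely for illustration; real recommendation systems are far more sophisticated, but the principle of predicting from patterns in past data is the same.

```python
from collections import Counter

# Toy purchase history: each record is the set of items one customer bought.
# All product names and data here are made up for illustration.
past_purchases = [
    {"phone_x", "case_a", "ring_mount"},
    {"phone_x", "case_a"},
    {"phone_x", "dashboard_mount"},
    {"phone_y", "case_b"},
]

def recommend(item, history, top_n=2):
    """Suggest the items most often bought alongside `item` in past data."""
    co_bought = Counter()
    for basket in history:
        if item in basket:
            co_bought.update(basket - {item})
    return [product for product, _ in co_bought.most_common(top_n)]

print(recommend("phone_x", past_purchases))
# ['case_a', 'ring_mount']  -- the pattern found in past data drives the prediction
```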
The problem with building algorithms to weed out the wrong candidates is that many designs rely on a) a definition of what a “good” candidate is, b) previous examples of what “good” candidates have been, or c) which candidates from the whole pool of applicants end up advancing through successive rounds. Defining what makes a candidate a good fit for a job is notoriously difficult; up to 50 percent of new hires fail within 18 months. And when data on previous successful candidates is fed into the algorithms designed to predict which candidates will succeed next, you just end up getting more of the same. Furthermore, if your job descriptions and recruiting processes are engineered in ways that overtly or unconsciously suppress interest from diverse candidates, your pipeline likely won't include enough variety to prompt the algorithm to suggest diverse candidates in the future.
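To see how that feedback loop plays out, consider a deliberately simplified sketch in Python. The field names, data and scoring rule are invented for illustration; real screening tools are far more elaborate, but the dynamic is the same: a model that scores applicants by how closely they resemble past hires will keep ranking the same profile on top.

```python
# Past "good" hires, as recorded by a hypothetical company. Note that the
# history itself is narrow: everyone shares roughly the same background.
past_good_hires = [
    {"degree": "CS", "school": "Big State U", "referral": True},
    {"degree": "CS", "school": "Big State U", "referral": True},
    {"degree": "CS", "school": "Ivy", "referral": True},
]

def similarity_score(candidate, history):
    """Fraction of attribute values the candidate shares with past hires."""
    matches = 0
    total = 0
    for hire in history:
        for field, value in hire.items():
            total += 1
            if candidate.get(field) == value:
                matches += 1
    return matches / total

applicants = [
    {"degree": "CS", "school": "Big State U", "referral": True},          # resembles past hires
    {"degree": "Design", "school": "Community College", "referral": False},  # doesn't
]

for candidate in applicants:
    print(candidate["degree"], round(similarity_score(candidate, past_good_hires), 2))

# Output:
#   CS 0.89
#   Design 0.0
# The model simply reproduces the historical pattern instead of finding new kinds of talent.
```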
The good news is that some strategies can help prevent bias from creeping into the data stream and the algorithms that learn from it.