Tell us more about your tools. How do they work?
Polli: We use gamified neuroscience and A.I. to help companies understand what cognitive and emotional traits predict success in different roles, and help them match people to those roles. We test the algorithms to ensure that women and men (as well as people of different ethnic backgrounds) are getting similar scores, and if they aren’t, we adjust the inputs until they are.
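A test like the one Polli describes can be sketched in a few lines. This is an illustration, not Pymetrics' actual method: it computes the "adverse impact ratio" (the ratio of pass rates between two groups, a standard check in hiring audits), where a value below 0.8 is conventionally treated as a red flag. The scores, cutoff, and group names here are invented for the example.

```python
# Illustrative bias check (not Pymetrics' actual code): compare the pass
# rates two groups achieve under a scoring model. A ratio below 0.8 is
# the conventional "four-fifths rule" threshold for possible bias.

def adverse_impact_ratio(scores_a, scores_b, cutoff):
    """Ratio of the lower group pass rate to the higher one."""
    pass_a = sum(s >= cutoff for s in scores_a) / len(scores_a)
    pass_b = sum(s >= cutoff for s in scores_b) / len(scores_b)
    lower, higher = sorted([pass_a, pass_b])
    return lower / higher if higher else 1.0

# Hypothetical model scores for two groups of candidates.
women = [0.72, 0.65, 0.81, 0.58, 0.90]
men   = [0.70, 0.68, 0.77, 0.60, 0.88]

ratio = adverse_impact_ratio(women, men, cutoff=0.65)
print(f"adverse impact ratio: {ratio:.2f}")
```

If the ratio fell below 0.8, the remedy Polli describes would be to adjust the model's inputs and re-run the check until parity is restored.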
Glazebrook: Applied is the first hiring platform designed entirely around the psychology of decision-making, helping firms make recruitment decisions that are smart (more predictive of performance), fair (less biased) and easy. We anonymize applications, chunk them into batches to allow for better comparative assessment, randomize candidate order to avoid ordering effects and have multiple evaluators contribute their scores independently to harness the wisdom of the crowd. We also check for gendered language in job descriptions and provide automated, personalized feedback to every candidate on how they performed.
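The pipeline Glazebrook describes — anonymize, randomize, batch, then average independent reviews — can be sketched as follows. This is a minimal illustration under invented data and field names, not Applied's actual implementation.

```python
# Sketch of an anonymize / randomize / batch / independent-review flow.
# Data, field names, and scores are hypothetical, for illustration only.
import random

def anonymize(application):
    """Strip identifying fields, keeping only the work-relevant answer."""
    return {"id": application["id"], "answer": application["answer"]}

def batches(items, size):
    """Chunk a list into fixed-size batches for comparative assessment."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

applications = [
    {"id": 1, "name": "A. Smith", "answer": "sample answer"},
    {"id": 2, "name": "B. Jones", "answer": "sample answer"},
    {"id": 3, "name": "C. Lee",   "answer": "sample answer"},
    {"id": 4, "name": "D. Khan",  "answer": "sample answer"},
]

rng = random.Random(42)               # fixed seed so the demo is reproducible
anon = [anonymize(a) for a in applications]
rng.shuffle(anon)                     # randomize order to avoid ordering effects

# Each reviewer scores independently; averaging harnesses the crowd.
reviewer_scores = {1: [4, 5, 4], 2: [3, 3, 4], 3: [5, 4, 5], 4: [2, 3, 2]}
for batch in batches(anon, size=2):
    for app in batch:
        scores = reviewer_scores[app["id"]]
        print(app["id"], sum(scores) / len(scores))
```

The key design choice is that reviewers never see names or each other's scores, so each rating is an independent judgment of anonymized work.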
Does the evidence show that your respective approaches work in leveling the playing field for men and women?
Polli: Yes! Clients like Accenture and Unilever have found Pymetrics highly successful in improving gender, ethnic and socioeconomic representation.
Glazebrook: We ran a large-scale experiment last year to test whether we could beat a traditional résumé-sifting process, and found that on every measure we were driving better outcomes. Indeed, over half of the people who were hired wouldn't have been hired were it not for the platform.
Are you concerned about algorithmic bias?
Glazebrook: The risk of algorithmic bias is real: The data you use to train the algorithm is critical, and naïveté in design can be disastrous. If you train a computer to optimize outcomes using biased past decisions, it’ll do a fantastic job of replicating that bias.
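Glazebrook's point can be shown with a toy example, assuming invented historical data: a naive model fit to biased past hiring decisions faithfully learns the bias rather than correcting it.

```python
# Toy illustration of bias replication: equally qualified candidates, but
# historical decisions hired women far less often. A naive frequency model
# trained on those decisions simply reproduces the gap. Data is invented.
from collections import defaultdict

# (group, hired) pairs from a hypothetical biased hiring history.
history = [("woman", 1), ("woman", 0), ("woman", 0), ("woman", 0),
           ("man", 1), ("man", 1), ("man", 1), ("man", 0)]

outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)

# The "model" is just the learned hire rate per group.
model = {g: sum(v) / len(v) for g, v in outcomes.items()}
print(model)  # learns men were hired 3x as often, and would predict the same
```

Nothing in the optimization objected to the gap; the model did exactly what it was asked and replicated the past.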
Polli: I’m also very concerned about algorithmic bias because A.I. is so powerful that, unless checked, it can magnify the bias in hiring, and very few technology providers test their tools for bias. Pymetrics does. Algorithms can be fairer than people, which, although true, can be a tough sell!