Transparent Practice #2: Make models explainable while preserving performance
Everyone who consumes a model's output, and especially the people affected by its hiring recommendations, deserves to understand why the model made a given recommendation.
At AdeptID, we present regularly updated "under the hood" analyses of how our model works, so users can see which characteristics of a job candidate lead to a given recommendation. Making this a regular habit has helped our users and partners deepen their understanding of how the model works, and has even informed future model improvements. At the same time, it has allowed us to achieve the "reasonable explainability" we believe is a fundamental requirement of good practice.
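To make the idea concrete, here is a minimal sketch of what a per-candidate explanation can look like for a simple linear scoring model. The feature names, weights, and candidate values are hypothetical illustrations, not AdeptID's actual model or analysis method:

```python
# Hypothetical linear candidate-scoring model: the explanation is
# each feature's contribution (weight * value) to the overall score.
FEATURE_WEIGHTS = {
    "years_experience": 0.6,
    "skill_match": 1.2,
    "certification": 0.4,
}

def explain_recommendation(candidate):
    """Return the model's score plus each feature's contribution,
    ranked so the most influential characteristics come first."""
    contributions = {
        name: weight * candidate.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

candidate = {"years_experience": 5, "skill_match": 0.8, "certification": 1}
score, ranked = explain_recommendation(candidate)
# ranked[0] names the characteristic that drove the score the most.
```

For more complex, non-linear models, the same kind of per-feature summary is typically produced with post-hoc attribution techniques (e.g., permutation importance or SHAP values) rather than read directly from weights, which is what makes "reasonable explainability" achievable even when full explainability is not.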
This practice gives a range of stakeholders clarity on how a model works without subjecting them to the complexities of a model they lack the time, or the technical expertise, to parse. It also frees developers to adopt and develop newer classes of models for which full explainability is technically impossible, but which hold great promise for improved AI quality and fairness.