Mykola Pechenizkiy
The cross-roads of algorithmic fairness, accountability and transparency in predictive analytics
Abstract
Modern machine learning techniques contribute to the massive automation of data-driven decision making and decision support. It is becoming better understood and accepted, in particular due to the new General Data Protection Regulation (GDPR), that deployed predictive models may need to be audited. Regardless of whether we deal with so-called black-box models (e.g. deep learning) or more interpretable models (e.g. decision trees), answering even basic questions like “why is this model giving this answer?” and “how do particular features affect the model output?” is nontrivial. In reality, auditors need tools not just to explain the decision logic of an algorithm, but also to uncover and characterize undesired or unlawful biases in predictive model performance; e.g., by law, hiring decisions may not be influenced by race or gender. In this talk I will give a brief overview of the different facets of comprehensibility of predictive analytics and reflect on the current state of the art and the further research needed to gain a deeper understanding of what it means for predictive analytics to be truly transparent and accountable. I will also reflect on the necessity of studying the utility of methods for interpretable predictive analytics.
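As a minimal illustration of the kind of bias audit the abstract alludes to (not a method presented in the talk), the sketch below computes a demographic parity difference: the gap in positive-outcome rates between a protected group and everyone else. The function name, data, and attribute encoding are hypothetical choices for this example.

```python
from typing import Sequence

def demographic_parity_difference(y_pred: Sequence[int],
                                  group: Sequence[str],
                                  protected: str) -> float:
    """Difference in positive-outcome rates between the protected
    group and the rest; 0.0 indicates parity on this one metric."""
    prot = [p for p, g in zip(y_pred, group) if g == protected]
    rest = [p for p, g in zip(y_pred, group) if g != protected]
    return sum(prot) / len(prot) - sum(rest) / len(rest)

# Hypothetical hiring decisions (1 = hire) and a protected attribute.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
gender    = ["f", "f", "m", "m", "f", "f", "m", "m"]
print(demographic_parity_difference(decisions, gender, protected="f"))
# -0.5: the protected group is hired at a much lower rate.
```

A single such statistic cannot establish unlawful discrimination, which is exactly why the talk argues that auditors need richer tools for explaining and characterizing model behavior.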