An Introduction to Interpretable Machine Learning
Neel Pradip Shah1, Sheetal Jeshwani2, Pavni Bhatt3

1Neel Pradip Shah, Master’s Student, WIAI Faculty, University of Bamberg, Germany. 

2Sheetal Jeshwani, Master’s Student, WIAI Faculty, University of Bamberg, Germany. 

3Pavni Bhatt, Data Analyst, MTLB India Private Ltd., India. 

Manuscript received on 27 April 2020 | Revised Manuscript received on 09 May 2020 | Manuscript Published on 22 May 2020 | PP: 107-111 | Volume-9 Issue-7S July 2020 | Retrieval Number: 100.1/ijitee.G10230597S20 | DOI: 10.35940/ijitee.G1023.0597S20

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license.

Abstract: As Artificial Intelligence penetrates all aspects of human life, more and more questions about ethical practices and fair use arise, which has motivated the research community to look inside Artificial Intelligence/Machine Learning models and develop methods to interpret them. Interpretability not only helps address these ethical questions but also provides insight into how machine learning models work, which is crucial for building trust and understanding how a model makes its decisions. Furthermore, in many machine learning applications, interpretability is the primary value they offer. In practice, however, many developers select models based on accuracy alone and disregard their interpretability, which can be problematic because the predictions of many high-accuracy models are not easily explained. In this paper, we introduce the concepts of machine learning model interpretability and interpretable machine learning, and the methods used for interpretation and explanation.
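To make the contrast between interpretable and black-box models concrete, the sketch below shows an intrinsically interpretable model: a linear model whose learned weights can be read directly as per-feature effects on the prediction. The feature names, weights, and data are entirely hypothetical and only illustrate the idea; they are not taken from the paper.

```python
# A minimal sketch of an intrinsically interpretable model: a linear
# model whose weights directly explain each prediction.
# All feature names, weights, and values below are hypothetical.

def predict(weights, bias, features):
    """Linear model: the prediction is a transparent weighted sum."""
    return bias + sum(w * x for w, x in zip(weights, features))

def explain(weights, feature_names, features):
    """Per-feature contribution to a single prediction."""
    return {name: w * x for name, w, x in zip(feature_names, weights, features)}

# Hypothetical house-price model: each weight is a human-readable effect
# (e.g. "every extra square metre adds 1200 to the predicted price").
feature_names = ["area_sqm", "rooms"]
weights = [1200.0, 5000.0]
bias = 10000.0

sample = [80.0, 3.0]
price = predict(weights, bias, sample)              # 121000.0
contributions = explain(weights, feature_names, sample)
# contributions = {"area_sqm": 96000.0, "rooms": 15000.0}
# Each entry states exactly how much a feature moved the output --
# the kind of explanation a black-box model cannot give directly.
```

A high-accuracy black-box model (e.g. a deep neural network) offers no such direct readout, which is why post-hoc interpretation methods are needed for it.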

Keywords: Machine Learning, Interpretability, Black Box Models, Explainable Artificial Intelligence.
Scope of the Article: Machine Learning