Are fuzzy systems as interpretable (readable and understandable) as the fuzzy community usually claims?
- Alonso, José María; Magdalena, L.
- Year: 2010
- Type of Publication: In Proceedings
- Keywords: Interpretability; discussion; review
- Editor: Martinez, L.; Barrenechea, E.; Espinilla, M.; Alcala, J.; Lopez, V.; Mucientes, M.; Olivas, J.A.; Rodriguez, R.
- Book title: CEDI2010
- Pages: 475-482
- Address: Valencia
- Abstract: The use of fuzzy logic (FL) eases the knowledge extraction and representation tasks carried out when modeling real-world complex systems. Thanks to their semantic expressivity, close to natural language, fuzzy rule-based systems made up of linguistic variables and rules are likely to be easily understandable by human beings. Nevertheless, although FL favors interpretability of the final model, it is not enough to guarantee it. Interpretability is a highly appreciated property in many applications, especially those with intensive human interaction, where it actually becomes a strong requirement. However, several constraints have to be imposed along the whole design process with the aim of producing truly interpretable fuzzy systems. As a consequence, interpretability is achieved at the cost of penalizing accuracy and usually increasing computational time. For this reason, most fuzzy systems are built disregarding interpretability, paying attention only to accuracy, while claiming that the final model is much more interpretable than other black-box techniques, such as neural networks, because it is based on FL. Unfortunately, this practice yields some fuzzy systems so hardly interpretable that they become useless black boxes from the interpretability point of view.