Explainable AI – why trust artificial intelligence algorithms?
If you had to decide whether to undergo treatment and a computer was there to advise you, would you trust it? To what extent can we trust someone we do not know, especially when that someone is not human but an artificial intelligence algorithm?
Explainable AI (or #XAI, explainable artificial intelligence) is a set of tools and techniques that help people understand why an AI model makes certain decisions by describing how it works. For example, AI algorithms in digital imaging and instrumental diagnostics can distinguish healthy tissues from those possibly altered by cancer. These differences are often undetectable to the naked eye, and recognizing them becomes possible only thanks to the high accuracy of computer systems. Explaining how AI technologies, now increasingly widespread across medical and health research and practice, support physicians in choosing a therapy could therefore help patients accept an early diagnosis. Such a diagnosis is difficult to face, first and foremost psychologically, yet trusting it very often holds the real hope of preventing the most acute stages of the disease.
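To make the idea concrete, one widely used XAI technique is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. The sketch below is a minimal, self-contained illustration with a toy "model" and synthetic data (none of it taken from the interview or from FBK's actual systems):

```python
import random

# Toy "model": flags a sample when feature 0 (think: lesion size) is
# large; feature 1 is pure noise that the model ignores entirely.
def model(sample):
    return 1 if sample[0] > 0.5 else 0

# Tiny synthetic dataset of (features, label) pairs, labeled by the
# model itself so baseline accuracy is perfect.
random.seed(0)
features = [[random.random(), random.random()] for _ in range(200)]
data = [(x, model(x)) for x in features]

def accuracy(dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def permutation_importance(dataset, feature_idx):
    """Drop in accuracy when one feature's values are shuffled:
    a large drop means the model relies on that feature."""
    baseline = accuracy(dataset)
    shuffled = [x[feature_idx] for x, _ in dataset]
    random.shuffle(shuffled)
    permuted = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(dataset, shuffled)]
    return baseline - accuracy(permuted)

print(permutation_importance(data, 0))  # large drop: feature 0 matters
print(permutation_importance(data, 1))  # zero drop: feature 1 is ignored
```

An explanation like "the prediction depends almost entirely on lesion size, not on the noisy feature" is exactly the kind of statement XAI tools aim to produce for far more complex models.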
Monica Moroni, a mathematics graduate from Milan and now a post-doc with the Data Science for Health research unit at Fondazione Bruno Kessler's Digital Health and Wellbeing Center in Trento, tells us what she and her colleagues are working on to bring the algorithms they develop from the lab to the hospital setting.
Podcast and interview by Marzia Lucianer (FBK/TS4.0) – interview conducted in November 2022, in Trento.
Music: “Wholesome” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution