Events

Hokkaido University, Center for Human Nature, Artificial Intelligence, and Neuroscience

CHAIN ACADEMIC SEMINAR #44

CHAIN Seminar #44: Timo Speith “Making AI Understandable: The Goals, Methods, and Open Challenges of Explainable AI”

Date & Time Monday, August 6, 4:30-6:00 PM
Venue W309 (in-person) & Zoom (online)
Language English
Organizer Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN)
Format Hybrid (Zoom registration required for online participation)

The speaker of the 44th CHAIN academic seminar is Dr. Timo Speith from the University of Bayreuth, Germany, who is also staying at CHAIN as a visiting researcher until the end of September. The talk will be delivered in English, and questions will be taken in both English and Japanese during the Q&A session.

To participate online via Zoom, please register using the button below.

Lecturer

Timo Speith

Making AI Understandable: The Goals, Methods, and Open Challenges of Explainable AI

Abstract:

As AI systems become increasingly integrated into high-stakes decision-making processes, understanding their operations and outcomes is essential for satisfying societal desiderata such as trust, accountability, and fairness. This talk explores the goals, methods, and open challenges of an increasingly popular field of research dedicated to understanding AI systems: explainable AI (XAI). I will begin by motivating XAI, starting from the above-mentioned societal desiderata. Next, I will discuss the various stakeholders involved in XAI and their respective needs to understand AI systems. Furthermore, I will introduce a variety of methods for achieving explainability, focusing on saliency maps. Finally, I will address the open challenges that remain in the field. By highlighting these aspects, the talk aims to provide a comprehensive overview of XAI, emphasizing its importance in fostering societally desirable AI development and deployment.

Lecturer profile

Dr. Speith is a fixed-term lecturer at the Chair for Philosophy, Computer Science, and Artificial Intelligence at the University of Bayreuth, Germany. His research focuses on topics in AI ethics, especially the explainability of AI (XAI).