Explaining Artificial Intelligence’s Results by Being Unethical
25th or 26th of October 2020 NordiCHI Tallinn
Call for Papers: NordiCHI Workshop
The possibilities of Machine Learning (ML) and Artificial Intelligence (AI) applications have grown rapidly. Their impact on the shape of our society continues to be a subject of investigation, especially the way in which ML supports and assists decision making. The field of eXplainable Artificial Intelligence (XAI) has seen a resurgence of interest, particularly in XAI solutions aimed at developers.
However, results and experience in using XAI for end-users are currently lacking. This has resulted in limited practical know-how or skill sharing regarding what can constitute a meaningful explanation for a wide variety of end-users. We assume that initiatives involving XAI and end-users exist but are either in their infancy or were not seen as successful; as a result, the skills obtained and lessons learned have not yet been shared with a wider audience. The intention of this workshop is to create a platform for sharing and understanding best practices for providing explanations to end-users. Additionally, we examine how these best practices can ensure that explainability is included within the research and innovation process of a new AI/ML service.
Sharing experiences can help advance the debate on explainability and ensure that best practices are distilled to assist in creating more explainable AI/ML systems. The workshop will use creative and participatory methods to engage an interdisciplinary group of participants on how explainability can become a core facilitator for including fairness, transparency, and other ethical concerns throughout the design process.
We invite submissions from researchers and others who are currently working on or have worked on explainability for AI or ML.
What will happen? During our full-day workshop at NordiCHI’20 in Tallinn on the 25th or 26th of October 2020, we ask participants to engage with use cases involving explanations by AI/ML systems and to approach these as unethically as possible.
Have you always wondered how to make an explanation as wrong as possible? Send in a short paper describing your use case and find out in this creative and malicious workshop. We will explore what is needed to make an explanation or interpretation as unethical as possible.
Deadline: You can submit your short paper via the link below until the 10th of September 2020 (extended from the 1st of September). As we understand that a lot of uncertainty has surrounded the organisation of NordiCHI, we have extended the deadline.
Please ensure that the use case includes:
- A description of the study detailing the problem domain, the aim of the study, the current stage of the research, and the targeted end-users;
- The methodology and study set-up detailing: the (experimental) design, participant demographics (if applicable), a brief description of the purpose of the AI/ML process, and which ethical concerns were explicitly addressed (for example, fairness);
- A description of the challenges encountered in explaining AI/ML;
- A thorough discussion of the lessons learned.
If you have any queries, please do not hesitate to contact:
More information about NordiCHI’20 can be found here