1st Workshop on

Interactive Natural Language Technology for Explainable Artificial Intelligence

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the data at hand. They first analyze, curate and pre-process the data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it.

The focus of this workshop is on the automatic generation of interactive explanations in natural language (NL), as humans naturally do, and as a complement to visualization tools. NL technologies, comprising both NL Generation (NLG) and NL Processing (NLP) techniques, are expected to enhance knowledge extraction and representation through human-machine interaction (HMI). As remarked in the latest challenge stated by the USA Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, users without a strong background in AI require a new generation of Explainable AI systems, which are expected to interact naturally with humans and provide comprehensible explanations of automatically made decisions. The ultimate goal is to build trustworthy AI that is beneficial to people through fairness, transparency and explainability. To achieve this, not only technical but also ethical and legal issues must be carefully considered.

The workshop will be held as part of the International Conference on Natural Language Generation (INLG 2019), which is supported by the Special Interest Group on NLG of the Association for Computational Linguistics. INLG 2019 will take place in Tokyo (Japan), 29 October - 1 November 2019.

This is the first of a series of workshops to be organized in the coming years in the context of the European project NL4XAI.

Important Note

We do not accept double submissions. However, if you submitted a paper to the INLG main track and it was not accepted, do not miss the chance to address the reviewers' comments and resubmit your manuscript to our NL4XAI workshop, even if you did not submit a tentative title in advance. Accordingly, the submission deadline is extended to September 9th (while INLG notifications are due on September 1st).

Aims and Scope

This half-day workshop goes a step beyond the 2IS&NLG workshop that we co-organized with Mariët Theune at INLG 2018. We have narrowed the workshop topic to create a specialized event on Explainable AI. In this sense, the workshop continues the line started with the XCI workshop at INLG 2017. Moreover, it follows a series of thematic special sessions organized at international conferences.

The aim of this workshop is to provide a forum to disseminate and discuss recent advances in Explainable AI. We are mainly interested in attracting early-stage researchers and practitioners who would like to receive feedback on their work in progress, as well as mentoring from senior researchers in the area of Explainable AI.

As a result, we expect to identify challenges and explore potential transfer opportunities between related fields, generating synergies and symbiotic collaborations in the context of Explainable AI, HMI and Language Generation. Moreover, we expect to strengthen the network of researchers and practitioners interested in taking NLG further, to enable the next generation of Explainable AI systems.

How to participate

We solicit contributions dealing with NLG issues related to any of the many aspects of Explainable AI systems.

Authors may submit regular papers (up to 4 pages + unlimited references) and demo papers (up to 2 pages). Papers should follow the ACL paper format. Contributions will be subject to a blind peer-review process to assess their relevance and originality for the workshop. Accepted contributions will be the primary input for the workshop, and authors will be asked to present them as either a poster or an oral presentation, whichever format is most suitable in each case.

Early-stage researchers are encouraged to take part in this workshop. In addition, senior researchers as well as non-academic participants from industry are very welcome to share their valuable experiences.

Contributions will be compiled in companion proceedings to be published in the ACL Anthology.

Submissions should be made through EasyChair.

Topics

  • Definitions and Theoretical Issues on Explainable AI
  • Interpretable Models versus Explainable AI systems
  • Explaining black-box models
  • Explaining Bayes Networks
  • Explaining Fuzzy Systems
  • Explaining Logical Formulas
  • Multi-modal Semantic Grounding and Model Transparency
  • Explainable Models for Text Production
  • Verbalizing Knowledge Bases
  • Models for Explainable Recommendations
  • Interpretable Machine Learning
  • Self-explanatory Decision-Support Systems
  • Explainable Agents
  • Argumentation Theory for Explainable AI
  • Natural Language Generation for Explainable AI
  • Interpretable Human-Machine Multi-modal Interaction
  • Metrics for Explainability Evaluation
  • Usability of Explainable AI/interfaces
  • Applications of Explainable AI Systems

Important dates

  • Tentative Title and Authors due: August 15, 2019 (extended to September 6, 2019)
  • Submissions due: September 9, 2019
  • Notification of acceptance: October 1, 2019
  • Camera-ready papers due: October 15, 2019
  • Workshop session: October 29, 2019

Program Committee

  • Alberto Bugarin, CiTIUS, University of Santiago de Compostela (Spain)
  • Katarzyna Budzynska, Institute of Philosophy and Sociology of the Polish Academy of Sciences (Poland)
  • Claire Gardent, CNRS/LORIA, Nancy (France)
  • Albert Gatt, University of Malta (Malta)
  • Dirk Heylen, Human Media Interaction, University of Twente (The Netherlands)
  • Simon Mille, Universitat Pompeu Fabra (Spain)
  • Martı́n Pereira-Fariña, Institute of Heritage Sciences (Incipit), Spanish National Research Council (CSIC) (Spain)
  • Chris Reed, Centre for Argument Technology, University of Dundee (UK)
  • Ehud Reiter, University of Aberdeen, Arria NLG plc. (UK)
  • Carles Sierra, Institute of Research on Artificial Intelligence (IIIA), Spanish National Research Council (CSIC) (Spain)
  • Mariët Theune, Human Media Interaction, University of Twente (The Netherlands)
  • Nava Tintarev, Delft University of Technology (The Netherlands)
  • Hitoshi Yano, Minsait, INDRA (Spain)

Organizers and contact

José M. Alonso

Research Centre in Intelligent Technologies

(Centro Singular de Investigacion en Tecnoloxias Intelixentes, CiTIUS)

University of Santiago de Compostela, Spain

Alejandro Catala

Research Centre in Intelligent Technologies

(Centro Singular de Investigacion en Tecnoloxias Intelixentes, CiTIUS)

University of Santiago de Compostela, Spain