*Call for Papers:* The First Workshop on Natural Language Argument-Based Explanations (ArgNLE - https://argnle.github.io/ECAI-ArgNLE/), 20 October 2024.
Co-located with ECAI 2024 (https://www.ecai2024.eu/), Universidad de Santiago de Compostela, Spain.
*Workshop description*
Explainability and Computational Argumentation have usually been approached as separate, independent research topics, an approach that neglects many aspects arising from the interdependencies between them. To be effective for human users, explanations must be formulated in natural language, possibly in an argumentative fashion. This workshop on Natural Language Argument-based Explanations is proposed to investigate this challenging topic at the crossroads of these research fields. Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well, it requires, among other factors: selecting a proper level of generality/specificity of the explanation; considering assumptions about the explanation beneficiary's familiarity with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g., metadata) which might not be part of the prediction process; selecting appropriate examples; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way.
Given these considerations, the workshop welcomes contributions showing an integrated vision of Explainable AI (XAI), where low-level characteristics of the deep learning process are combined with higher-level schemas proper to the human capacity for argumentation. This integrated vision relies on three main considerations: i) in neural architectures, the correlation between internal states of the network and the justification of the network's classification outcome is not well studied; ii) high-quality explanations are crucially based on argumentation mechanisms (e.g., providing supporting examples and rejected alternatives); iii) in real settings, providing explanations is inherently an interactive process involving the system and the user. Accordingly, the workshop calls for cross-disciplinary contributions in three areas, i.e., deep learning, argumentation and interactivity, to support a broader and innovative view of explainable AI. More precisely, the workshop is intended to discuss research challenges that will advance the state of the art in explainable AI. Providing explanations to support a certain conclusion has been widely studied in logic, as a fundamental characteristic of human reasoning. As a result, both theoretical and computational models of human argumentation have been investigated. The recent resurgence of AI has highlighted the idea that low-level system behaviors not only need to be interpretable (e.g., showing the elements that most contributed to the system decision), but also need to fit high-level human schemas to produce convincing arguments.
*Topics of interest*
* Natural language argument-based explanations
* Dialectical, dialogical and conversational explanations
* AI methods to support argumentative explainability
* User acceptance and evaluation of argumentation-based explanations
* Tools that provide argumentation-based explanations
* Use of argument-based explanations for research in the social sciences, digital humanities, and related fields
* Real-world applications
The workshop solicits three types of contributions relevant to the workshop topics and suitable for generating discussion:
* Original, unpublished contributions
* Dataset-related submissions (presenting a dataset or a corpus related to the workshop topics that has been or is currently under development; these papers may have already been published in another venue)
* Project-related submissions (presenting funded projects or lines of work within the topics of the workshop, both academic and industrial)
*Invited speaker*
Professor Francesca Toni, Faculty of Engineering, Department of Computing, Imperial College London, UK. (https://www.imperial.ac.uk/people/f.toni)
*Important Dates*
* Paper submission: *30 June 2024 (EXTENDED)*
* Notification of acceptance: 10 July 2024
* Camera-ready papers: 31 July 2024
* ArgNLE workshop: 20 October 2024
*Submission Instructions*
Papers must be written in English, be prepared for double-blind review using the ECAI LaTeX template, and not exceed 7 pages (not including references). The ECAI LaTeX template can be found at https://ecai2024.eu/download/ecai-template.zip. Papers should be submitted via EasyChair: https://easychair.org/conferences/?conf=argnle2024
*Workshop Organizers:*
* Rodrigo Agerri (https://ragerri.github.io/) - HiTZ Center - Ixa, University of the Basque Country UPV/EHU, Spain
* Elena Cabrio (https://www-sop.inria.fr/members/Elena.Cabrio/) - Université Côte d’Azur, Inria, CNRS, I3S, France
* Serena Villata (https://webusers.i3s.unice.fr/~villata/Home.html) - Université Côte d’Azur, Inria, CNRS, I3S, France
* Marcin Lewinski (https://ifilnova.pt/en/people/marcin-lewinski/) - IFILNOVA, Universidade Nova de Lisboa, Portugal
* Bernardo Magnini (http://hlt.fbk.eu/people/magnini) - Fondazione Bruno Kessler, Italy
* Marie-Francine Moens (https://people.cs.kuleuven.be/~sien.moens/) - KU Leuven, Belgium