*** Apologies for cross-posting ***
Call for papers: Explainable AI in Natural Language Processing
Traditional Natural Language Processing (NLP) models (e.g., decision trees, Markov models) have primarily been built on inherently interpretable techniques, often referred to as white-box approaches. In recent years, however, NLP models have adopted advanced neural architectures together with language-embedding features. These approaches, commonly referred to as black-box techniques, have yielded state-of-the-art performance, but their interpretability (i.e., how a model arrives at its results) has decreased significantly. This loss of interpretability not only lowers end users' trust in NLP models but also makes it harder for developers to analyze, debug, and improve them. Consequently, researchers in the NLP community are devoting significant attention to the emerging field of Explainable AI (XAI), which aims to tackle the opaqueness of AI systems in order to build trust and enable improvement. Beyond academia, organizations and companies have also launched heavily funded initiatives such as DARPA XAI and People + AI Research (PAIR).
As XAI is still a growing field, there is plenty of room for innovation to improve the explainability of NLP systems. Recent work on explainable NLP has captured the linguistic knowledge encoded in neural networks, explained individual predictions, stress-tested models via challenge sets and adversarial examples, and interpreted language embeddings.
The goal of this Research Topic is to better understand the present status of XAI in NLP by identifying: new dimensions for better explanations, evaluation techniques for measuring the quality of explanations, new software toolkits or approaches for explaining NLP models, and transparent deep learning models for different NLP tasks.
The scope of this Research Topic covers (but is not restricted to) the following topics:
• Surveys of XAI in NLP in general or in a particular NLP task such as NER, QA, sentiment analysis, social media (SocialNLP), etc.
• Explainable Neural models in Machine Translation
• Explainable Neural models in Named Entity Recognition
• Explainable Neural models in Question Answering
• Explainable Neural models in Sentiment Analysis
• Explainable Neural models in Opinion Mining
• Explainable Neural models in SocialNLP
• Evaluation techniques used to measure the quality of explanations
• Tools and toolkits that support explainability in NLP models
• Resources related to XAI in the context of NLP
The Research Topic welcomes contributions toward interpretable models that offer efficient solutions to NLP research problems and that explain the proposed model using suitable explainability technique(s) (e.g., example-driven, provenance, feature importance, induction, surrogate models), visualization technique(s) (e.g., raw examples, saliency, raw declarative representations), and other aspects. Software toolkits or approaches that help users add explainability to their models and ML pipelines are also welcome.
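For illustration only, the following minimal sketch shows two of the technique families named above, surrogate models and feature importance, on a toy sentiment task. The dataset, the scikit-learn models, and all parameter choices are our own assumptions for the example and are in no way requirements of this Research Topic.

    # Sketch: explain a black-box sentiment classifier with an interpretable
    # surrogate model and report feature importance (toy data, illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier      # the "black box"
    from sklearn.linear_model import LogisticRegression      # the interpretable surrogate
    import numpy as np

    texts = ["great movie", "terrible plot", "wonderful acting", "boring and slow",
             "loved it", "awful script", "fantastic film", "dull ending"]
    labels = [1, 0, 1, 0, 1, 0, 1, 0]                        # 1 = positive sentiment

    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)

    black_box = RandomForestClassifier(n_estimators=100, random_state=0)
    black_box.fit(X, labels)

    # Surrogate model: fit an interpretable model to mimic the black box's predictions.
    surrogate = LogisticRegression()
    surrogate.fit(X, black_box.predict(X))

    # Feature importance: the surrogate's coefficients act as a global explanation,
    # here printed as the five terms pushing most strongly toward the positive class.
    terms = vec.get_feature_names_out()
    weights = surrogate.coef_[0]
    for idx in np.argsort(weights)[::-1][:5]:
        print(f"{terms[idx]}: {weights[idx]:+.3f}")

Submissions are of course free to use any other explainability or visualization technique listed above.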
The full Call for Papers is available at https://www.frontiersin.org/research-topics/48440/explainable-ai-in-natural-...
Impact of the publication: https://www.frontiersin.org/about/impact
The current deadlines are:
* Abstract Deadline: 16 December 2022. This is a soft deadline; although not mandatory, if you would like feedback on your prospective manuscript's suitability, I encourage you to submit an abstract around this time.
* Manuscript Deadline: 14 April 2023. This is a mandatory deadline for the full manuscript submission; however, we can accommodate personal extensions on a case-by-case basis.
Guest Associate Editors:
Somnath Banerjee (University of Tartu, somnath.banerjee@ut.ee)
David Tomás (University of Alicante, dtomas@dlsi.ua.es)
Somnath Banerjee
Lecturer, Institute of Computer Science, University of Tartu, Narva mnt 18, room 3063, 51009 Tartu, ESTONIA. Webpage: http://www.ut.ee//~somnath/