Apologies for cross-posting.
---------------------------------------------------------------------------
*1st Workshop on NLP for Indigenous Languages of Lusophone Countries
(ILLC-NLP 2024) -- 2nd CFP*
*January 17, 2024: Paper submission due (extended)*
February 01, 2024: Notification of Acceptance
March 12, 2024: Workshop
Workshop website: https://sites.google.com/view/illc-nlp-2024/home
Co-located with PROPOR 2024 <https://propor2024.citius.gal/> in Santiago
de Compostela
——————————————————————————————————
*Overview and goals:*
The workshop aims to explore, discuss, and enhance the development of
resources, methods, and applications of NLP for indigenous languages,
especially those spoken in, or that have influenced the languages spoken in,
countries where Portuguese is currently the official language. We hope to
contribute to the preservation and promotion of these languages.
This is one of several initiatives to expand knowledge and research
in NLP for underrepresented languages. We encourage the participation of
everyone who shares an interest in preserving and enriching the linguistic
and cultural heritage of indigenous languages in a broad sense. Accordingly,
we welcome submissions on languages from all Portuguese-speaking nations,
such as those of African origin in Angola, Mozambique, and the Atlantic
islands, as well as minority languages in Portugal.
*Submissions*:
ILLC-NLP seeks submissions in the following categories:
- Full papers: 8 pages + unlimited references
- Short papers (work in progress, innovative ideas/proposals, research
ideas): 4 pages + unlimited references
- Submissions should be written in English. At submission time,
papers must be in PDF format only. For the final versions, authors of
accepted papers will be given one extra content page to address the
reviews. Authors of accepted papers will be requested to send the source
files to produce the proceedings. All submitted papers must conform to the
official ACL style guidelines (LaTeX
<https://github.com/acl-org/acl-style-files/tree/master/latex> or Word
<https://github.com/acl-org/acl-style-files/tree/master/word>).
Both long and short papers will be published in the ACL Anthology.
Submission site: https://easychair.org/conferences/?conf=illcnlp2024
Reviewing format: At least two reviewers will evaluate each submission.
The reviewing format will be single-blind.
Please help us spread the word about this event by sharing this call with
your contacts and institutions. Your participation and support are crucial
for the success of this workshop.
Sincerely,
Aline Paes, Aline Villavicencio, Claudio Pinhanez, Edward Gow-Smith, Paulo
Rodrigo Cavalin (Workshop organisers)
-------------------------------------------------------------------------------------------------
*Profa. Dra. Aline Paes (she/her)*
*Associate professor - Computer Science (Artificial Intelligence)*
Institute of Computing / Universidade Federal Fluminense (IC/UFF)
Member of CE-PLN <https://sites.google.com/view/ce-pln/inicio> and BPLN
<https://brasileiraspln.com/>
CNPq PQ-2 and FAPERJ JCNE
__________________________________________________________
url: www.ic.uff.br/~alinepaes
Av Gal Milton Tavares de Souza, S/N, Computing Building, Office 504
São Domingos, Niterói, RJ, Brazil. ZIP 24210-346
-------------------------------------------------------------------------------------------------
We invite proposals for tasks to be run as part of SemEval-2025
<https://semeval.github.io/SemEval2025/>. SemEval (the International
Workshop on Semantic Evaluation) is an ongoing series of evaluations of
computational semantics systems, organized under the umbrella of SIGLEX
<https://siglex.org/>, the Special Interest Group on the Lexicon of the
Association for Computational Linguistics.
SemEval tasks explore the nature of meaning in natural languages: how to
characterize meaning and how to compute it. This is approached in practical
terms, using shared datasets and standardized evaluation metrics to
quantify the strengths and weaknesses of possible solutions. SemEval tasks
encompass a broad range of semantic topics, from the lexical level to the
discourse level, including word sense identification, semantic parsing,
coreference resolution, and sentiment analysis, among others.
For SemEval-2025 <https://semeval.github.io/SemEval2025/cft>, we welcome
tasks that can test an automatic system for the semantic analysis of text
(e.g., intrinsic semantic evaluation, or an application-oriented
evaluation). We especially encourage tasks for languages other than
English, cross-lingual tasks, and tasks that develop novel applications of
computational semantics. See the websites of previous editions of SemEval
to get an idea about the range of tasks explored, e.g. SemEval-2020
<http://alt.qcri.org/semeval2020/> and SemEval-2021 through SemEval-2024
<https://semeval.github.io/>.
We strongly encourage proposals based on pilot studies that have already
generated initial data, evaluation measures, and baselines. In this way,
unforeseen challenges that may delay the task can be avoided.
In case you are not sure whether a task is suitable for SemEval, please
feel free to get in touch with the SemEval organizers at
semevalorganizers(a)gmail.com to discuss your idea.
=== Task Selection ===
Task proposals will be reviewed by experts, and the reviews will serve as
the basis for acceptance decisions. All else being equal, novel tasks will
be given preference over task reruns. Task proposals will be evaluated on:
- Novelty: Is the task on a compelling new problem that has not been
explored much in the community? Is the task a rerun, but covering
substantially new ground (new subtasks, new types of data, new languages,
etc.)?
- Interest: Is the proposed task likely to attract a sufficient number
of participants?
- Data: Are the plans for collecting data convincing? Will the resulting
data be of high quality? Will annotations have meaningfully high
inter-annotator agreement? Have all appropriate licenses for use and
re-use of the data after the evaluation been secured? Have all
international privacy concerns been addressed? Will the data annotation be
ready on time?
- Evaluation: Is the methodology for evaluation sound? Is the necessary
infrastructure available or can it be built in time for the shared task?
Will research inspired by this task be able to evaluate in the same manner
and on the same data after the initial task?
- Impact: What is the expected impact of the data in this task on future
research beyond the SemEval Workshop?
- Ethics: The data must comply with privacy policies. For example:
a) avoid personally identifiable information (PII); tasks aimed at
identifying specific people will not be accepted;
b) avoid medical decision making (comply with HIPAA; do not try to
replace medical professionals, especially in anything related to mental
health);
c) note that these examples are representative, not exhaustive.
=== New Tasks vs. Task Reruns ===
We welcome both new tasks and task reruns. For a new task, the proposal
should address whether the task would be able to attract participants.
Preference will be given to novel tasks that have not received much
attention yet.
For reruns of previous shared tasks (whether or not the previous task was
part of SemEval), the proposal should address the need for another
iteration of the task. Valid reasons include: a new form of evaluation
(e.g. a new evaluation metric, a new application-oriented scenario), new
genres or domains (e.g. social media, domain-specific corpora), or a
significant expansion in scale. We further discourage carrying over a
previous task and just adding new subtasks, as this can lead to the
accumulation of too many subtasks. Evaluating on a different dataset with
the same task formulation, or evaluating on the same dataset with a
different evaluation metric, typically should not be considered a separate
subtask.
=== Task Organization ===
We welcome people who have never organized a SemEval task before, as well
as those who have. Apart from providing a dataset, task organizers are
expected to:
- Verify the data annotations have sufficient inter-annotator agreement
- Verify licenses for the data allow its use in the competition and
afterwards. In particular, text that is publicly available online is not
necessarily in the public domain; unless a license has been provided, the
author retains all rights associated with their work, including copying,
sharing and publishing. For more information, see:
https://creativecommons.org/faq/#what-is-copyright-and-why-does-it-matter
- Resolve any potential security, privacy, or ethical concerns about the
data
- Commit to making the data available after the task
- Provide task participants with format checkers and standard scorers.
- Provide task participants with baseline systems to use as a starting
point (in order to lower the obstacles to participation). A baseline system
typically contains code that reads the data, creates a baseline response
(e.g. random guessing, majority class prediction), and outputs the
evaluation results. Whenever possible, baseline systems should be written
in widely used programming languages and/or should be implemented as a
component for standard NLP pipelines.
- Create a mailing list and website for the task and post all relevant
information there.
- Create a CodaLab or other similar competition for the task and upload
the evaluation script.
- Manage submissions on CodaLab or a similar competition site.
- Write a task description paper to be included in SemEval proceedings,
and present it at the workshop.
- Manage participants’ submissions of system description papers, manage
participants’ peer review of each others’ papers, and possibly shepherd
papers that need additional help in improving the writing.
- Review other task description papers.
- Define roles for each organizer:
   - Lead Organizer: the main point of contact; expected to ensure
deliverables are met on time and to contribute to task duties (see below).
   - Co-Organizers: provide significant contributions to ensure the task
runs smoothly. Examples include maintaining communication with task
participants, preparing data, creating and running evaluation scripts, and
leading paper reviewing and acceptance.
   - Advisory Organizers: a more supervisory role; they may not contribute
to detailed tasks but will provide guidance and support.
=== Important dates ===
- Task proposals due March 31, 2024 (Anywhere on Earth)
- Task selection notification May 18, 2024
=== Preliminary timetable ===
- Sample data ready July 15, 2024
- Training data ready September 1, 2024
- Evaluation data ready December 1, 2024 (internal deadline; not for public
release)
- Evaluation starts January 10, 2025
- Evaluation end by January 31, 2025 (latest date; task organizers may
choose an earlier date)
- Paper submission due February 2025
- Notification to authors in March 2025
- Camera-ready due April 2025
- SemEval workshop Summer 2025 (co-located with a major NLP conference)
Tasks that fail to keep up with crucial deadlines (such as the dates for
having the task and CodaLab website up and dates for uploading sample,
training, and evaluation data) or that diverge significantly from the
proposal may be cancelled at the discretion of SemEval organizers. While
consideration will be given to extenuating circumstances, our goal is to
provide sufficient time for the participants to develop strong and
well-thought-out systems. Organizers of cancelled tasks will be encouraged
to submit proposals for the subsequent year’s SemEval. To reduce the risk of tasks
failing to meet the deadlines, we are unlikely to accept multiple tasks
with overlap in the task organizers.
=== Submission Details ===
The task proposal should be a self-contained document of no longer than 3
pages (plus additional pages for references). All submissions must be in
PDF format, following the ACL template
<https://github.com/acl-org/acl-style-files>.
Each proposal should contain the following:
- Overview
- Summary of the task
- Why this task is needed and which communities would be interested
in participating
- Expected impact of the task
- Data & Resources
- How the training/testing data will be produced. Please discuss whether
existing corpora will be re-used.
- Details of copyright, so that the data can be used by the research
community both during the SemEval evaluation and afterwards
- How much data will be produced
- How data quality will be ensured and evaluated
- An example of what the data would look like
- Resources required to produce the data and prepare the task for
participants (annotation cost, annotation time, computation time, etc.)
- Assessment of any concerns with respect to ethics, privacy, or
security (e.g. personally identifiable information of private
individuals;
potential for systems to cause harm)
- Pilot Task (strongly recommended)
- Details of the pilot task
- What lessons were learned and how these will impact the task design
- Evaluation
- The evaluation methodology to be used, including clear evaluation
criteria
- For Task Reruns
- Justification for why a new iteration of the task is needed (see
criteria above)
- What will differ from the previous iteration
- Expected impact of the rerun compared with the previous iteration
- Task organizers
- Names, affiliations, email addresses
- (optional) brief description of relevant experience or expertise
- (if applicable) years and task numbers of any SemEval tasks you
have run in the past
- Role of each organizer
Proposals will be reviewed by an independent group of area experts who may
not have familiarity with recent SemEval tasks, and therefore all proposals
should be written in a self-explanatory manner and contain sufficient
examples.
*The submission webpage is:*
https://openreview.net/group?id=aclweb.org/ACL/2024/Workshop/SemEval
For further information on this initiative, please refer to
https://semeval.github.io/SemEval2025/cft
=== Chairs ===
Atul Kr. Ojha, Insight SFI Centre for Data Analytics, DSI, University of
Galway
A. Seza Doğruöz, Ghent University
Giovanni Da San Martino, University of Padua
Harish Tayyar Madabushi, The University of Bath
Sara Rosenthal, IBM Research AI
Aiala Rosá, Universidad de la República - Uruguay
Contact: semevalorganizers(a)gmail.com
*** First Call for Doctoral Consortium Papers ***
36th International Conference on Advanced Information Systems Engineering
(CAiSE'24)
June 3-7, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus
https://cyprusconferences.org/caise2024/
(*** Submission Deadline: 8th March, 2024 AoE ***)
The CAiSE conference series has a proud track record of running an international Doctoral
Consortium affiliated with the event. The CAiSE'24 Doctoral Consortium aims to attract PhD
students working on foundations, techniques, tools and applications in the Information
Systems Engineering field. At the Doctoral Consortium, the participating PhD students will
have the opportunity to present their research and to get feedback from an audience of peers
and senior faculty in a supportive environment. There will also be discussions tailored to the
needs and interests of PhD students.
The goals of the Doctoral Consortium are to ensure that participating PhD students:
• receive constructive and personalized feedback and advice on their research program from
dedicated Doctoral Consortium mentors,
• have the opportunity to meet, interact with, and learn from established researchers and
practitioners in the Information Systems Engineering community,
• develop a supportive community of peer scholars and a spirit of collaborative research,
• discuss broader opportunities and concerns related to PhD study and post-PhD pathways.
To be eligible for the Doctoral Consortium, the candidate must be a current PhD student
within a recognized research institution. We welcome submissions from both late-stage PhD
students (with at least 6 months of work remaining after the conference before their expected
completion) and early-stage PhD students (with at least 6 months of work already performed
prior to the submission date).
WHY SUBMIT TO THE CAiSE'24 DOCTORAL CONSORTIUM?
The CAiSE'24 Doctoral Consortium will be attended by renowned academics from the
Information Systems Engineering field who will actively participate as mentors for the PhD
students accepted to the Doctoral Consortium. The participating PhD students will receive
constructive reviews on their submission, as well as personalized guidance by Doctoral
Consortium mentors regarding their research program and presentation at the Consortium
event. Accepted papers will be published in the CEUR proceedings (https://ceur-ws.org/),
which are indexed in DBLP. Participants of the CAiSE Doctoral Consortium will be subsequently
eligible to submit their PhD thesis (after the degree is granted) for a CAiSE PhD Award.
SUBMISSION PROCESS
Submissions must be made electronically by the stated deadline via the EasyChair conference
system at https://easychair.org/conferences/?conf=caise2024 .
Each submission should contain (i) a recommendation letter from the student’s PhD advisor,
and (ii) a paper describing the research plans and the current status of progress (see more
details in the Paper Submission Guidelines section). Submissions must have a single author,
but the names of the PhD advisors should be mentioned in the paper (usually in the
Acknowledgments section).
Submissions should concern original research. All submitted materials must be in English.
Attendees must have sufficient proficiency in English to participate in the academic
discussions of the Consortium.
Submissions from both early- and late-stage PhD students are welcome. Submissions from
early-stage PhD students should concentrate on the selection of the research methods to
apply, the review and contextualization of the relevant literature, and the expected pitfalls
and ways to mitigate them. Submissions from late-stage PhD students should also include
preliminary research results and discuss, to some extent, conclusions and threats to validity.
The recommendation letter from the PhD advisor should include an assessment of the current
status of the research, an expected date for the completion of the dissertation, a delineation
of the anticipated benefits for the student's participation at the Consortium and details of any
submissions associated with the research.
PAPER CONTENT AND FORMAT
The paper must:
• clearly formulate the research questions investigated in the thesis,
• identify a significant problem in the field of Information System Engineering,
• outline the current status of the problem domain and related solutions,
• describe the research methods that are applied or proposed and the expected artifacts,
• outline the contributions of the applicant’s work to the problem domain and highlight their
uniqueness,
• present any preliminary results achieved so far (mainly relevant for late-stage PhD students),
• conform to the CEURART template using the 1-column layout format (thus, NOT the Springer
LNCS format and NOT in multiple column layouts); the most recent template (including
Word and LaTeX) can be downloaded from http://ceur-ws.org/Vol-XXX/index.html,
• contain up to 4,000 words (including everything, e.g., references, tables, figures).
REVIEW PROCESS
Each submission will be reviewed by two members of the Doctoral Consortium Mentoring
Board. The main evaluation criteria are: relevance, originality, significance, technical
soundness, accuracy, clarity and the expected benefits to the student from participating in
the Doctoral Consortium. Acceptance is based on the review outcomes.
ATTENDANCE AND REGISTRATION FEE
The Doctoral Consortium is held in parallel with the main CAiSE conference on 5-7 June 2024.
The presentations and decisions are expected to take place in person, so attendance at the
entire Doctoral Consortium is required. To facilitate detailed feedback to the participants,
attendance at the Doctoral Consortium is by invitation only, limited to the participants and
the Mentoring Board.
There is no separate registration fee for participants in the Doctoral Consortium. Participants
should register for the main conference by selecting either the “Main conference” option or
another option that includes the main conference.
IMPORTANT DATES
• Paper Submission: 8th March 2024 (AoE)
• Notification of Acceptance: 19th April 2024
• Camera-ready Copy: 26th April 2024
• Doctoral Consortium: 5th-7th June 2024
QUESTIONS AND INQUIRIES
Questions about eligibility and other inquiries can be sent to the CAiSE’24 Doctoral
Consortium chairs at caise2024_dc(a)easychair.org .
MENTORING BOARD
• Raimundas Matulevičius, University of Tartu, Estonia
• Massimo Mecella, Sapienza University of Rome, Italy
• Barbara Pernici, Politecnico di Milano, Italy
• Jolita Ralyte, University of Geneva, Switzerland
• Hajo Reijers, Utrecht University, the Netherlands
• Monique Snoeck, KU Leuven, Belgium
• Barbara Weber, University of St. Gallen, Switzerland
• Jelena Zdravkovic, Stockholm University, Sweden
DOCTORAL CONSORTIUM CHAIRS
• Iris Reinhartz-Berger, University of Haifa, Israel
• Chiara Di Francescomarino, University of Trento, Italy
• Aggeliki Tsohou, Ionian University, Greece
Hello everyone,
We are hiring a Software Engineer in ML. The post is based in Geneva, Switzerland, and is a great opportunity to work on ML systems development through all stages of their lifecycle.
The position is based in a specialized agency of the UN - World Intellectual Property Organization.
You can find the details in the LinkedIn ad here - https://www.linkedin.com/jobs/view/3790127610/
Kind regards
Akshat
GAMES AND NLP 2024 @ LREC-COLING 2024
=====================================
Co-located with LREC-COLING in Turin, Italy
21st May 2024
https://gamesandnlp.com
Call for Papers
--------------------
The 10th Workshop on Games and Natural Language Processing (Games and NLP 2024), to be held at LREC-COLING 2024, will examine the use of games and gamification for Natural Language Processing (NLP) tasks, as well as how NLP research can advance player engagement and communication within games. The Games and NLP workshop aims to promote and explore the possibilities for research and practical applications of games and gamification that have a core NLP aspect, either to generate resources and perform language tasks or as a game mechanic in its own right. The workshop investigates computational and theoretical aspects of natural language research that would be beneficial for designing and building novel game experiences, or for processing texts to conduct formal game studies. NLP can benefit from games in obtaining language resources (e.g., construction of a thesaurus or a parser through a crowdsourcing game), or in learning the linguistic characteristics of game users as compared to those of other domains.
Topics (include, but are not limited to)
--------------------------------------------------
• Games for collecting data useful for NLP
• Gamification of NLP tasks
• Player motivation and experience
• Game design
• Novel uses of natural language processing or generation as a game mechanic
• Natural language in games as an alternative method of input for people with disabilities
• Processing NLP game data
• Analysis of large-scale game-related corpora
• Real-time sentiment analysis of player discourse or chat
• Evaluation of games for NLP
• Serious games for learning languages
• Player immersion in language-enabled mixed reality or physically embodied games
• Narrative plot or text generation of text-based interactive narrative systems
• Natural language understanding and generation of character dialogue
• Ethical and privacy concerns of ownership of text and audio chat in massively multiplayer online games
Submissions:
------------------
Papers should be submitted as a PDF document, conforming to the formatting guidelines provided in the call for papers of the LREC-COLING conference (https://lrec-coling-2024.org/authors-kit/). Submissions are to be made via the Softconf/START Conference Manager at https://softconf.com/lrec-coling2024/gamesandnlp2024/
Important Dates
---------------------
• Submission Deadline: Feb 19th
• Notification of Acceptance: Mar 26th
• Camera Ready Deadline: Apr 1st
• Workshop: May 21st
Organisation Committee
--------------------------------
• Chris Madge, chair (Queen Mary University of London)
• Jon Chamberlain (University of Essex, UK)
• Karën Fort (Sorbonne Université, France)
• Udo Kruschwitz (University of Regensburg, Germany)
• Stephanie Lukin (U.S. Army Research Laboratory)
Programme Committee
-------------------------------
• Alice Millour (Sorbonne Université)
• Brent Harrison (University of Kentucky, US)
• Ian Horswill (Northwestern University)
• Jonathan Lessard (Concordia University, Canada)
• Luisa Coheur (INESC-ID & Instituto Superior Técnico, University of Lisbon)
• Mariët Theune (University of Twente)
• Massimo Poesio (Queen Mary University, UK)
• Mathieu Lafourcade (LIRMM, France)
• Morteza Behrooz (University of California, Santa Cruz, US)
• Pedro Santos (INESC-ID & Instituto Superior Técnico, University of Lisbon)
• Richard Bartle (University of Essex, UK)
• Seth Cooper (Northeastern University, US)
• Valerio Basile (University of Turin, Italy)
• Fatima Althani (Queen Mary University, UK)
NLPerspectives: The 3rd Workshop on Perspectivist Approaches to NLP
Collocated with LREC-COLING in Turin, Italy
2ND CALL FOR PAPERS
https://nlperspectives.di.unito.it/w/3rd-workshop-on-perspectivist-approach…
Until recently, the dominant paradigm in natural language processing (and other areas of artificial intelligence) has been to resolve observed label disagreement into a single “ground truth” or “gold standard” via aggregation, adjudication, or statistical means. However, in recent years, the field has increasingly focused on subjective tasks, such as abuse detection or quality estimation, in which multiple points of view may be equally valid, and a unique ‘ground truth’ label may not exist (Plank, 2022). At the same time, as concerns have been raised about bias and fairness in AI, it has become increasingly apparent that an approach which assumes a single “ground truth” can erase minority voices.
Strong perspectivism in NLP (Cabitza et al., 2023) pursues the spirit of recent initiatives such as Data Statements (Bender and Friedman, 2018), extending their scope to the full NLP pipeline, including the aspects related to modelling, evaluation and explanation.
In line with the first<https://nlperspectives.di.unito.it/w/w2022/> and second<https://nlperspectives.di.unito.it/w/2nd-workshop-on-perspectivist-approach…> editions, the third NLPerspectives (Perspectivist Approaches to Disagreement in NLP) workshop will explore current and ongoing work on: the collection and labelling of non-aggregated datasets; and approaches to modelling and including these perspectives in NLP pipelines, as well as evaluation and applications of multi-perspective Machine Learning models. We also welcome opinion pieces and literature reviews, e.g., fairness and inclusion in a perspectivist framework.
Following our previous workshops, a key outcome of the third edition will be to continue the work begun at https://pdai.info/ to create a repository of perspectivist datasets with non-aggregated labels for use by researchers in perspectivist NLP modelling.
Authors are, therefore, invited to share their LRs (data, tools, services, etc.) and provide essential information about resources (i.e., also technologies, standards, evaluation kits, etc.) that have been used for the work or are a result of their research. In addition, authors will be required to adhere to ethical research policies on AI and may include an ethics statement in their papers.
The NLPerspectives workshop will be co-located with the 14th edition of LREC-COLING 2024<https://lrec-coling-2024.org> in Torino, Italy, on May 20-25, 2024.
Submissions
Papers should be submitted as a PDF document, conforming to the formatting guidelines provided in the call for papers of the LREC-COLING conference: authors-kit<https://lrec-coling-2024.org/authors-kit/>
We accept three types of submissions:
* Regular research papers;
* Non-archival submissions: like research papers, but will not be included in the proceedings;
* Research communications: 4-page abstracts summarising relevant research published elsewhere.
Research papers (archival or non-archival) may consist of up to 8 pages of content. Research communications may consist of up to 4 pages of content. More details will be up soon.
Please make submissions at https://softconf.com/lrec-coling2024/nlperspectives2024/
Topics
We invite original research papers from a wide range of topics, including but not limited to:
* Non-aggregated data collection and annotation frameworks
* Descriptions of corpora collected under the perspectivist paradigm
* Multi-perspective Modelling and Machine Learning
* Evaluation of multi-perspective models/ models of disagreement
* Multi-perspective disagreement as applied to NLP evaluation
* Fairness and inclusive modelling
* Perspectivist approaches for social good
* Applications of multi-perspective modelling
* Computing with (dis)agreement
* Perspectivist Natural Language Generation
* Foundational aspects of perspectivism
* Opinion pieces and reviews on perspectivist approaches to NLP
Submissions are open to all, and are to be submitted anonymously (and must conform to the instructions for double-blind review). All papers will be refereed through a double-blind peer review process by at least three reviewers, with final acceptance decisions made by the workshop organisers. Scientific papers will be evaluated based on relevance, significance of contribution, impact, technical quality, scholarship, and quality of presentation.
Attendance
At least one author of each accepted paper is required to participate in the conference and present the work.
Important Dates
* Friday February 23, 2024: Paper submission
* Friday March 29, 2024: Notification of acceptance
* Friday April 12, 2024: Camera-ready papers due
* Tuesday May 21, 2024: Workshop
Workshop organisers:
Gavin Abercrombie, Heriot-Watt University
Valerio Basile, University of Turin
Davide Bernardi, Amazon Alexa
Shiran Dudy, Northeastern University
Simona Frenda, University of Turin
Lucy Havens, University of Edinburgh
Sara Tonelli, Fondazione Bruno Kessler
Contact us at g.abercrombie(a)hw.ac.uk if you have any questions.
Website: https://nlperspectives.di.unito.it/
[We apologize for the cross-postings]
CFP LKE 2024
June 4th - 6th, 2024, Dublin, Ireland
https://lkesymposium.tudublin.ie/
The 9th International Symposium on Language & Knowledge Engineering will be
held in Dublin, Ireland. LKE 2024 is organized by the School of Enterprise
Computing and Digital Transformation of the Technological University
Dublin, Grangegorman Campus. LKE 2024 will be a forum for exchanging
scientific results and experiences, as well as sharing new knowledge, and
increasing the co-operation between research groups in natural language
processing and related areas.
Note: Seven different journals are anticipated for publication.
Topics:
Submissions reporting original research work are invited under the
following tracks:
Track 1: Language and Knowledge Engineering
- Natural Language Processing
- AI for NLP
- Intelligent Techniques for Language Processing
- Natural Language Inference
- Knowledge Representation and Inferences
- Machine Learning for Text Analytics
- Deep Learning methods for Text Processing
- Fuzzy Inference and Language Processing
- Computational Linguistics for Language and Knowledge Engineering
- Question-Answering Systems
- Emotion and Sentiment analysis
- Social Media Text Analytics
- Intelligent Systems for Knowledge
- Human Computer Interaction
- Related issues and applications
Note: The accepted papers in this track will appear in a Special Issue of
Springer Nature Computer Science (indexed in Scopus, CiteScore: 0.6).
Track 2: Scholarly Information Processing
- Information seeking & searching with scientific information
- Mining the scientific literature
- Academic search/recommender systems
- Dataset development for bibliographic research
- Scholarly Databases and their use
- Science of science
- Citation and co-citation analysis
- Research collaboration mobility and internationalization
- Knowledge dissemination and interdisciplinarity
- Bibliometric indicators
- Webometrics and altmetrics
- Science mapping and visualization
- Communication channels: periodicals, proceedings, books, and electronic
publications
- Knowledge discovery
- AI and data mining
- Bibliometrics-aided information retrieval
- Open science – open access and open data
- AI assisted peer review
Note: The accepted papers in this track will appear in a Special Issue of
the Journal of Scientometric Research (indexed in Scopus, CiteScore: 1.7;
indexed in ESCI, Impact Factor: 0.8).
Track 3: Computational approaches to Language & Knowledge Engineering
- Combinatorial Optimization Problems
- Computer Science Security
- Computational Complexity
- Computational aspects in Science of life
- Computational Intelligence
- Meta-heuristic and heuristics algorithms
- Operations Research
- Semantic Web
- Software engineering
- Web Technologies
- Smart Cities and related topics
- Robotics
- Language and knowledge engineering
Note: The accepted papers in this track will appear in a Special Issue of
the International Journal of Combinatorial Optimization Problems and
Informatics (IJCOPI) (indexed in ESCI, Impact Factor: 0.3; CONACYT-Mexico).
Track 4: Artificial Intelligence and Ethics
- Human-centred approaches to the application of AI
- Legal, regulatory-compliant, and ethical adoption of AI (Compliance and
Legality)
- AI risks
- AI development lifecycle
- Human values and fundamental rights on development, deployment, use and
monitoring of AI systems
- Natural and social environment in which AI tools operate
- Socially Responsible AI
- Trustworthy AI
Note: The accepted papers in this track will appear in a Special Issue of
the Springer Nature Transformative Journal AI and Ethics.
Note: It is mandatory for authors to register and present their papers at
the conference.
Paper Submission and Review Process
Papers must be submitted anonymously (authors' names must not appear) and
written in English. All submissions will be peer reviewed by the Scientific
Committee for originality, technical content and relevance. Final acceptance
will be based on double-blind peer review of the full-length paper, and
accepted papers will be assigned either an ORAL or a POSTER presentation.
All papers accepted in the main conference (oral presentation) will be
published in one of these journals:
- “Springer Nature Computer Science (SNCS)“, SCOPUS ISSN: 2662-995X.
- “Journal of Scientometric Research“, SCOPUS, ISSN: 2321-6654 (Print),
2320-0057 (Online).
- “International Journal of Combinatorial Optimization Problems and
Informatics (IJCOPI)“, Web of Science Core Collection: Emerging Sources
Citation Index (ESCI) SCOPUS CONACYT ISSN: 2007-1558.
- “Springer Nature Transformative Journal in AI and Ethics“, Electronic
ISSN: 2730-5961.
Submissions are invited for papers presenting high quality, previously
unpublished research. Selection criteria include originality of ideas,
correctness, clarity and significance of results and quality of
presentation. Papers must be formatted according to the author guidelines
for SN Computer Science.
Please prepare your paper using either the LaTeX (recommended) or the
Microsoft Word formatting guide. Please limit your paper to the suggested
number of pages and use the provided journal format. Papers that do not
follow these format requirements may be rejected without review, or may not
be included in the journal even if they were accepted for publication.
Other Journal Special Issues from the Conference Organizers:
The organizers of the conference are also organizing related Special Issues
of the following journals. Authors of selected high-quality papers accepted
at LKE 2024 will be invited to submit extended/revised versions of their
submissions to these venues:
- Special Issue of ACM Transactions on Asian and Low-Resource Language
Information Processing (TALLIP) (Clarivate: Science Citation Index Expanded
(SCIE), Impact Factor: 2.0), Print ISSN:2375-4699, Electronic
ISSN:2375-4702.
- Special Issue of Computer Speech and Language (CS&L) (Clarivate: SCI,
Impact Factor: 4.3), Print ISSN: 0885-2308, Online ISSN: 1095-8363.
- Special Issue of Journal of Natural Language Engineering (Cambridge
University Press) (Clarivate: SCI, Impact Factor: 1.841), ISSN: 1351-3249
(Print), 1469-8110 (Online).
Language:
Manuscripts must be written in English. Authors whose native language is
not English are recommended to seek the advice of a native English speaker,
if possible, before submitting their manuscripts.
The papers should be submitted electronically at the Microsoft CMT system:
CMT - LKE2024: https://cmt3.research.microsoft.com/LKE2024
Submission implies the willingness of at least one of the authors to
register and to present the communication at the conference, if it is
accepted.
Size:
We accept papers of 14-16 pages in the suggested format. The registration
fee for authors includes publication of a paper of up to 16 pages. An
additional fee is charged for pages exceeding the page limit in either the
version submitted for review or the camera-ready version, whichever is
greater. In particular, you must not shorten the camera-ready version
relative to the version submitted for review unless the reviewers required
this (contact us if you feel you should shorten it; in any case this would
not reduce the fee).
Double-blind review policy:
The review procedure is double blind. Papers submitted for review must
therefore not contain the authors' names, affiliations, or any information
that may disclose the authors' identity (this information is to be restored
in the camera-ready version upon acceptance). In particular, please avoid
explicit self-references in the version submitted for review.
https://lkesymposium.tudublin.ie/
*** Second Call for Workshop Papers ***
36th International Conference on Advanced Information Systems Engineering
(CAiSE'24)
June 3-7, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus
https://cyprusconferences.org/caise2024/
(*** Submission Deadline: 26th February, 2024 AoE ***)
CAiSE is a well-established, highly visible conference series on Advanced Information Systems
(IS) Engineering. It covers all relevant topics in the area, including methodologies and
approaches for IS engineering, innovative platforms, architectures and technologies, and
engineering of specific kinds of IS. CAiSE conferences also have the tradition of hosting
workshops in related fields. Workshops are intended to focus on particular topics and provide
ample room for discussions of new ideas and developments.
CAiSE'24, the 36th edition of the CAiSE series, will host the following workshops. For more
information for each workshop please visit the workshops' web sites.
CAiSE'24 Workshops
• 3rd International Workshop on Agile Methods for Information Systems Engineering (Agil-ISE)
https://agilise.github.io/2024/index.html
• International Workshop on Blockchain for Information Systems (BC4IS24) and Blockchain for
Trusted Data Sharing (B4TDS)
https://pros.unicam.it/bc4isb4tds/
• 2nd International Workshop on Hybrid Artificial Intelligence and Enterprise Modelling for
Intelligent Information Systems (HybridAIMS)
https://hybridaims.com/
• 2nd Workshop on Knowledge Graphs for Semantics-driven Systems Engineering
https://www.omilab.org/activities/events/caise2024_kg4sdse/
• 16th International Workshop on Enterprise & Organizational Modeling and Simulation
(EOMAS 2024)
https://eomas2024.fel.cvut.cz/
• Digital Transformation with Business Process Mining (DigPro2024)
https://digpro.iiita.ac.in/
IMPORTANT DATES
• Paper Submission Deadline: 26th February, 2024 (AoE)
• Notification of Acceptance: 27th March, 2024
• Camera-ready Deadline: 5th April, 2024
• Author Registration Deadline: 5th April, 2024
Workshop Chairs
• João Paulo A. Almeida, Federal University of Espírito Santo, Brazil
• Claudio di Ciccio, Sapienza University of Rome, Italy
• Christos Kalloniatis, University of the Aegean, Greece
(Apologies for potential cross-posting)
Dear all,
An 18-month post-doctoral (or research engineer) position in argument mining (mainly) is available in the WIMMICS team at the I3S laboratory in Sophia Antipolis, France.
A detailed description of the position and the AGGREY project is provided at the end of the e-mail.
Required Qualifications
● A PhD, preferably in computer science, for the post-doctoral position; a Master's degree is sufficient for the research engineer position.
● Research interest in one or more of the following: Argument Mining, Natural Language Processing (NLP), Argumentation Theory, Computational Argumentation, E-democracy, Graph Theory, Game Theory, Similarity Measure, Explainable AI.
● Interest in interdisciplinary research.
● Excellent critical thinking, written and spoken English.
Application Materials – send by email to Victor DAVID: victor.david(a)inria.fr
● Current CV
● Short statement of interest
Application deadline: February 05, 2024.
Questions about the position can also be sent to Victor DAVID: victor.david(a)inria.fr
==========================================================================================================================================================
Description of the AGGREY project (An argumentation-based platform for e-democracy)
This project brings together four French laboratories:
- CRIL with VESIC Srdjan, KONIECZNY Sébastien, BENFERHAT Salem, VARZINCZAK Ivan, AL ANAISSY Caren,
- LIP6 with MAUDET Nicolas, BEYNIER Aurélie, LESOT Marie-Jeanne,
- LIPADE with DELOBELLE Jérôme, BONZON Elise, MAILLY Jean-Guy and
- I3S with CABRIO Elena, VILLATA Serena and DAVID Victor.
Summary of the project in general:
E-democracy is a form of government that allows everybody to participate in the development of laws. It has numerous benefits since it strengthens the integration of citizens in the political debate. Several on-line platforms exist; most of them propose to represent a debate in the form of a graph, which allows humans to better grasp the arguments and their relations. However, once the arguments are entered in the system, little or no automatic treatment is done by such platforms. Given the development of online consultations, it is clear that in the near future we can expect thousands of arguments on some hot topics, which will make the manual analysis difficult and time-consuming. The goal of this project is to use artificial intelligence, computational argumentation theory and natural language processing in order to detect the most important arguments, estimate the acceptability degrees of arguments and predict the decision that will be taken.
Given the size of the project, the tasks are distributed among five work packages.
The one corresponding to the postdoc (or research engineer) position we are looking to fill is work package 3; depending on progress and priorities, participation in work package 5 will also be possible.
Work package 3: Manipulation detection
Leader: Elena Cabrio (I3S)
Aim:
We will rely both on heuristics and on state-of-the-art argument mining methods in order to detect anomalous, fallacious, or duplicate arguments [Vorakitphan et al., 2021] (i.e., speech acts that violate the rules of a rational argumentative discussion for assumed persuasive gains), as well as manipulations (e.g., an organised group of users massively voting for the exact same arguments in a short time period, or submitting variants of the same argument).
Background:
The use of NLP, and more precisely of argument mining methods [Cabrio and Villata, 2018], will be relevant in supporting the smooth functioning of the debate, automatically detecting its structure (supporting and attacking argumentative components) and analysing its content (premises or claims) [Haddadan et al., 2019b]. Moreover, we will rely on previous studies of the similarity between arguments [Amgoud et al., 2018]. This includes, among other things, assistance in detecting manipulation by identifying duplicate arguments with argument similarity calculation [Reimers et al., 2019], or checking the relationships (attack or support) between arguments provided by users in the argument graph.
Challenges/Subtasks:
Subtask 3.1. Development of argument mining methods for finding missing elements and duplicates
We plan to use argument mining methods to automatically build the argumentative graph and detect the missing elements and duplicates. Identifying argument components and relations in the debates is a necessary step to improve the model’s result in detecting and classifying fallacious and manipulative content in argumentation [Vorakitphan et al., 2021]. The use of the notion of similarity between arguments [Amgoud et al., 2018] will be further investigated in this context.
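As a rough illustration of the duplicate-detection step, the sketch below flags argument pairs whose similarity exceeds a threshold. The bag-of-words vectoriser and the 0.8 threshold are stand-ins of our own; in the project, contextual sentence embeddings [Reimers et al., 2019] would replace the toy vectoriser.

```python
from collections import Counter
from math import sqrt

def vectorise(text):
    # Toy bag-of-words vector; contextual sentence embeddings
    # [Reimers et al., 2019] would be used in practice.
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def find_duplicates(arguments, threshold=0.8):
    # Return index pairs of arguments that are duplicate candidates,
    # i.e. whose similarity reaches the (illustrative) threshold.
    vecs = [vectorise(a) for a in arguments]
    return [(i, j)
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))
            if cosine(vecs[i], vecs[j]) >= threshold]
```

Near-identical argument texts are flagged as a pair, while unrelated arguments score near zero and are left alone.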
Subtask 3.2. Development of methods for detecting manipulations
We will develop and test different heuristics for dealing with manipulations. Those heuristics will be based on natural language processing, argument mining, graph theory, game theory, etc. Some parameters that we might take into account include also the ratio of added arguments and votes; the number of users that vote on similar arguments during the same time period; the votes on arguments attacking / supporting the same argument.
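One of the heuristic signals mentioned above (many users voting on the same argument in a short time period) might be sketched as follows; the window and user-count thresholds are illustrative parameters, not values fixed by the project.

```python
from collections import defaultdict

def coordinated_voting(votes, window, min_users):
    # votes: iterable of (user, argument_id, timestamp) tuples.
    # Flag arguments voted on by at least `min_users` distinct users
    # within any `window`-second interval.
    by_arg = defaultdict(list)
    for user, arg, t in votes:
        by_arg[arg].append((t, user))
    flagged = set()
    for arg, events in by_arg.items():
        events.sort()
        for i, (t0, _) in enumerate(events):
            users = {u for t, u in events[i:] if t - t0 <= window}
            if len(users) >= min_users:
                flagged.add(arg)
                break
    return flagged
```

A real deployment would combine this with the other signals (vote/argument ratios, votes on attackers or supporters of the same argument) rather than act on one heuristic alone.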
Subtask 3.3. Development of graph-based methods for finding missing elements and duplicates
We will develop graph-based properties for dealing with missing elements and duplicates. Consider, for instance, two arguments x and y that have the same attackers except that y is also attacked by z; suppose also that x and y attack exactly the same arguments. We might want to check whether z also attacks x. This might not be the case, so the system will not add those attacks automatically, but ask the users that put forward the arguments attacking x and y to consider this question.
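The graph property described above might be sketched as below; function and variable names are illustrative, and the real system would queue a question to the relevant users rather than return a list.

```python
def near_duplicate_questions(attackers, targets):
    # attackers[a]: set of arguments attacking a;
    # targets[a]: set of arguments attacked by a.
    # For each pair (x, y) attacking exactly the same arguments, where
    # y has exactly one extra attacker z, record the question
    # "should z also attack x?" -- asked of users instead of adding
    # the attack automatically.
    questions = []
    for x in attackers:
        for y in attackers:
            if x == y or targets[x] != targets[y]:
                continue
            extra = attackers[y] - attackers[x]
            if len(extra) == 1 and attackers[x] <= attackers[y]:
                z = next(iter(extra))
                questions.append((z, x))
    return questions
```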
Bibliography:
[Vorakitphan et al., 2021]: Vorakit Vorakitphan, Elena Cabrio, and Serena Villata. "Don’t discuss": Investigating semantic and argumentative features for supervised propagandist message detection and classification. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), Held Online, 1-3 September 2021, pages 1498–1507, 2021. URL https://aclanthology.org/2021.ranlp-1.168.
[Cabrio and Villata, 2018]: Elena Cabrio and Serena Villata. Five years of argument mining: a data-driven analysis. In IJCAI, pages 5427–5433, 2018. URL https://www.ijcai.org/proceedings/2018/766.
[Haddadan et al., 2019b]: Shohreh Haddadan, Elena Cabrio, and Serena Villata. Disputool - A tool for the argumentative analysis of political debates. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 6524–6526, 2019b. doi: 10.24963/ijcai.2019/944. URL https://doi.org/10.24963/ijcai.2019/944.
[Amgoud et al., 2018]: Leila Amgoud, Elise Bonzon, Jérôme Delobelle, Dragan Doder, Sébastien Konieczny, and Nicolas Maudet. Gradual semantics accounting for similarity between arguments. In International Conference on Principles of Knowledge Representation and Reasoning (KR 2018), pages 88–97. AAAI Press, 2018. URL https://aaai.org/ocs/index.php/KR/KR18/paper/view/18077.
[Reimers et al., 2019]: Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, and Iryna Gurevych. Classification and clustering of arguments with contextualized word embeddings. In ACL, pages 567–578, 2019. URL https://aclanthology.org/P19-1054/.
==========================================================================================================================================================
Work package 5: Implementation and evaluation of the platform
Leaders: Jean-Guy Mailly (LIPADE) and Srdjan Vesic (CRIL)
Aim:
The goal of this WP is to implement the platform, evaluate it through experiments with end users, and use the obtained data to improve the design of the framework. Our experiments will also help us to better understand how humans use online platforms, which is essential for the future success of online debates. After implementing the platform, we will measure to which extent our platform leads to more informed decisions and attitudes. We plan to do this by measuring the extent of disagreement between the participants before and after the use of our system. We expect that the instructions to explicitly state one’s arguments and to link them with other justified counter-arguments make people more open to opposite views and more prone to changing their opinion.
Background:
The field of computational argumentation has progressively moved from toy examples and theory-only evaluation of the proposed approaches to constructing benchmarks [Cabrio and Villata, 2014] and evaluating approaches by comparing their output to that of human reasoners [Rosenfeld and Kraus, 2016, Polberg and Hunter, 2018, Cerutti et al., 2014, 2021]. Our recent results [Vesic et al., 2022], as well as our current work (unpublished experiments), found that when people see the graph representation of the corresponding debate they comply significantly more often with rationality principles. Furthermore, our experiments show that people are able to draw the correct graph (i.e. the one that corresponds to the given discussion) in the absolute majority of cases, even with no prior training beyond reading a three-minute tutorial. The fact that those participants respect rationality principles more frequently is crucial, since it means that they are, e.g., less prone to accept weak or fallacious arguments.
Challenges/Subtasks:
Subtask 5.1. Implementation of the platform
This task aims at implementing the platform, which will be done using the agile method: it will be implemented progressively and tested in order to allow for adaptive planning, evolutionary development and constant improvement. We could use an existing platform and add our functionalities. However, we find building a dedicated platform more appropriate for several reasons: many of the platforms are proprietary and would not allow us to use and publish their code, and most of the functionalities we need do not exist in any platform, so using an existing platform would not save us much time.
Subtask 5.2 Measuring the quality of the platform
We will conduct experiments with users in order to test whether the platform can be used to reduce opinion polarisation and to foster more rational and informed estimations of arguments' qualities / strengths. To this end, we will examine whether relevant parameters (such as the degree to which individuals agree with a given statement, the extent to which individuals diverge in their opinions and in their understanding of the issue they debate, etc.) differ significantly before and after the use of our debate platform. Our hypothesis is that seeing or producing the graph, making the arguments explicit, and engaging in a structured discussion will yield a better understanding of the questions and a better chance of reaching agreement with other parties. Ethical approval will be sought before conducting the experiments.
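As a minimal sketch of the before/after comparison, opinion divergence can be proxied by the dispersion of agreement ratings for a statement; the Likert-scale encoding here is an assumption for the sketch, not a design decision of the project.

```python
from statistics import pstdev

def polarisation(ratings):
    # Disagreement proxy: dispersion of participants' agreement
    # ratings for a statement (assumed here to be a 1-7 Likert scale).
    return pstdev(ratings)

def polarisation_reduced(before, after):
    # True if ratings are less dispersed after using the platform.
    return polarisation(after) < polarisation(before)
```

For example, ratings clustered at both extremes before a debate and near the midpoint afterwards would register as reduced polarisation under this measure.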
Subtask 5.3. Improving the platform
We will take into account the results of the experiments, user feedback, bug reports, etc. in order to develop the final version of the platform.
Bibliography:
[Cabrio and Villata, 2014]: Elena Cabrio and Serena Villata. Node: A benchmark of natural language arguments. In Simon Parsons, Nir Oren, Chris Reed, and Federico Cerutti, editors, Computational Models of Argument - Proceedings of COMMA 2014, Atholl Palace Hotel, Scottish Highlands, UK, September 9-12, 2014, volume 266 of Frontiers in Artificial Intelligence and Applications, pages 449–450. IOS Press, 2014. doi: 10.3233/978-1-61499-436-7-449. URL https://doi.org/10.3233/978-1-61499-436-7-449.
[Rosenfeld and Kraus, 2016]: Ariel Rosenfeld and Sarit Kraus. Providing arguments in discussions on the basis of the prediction of human argumentative behavior. ACM Trans. Interact. Intell. Syst., 6(4):30:1–30:33, 2016. doi: 10.1145/2983925. URL https://doi.org/10.1145/2983925.
[Polberg and Hunter, 2018]: Sylwia Polberg and Anthony Hunter. Empirical evaluation of abstract argumentation: Supporting the need for bipolar and probabilistic approaches. Int. J. Approx. Reason., 93:487–543, 2018. doi: 10.1016/j.ijar.2017. 11.009. URL https://doi.org/10.1016/j.ijar.2017.11.009.
[Cerutti et al., 2014]: Federico Cerutti, Nava Tintarev, and Nir Oren. Formal arguments, preferences, and natural language interfaces to humans: an empirical evaluation. In Torsten Schaub, Gerhard Friedrich, and Barry O’Sullivan, editors, ECAI 2014 - 21st European Conference on Artificial Intelligence, 18-22 August 2014, Prague, Czech Republic, volume 263, pages 207–212. IOS Press, 2014. doi: 10.3233/978-1-61499-419-0-207. URL https://doi.org/10.3233/978-1-61499-419-0-207.
[Cerutti et al., 2021]: Federico Cerutti, Marcos Cramer, Mathieu Guillaume, Emmanuel Hadoux, Anthony Hunter, and Sylwia Polberg. Empirical cognitive studies about formal argumentation. In Guillermo R. Simari Dov Gabbay, Massimiliano Giacomin and Matthias Thimm, editors, Handbook of Formal Argumentation, volume 2. College Publications, 2021.
[Vesic et al., 2022]: Srdjan Vesic, Bruno Yun, and Predrag Teovanovic. Graphical representation enhances human compliance with principles for graded argumentation semantics. In AAMAS ’22: 21st International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, 2022. URL https://hal-univ-artois.archives-ouvertes.fr/hal-03615534.
The Natural Language Processing Section at the Department of Computer Science, Faculty of Science at University of Copenhagen is offering a PhD position in Explainable Natural Language Understanding with a start date of 1 September 2024. The application deadline is 1 February 2024.
Applications for the position can be submitted via UCPH's job portal<https://candidate.hr-manager.net/ApplicationInit.aspx/?cid=1307&departmentI…>.
The Natural Language Processing Section<https://di.ku.dk/english/research/nlp/> provides a strong, international and diverse environment for research within core as well as emerging topics in natural language processing, natural language understanding, computational linguistics and multi-modal language processing. It is housed within the main Science Campus, which is centrally located in Copenhagen. The successful candidate will join Isabelle Augenstein’s Natural Language Understanding research group<http://www.copenlu.com/>. The Natural Language Processing research environment at the University of Copenhagen is internationally leading, as e.g. evidenced by it being ranked 2nd in Europe according to CSRankings.
The position is offered in the context of an ERC Starting Grant held by Isabelle Augenstein on ‘Explainable and Robust Automatic Fact Checking (ExplainYourself)’. The ERC Starting Grant is a highly competitive funding programme of the European Research Council, supporting the most talented early-career scientists in Europe with five years of funding for blue-skies research to build up or expand their research groups.
The project team will consist of the principal investigator, three PhD students, two postdocs, collaborators from CopeNLU, and external collaborators. The role of the PhD student recruited in this call will be to research methods for generating faithful free-text explanations of NLU models, in collaboration with the larger project team.
More information about the project can also be found here<http://www.copenlu.com/talk/2022_11_erc/>.
Informal enquiries about the positions can be made to Professor Isabelle Augenstein, Department of Computer Science, University of Copenhagen, e-mail: augenstein(a)di.ku.dk<mailto:augenstein@di.ku.dk?subject=PhD%20position%20on%20Explainable%20Natural%20Language%20Understanding>.
Isabelle Augenstein, Dr. Scient., Ph.D.
Professor and Head of the NLP Section, Department of Computer Science (DIKU)
Co-Lead, Pioneer Centre for Artificial Intelligence
University of Copenhagen
Østervold Observatory
Øster Voldgade 3
1350 Copenhagen
augenstein(a)di.ku.dk<mailto:augenstein@di.ku.dk>
http://isabelleaugenstein.github.io/