Dear all,
The Computational Cognition Lab at the Open University of Cyprus and
the Socially-Competent Robotic and Agent Technologies group at the CYENS Centre of Excellence
are looking to recruit research associates and post-docs as part of a number of ongoing and upcoming projects on topics related to cognitive assistants, explainable AI, neural-symbolic integration, conversational AI, natural language understanding/generation, formal argumentation, and knowledge-based systems.
Relevant announcements:
https://www.ouc.ac.cy/images/files/hr/2022/WeNet_-_Researchers__Developers_…
https://www.cyens.org.cy/en-gb/vacancies/job-listings/research-associates/r…
Interested candidates should apply directly following the procedures described in the links above.
Regards,
Loizos
The Hasso Plattner Institute (HPI) in Potsdam, Germany is offering
fully-funded PhD/PostDoc scholarships on data-related topics, including
NLP, computational linguistics, IR, and AI. The deadline is fairly soon
(August 15).
Feel free to get in touch with me if you have any questions!
--
Gerard de Melo
Professor at HPI / Uni Potsdam
http://gerard.demelo.org/
-----
The Hasso Plattner Institute at the University of Potsdam is one of
Germany’s top-ranked computer science institutes. Its PhD school "Data
Science and Engineering" currently supports 15 PhD students. Annually, we
seek to add talented young researchers and offer
10 full Ph.D. and Postdoc Scholarships
with support for up to four years at our Potsdam location, bordering Berlin.
The increasing abundance of data in science and in industry creates many
challenges and opportunities. Data science has grown to be a
foundational discipline in information technology, allowing new insights
from data and creating ever more intelligent applications.
Simultaneously, it is becoming increasingly difficult to collect, clean,
and deliver these vast amounts of data and to apply and maintain complex
data science processes. Targeting these challenges, the discipline of data
engineering has become equally foundational.
The newly established PhD school "Data Science and Engineering" unites
top PhD students and Postdocs in all areas of data-driven research and
technology, including scalable storage, stream processing, data
cleaning, machine learning and deep learning, text processing, data
visualization and more. We apply our research to many different use
cases across the participating interdisciplinary research groups,
joining forces whenever possible.
Students and Postdocs enjoy a great research environment, close
mentorship by professors and postdocs, interaction with many peers,
enormous computing power in HPI's data lab, and significant travel
funding. HPI professors and their research groups ensure high-quality
research and will supervise Ph.D. students.
Applications must be submitted to phd-application-data-science@hpi.de by
August 15, 2022. Please include
- Curriculum vitae and copies of certificates/transcripts
- Brief research proposal, ideally mentioning the suitable HPI research
groups you would like to work with.
- Work samples/copies of relevant scientific work (e.g., master's thesis
or scientific papers)
- Letter(s) of recommendation and any other supporting documents
More information about the program and the application process is at
https://hpi.de/en/research/research-schools/scholarships.html
Feel free to also contact us at felix.naumann@hpi.de.
Natural Language Processing (John Benjamins)
Call for Book Proposals
John Benjamins' NATURAL LANGUAGE PROCESSING Book Series invites new book proposals in response to the growing demand for Natural Language Processing (NLP) literature. Three general types of books are considered for publication:
Monographs – featuring (i) original cutting-edge research (a monograph could be based on an outstanding PhD thesis), or (ii) surveys of the state of the art in specific NLP tasks or applications.
Collections – (i) books focusing on a particular NLP area (e.g. emerging from successful NLP workshops or as a result of editors' calls for papers) or (ii) books which include papers covering a wide range of topics (e.g. emerging from competitive NLP conferences or as a result of proposals for books of the type "Readings in NLP").
Course books – (i) general NLP course books or (ii) course books on a particular key area of NLP (e.g. Speech Processing, Computational Syntax/Parsing).
Authors will be encouraged to append supplementary materials such as demonstration programs, NLP software, corpora, etc., and to indicate websites, computational language resources, etc., where appropriate.
This call invites proposals from potential authors of the types of books described above. Proposals on any topic related to Natural Language Processing are welcome.
Topics
The scope of the series is comprehensive, ranging from theoretical Computational Linguistics topics (Computational Syntax, Computational Semantics, etc.) to highly practical Language Technology topics (speech recognition, information extraction, information retrieval, etc.). The series covers both written language and speech; it welcomes works covering (but not limited to) areas such as phonology, morphology, syntax, semantics, discourse, pragmatics, dialogue, text understanding and generation, machine translation, machine-aided translation, translation aids and tools, corpus-based language processing, written and spoken natural language interfaces, knowledge acquisition, information extraction, text summarisation, text classification, computer-aided language learning, and language resources.
New results in NLP based on modern alternative theories and methodologies, as opposed to mainstream symbolic NLP techniques, such as analogy-based, statistical, and connectionist approaches, as well as hybrid and multimedia approaches, will also be welcome.
Advisory board
The new series’ editor is Ruslan Mitkov (University of Wolverhampton) and the advisory board of the series includes:
- Eduardo Blanco (University of North Texas)
- Gloria Corpas (University of Malaga)
- Robert Dale (Macquarie University)
- Elizaveta Goncharova (National Research University)
- Veronique Hoste (Ghent University)
- Eduard Hovy (Carnegie Mellon University)
- Lori Lamel (The Computer Sciences Laboratory for Mechanics and Engineering Sciences)
- Carlos Martín-Vide (Rovira i Virgili University)
- Johanna Monti (University of Naples "L'Orientale")
- Roberto Navigli (Sapienza University of Rome)
- Nicolas Nicolov (AI/ML, Avalara Inc.)
- Constantin Orasan (University of Surrey)
- Paolo Rosso (Universitat Politècnica de València)
- Raheem Sarwar (University of Wolverhampton)
- Khalil Sima’an (University of Amsterdam)
- Richard Sproat (Google Research)
- Keh-Yih Su (Institute of Information Science, Academia Sinica)
The managing editor at John Benjamins is Kees Vaes (email: kees.vaes@benjamins.nl).
Submission of proposals
Interested authors should submit proposals by email (plain text or pdf files) to the series editor:
Prof. Dr. Ruslan Mitkov
Email: R.Mitkov@wlv.ac.uk
with a copy to Ms Rocío Caro Quintana (R.Caro@wlv.ac.uk), Editorial Assistant.
Proposals should include an outline of the book (1-2 pages), a preliminary table of contents, the target readership, related publications, how the book will differ from other similar books in the area (if applicable), a timescale, and information about the prospective author(s) (relevant experience in the field, publications, etc.).
Each proposal will be reviewed by members of the advisory board or additional reviewers.
More information
More information is available at https://benjamins.com/catalog/nlp.
## ACM SIGIR Artifact Badges ##
The ACM Special Interest Group on Information Retrieval (SIGIR) adheres to and implements the ACM policies for "Artifact Review and Badging" (https://www.acm.org/publications/policies/artifact-review-and-badging-curre…).
Artifact badging is intended not only to further improve our experimental practices, but especially to highlight and recognize the outstanding efforts of those who go the extra mile to make their experiments' code and data not only available online, but also easy to use, fully functional, and reproducible.
Overall, the initiative promotes the reproducibility of research results and allows scientists and practitioners to immediately benefit from state-of-the-art research results, without spending months re-implementing the proposed algorithms, searching for the right parameter values, creating datasets, or running intensive user-oriented evaluations. We also hope that it will indirectly foster scientific progress, since it allows researchers to reliably compare with and build upon existing techniques, knowing that they are using exactly the same implementation.
Badge Types:
** Artifacts Evaluated – Functional ** The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
** Artifacts Evaluated – Reusable and Available ** The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, the norms and standards of the research community for artifacts of this type are strictly adhered to. This badge is applied to papers in which associated artifacts have been made permanently available for retrieval.
** Results Reproduced ** The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
** Results Replicated ** The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.
The different types of ACM stamps are not meant to be a measure of the scientific quality of the papers themselves or of the usefulness of the presented algorithms, which are assessed by means of the traditional peer-review process and by adoption/impact in the research and industry communities. Rather, they are a recognition of the service provided by authors to the community by releasing the code and/or data, and they are an endorsement of the replicability and/or reproducibility of the results presented in the paper. The stamps also alert users of the ACM Digital Library to the presence and location of these artifacts in the ACM DL:
Datasets – https://dl.acm.org/artifacts/dataset
Software – https://dl.acm.org/artifacts/software
In this way, each artifact will be assigned its own DOI, will be directly citable, and will be linked to its corresponding paper.
## Artifact Submission ##
ACM SIGIR Artifact Badging applies to artifacts complementing papers accepted at one of the following venues:
ACM Transactions on Information Systems (TOIS)
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)
ACM Conference on Human Information Interaction and Retrieval (CHIIR)
ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR)
Submission is always open, and authors are welcome to apply for badges as soon as their papers are accepted at one of the above venues.
Irrespective of the nature of the artifacts, authors should create a single Web page (whether on their site or a third-party repository service) that contains the artifact, the paper, and all necessary instructions.
For artifacts where this would be appropriate, it would be helpful to also provide a self-contained bundle (including instructions) as a single file (.tgz or .zip) for convenient offline use; a minimal packaging sketch is shown below.
The artifact submission thus consists of just the URL and any credentials required to access the files, entered into the submission system:
https://openreview.net/group?id=ACM.org/SIGIR/Badging
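For authors who prefer to prepare the optional offline bundle programmatically, here is a minimal, illustrative Python sketch (an assumption on our part, not part of the official SIGIR instructions); the directory and archive names are hypothetical.

# Illustrative sketch only: bundle an artifact directory (code, data, README
# with instructions, paper) into a single compressed archive for offline use.
# "my-artifact" and "artifact-bundle.tgz" are hypothetical names.
import tarfile

with tarfile.open("artifact-bundle.tgz", "w:gz") as tar:
    tar.add("my-artifact", arcname="my-artifact")  # adds the directory recursively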
## Artifact Preparation Guidelines and Review Procedure ##
You can find more information about the ACM SIGIR Artifact Badges at:
https://sigir.org/general-information/acm-sigir-artifact-badging/
There you can also find detailed instructions and suggestions about how to prepare your artifacts for each type of badge and the reviewing criteria for each of them.
Each artifact will be reviewed by one senior and one junior member of the Artifact Evaluation Committee (AEC).
For any questions or clarifications, please contact us at:
aec_sigir@acm.org
## ACM SIGIR Artifact Evaluation Committee (AEC) ##
Chair and Vice-chair
Nicola Ferro, University of Padua, Italy [chair]
Johanne Trippas, RMIT University, Australia [vice-chair]
Senior Members
Alessandro Benedetti, Sease, UK
Rob Capra, University of North Carolina at Chapel Hill, USA
Diego Ceccarelli, Bloomberg, UK
Anita Crescenzi, University of North Carolina at Chapel Hill, USA
Charles L. A. Clarke, University of Waterloo, Canada
Yi Fang, Santa Clara University, USA
Norbert Fuhr, University of Duisburg-Essen, Germany
Claudia Hauff, Delft University of Technology, The Netherlands
Jiqun Liu, University of Oklahoma, USA
Maria Maistro, University of Copenhagen, Denmark
Miguel Martinez, Signal AI, UK
Parth Mehta, Parmonic, USA
Martin Potthast, Leipzig University, Germany
Tetsuya Sakai, Waseda University, Japan
Ian Soboroff, National Institute of Standards and Technology (NIST), USA
Paul Thomas, Microsoft, Australia
Andrew Trotman, University of Otago, New Zealand
Min Zhang, Tsinghua University, China
Junior Members
Valeriia Baranova, RMIT University, Australia
Arthur Barbosa Câmara, Delft University of Technology, The Netherlands
Hamed Bonab, University of Massachusetts Amherst, USA
Kathy Brennan, Google, USA
Timo Breuer, TH Köln, Germany
Guglielmo Faggioli, University of Padua, Italy
Alexander Frummet, University of Regensburg, Germany
Darío Garigliotti, Aalborg University, Denmark
Chris Kamphuis, Radboud University, The Netherlands
Johannes Kiesel, Bauhaus-Universität Weimar, Germany
Yuan Li, University of North Carolina at Chapel Hill, USA
Joel Mackenzie, University of Melbourne, Australia
Antonio Mallia, New York University, USA
David Maxwell, Delft University of Technology, The Netherlands
Felipe Moraes, Delft University of Technology, The Netherlands
Ahmed Mourad, University of Queensland, Australia
Zuzana Pinkosova, University of Strathclyde, UK
Chen Qu, University of Massachusetts Amherst, USA
Anna Ruggero, Sease, UK
Svitlana Vakulenko, University of Amsterdam, The Netherlands
Sasha Vtyurina, KIRA systems, Canada
Oleg Zendel, RMIT University, Australia
Steven Zimmerman, University of Essex, UK
Call for Papers - ACM Transactions on Information Systems
Special Section on Efficiency in Neural Information Retrieval
Full Call for Papers: https://dl.acm.org/journal/tois/calls-for-papers
Overview 🧐
--------------------------
The aim of this Special Section is to engage with researchers in Information Retrieval, Natural Language Processing, and related areas and to gather insight into the core challenges in measuring, reporting, and optimizing all facets of efficiency in Neural Information Retrieval (NIR) systems, including time-, space-, resource-, sample-, and energy-efficiency, among other factors.
This special section solicits perspectives from active researchers to advance our understanding of and to overcome efficiency challenges in NIR.
In particular, researchers are encouraged to examine the ever-growing model complexity through appropriate empirical analysis; to propose models that require less data, computational resources, and energy for training and fine-tuning, with similarly efficient inference; to ask whether there are meaningful simplifications of existing training processes or model architectures that lead to comparable quality; and to explore a multi-faceted evaluation of NIR models, from quality to all dimensions of efficiency, with standardized metrics.
Topics 🔍
--------------------------
We welcome submissions on the following topics, including but not limited to:
* Novel NIR models that reach competitive quality but are designed to provide efficient training or inference;
* Efficient NIR models for decentralized IR tasks such as conversational search;
* Efficient NIR models for IR-related tasks such as question answering and recommender systems;
* Efficient NIR for resource-constrained devices;
* Scalability of NIR systems;
* Efficient NIR for text and cross-modal search;
* Strategies to optimize training or inference of existing NIR models;
* Sample-efficient training of NIR models;
* Efficiency-driven distillation, pruning, quantization, retraining, and transfer learning;
* Empirical investigation of the complexity of existing NIR models through an analysis of quality, interpretability, robustness, and environmental impact;
* Evaluation protocols for efficiency in NIR.
Important Dates 🔥
--------------------------
* Open for Submissions: Aug 1, 2022
* Submissions deadline: Dec 31, 2022
* First-round review decisions: Mar 31, 2023
* Deadline for minor revision submissions: Apr 30, 2023
* Deadline for major revision submissions: Jun 30, 2023
* Notification of final decisions: Jul 31, 2023
* Tentative publication: 2023
Guest Editors 📚
--------------------------
* Dr. Sebastian Bruch, Pinecone, United States of America
* Prof. Claudio Lucchese, Ca' Foscari University of Venice, Italy
* Dr. Maria Maistro, University of Copenhagen, Denmark
* Dr. Franco Maria Nardini, ISTI-CNR, Italy
———
Maria Maistro, PhD
Tenure-track Assistant Professor
Department of Computer Science
University of Copenhagen
Universitetsparken 5, 2100 Copenhagen, Denmark
Call for Participation - TREC Health Misinformation Track 2022
https://trec-health-misinfo.github.io
Overview 🧐
--------------------------
Web search engines are frequently used to help people make decisions about health-related issues. Unfortunately, the web is filled with misinformation regarding the efficacy of treatments for health issues. Search users may not be able to discern correct from incorrect information, nor credible from non-credible sources. Users who find misinformation and deem it useful for their decision-making task can make incorrect decisions that waste money and put their health at risk.
The TREC Health Misinformation track fosters research on retrieval methods that promote reliable and correct information over misinformation for health-related decision-making tasks.
Tasks 💼
--------------------------
* Ad-hoc Retrieval Task: design a ranking model that promotes credible and correct information over incorrect information;
* Answer Prediction Task: predict each topic's answer (stance).
Guidelines 📋
--------------------------
* Corpus: noclean version of the C4 dataset (https://huggingface.co/datasets/allenai/c4);
* Topics: about consumer health search (people seeking health advice online);
* Runs: runs may be either automatic or manual and must follow the standard TREC run format (see the sketch below).
Detailed guidelines: https://trec-health-misinfo.github.io
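For participants new to TREC-style evaluation, below is a minimal, illustrative Python sketch (not part of the official guidelines) of the two practical ingredients mentioned above: streaming the no-clean C4 corpus from the Hugging Face Hub and writing ranked results in the standard six-column TREC run format (topic ID, the literal string Q0, document ID, rank, score, run tag). The configuration name "en.noclean", the corpus field names, and the toy topic/document IDs and scores are assumptions for illustration only; the track guidelines linked above are authoritative.

# Illustrative sketch only: stream the no-clean C4 corpus and write a run file
# in the standard TREC format:  <topic_id> Q0 <doc_id> <rank> <score> <run_tag>
# The config name "en.noclean" and all IDs/scores below are assumptions.
from datasets import load_dataset  # pip install datasets

def write_trec_run(results, run_tag, path):
    # results: dict mapping topic_id -> list of (doc_id, score), best first
    with open(path, "w") as f:
        for topic_id, ranking in results.items():
            for rank, (doc_id, score) in enumerate(ranking, start=1):
                f.write(f"{topic_id} Q0 {doc_id} {rank} {score:.4f} {run_tag}\n")

# Stream the corpus so it does not have to be fully downloaded up front.
c4 = load_dataset("allenai/c4", "en.noclean", split="train", streaming=True)
for i, doc in enumerate(c4):
    if i >= 3:
        break
    print(doc["url"], len(doc["text"]))  # each record carries url, text, timestamp

# Toy run: one topic with two hypothetical document IDs, ranked by score.
write_trec_run({"1001": [("doc-A", 12.3), ("doc-B", 7.8)]}, "my_baseline", "run.txt")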
Important Dates 🔥
--------------------------
* Runs due from participants: August 28, 2022
* Evaluation results returned: End of September 2022
* Notebook paper due: October 2022
* TREC 2022 Conference: November 14-18, 2022
* Final paper due: February 2023
Organization 👔
--------------------------
* Charles Clarke, University of Waterloo
* Maria Maistro, University of Copenhagen
* Mark Smucker, University of Waterloo
———
Maria Maistro, PhD
Tenure-track Assistant Professor
Department of Computer Science
University of Copenhagen
Universitetsparken 5, 2100 Copenhagen, Denmark
Research Fellow in Spatial Reasoning
University of Leeds
Do you have an interest in interdisciplinary research that develops and applies AI techniques, in particular qualitative spatial reasoning, to datasets from the humanities? Would you like to collaborate with a range of partners from other universities in the UK (Lancaster, Bristol, Manchester) and the USA (Stanford, IUPUI) across a range of disciplines (History, English, Digital Humanities) to help gain an understanding of spatial information in textual corpora such as the Corpus of Lake District Writing and Holocaust survivor testimonies?
You will be involved in designing the semantic representations for the project as part of Work Package 2 (WP2). Work Package 1 (WP1) will develop methods to analyse the texts and annotate the constituent elements with named entities, spatial and temporal relations and PoS tags. WP2 will use the annotation from WP1 to analyse meanings of spatio-temporal relationships in the corpora.
You will investigate the overlap between these relationships and existing spatio-temporal calculi (sets of relationships encoding spatial and temporal semantics with associated inference mechanisms). You will help design calculi that extend current AI-focussed work in qualitative spatio-temporal representation to narratives. These representations will not only allow the expression of qualitative relationships, but also those which are vague and imprecise.
You will have a strong background and hold a PhD (or have submitted your thesis before taking up the role) in the area of knowledge representation and reasoning or a closely allied area.
This role will be based on the university campus, with scope for it to be undertaken in a hybrid manner. We are also open to discussing flexible working arrangements.
Further details of the post and the application procedure can be found here: https://jobs.leeds.ac.uk/Vacancy.aspx?ref=EPSCP1106
Prof A G Cohn, FREng, CEng, CITP, FAAAI, FEurAI, FAISB, FIET, FBCS
School of Computing, University of Leeds, Leeds, LS2 9JT
Turing Fellow, Alan Turing Institute
T: 0113 3435482 https://tinyurl.com/A-G-Cohn
E: a.g.cohn@leeds.ac.uk
> [Apologies for cross-posting]
>
> -> Due to several requests, SIMBig 2022 is pleased to extend
> the submission deadline to August 19, 2022 (hard deadline) <-
>
> =================================================================
> LAST CALL FOR PAPERS - SIMBig 2022
> =================================================================
>
> SIMBig 2022 - 9th International Conference on Information Management and Big Data
> Where: Universidad Nacional Mayor de San Marcos, Lima, PERU
> When: November 16 - 18, 2022
> Website: http://simbig.org/SIMBig2022/ <http://simbig.org/SIMBig2022/>
>
> =================================================================
>
> OVERVIEW
> ----------------------------------
>
> SIMBig 2022 seeks to present new methods of Artificial Intelligence (AI), Data Science, and related fields, for analyzing, managing, and extracting insights and patterns from large volumes of data.
>
>
> KEYNOTE SPEAKERS
> -------------
>
> Leman Akoglu, Carnegie Mellon University, USA
> Jiang Bian, University of Florida, USA
> Rich Caruana, Microsoft, USA
> Dilek Hakkani-Tur, Amazon Alexa AI, USA
> Monica Lam, Stanford University, USA
> Wang-Chiew Tan, Facebook AI, USA
> Andrew Tomkins, Google, USA
> Bin Yu, University of California, Berkeley, USA
>
> IMPORTANT DATES
> -------------
>
> August 19, 2022 (extended from August 05, 2022) --> Paper submission deadline
> September 17, 2022 (extended from September 09, 2022) --> Notification of acceptance
> October 07, 2022 --> Camera-ready versions
> November 16 - 18, 2022 --> Conference held in Lima, Peru
>
> PUBLICATION AND TRAVEL AWARDS
> -------------
>
> All accepted papers of SIMBig 2022 (including the special tracks) will be published in the Springer CCIS series <https://www.springer.com/series/7899>.
>
>
> The best 8-10 papers of SIMBig 2022 (including the special tracks) will be invited to submit extended versions for publication in the Springer SN Computer Science journal <https://www.springer.com/journal/42979>.
>
> Thanks to the support of the North American Chapter of the Association for Computational Linguistics (NAACL) <http://naacl.org/>, SIMBig 2022 will offer 4 student travel awards for the best papers.
>
>
>
> TOPICS OF INTEREST
> -------------
>
> SIMBig 2022 has a broad scope. We invite contributions on theory and practice, including but not limited to the following technical areas:
>
> Artificial Intelligence
> Data Science
> Machine Learning
> Natural Language Processing
> Semantic Web
> Healthcare Informatics
> Biomedical Informatics
> Data Privacy and Security
> Information Retrieval
> Ontologies and Knowledge Representation
> Social Networks and Social Web
> Information Visualization
> OLAP and Business intelligence
> Data-driven Software Engineering
>
> SPECIAL TRACKS
> -------------
>
> SIMBig 2022 proposes three special tracks in addition to the main conference:
>
> ANLP <https://simbig.org/SIMBig2022/en/anlp.html> - Applied Natural Language Processing
> DISE <https://simbig.org/SIMBig2022/en/dise.html> - Data-drIven Software Engineering
> SNMAM <https://simbig.org/SIMBig2022/en/snmam.html> - Social Network and Media Analysis and Mining
>
> CONTACT
> -------------
>
> SIMBig 2022 General Chairs
>
> Juan Antonio Lossio-Ventura, National Institutes of Health, USA (juan.lossio@nih.gov)
> Hugo Alatrista-Salas, Pontificia Universidad Católica del Perú, Peru (halatrista@pucp.pe)
>
INTRODUCTION
The CLEF Initiative (Conference and Labs of the Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure.
The CLEF Initiative is structured in two main parts:
- a series of Evaluation Labs, i.e. laboratories to conduct evaluation of information access systems and workshops to discuss and pilot innovative evaluation activities;
- a peer-reviewed Conference on a broad range of issues, including
- investigation continuing the activities of the Evaluation Labs;
- experiments using multilingual and multimodal data; in particular, but not only, data resulting from CLEF activities;
- research in evaluation methodologies and challenges.
Since 2000, CLEF has played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain. It has promoted the study and implementation of appropriate evaluation methodologies for diverse types of tasks and media. Over the years, a wide, strong, and multidisciplinary research community has been built, which covers and spans the different areas of expertise needed to deal with the breadth of CLEF activities, making CLEF one of the leading evaluation forums in the field.
CALL FOR BIDS
The CLEF Steering Committee solicits proposals from groups interested in organizing the CLEF conference and labs in September 2024.
Groups submitting a bid for CLEF 2024 also commit themselves to collect membership fees on behalf of the CLEF Association and to pass them to the CLEF Association.
Guidelines on submitting a bid can be found in the Template for Bids available at:
https://www.clef-initiative.eu/documents/71612/87713/CLEF-Initiative-Templa…
Bids must be submitted by **Friday, December 11th, 2022** by email to the Steering Committee Chair Nicola Ferro (chair@clef-initiative.eu).
The Steering Committee will review and select the proposals. The Steering Committee can ask for modifications and changes to the proposals, if deemed necessary.
Interested parties can contact the Steering Committee Chair Nicola Ferro (chair@clef-initiative.eu) to receive further details.
IMPORTANT DATES
- Bid submission deadline: Friday, December 11th, 2022
- Feedback to bidders and discussion: December 2022
- Bid selection: early January 2023
STEERING COMMITTEE
- Martin Braschler, Zurich University of Applied Sciences, Switzerland
- Khalid Choukri, Evaluations and Language resources Distribution Agency (ELDA), France
- Fabio Crestani, Università della Svizzera italiana, Switzerland
- Carsten Eickhoff, Brown University, USA
- Nicola Ferro, University of Padua, Italy
- Norbert Fuhr, University of Duisburg-Essen, Germany
- Lorraine Goeuriot, Université Grenoble Alpes, France
- Julio Gonzalo, National Distance Education University (UNED), Spain
- Donna Harman, National Institute for Standards and Technology (NIST), USA
- Bogdan Ionescu, University Politehnica of Bucharest, Romania
- Evangelos Kanoulas, University of Amsterdam, The Netherlands
- Birger Larsen, University of Aalborg, Denmark
- David E. Losada, Universidade de Santiago de Compostela, Spain
- Mihai Lupu, Vienna University of Technology, Austria
- Maria Maistro, University of Copenhagen, Denmark
- Josiane Mothe, IRIT, Université de Toulouse, France
- Henning Müller, University of Applied Sciences Western Switzerland (HES-SO), Switzerland
- Jian-Yun Nie, Université de Montréal, Canada
- Paolo Rosso, Universitat Politècnica de València, Spain
- Eric SanJuan, University of Avignon, France
- Giuseppe Santucci, Sapienza University of Rome, Italy
- Jacques Savoy, University of Neuchâtel, Switzerland
- Laure Soulier, Pierre and Marie Curie University (Paris 6), France
- Theodora Tsikrika, Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Greece
- Christa Womser-Hacker, University of Hildesheim, Germany
Apologies for the multiple postings.
----
*Indian Language Summarization (ILSUM 2022)*
Website: https://ilsum.github.io/
To be organized in conjunction with FIRE 2022 (fire.irsi.res.in)
9th-13th December 2022 (Hybrid Event, hosted in Kolkata)
-------------------------------------------------------
The first shared task on Indian Language Summarization (ILSUM) aims at
creating an evaluation benchmark dataset for Indian languages. While
large-scale datasets exist for a number of languages like English, Chinese,
French, German, and Spanish, no such datasets exist for any Indian
language. Through this shared task, we aim to bridge this gap.
In the first edition, we cover two major Indian languages, Hindi and
Gujarati, alongside Indian English, a widely recognized dialect of English.
It is a classic summarization task: we will provide ~10,000 article-summary
pairs for each language, and participants are expected to generate a
fixed-length summary.
*Timeline*
-------------
8th June - Task announced and Registrations open
22nd June - Training Data Release
30th August - Test Data Release
10th September - Run Submission Deadline
15th September - Results Declared
5th October - Working notes due
9th-13th December - FIRE 2022 (Hybrid Event hosted at Kolkata)
*Organisers*
----------------
Bhavan Modha, University of Texas at Dallas, USA
Shrey Satapara, Indian Institute of Technology, Hyderabad, India
Sandip Modha, LDRP-ITR, Gandhinagar, India
Parth Mehta, Parmonic, USA
*For regular updates, subscribe to our mailing list: ilsum@googlegroups.com*
Regards,
Parth Mehta
Co-organiser, ILSUM 2022