************************************


Call for Papers: EvoNLP - The First Workshop on Ever Evolving NLP + Shared task


Workshop: https://sites.google.com/view/evonlp/

Shared Task: https://sites.google.com/view/evonlp/shared-task 


Submission deadline (for papers requiring review / non-archival): 10 October, 2022

Submission deadline (with ARR reviews): 25 October, 2022

Notification of acceptance: 31 October, 2022

Camera-ready paper deadline: 11 November, 2022

Workshop date: 7 December, 2022


************************************


Advances in language modeling have led to remarkable accuracy on several NLP tasks, but most benchmarks used for evaluation are static, ignoring the practical setting in which training data from the past and present must be used to generalize to future data. Consequently, training paradigms also ignore the time sensitivity of language and essentially treat all text as if it were written at a single point in time. Recent studies have shown that in a dynamic setting, where the test data is drawn from a different time period than the training data, the accuracy of such static models degrades as the gap between the two periods increases.


--------------------------------------------------------------

This workshop focuses on these time-related issues in NLP models and benchmarks. We invite researchers from both academia and industry to redesign experimental settings, benchmark datasets, and modeling approaches with a particular focus on the "time" variable. We welcome papers and work-in-progress on several topics, including (but not limited to):


- Dynamic Benchmarks: Evaluation of Model Degradation in Time

Measuring how NLP models age

Random splits vs time-based splits (past/future)

Latency (days vs years) at which models need to be updated to maintain task accuracy

Time-sensitivity of different tasks and the type of knowledge which gets stale

Time-sensitivity of different domains (e.g., news vs scientific papers) and how domain shifts interact with time shifts

Sensitivity of different models and architectures to time shifts


- Time-Aware Models

Incorporating time information into NLP models

Techniques for updating / replacing models which degrade with time

Learning strategies for mitigating temporal degradation

Trade-offs between updating a degraded model vs replacing it altogether

Mitigating catastrophic forgetting of old knowledge as we update models with new knowledge

Improving plasticity of models so that they can be easily updated

Retrieval-based models for improving temporal generalization


- Analysis of existing models / datasets

Characterizing whether degradation on a task is due to outdated facts or changes in language use

Effect of model scale on temporal degradation – do large models exhibit less degradation?

Efficiency / accuracy trade-offs when updating models

--------------------------------------------------------------


All accepted papers will be published in the workshop proceedings unless the authors request otherwise. Submissions can be made either via OpenReview, where they will go through the standard double-blind review process, or through ACL Rolling Review with existing reviews. See details below.

 

---- Submission guidelines ---- 

We seek submissions of original work or work-in-progress. Submissions can be in the form of long or short papers and should follow the ACL main conference template. Authors can choose to make their paper archival or non-archival. All accepted papers will be presented at the workshop.

---- Archival track ----

We will follow a double-blind review process and use OpenReview for submissions. We will also accept ACL Rolling Review (ARR) submissions with existing reviews; since these submissions already come with reviews, their deadline is later than the initial deadline.

Submission link: https://openreview.net/group?id=EMNLP/2022/Workshop/EvoNLP

For papers needing review, click "EMNLP 2022 Workshop EvoNLP submission"

For papers from ARR, click "EMNLP 2022 Workshop EvoNLP commitment Submission"


---- Non-archival track ---- 

The non-archival track seeks recently accepted or published work as well as work-in-progress. Submissions do not need to be anonymized and will not go through the review process. Each submission should clearly indicate the original venue, and it will be accepted if the organizers think the work will benefit from exposure to the audience of this workshop.

Submission: Please email your submission as a single PDF file to evonlp@googlegroups.com. Include "EvoNLP Non-Archival Submission" in the subject line, and list the author names and affiliations in the body of your email.


---- Shared task ---- 

The workshop will feature a shared task on meaning shift detection in social media. Initial data is already available, and winners of the shared task will also receive a cash prize. More details at the workshop website: https://sites.google.com/view/evonlp/shared-task.

---- Best paper award ----

Thanks to generous support from our sponsor Snap Inc., we will present a best paper award (with a cash prize) to one of the submissions, selected by our program committee and organizing committee. The authors of the best paper will be given the opportunity to present a lightning talk introducing their work.





--


Jose Camacho Collados