Join Veeva Systems, a pioneer in cloud solutions for the life sciences
industry, as a Senior/Principal Data Scientist focusing on NLP.
Your role will primarily involve developing LLM-based agents that are
specialized in searching and extracting detailed information about Key
Opinion Leaders (KOLs) in the healthcare sector.
You will craft an end-to-end human-in-the-loop pipeline to sift through a
large array of unstructured medical documents—ranging from academic
articles to clinical guidelines and meeting notes from therapeutic
committees.
You will also collaborate with over 2000 data curators and a dedicated team
of software developers and DevOps engineers to refine these models and
deploy them into production environments.
*What You'll Do*
- Bring the latest NLP technologies and trends to the platform
- Develop LLM-based agents capable of performing function calls and using
tools such as browsers for enhanced data interaction and retrieval (see the
sketch after this list)
- Apply Reinforcement Learning from Human Feedback (RLHF) methods, such as
Direct Preference Optimization (DPO) and Proximal Policy Optimization
(PPO), to train LLMs on human preferences
- Design, develop, and implement an end-to-end pipeline for extracting
predefined categories of information from large-scale, unstructured data
in multi-domain and multilingual settings
- Create robust semantic search functionality that effectively answers
user queries about various aspects of the data
- Use and develop named entity recognition, entity linking, slot filling,
few-shot learning, active learning, question answering, dense passage
retrieval, and other statistical techniques and models for information
extraction and machine reading
- Develop a deep understanding of our data model per data source and
geo-region, and interpret model decisions
- Collaborate with data quality teams to define annotation tasks and
metrics and to perform qualitative and quantitative evaluation
- Use cloud infrastructure for model development, working closely with our
software developers and DevOps engineers for efficient deployment to
production
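As a rough, library-agnostic sketch of the function-calling agent pattern
mentioned above (all names here, including fake_llm, web_search and the
example query, are hypothetical placeholders rather than any actual
production stack), the control loop might look like this:

```python
# Minimal sketch of an LLM tool-calling loop. Entirely illustrative:
# `fake_llm` stands in for a real LLM API and `web_search` for a real
# search/browser tool; neither reflects any specific production system.
from typing import Callable, Dict, List


def web_search(query: str) -> str:
    """Stub tool: a real implementation would call a search API or browser."""
    return f"[stub search results for: {query}]"


TOOLS: Dict[str, Callable[..., str]] = {"web_search": web_search}


def fake_llm(conversation: List[dict]) -> dict:
    """Stub LLM: requests one tool call, then returns a final answer."""
    if not any(m["role"] == "tool" for m in conversation):
        return {"tool": "web_search",
                "arguments": {"query": "recent publications by Dr. Example"}}
    return {"final_answer": "Summary of the retrieved KOL information."}


def run_agent(user_query: str, max_steps: int = 5) -> str:
    conversation = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        decision = fake_llm(conversation)
        if "final_answer" in decision:
            return decision["final_answer"]
        # Execute the requested tool and feed its output back to the model.
        result = TOOLS[decision["tool"]](**decision["arguments"])
        conversation.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."


print(run_agent("Find recent publications by Dr. Example."))
```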
*Requirements*
- 4+ years of experience as a data scientist (or 2+ years with a Ph.D.)
- Master's or Ph.D. in Computer Science, Artificial Intelligence,
Computational Linguistics, or a related field
- Strong theoretical knowledge of Natural Language Processing, Machine
Learning, and Deep Learning techniques
- Proven experience working with large language models and transformer
architectures, such as GPT, BERT, or similar
- Familiarity with large-scale data processing and analysis, preferably
within the medical domain
- Proficiency in Python and relevant NLP libraries (e.g., NLTK, spaCy,
Hugging Face Transformers)
- Experience with at least one big-data framework (e.g., Ray, Spark) and
one deep learning framework (e.g., PyTorch, JAX)
- Experience with cloud infrastructure (e.g., AWS, GCP, Azure),
containerization technologies (e.g., Docker, Kubernetes), and bash
scripting
- Strong collaboration and communication skills, with the ability to
work effectively in a cross-functional team
- Comfortable in start-up environments
- Socially competent team player
- High energy and ambition
- Agile mindset
*Application Links*
You can work remotely from anywhere in the UK, the Netherlands, or Spain.
You must be a resident of one of these countries and be legally authorized
to work there without requiring Veeva’s support for a visa or relocation.
*If you do not meet this condition but believe you are an exceptional
candidate, please explain this in a separate note and we will consider
it.*
Spain: https://jobs.lever.co/veeva/2bf92570-a680-40e8-96b0-a8629e3feac7
<https://jobs.lever.co/veeva/61dc60d9-c888-4636-836e-2a75ff9f0567>
UK: https://jobs.lever.co/veeva/f0e989b5-9d14-4f82-baaa-2fc56a76ba16
Netherlands:
https://jobs.lever.co/veeva/2bf92570-a680-40e8-96b0-a8629e3feac7
--
Ehsan Khoddam
Data Science Manager - Medical NLP
Link Data Science
Veeva Systems
m +31623213197
ehsan.khoddam(a)veeva.com
[apologies if you received multiple copies of this call]
We are pleased to invite abstract submissions for session 3, "Large
Language Models," at the upcoming "1st Conference of the German AI Service
Centers (KonKIS24)" with a focus on "Advancing Secure AI in Critical
Infrastructures for Health and Energy." Please visit the main event page
https://events.gwdg.de/event/615/ for more details.
We encourage submissions that align with the conference's theme,
particularly in the following areas:
- *Pretraining Techniques for LLMs*: Exploring foundational strategies
and algorithms.
- *Testing and Evaluating LLM Fitness*: Methods for assessing
performance on well-known tasks and benchmarks.
- *Application of LLMs in Scientific Research*: Case studies and
examples of LLMs driving discovery and innovation.
- *Innovative Insights Generation*: Strategies for leveraging LLMs to
generate novel insights and accelerate research outcomes.
- *Challenges and Solutions in LLM Application*: Discussing the
practical challenges and potential solutions in scientific research.
Accepted abstracts will be featured through short presentations during the
session. The conference will take place on September 18-19 in picturesque
Göttingen. For more information, to submit an abstract, book a stand, or
register, please visit the program homepage
https://events.gwdg.de/event/615/program.
Feel free to contact me (jennifer[dot]dsouza[at]tib[dot]eu) directly with
any questions about this session.
Dear all,
We are excited to announce the 7th FEVER workshop and shared task, co-located with EMNLP 2024. The full CFP is available at https://fever.ai/workshop.html; some highlights are below:
New Shared Task: In this year’s workshop we will organise a new fact-checking shared task, AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web. It will consist of claims that are fact-checked using evidence from the web. For each claim, systems must return a label (Supported, Refuted, Not Enough Evidence, Conflicting Evidence/Cherry-picking) and appropriate evidence. The evidence must be retrieved from the document collection provided by the organisers or from the Web (e.g. using a search API). For more information, see our shared task page<https://fever.ai/task.html>.
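As a rough illustration of the output described above (this is not the
official AVeriTeC submission format; all field names below are hypothetical),
a system prediction pairs a claim with one of the four labels and a list of
evidence items with their sources:

```python
# Hypothetical sketch of a single prediction record for the shared task.
# Field names and structure are illustrative only; consult the official
# shared task page (https://fever.ai/task.html) for the real format.
from dataclasses import dataclass, field
from typing import List

LABELS = {
    "Supported",
    "Refuted",
    "Not Enough Evidence",
    "Conflicting Evidence/Cherry-picking",
}


@dataclass
class Evidence:
    source: str   # URL from the provided document collection or the web
    snippet: str  # text passage used to justify the verdict


@dataclass
class Prediction:
    claim: str
    label: str
    evidence: List[Evidence] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.label not in LABELS:
            raise ValueError(f"Unknown label: {self.label!r}")


# Example usage with made-up content:
pred = Prediction(
    claim="Example claim to be verified.",
    label="Refuted",
    evidence=[Evidence(source="https://example.org/article",
                       snippet="Relevant passage ...")],
)
print(pred.label, len(pred.evidence))
```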
The timeline for it is as follows:
* Training/dev data release: April 2024
* Test data release: July 10, 2024
* Shared task deadline: July 20, 2024
* Shared task submission due: August 15, 2024
We invite long and short papers on all topics related to fact extraction and verification, including:
* Information Extraction
* Semantic Parsing
* Knowledge Base Population
* Natural Language Inference
* Textual Entailment Recognition
* Argumentation Mining
* Machine Reading and Comprehension
* Claim Validation/Fact checking
* Question Answering
* Information Retrieval and Seeking
* Theorem Proving
* Stance detection
* Adversarial learning
* Computational journalism
* Descriptions of systems for the FEVER<http://fever.ai/2018/task.html>, FEVER 2.0<http://fever.ai/2019/task.html>, FEVEROUS<https://fever.ai/2021/task.html> and AVERITEC<https://fever.ai/dataset/averitec.html> Shared Tasks
Important dates:
* Submission deadline: August 15, 2024 (ARR and non-ARR submission deadline)
* Commitment deadline: September 23, 2024
* Notification: September 27, 2024
* Camera-ready deadline: October 4, 2024
* Workshop: November 15 or 16, 2024
All deadlines are 11:59 pm UTC-12 ("anywhere on Earth").
Feel free to contact us on our slack channel<https://join.slack.com/t/feverworkshop/shared_invite/zt-4v1hjl8w-Uf4yg~dift…> or via email: fever-organisers(a)googlegroups.com with any questions.
Looking forward to your participation!
--
The FEVER workshop organizers
Hi everyone,
Please find below a request for participation in a very short study that is part of a bachelor's thesis by one of my students.
Best,
Dominik
-------- Forwarded Message --------
Subject: Seeking participants for my quick study
Date: Thu, 13 Jun 2024 09:38:22 +0000
From: Wolkober, Marcel <st163937(a)stud.uni-stuttgart.de>
To: dominik.schlechtweg(a)ims.uni-stuttgart.de <dominik.schlechtweg(a)ims.uni-stuttgart.de>
Hello!
For my bachelor's thesis I need participants for my quick online study.
It will take approximately 5 to 10 minutes to complete and is in English. You can use your smartphone, but it is recommended to use a PC browser.
You can access the study here: https://semantic-nlp-captcha.de/study
Everything else will be explained there. If you have trouble on mobile, activate desktop mode.
It would be a great help if you could forward this study to others. Thanks!
Best wishes,
Marcel Wolkober
Apologies for crossposting.
Call for Papers
Information Processing & Management (IPM), Elsevier
- CiteScore: 14.8
- Impact Factor: 8.6
Guest editors:
- Omar Alonso, Applied Science, Amazon, Palo Alto, California, USA.
E-mail: omralon(a)amazon.com
- Stefano Marchesin, Department of Information Engineering, University of
Padua, Padua, Italy. E-mail: stefano.marchesin(a)unipd.it
- Gianmaria Silvello, Department of Information Engineering, University
of Padua, Padua, Italy. E-mail: gianmaria.silvello(a)unipd.it
Special Issue on “Large Language Models and Data Quality for Knowledge
Graphs”
In recent years, Knowledge Graphs (KGs), encompassing millions of
relational facts, have emerged as central assets to support virtual
assistants and search and recommendations on the web. Moreover, KGs are
increasingly used by large companies and organizations to organize and
comprehend their data, with industry-scale KGs fusing data from various
sources for downstream applications. Building KGs involves data management
and artificial intelligence areas, such as data integration, cleaning,
named entity recognition and disambiguation, relation extraction, and
active learning.
However, the methods used to build these KGs rely on automated components
that are far from perfect, resulting in KGs that are highly sparse and
contain numerous inaccuracies and incorrect facts. As a result, evaluating KG
quality plays a significant role, as it serves multiple purposes – e.g.,
gaining insights into the quality of data, triggering the refinement of the
KG construction process, and providing valuable information to downstream
applications. In this regard, the information in the KG must be correct to
ensure an engaging user experience for entity-oriented services like
virtual assistants. Despite its importance, there is little research on
data quality and evaluation for KGs at scale.
In this context, the rise of Large Language Models (LLMs) opens up
unprecedented opportunities – and challenges – to advance KG construction
and evaluation, providing an intriguing intersection between human and
machine capabilities. On the one hand, integrating LLMs within KG
construction systems could trigger the development of more context-aware
and adaptive AI systems. At the same time, however, LLMs are known to
hallucinate and can thus generate mis/disinformation, which can affect the
quality of the resulting KG. In this sense, reliability and credibility
components are of paramount importance to manage the hallucinations
produced by LLMs and avoid polluting the KG. On the other hand,
investigating how to combine LLMs and quality evaluation has excellent
potential, as shown by promising results from using LLMs to generate
relevance judgments in information retrieval.
Thus, this special issue promotes novel research on human-machine
collaboration for KG construction and evaluation, fostering the
intersection between KGs and LLMs. To this end, we encourage submissions
related to using LLMs within KG construction systems, evaluating KG
quality, and applying quality control systems to empower KG and LLM
interactions in both research- and industry-oriented scenarios.
Topics include but are not limited to:
- KG construction systems
- Use of LLMs for KG generation
- Efficient solutions to deploy LLMs on large-scale KGs
- Quality control systems for KG construction
- KG versioning and active learning
- Human-in-the-loop architectures
- Efficient KG quality assessment
- Quality assessment over temporal and dynamic KGs
- Redundancy and completeness issues
- Error detection and correction mechanisms
- Benchmarks and evaluation
- Domain-specific applications and challenges
- Maintenance of industry-scale KGs
- LLM validation via reliable/credible KG data
Submission guidelines:
Authors are invited to submit original and unpublished papers. All
submissions will be peer-reviewed and judged on originality, significance,
quality, and relevance to the special issue topics of interest. Submitted
papers should not have appeared in or be under consideration for another
journal.
Papers can be submitted up to 1 September 2024. The estimated publication
date for the special issue is 15 January 2025.
Paper submission via the IP&M electronic submission system:
https://www.editorialmanager.com/IPM
To submit your manuscript to the special issue, please choose the article
type:
"VSI: LLMs and Data Quality for KGs".
More info here:
https://www.sciencedirect.com/journal/information-processing-and-management…
Instructions for authors:
https://www.sciencedirect.com/journal/information-processing-and-management…
Important dates:
- Submissions close: 1 September 2024
- Publication date (estimated): 15 January 2025
References:
G. Weikum, X. L. Dong, S. Razniewski, et al. 2021. Machine Knowledge:
Creation and Curation of Comprehensive Knowledge Bases. Found. Trends
Databases 10, 108–490.
A. Hogan, E. Blomqvist, M. Cochez, et al. 2021. Knowledge Graphs. ACM
Comput. Surv. 54, 71:1–71:37.
B. Xue and L. Zou. 2023. Knowledge Graph Quality Management: A
Comprehensive Survey. IEEE Trans. Knowl. Data Eng. 35, 5, 4969–4988.
G. Faggioli, L. Dietz, C. L. A. Clarke, G. Demartini, M. Hagen, C. Hauff,
N. Kando, E. Kanoulas, M. Potthast, B. Stein, and H. Wachsmuth. 2023.
Perspectives on Large Language Models for Relevance Judgment. In Proc. of
the 2023 ACM SIGIR International Conference on Theory of Information
Retrieval (ICTIR 2023), Taipei, Taiwan, 23 July 2023. ACM, 39–50.
S. MacAvaney and L. Soldaini. 2023. One-Shot Labeling for Automatic
Relevance Estimation. In Proc. of the 46th International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR
2023), Taipei, Taiwan, July 23-27, 2023. ACM, 2230–2235.
X. L. Dong. 2023. Generations of Knowledge Graphs: The Crazy Ideas and the
Business Impact. Proc. VLDB Endow. 16, 12, 4130–4137.
S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, and X. Wu. 2023. Unifying Large
Language Models and Knowledge Graphs: A Roadmap. CoRR abs/2306.08302.
--
Stefano Marchesin, PhD
Assistant Professor (RTD/a)
Information Management Systems (IMS) Group
Department of Information Engineering
University of Padua
Via Gradenigo 6/a, 35131 Padua, Italy
Home page: http://www.dei.unipd.it/~marches1/
For the full text: https://nllpw.org/workshop/call/
Following the success of the first five editions of the NLLP workshop (NAACL 2019, KDD 2020, EMNLP 2021, EMNLP 2022, EMNLP 2023), we aim to bring researchers and practitioners from NLP, machine learning and other artificial intelligence disciplines together with legal practitioners and researchers. We welcome submissions describing original work on legal data, as well as data with legal relevance, such as:
Applications of NLP to legal tasks including, but not limited to:
Legal Citation Resolution
Case Outcome Analysis and Prediction
Models of Legal Reasoning
E-Discovery
Lexical and other Data Resources for the Legal Domain
Bias and Privacy
Applications of Large Language Models (LLMs) to Legal Data and Tasks
Experimental results using and adapting NLP methods for legal data including, but not limited to:
Classification
Information Retrieval
Anomaly Detection
Clustering
Knowledge Base Population
Multimedia Search
Link Analysis
Entity Recognition and Disambiguation
Training and Using Embeddings
Parsing
Dialogue and Discourse Analysis
Text Summarization and Generation
Relation and Event Extraction
Anaphora Resolution
Question Answering
Query Understanding
Combining Text with Structured Data
Tasks:
Description of new legal tasks for NLP
Structured overviews of a specific task with the goal of identifying new areas for research
Position papers presenting new visions, challenges and changes to existing research practices
Resources:
Creation of curated and/or annotated data sets that can be publicly released and used by the community to advance the field
Demos:
Descriptions of systems which use NLP technologies for legal text;
Industrial Research:
Industrial applications
Papers describing research on proprietary data
Interdisciplinary position papers:
Legal or socio-legal analyses relating to the role NLP can play in the legal field
Critical reflections on the legality and ethics of data collection and processing practices
Critical reflections about the benefits and challenges of Large Language Models (LLMs) from a legal and regulatory perspective
Submission
------------------------------------
We accept papers reporting original (unpublished) research of two types:
Long papers (max 8 pages + references)
Short papers (max 4 pages + references)
Appendices and acknowledgements do not count against the maximum page limit and should be formatted according to the guidelines below.
To submit a paper, please access the submission link https://softconf.com/emnlp2024/nllp/
Conference proceedings will be published on the ACL Anthology.
Shared Task
Together with Darrow.ai, we are organizing the LegalLens shared task. More information is provided here: https://www.codabench.org/competitions/3052/
Participants will be invited to describe their systems in a paper for the NLLP workshop proceedings. The task organizers will write an overview paper that describes the task, summarizes the different approaches taken, and analyzes their results.
More information on the submission of description papers will follow.
Ethics section
The NLLP workshop adheres to the same standards regarding ethics as the EMNLP 2024 conference. Authors will be allowed extra space after the 8th page (4th for short papers) for an optional broader impact statement or other discussion of ethics. Note that an ethical considerations section is not required, but papers working with sensitive data or on sensitive tasks that do not discuss these issues will not be accepted.
Non-archival option
The authors have the option of submitting previously unpublished research as non-archival, meaning that only the abstract will be published in the conference proceedings. We expect these submissions to describe work of the same quality as archival submissions, and they will be reviewed following the same procedure. This option accommodates later publication of the work, or a superset of it, in a conference or journal that does not allow previously archived work, and encourages presentation of and feedback on mature yet unpublished work. Non-archival submissions should adhere to the same formatting and length constraints as archival submissions.
Dual Submission and Pre-print Policy
Papers that have been or will be submitted to workshops, conferences or journals during the review period must indicate so at submission time. Authors of papers accepted for presentation at the NLLP 2024 workshop must notify the organizers by the camera-ready deadline as to whether the paper will be presented or withdrawn.
If the preliminary version of a paper was posted on arXiv, the authors should NOT mention it as their own paper in the submission. Papers that violate the double-blind review requirements will be desk rejected.
Exception: Submissions with the non-archival option are excepted from these requirements.
ACL Rolling Review Submissions
Our workshop also welcomes submissions from ACL Rolling Review (ARR). Authors of any papers that are submitted to ARR and have their meta-review ready may submit their papers and reviews for consideration for the workshop until 27 September 2024. This includes submissions to ARR for the 15 August deadline. Publication decisions will be announced by 8 October 2024. The commitment should be made via the workshop submission website: https://softconf.com/emnlp2024/nllp/ ("ACL Rolling Review Commitment" submission type)
EMNLP 2024 Submissions
Authors of any papers that have been reviewed for EMNLP 2024 and were rejected have the opportunity to send their paper and reviews to be considered for publication in the NLLP workshop proceedings. The deadline for submitting papers and reviews is 27 September 2024. Publication decisions will be announced by 8 October 2024. The submission should be made via the workshop submission website: https://softconf.com/emnlp2024/nllp/ ("EMNLP 2024 Submission with reviews" submission type)
Double-Blind reviewing
The review process is double-blind. Submitted papers must not include author names and affiliations, and they must be written in a way that does not break the double-blind reviewing process. If the preliminary version of a paper was posted on arXiv, the authors should NOT mention it as their own paper in the submission. Papers that violate the double-blind review requirements will be desk rejected.
Submission Style & Format Guidelines
Paper submissions must use the official ACL style templates, which are available here (LaTeX and Word). Please follow the general paper formatting guidelines for "*ACL" conferences, available here.
Authors may not modify these style files or use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.
All long, short and theme papers must follow the ACL Author Guidelines.
Important deadlines
Submission deadline ― 3 September 2024
Submission of EMNLP papers with reviews and ARR commitment ― 27 September 2024
Notification for direct submissions, ARR and EMNLP papers ― 8 October 2024
Camera ready due ― 15 October 2024 (tentative)
Workshop ― 15 or 16 November 2024
All deadlines are 11:59 pm UTC-12
Presentation
Presentation format for each paper and schedule will be announced between acceptance notification and the camera-ready deadline.
At least one author of each accepted paper must register for the NLLP 2024 workshop by the registration deadline in order for the submission to be published in the proceedings.
Welcome to the Tenth Swedish Language Technology Conference (SLTC), to be held in Linköping, Sweden, 27–29 November 2024.
https://sltc2024.github.io/
## Submissions
We invite submissions on all theoretical, practical, and applied aspects of language technology, including natural language processing, computational linguistics, speech technology, and neighbouring areas. Submissions can report on completed or ongoing research and practical applications of language technology and may be combined with system demonstrations.
The conference does not publish proceedings (“non-archival”), but authors can opt to make their accepted contributions available on the conference webpage. Hence, it is possible to submit abstracts related to work that has been or will be published elsewhere as long as this is compatible with the conditions of the respective publication channels.
## Important Dates
* Submission deadline: Wednesday, 4 September
* Notification of acceptance: Monday, 14 October
* Camera-ready version: Friday, 1 November
* Main conference: Wednesday–Thursday, 27–28 November
* Workshops: Friday, 29 November
## Submission formats
Submissions are extended abstracts using style files that we will make available on the conference webpage. They should include author names and affiliations (i.e., they should not be anonymous). Abstracts should be up to four pages, excluding references, and be submitted via OpenReview no later than Wednesday, 4 September. Please see the conference webpage for details. For more information about submissions, see https://sltc2024.github.io/cfp.
## Organisers
SLTC 2024 is organised by Linköping University. The organisation committee is chaired by
* Lars Ahrenberg
* Arne Jönsson
* Marco Kuhlmann
* Jenny Kunz
To inquire about any aspect of the conference, please email sltc2024(a)googlegroups.com.
An opportunity to join the Assessment Research Group at the British Council as Researcher: AI & Data Science in Assessment. Full details and link to application here: https://careers.britishcouncil.org/job/London-Researcher-AI-&-Data-Science-…
For any enquiries, feel free to contact me. The deadline for applications is 1 April.
The British Council is the United Kingdom's international organisation for cultural relations and educational opportunities. A registered charity: 209131 (England and Wales) SC037733 (Scotland).
*** Last Call for Demo and Poster Submissions ***
12th IEEE International Conference on Cloud Engineering (IC2E)
September 24-27, 2024, 5* Aliathon Resort, Paphos, Cyprus
https://conferences.computer.org/IC2E/2024/
The IEEE International Conference on Cloud Engineering (IC2E) is a premier conference on Cloud Computing,
a paradigm that in the last two decades has significantly changed the way IT resources are consumed.
###IMPORTANT DATES###
* Demo and poster submission deadline: June 23, 2024 (AoE)
* Author notification: July 15, 2024
* Camera-ready due: August 2, 2024
###TOPICS OF INTEREST###
IC2E 2024 invites submissions of high-quality demo and poster papers describing all aspects of cloud engineering. Some representative topics of interest include, but are not limited to:
* Cloud management and engineering, from single (micro-)services to complete system landscapes.
* Cloud applications, ranging from the Internet of Things (IoT) over Big Data to Machine Learning.
* Cloud systems, including storage, data distribution, and serverless computing.
* Cloud security and privacy concerns on all levels of the Cloud stack.
* Fog and Edge Computing, and all other system types in the Cloud-Edge Compute Continuum.
* Everything as a Service.
* Non-technical aspects of Cloud Computing, e.g., cloud governance or cloud economics.
### SUBMISSION ###
Authors must submit poster/demo papers in PDF at EasyChair,
https://easychair.org/conferences/?conf=ic2e2024, upon selecting the Posters/Demos Track option. Papers
should use the IEEE Manuscript Template for Conference Proceedings for formatting. LaTeX users must use
\documentclass[10pt,conference]{IEEEtran}, without including the compsoc or compsocconf options. Poster and
demo papers may not exceed 2 double-column pages and should be single-blind. Depending on the paper type, the
paper title should start with either "Demo:" or "Poster:".
### REVIEW PROCESS AND PUBLICATION ###
All submitted papers will be peer-reviewed based on technical merit, novelty, and potential to stimulate
interesting discussions at the conference, as well as alignment with the conference theme. The papers must
contain original ideas and must not have been published or be under review elsewhere, except for demo papers,
which may showcase previously published systems. In this case, authors must clearly state this and include the
original publication as part of their submission. Accepted papers will be published by IEEE Computer Society
Conference Publishing Services (indexed by EI).
### ORGANIZATION & CONTACT ###
Demo and Poster Chairs:
* Thaleia Dimitra Doudali, IMDEA Software Institute, Spain (thaleia.doudali(a)imdea.org)
* Lena Mashayekhy, University of Delaware, USA (mlena(a)udel.edu)
General Chairs:
* George Pallis, University of Cyprus, Cyprus
* Weisong Shi, University of Delaware, USA
International Conference ‘New Trends in Translation and Technology’ (NeTTT’2024)
Varna, Bulgaria, 3-6 July 2024
# Call for Participation
We are pleased to share the accepted papers of NeTTT’2024. To view the full list, please click here - https://nettt-conference.com/accepted-papers/.
To register, please visit https://nettt-conference.com/fees-registration/
We very much hope to welcome you at NeTTT’2024 in Varna!
# The conference
The second edition of the forthcoming International Conference ‘New Trends in Translation and Technology’ (NeTTT’2024) will take place in Varna, Bulgaria, 3-6 July 2024.
The objective of the conference is (i) to bridge the gap between academia and industry in the field of translation and interpreting by bringing together academics in linguistics, translation studies, machine translation and natural language processing, developers, practitioners, language service providers and vendors who work on or are interested in different aspects of technology for translation and interpreting, and (ii) to be a distinctive event for discussing the latest developments and practices. NeTTT’2024 invites all professionals who would like to learn about the new trends, present the latest work or/and share their experience in the field, and who would like to establish business and research contacts, collaborations and new ventures.
The conference will take the form of presentations (peer-reviewed research and user presentations, keynote speeches), and posters; it will also feature panel discussions. The accepted papers will be published as open-access conference e-proceedings.
# Venue
The conference will take place at Conference Hotel Cherno More, Varna, situated only 200 m away from the fine sandy Black Sea beach.
# Keynote speakers
We are delighted to announce the NeTTT’2024 keynote speakers:
- Helena Moniz (University of Lisbon and Unbabel), President of the European Association of Machine Translation
- Carla Parra Escartín (RWS Language Weaver)
# Tutorial (3 July 2024)
- Tharindu Ranasinghe (Lancaster University), Quality Estimation for Machine Translation
# Special session - Future of Translation Technology in the Era of LLMs and Generative AI
We are excited to share that NeTTT’2024 will have a special theme with the goal of stimulating discussion around Large Language Models, Generative AI and the Future of Translation and Interpreting Technology. While the new generation of Large Language Models such as ChatGPT and LLaMA showcase remarkable advancements in language generation and understanding, we find ourselves in uncharted territory when it comes to their performance on various Translation and Interpreting Technology tasks with regard to fairness, interpretability, ethics and transparency.
# Sponsors
We are proud to announce the conference sponsors:
OONA – Diamond Sponsor
Pangeanic – Gold Sponsor
MITRA Translations – Silver Sponsor
juremy – Bronze Sponsor
# Further information and contact details
The conference website is https://nettt-conference.com/ and will be updated on a regular basis. For further information, please contact us at nettt2024(a)nettt-conference.com.
Best Regards
Tharindu Ranasinghe