Call for Abstracts
'Towards Linguistically Motivated Computational Models of Framing'
Date: Feb 28 - Mar 1, 2024
Location: Ruhr-University Bochum, Germany
Organizers: Annette Hautli-Janisz (University of Passau), Gabriella
Lapesa (University of Stuttgart), Ines Rehbein (University of Mannheim)
Homepage: https://sites.google.com/view/dgfs2024-framing
Call for Papers:
Framing is a central notion in the study of language use to rhetorically
package information strategically to achieve conversational goals
(Entman, 1993) but also, more broadly, in the study of how we organize
our experience (Goffman, 1974). In his seminal article, Entman (1993)
defines framing as "to select some aspects of a perceived reality and
make them more salient in a communicating text, in such a way as to
promote problem definition, causal interpretation, moral evaluation,
and/or treatment recommendation for the item described." This frame
definition has recently been operationalized in NLP in terms of
coarse-grained topic dimensions (Card et al., 2015), e.g., by modeling
the framing of immigration in the media as a challenge to economy vs. a
human rights issue. But there is more to frames than just topics.
The breadth of the debate on what constitutes a frame and on its (formal
and cognitive) definition naturally correlates to the interdisciplinary
relevance of this phenomenon: a theoretically motivated (computational)
model for framing is still needed, and this is precisely the goal of
this workshop, which will bring together researchers from theoretical,
applied and computational linguistics interested in framing analysis.
Our main interest is in furthering our understanding of how different
linguistic levels contribute to the framing of messages, and in paving the
way for the development of linguistically-driven computational models of
how people use framing to communicate their attitudes, preferences and
opinions.
We thus invite contributions that cover all levels of linguistic
analysis and methods: from phonetics (e.g., euphony: the use of
repetition, alliteration, rhymes and slogans to create persuasive
messages) and syntax (e.g., topicalization, passivization) to semantics
(lexical choices, such as Pro-Life vs. Pro-Choice; the use of pronouns
to create in- vs. out-groups; the use of metaphors; different types of
implicit meaning) to pragmatics (e.g., pragmatic framing through the use
of presupposition-triggering adverbs). We also invite experimental
and computational studies on framing that employ linguistic structure
to better understand instances of framing.
The workshop is part of the 46th Annual Conference of the German
Linguistic Society (DGfS 2024), held from 28 Feb - 1 March 2024 at
Ruhr-Universität Bochum, Germany.
Submission instructions:
We invite the submission of anonymous abstracts for 30 min talks
including discussion. Submissions should not exceed one page, 11pt
single spaced (abstract + references), with an optional additional page
for images. The reviewing process is double-blind; please ensure that
the paper does not include the authors' names and affiliations.
Furthermore, self-references that reveal the author's identity, e.g.,
"We previously showed (Smith, 1991) ...", should be avoided. Instead,
use citations such as "Smith previously showed (Smith, 1991) …".
Submissions open: June 1, 2023 - Aug. 18, 2023
Abstract review period: Aug. 21, 2023 - Sept. 5, 2023
Meeting email: dgfs2024-framing(a)fim.uni-passau.de
--
Ines Rehbein
Data and Web Science Group
University of Mannheim, Germany
The Industry Day of CIKM ’23 will be held on Sunday 22nd Oct 2023 in Birmingham, UK. As with the main conference, which will be held on-site, we anticipate that all presentations for the Industry Day will be delivered in person. Exceptions may be made in case of severe travelling restrictions.
We call for technical talks that cover how topics of interest to the broader CIKM community, including but not limited to knowledge management, information retrieval, efficient data processing, neural and large language models, evaluation, recommender systems, data mining, and others found in the CIKM ‘23 Call for Papers, are used in an industrial setting. For example: how machine learning is put to use in practical scenarios, how user behaviour can be observed and interpreted, how to improve systems in practice, how industrial pipelines can be optimised, and how scale is a challenge in more ways than the obvious. We also encourage talk proposals from small companies, such as startups or spin-offs from either a university project or a large company.
Talks may address challenges, solutions, and case studies of interesting and innovative systems in areas including but not limited to:
* Innovative approaches used in deployed systems and products
* System designs from industry practitioners that identify best practices and design principles for machine learning systems and their scalability aspects
* Metrics and measurement techniques used to understand performance of production systems
* Practical challenges such as data, privacy, integrity, scale, regulation, etc.
* Domain specific challenges and niche focuses
* Connections with academia to solve interesting problems, including talk proposals from academics spending time in industry, or vice-versa, covering insights for other practitioners
The authors of accepted proposals will be invited to submit an abstract to be published in the conference proceedings. Each presentation will be 15-20 minutes long including Q&A. Submissions should include:
Title and abstract
Speaker's bio
Relevance to above themes and CIKM topics
CIKM is a technical conference, so preference will be given to talks describing applied research and technical challenges rather than product presentations.
Speakers will be asked to confirm their presence at the conference if their submission is accepted.
Submission Instructions
Proposals should be at most 2 pages and follow the ACM format. Formatting guidelines are available at the ACM Website (use the “sigconf” proceedings template). https://www.acm.org/publications/proceedings-template
Submissions are not anonymous and should contain speaker details. Proposals should be submitted electronically via EasyChair: https://easychair.org/my/conference?conf=cikm23
Important Dates:
- All deadlines are at 11:59pm in the Anywhere on Earth time zone.
- Submissions Due: July 14, 2023
- Notifications: August 11, 2023
- Camera ready for abstracts (no exceptions): August 18, 2023
Industry Day Chairs
Jiyin He, Signal AI, UK
Jeremy Pickens, Redgrave Data, USA
Contact: cikm2023-industry(a)easychair.org
On 6/9/23, Serge Heiden <slh(a)ens-lyon.fr> wrote:
> Hi Albretch,
>
> For some ideas, I made something related to that more than 20 years ago:
> https://shs.hal.science/halshs-00151838v1/document (in French sorry,
> just looking at the graphs at the end should be informative)
> Not as interactive as it could, but UI technologies have evolved now.
>
> Best,
> Serge
>
Nothing to apologize for, last time I checked (speaking or) writing a
paper in French or any other language other than Russian wasn't
illegal or wrong in any way (USG has made "radioactive" even the use
of any English word starting with "Vlad" or "Put"). Also, one of my Ls
is Spanish, I am fairly acquainted with Latin and know or can easily
infer a good chunk of French words. translate.google.com does a
relatively fine job of blunt translation of the general sense of
texts. For some reason I had to download your paper and upload it to
Google Translate to be able to read it in English, since it has even
taken over as my preferred curse language.
I found your paper very interesting, so I gave it a first reading to
think about it and reread it more carefully later when I found the
time to do so. Furthermore (how do you say „darüberhinausgehend" in
English? ;-)), those "face-à-faces" kinds of political debates, seen
as forceful "conversations", -conversations nonetheless-, bring about
a whole host of corpora research issues I am sure you/those studying
such topics must have noticed. There are different kinds of
"conversations". Friar Leo (Saint Francis' fioretti) and Evodius
(Saint Augustine's "De libero arbitrio voluntatis") were both real
persons, but the conversations were rather one-sided, with Friar Leo
pretty much lending an ear to Saint Francis, though Evodius was more
of an equal and Saint Augustine's personal friend. Plato involved his
teacher Socrates, Alcibiades, and even a slave (in his Meno; in those
times that was quite an affront, a way to mock the establishment), but also the
Egyptian god of writing (Thoth) in his dialogues. Shakespeare's and
Dostoevsky's fictional characters have taken a voice and life of their
own for many generations. However all those conversations have some
aspects in common.
Conversations/dialogues are peculiar in the sense that (kind of like
with music) you have more than one text hopefully realizing, more or
less, a certain train of thought (Gedankengang), yet all
participating texts reflect explicitly/lexically or implicitly in more
of a connotative way on one another into some sort of more orchestral
narrative, which (as the poet in me sees it) can NOT be fully reduced
in a syntactically operational way to just the participating texts.
Think of how music is played (when we had such a thing): each instrument
(in accord with some shared harmony and rhythm) doing its own part,
but also as part of a Hegelian whole.
> “Words have no meaning; they only have uses”. Wittgenstein's quip could lead to meaning, if we knew for a term all of its uses. But this set does not exist.
Actually, technically speaking and demonstrably, this is not quite
the case. It hasn't been for at least two decades. A back of the
envelope calculation will show that, as part of the societally-wide
"monitoring" in the breath by breath surveilled societies we live in
these days, recording, tracking ("every piece of tangible information"
(tm) to "its source") and indexing in a cross-correlated way
everything everyone says, for good and in real time, is not only feasible, but
it is exactly what the NSAs of the world have been very cheaply doing
with the happily willing cooperation of proles who (to George Orwell's
dismay) can't take their head off from their cell phones' ass.
> Because the words depend on the "situations" of use, causes and conditions of their enunciation, and we know well that these vary ad infinitum, ...
I have no way of knowing if "ad infinitum" is meant in a metaphoric
way. In actual texts, seeing words as nodes in DAGs, "ad infinitum"
would mean what?, six (6) words?
> ... in time as well as in space. The only solution consists in seizing one of these "situations" where the text is stated, precise, clearly circumscribed, dated, controllable in its essential aspects, then to make, about it, the sum of the words to be studied
... and this is what makes corpora research interesting. "These
situations" don't exist ;-). The essentially locked inner- and
outer-intersubjectivity of language makes it impossible (and/or
hopelessly senseless) to clearly circumscribe in controllable ways the
essential aspects of what, when, where, how ... a text is stated,
because texts don't have a life of their own, nor do they determine or
are derived from "factual reality" as if they were just an object,
like a piece of stone, in the physically empirical ways we have been
conditioned to "rationally" think since the scientific revolution. In
a Hegelian sense („im ,Allgemeinen' Sinne": in the 'general' sense), everyone who listened to,
thought about such debates and/or relates to them even in more or less
indirect, marginal ways (as we do right now while we talk about it)
would be part of "those situations". How could you "clearly
circumscribe ..." that?
I'm not really one for protagonism, but I could relate a (in an
early sense the most) formative experience I have had with texts (for
whatever reason I have always loved to read alternating more than one
book during the same period of time since I was little). I was born
into and raised in the Cuba of the 60's (during the most determining,
"crazy" periods of "the revolution") as part of a family of high
profile political dissidents (imprisoned before and after for
political reasons), which taxed and busied the hell out of your mind.
From teachers not speaking, not even looking at me during classes
(which other kids, of course, noticed and protested/asked about even
to their own parents (I myself could not understand it either)),
"because" as my mother explained to me, "they had to then write down
all I said during classes for the police to keep as part of my
profile" (in those times they didn't have cameras, sensors everywhere;
people didn't have cell phones); to my mother forcing her children to
go to church "because she didn't like that idea of politicians telling
people what/how to think" (I could not understand what "politicians"
meant, nor could I understand why they would mess with my wanting to play with my
friends), I could not make sense of most of the hellish weirdness
happening around me but I kept thinking and asking my mother about it
who would then start telling me about "the banality of evil", ...
which I couldn't quite understand either, forcibly turning me into
some sort of child philosopher.
One day I found in a box on its way to the garbage a religious book
("Cien lecciones de historias sagradas") which included the story in
"Saint Francis' fioretti" about "what 'perfect' joy consists in". The
moral of that story conceptually paraphrased by yours truly as a
philosophical statement with a consciousness studies slant to it would
be: "the greatest of all gifts and graces that God grants us with is
the capacity of overcoming oneself". That one liner overwhelmingly
fascinated me (it has to this very day!, its truthfulness, its poetic
import, its liberating hopefulness, …!). Later I understood Saint
Francis was talking about that thing they used to call "virtue"
(which, since you can’t can it and sell it, doesn’t matter much
these days), ... and that the underlying aspect his true statement was
based on had no explanation (kind of "pulling oneself up by one's own
bootstraps" not even using some sort of Archimedean lever, but one's
own spirituality. How on earth is that even possible!?!) and how could
something Saint Francis said to friar Leo during their wintry walk to
Saint Mary's of the Angels in Perugia/Umbria/Central Italy in the XIV
century as part of the medieval Zeitgeist, after going through such a
long sequence of communicative realizations (from Fr. Ugolino
authoring, discussions, reprints, moving the text to another
continent, ...) impress so mightily a little boy in Havana some 500+
years later?
> From this sum, this exploration and this exhaustive comparison of contexts, the statistical research of co-occurrences can draw an objective description. Does it put us on a track that leads to meaning?
Yes, to some extent, but we shouldn't be under illusions about that kind of
"objective description". The interesting aspect would be: to what
extent would claims about "objectivity" demonstrably be factual in some
sense, and to what extent would we be, "quite naturally", reading
whatever we want into those texts in order to "logically" justify
whatever we want? We should keep it real because our work is part of
the "AI"-based "social control" (as freedom lovers call -repression-)
thoroughly employed these days in all levels of society and against
individuals they target.
Yet, there must be something to it, because we live and learn through
our conscious communicative exchanges. Society at large is kind of a
corpus, a hyper forest of decentralized texts (not only "texts" in the
way we see them as sequences of characters consciously written on some
permanent media from stone, to reed, to paper, to magnetic excitation
on a hard drive, but our actions are functional "texts" through which
we constantly interact with one another ...). You buy some veggies
paying with some money which you earned by functionally doing whatever
with a social import, which doesn't mean the same to you as it does to
the farmer growing it, the people who brought it to the store or
selling it to you, ..., but it somehow all happens just fine. "Money",
"texts", ... semiologically serving as multipurpose accommodating and
balancing medium both outer- and inner-intersubjectively, indeed our
mind-body link!
lbrtchx
(apologies for cross-posting)
----------------------------------------------------------------
*Workshop for NLP Open Source Software (NLP-OSS)*
06 Dec 2023, Co-located with EMNLP 2023
https://nlposs.github.io/
Deadline for Long and Short Paper submission: 09 August, 2023
(23:59, GMT-11)
----------------------------------------------------------------
You have tried to use the latest, bestest, fastest LLM models and bore
grievances but found the solution after hours of coffee and computer
staring. Share that at NLP-OSS and suggest how open source could
change for the better (e.g. best practices, documentation, API design
etc.)
You came across an awesome SOTA system on NLP task X and no LLM has
beaten its F1 score. However, the code is now stale and it takes a
dinosaur to understand the code. Share your experience at NLP-OSS and
propose how to "replicate" these forgotten systems.
You see this shiny GPT from a blog post, tried it to reproduce similar
results on a different task and it just doesn't work on your dataset.
You did some magic to the code and now it works. Show us how you did
it! Though they're small tweaks, well-motivated and empirically tested
ones are valid submissions to NLP-OSS.
You have tried 101 NLP tools and there's none that really do what you
want. So you wrote your own shiny new package and made it open source.
Tell us why your package is better than the existing tools. How did
you design the code? Is it going to be a one-time thing? Or would you
like to see thousands of people using it?
You have heard enough of open-source LLM and pseudo-open-source GPT
but not enough about how it can be used for your use-case or your
commercial product at scale. So you contacted your legal department
and they explained to you about how data, model and code licenses
work. Share the knowledge with the NLP-OSS community.
You have a position/opinion to share about free vs open vs closed
source LLMs and have valid arguments, references or survey/data to
support your position. We would want to hear more about it.
At last, you've found the avenue to air these issues in an academic
platform at the NLP-OSS workshop!!!
Share your experiences, suggestions and analysis from/of NLP-OSS.
P/S:
1st CALL FOR PAPERS
====
----------------------------------------------------------------
*Workshop for NLP Open Source Software (NLP-OSS)*
06 Dec 2023, Co-located with EMNLP 2023
https://nlposs.github.io/
Deadline for Long and Short Paper submission: 09 August, 2023
(23:59, GMT-11)
----------------------------------------------------------------
The Third Workshop for NLP Open Source Software (NLP-OSS) will be co-located
with EMNLP 2023 on 06 Dec 2023.
Focusing more on the social and engineering aspect of NLP software
and less on scientific novelty or state-of-the-art models, the Workshop for NLP-OSS
is an academic forum to advance open source developments for NLP research,
teaching and application.
NLP-OSS also provides an academic workshop to announce new software/features,
promote the collaborative culture and best practices that go beyond
the conferences.
We invite full papers (8 pages) or short papers (4 pages) on topics related to
NLP-OSS broadly categorized into (i) software development, (ii) scientific
contribution and (iii) NLP-OSS case studies.
- **Software Development**
- Designing and developing NLP-OSS
- Licensing issues in NLP-OSS
- Backwards compatibility and stale code in NLP-OSS
- Growing, maintaining and motivating an NLP-OSS community
- Best practices for NLP-OSS documentation and testing
- Contribution to NLP-OSS without coding
- Incentivizing OSS contributions in NLP
- Commercialization and Intellectual Property of NLP-OSS
- Defining and managing NLP-OSS project scope
- Issues in API design for NLP
- NLP-OSS software interoperability
- Analysis of the NLP-OSS community
- **Scientific Contribution**
- Surveying OSS for specific NLP task(s)
- Demonstration, introductions and/or tutorial of NLP-OSS
- Small but useful NLP-OSS
- NLP components in ML OSS
- Citations and references for NLP-OSS
- OSS and experiment replicability
- Gaps between existing NLP-OSS
- Task-generic vs task-specific software
- **Case studies**
- Case studies of how a specific bug is fixed or feature is added
- Writing wrappers for other NLP-OSS
- Writing open-source APIs for open data
- Teaching NLP with OSS
- NLP-OSS in the industry
Submission should be formatted according to the [EMNLP 2023
templates](https://2023.emnlp.org/call-for-papers) and submitted to
[OpenReview](https://openreview.net/group?id=EMNLP/2023/Workshop/NLP-OSS)
ORGANIZERS
Geeticka Chauhan, Massachusetts Institute of Technology
Dmitrijs Milajevs, Grayscale AI
Elijah Rippeth, University of Maryland
Jeremy Gwinnup, Air Force Research Laboratory
Liling Tan, Amazon
Dear colleagues,
We are happy to invite you to join the *Arabic NER SharedTask 2023*
<https://dlnlp.ai/st/wojood/> which will be organized as part of the WANLP
2023. We will provide you with a large corpus and Google Colab notebooks to
help you reproduce the baseline results.
An invitation to participate in the shared task on named entity extraction from Arabic text. We will provide participants with a corpus and software for obtaining baseline results that participants can build upon.
*INTRODUCTION*
Named Entity Recognition (NER) is integral to many NLP applications. It is
the task of identifying named entity mentions in unstructured text and
classifying them to predefined classes such as person, organization,
location, or date. Due to the scarcity of Arabic resources, most of the
research on Arabic NER focuses on flat entities and addresses a limited
number of entity types (person, organization, and location). The goal of
this shared task is to alleviate this bottleneck by providing Wojood, a
large and rich Arabic NER corpus. Wojood consists of about 550K tokens (MSA
and dialect, in multiple domains) that are manually annotated with 21
entity types.
*REGISTRATION*
Participants need to register via this form (
*https://forms.gle/UCCrVNZ2LaPviCZS6* <https://forms.gle/UCCrVNZ2LaPviCZS6>).
Participating teams will be provided with common training and development
datasets. No external manually labelled datasets are allowed. A blind test
set will be used to evaluate the output of the participating teams.
Each team is allowed a maximum of 3 submissions. All teams are required to
report on the development and test sets (after results are announced) in
their write-ups.
*FAQ*
For any questions related to this task, please check our *Frequently Asked
Questions*
<https://docs.google.com/document/d/1XE2n89mFLic2P9DO_sAD51vy734BOt0kgtZ6bFf…>
*IMPORTANT DATES*
- March 03, 2023: Registration available
- May 25, 2023: Data sharing and evaluation on development set available
- June 10, 2023: Registration deadline
- July 20, 2023: Test set made available
- July 30, 2023: Evaluation on test set (TEST) deadline
- August 29, 2023: Shared task system paper submissions due
- October 12, 2023: Notification of acceptance
- October 30, 2023: Camera-ready version
- TBA: WANLP 2023 Conference.
** All deadlines are 11:59 PM UTC-12:00 (Anywhere On Earth).*
*CONTACT*
For any questions related to this task, please contact the organizers
directly using the following email address: *NERShare...(a)gmail.com
<https://groups.google.com/>* or join the google group:
*https://groups.google.com/g/ner_sharedtask2023*
<https://groups.google.com/g/ner_sharedtask2023>.
*SHARED TASK*
As described, this shared task targets both flat and nested Arabic NER. The
subtasks are:
*Subtask 1:* *Flat NER*
In this subtask, we provide the Wojood-Flat train (70%) and development
(10%) datasets. The final evaluation will be on the test set (20%). The
flat NER dataset is the same as the nested NER dataset in terms of
train/test/dev split, and each split contains the same content. The only
difference in the flat NER data is that each token is assigned one tag, which is the
first high-level tag assigned to that token in the nested NER dataset.
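As a rough illustration of that rule (a sketch only; the per-token list of nested tags below is an assumed illustrative representation, not the official Wojood release format), flattening an annotation just keeps the first high-level tag per token:

# Sketch of the flat-from-nested rule described above: each token keeps only
# the first high-level tag of its nested annotation ("O" if it has none).
# The list-of-tags-per-token input is a made-up format for illustration.
def flat_tag(nested_tags):
    for tag in nested_tags:
        if tag != "O":
            return tag
    return "O"

# Example: a token annotated with an outer ORG and an inner PERS mention
print(flat_tag(["B-ORG", "B-PERS"]))  # -> "B-ORG"
print(flat_tag(["O"]))                # -> "O"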
*Subtask 2:* *Nested NER*
In this subtask, we provide the Wojood-Nested train (70%) and development
(10%) datasets. The final evaluation will be on the test set (20%).
*METRICS*
The evaluation metrics will include precision, recall, and F1-score. However,
our official metric will be the micro F1-score.
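The announcement does not prescribe a local scoring tool; as a hedged sketch (the seqeval library and the toy tag sequences are assumptions for illustration, not part of the official evaluation, which runs on CODALAB), entity-level micro F1 over BIO-tagged output can be checked like this:

# Sketch: entity-level precision, recall and micro F1 with seqeval
# (pip install seqeval). Toy gold/predicted sequences, for illustration only.
from seqeval.metrics import precision_score, recall_score, f1_score

y_true = [["B-PERS", "I-PERS", "O", "B-ORG"], ["O", "B-GPE", "O"]]
y_pred = [["B-PERS", "I-PERS", "O", "B-ORG"], ["O", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("micro F1: ", f1_score(y_true, y_pred))  # seqeval averages micro by default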
The evaluation of shared tasks will be hosted through CODALAB. Teams will
be provided with a CODALAB link for each shared task.
-*CODALAB link for NER Shared Task Subtask 1 (Flat NER)*
<https://codalab.lisn.upsaclay.fr/competitions/11594>
-*CODALAB link for NER Shared Task Subtask 2 (Nested NER)*
<https://dlnlp.ai/st/wojood/>
*BASELINES*
Two baseline models trained on Wojood (flat and nested) are provided:
*Nested NER baseline:* presented in this *article*
<https://aclanthology.org/2022.lrec-1.387/>, with code available on
*GitHub* <https://github.com/SinaLab/ArabicNER>. The model achieves a micro
F1-score of 0.9059 (note that this baseline does not handle nested entities
of the same type).
*Flat NER baseline:* the same code repository used for nested NER (*GitHub*
<https://github.com/SinaLab/ArabicNER>) can also be used to train the flat NER
task. Our flat NER baseline achieved a micro F1-score of 0.8785.
*GOOGLE COLAB NOTEBOOKS*
To allow you to experiment with the baseline, we authored four Google Colab
notebooks that demonstrate how to train and evaluate our baseline models.
[1] *Train Flat NER*
<https://gist.github.com/mohammedkhalilia/72c3261734d7715094089bdf4de74b4a>:
This notebook can be used to train our ArabicNER model on the flat NER task
using the sample Wojood data found in our repository.
[2] *Evaluate Flat NER*
<https://gist.github.com/mohammedkhalilia/c807eb1ccb15416b187c32a362001665>:
This notebook uses the trained model saved from the notebook above to
perform evaluation on an unseen dataset.
[3] *Train Nested NER*
<https://gist.github.com/mohammedkhalilia/a4d83d4e43682d1efcdf299d41beb3da>:
This notebook can be used to train our ArabicNER model on the nested NER
task using the sample Wojood data found in our repository.
[4] *Evaluate Nested NER*
<https://gist.github.com/mohammedkhalilia/9134510aa2684464f57de7934c97138b>:
This notebook uses the trained model saved from the notebook above to
perform evaluation on an unseen dataset.
*ORGANIZERS*
- Mustafa Jarrar, Birzeit University
- Muhammad Abdul-Mageed, University of British Columbia & MBZUAI
- Mohammed Khalilia, Birzeit University
- Bashar Talafha, University of British Columbia
- AbdelRahim Elmadany, University of British Columbia
- Nagham Hamad, Birzeit University
- Alaa Omer, Birzeit University
Dear all,
with apologies for cross-postings, I'm sharing again the 2023 call for papers of the Journal of Open Humanities Data. This time we've added an explicit mention of large language model prompts and prompt engineering strategies among the language resources of interest to the journal, plus a reminder that our Covid-19 special collection is still accepting submissions. We've also explicitly included Library Science and Media Studies in the scope.
Kind regards,
Barbara
Call for Papers for 2023
The Journal of Open Humanities Data (JOHD)<https://openhumanitiesdata.metajnl.com/> features peer-reviewed publications describing humanities research objects with high potential for reuse. These might include curated resources like (annotated) linguistic corpora, ontologies, and lexicons, as well as databases, maps, atlases, linked data objects, and other data sets created with qualitative, quantitative, or computational methods, including large language model prompts and prompt engineering strategies.
We are currently inviting submissions of two varieties:
1. Short data papers contain a concise description of a humanities research object with high reuse potential. These are short (1,000 words) highly structured narratives. A data paper does not replace a traditional research article, but rather complements it.
2. Full length research papers discuss and illustrate methods, challenges, and limitations in humanities research data creation, collection, management, access, processing, or analysis. These are intended to be longer narratives (3,000 - 5,000 words), which give authors the ability to contribute to a broader discussion regarding the creation of research objects or methods.
Humanities subjects of interest to the JOHD include, but are not limited to, Art History, Classics, History, Library Science, Linguistics, Literature, Media Studies, Modern Languages, Music and Musicology, Philosophy, and Religious Studies. Research that crosses one or more of these traditional disciplinary boundaries is highly encouraged. Authors are encouraged to publish their data in recommended repositories<https://openhumanitiesdata.metajnl.com/about/#repo>. More information about the submission process<https://openhumanitiesdata.metajnl.com/about/submissions>, editorial policies<https://openhumanitiesdata.metajnl.com/about/editorialpolicies/> and archiving<https://openhumanitiesdata.metajnl.com/about/> is available on the journal’s web pages.
Submissions are still open for our special collection, Humanities Data in the Time of COVID-19<https://openhumanitiesdata.metajnl.com/collections/humanities-data-in-the-t…>. This collection includes data papers that span various areas of enquiry about the COVID-19 pandemic through the lens of the Humanities. Data from this period have far-reaching and impactful reuse potential, so we encourage you to share your data by submitting to this growing collection.
JOHD provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.
We accept online submissions via our journal website. See Author Guidelines <https://openhumanitiesdata.metajnl.com/about/submissions/> for further information. Alternatively, please contact the editor<https://openhumanitiesdata.metajnl.com/contact/> if you are unsure as to whether your research is suitable for submission to the journal.
Authors remain the copyright holders and grant third parties the right to use, reproduce, and share the article according to the Creative Commons<http://creativecommons.org/licenses/by/4.0/> licence agreement.
Thank you! igraph seems to be more Linux/Debian friendly. There is a
"GNU R network analysis and visualization" package: r-cran-igraph
So far I have found:
https://cran.r-project.org/web/packages/igraph/
https://cran.r-project.org/web/packages/igraph/igraph.pdf
and a bunch of videos/tutorials, which I will have a better opinion
about after I watch them.
I will keep publicly posting my experiences to help those running
into the same kinds of problems.
$ time apt-cache search gephi
real 0m0.267s
user 0m0.255s
sys 0m0.012s
$ time apt-cache search igraph
karbon - vector graphics application for the Calligra Suite
cl-graph - simple graph data structure and algorithms
libdirgra-java - Java library providing a simple directed graph implementation
libdirgra-java-doc - Documentation for dirgra
fonts-bajaderka - Warsaw's sign painters styled font
fonts-gfs-neohellenic - modern Greek font family with matching Latin
fonts-gfs-solomos - ancient Greek oblique font
fonts-isabella - Isabella free TrueType font
fonts-sil-annapurna - smart font for languages using Devanagari script
fonts-uralic - Truetype fonts for Cyrillic-based Uralic languages
golang-github-guptarohit-asciigraph-dev - Make lightweight ASCII line
graph in CLI apps with no other dependencies
golang-github-jesseduffield-asciigraph-dev - Go package to make
lightweight ASCII line graph without dependencies
golang-github-steveyen-gtreap-dev - gtreap is an immutable treap
implementation in the Go Language
gpw - Trigraph Password Generator
libigraph-dev - library for creating and manipulating graphs - development files
libigraph-examples - library for creating and manipulating graphs -
example files
libigraph1 - library for creating and manipulating graphs
libjgrapht0.6-java - mathematical graph theory library for Java
libjgrapht0.8-java - mathematical graph theory library for Java
libtext-password-pronounceable-perl - Perl module to generate
pronounceable passwords
liwc - Tools for manipulating C source code
msort - utility for sorting records in complex ways
libnauty2 - library for graph automorphisms -- library package
libnauty2-dev - library for graph automorphisms -- development package
nauty - library for graph automorphisms -- interface and tools
nauty-doc - library for graph automorphisms -- user guide
otp - Generator for One Time Pads or Passwords
perl-tk - Perl module providing the Tk graphics library
python3-igraph - High performance graph data structures and algorithms
(Python 3)
r-cran-graphlayouts - GNU R additional layout algorithms for network
visualizations
r-cran-gwidgets - gWidgets API for Toolkit-Independent, Interactive GUIs
r-cran-igraph - GNU R network analysis and visualization
r-cran-propclust - Propensity Clustering and Decomposition
scalable-cyrfonts-tex - Scalable Cyrillic fonts for TeX
texlive-pictures - TeX Live: Graphics, pictures, diagrams
texlive-fonts-extra - TeX Live: Additional fonts
texlive-latex-extra - TeX Live: LaTeX additional packages
tran - transcribe between character scripts (alphabets)
vis - Modern, legacy free, simple yet efficient vim-like editor
real 0m0.303s
user 0m0.283s
sys 0m0.020s
$
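Since python3-igraph turned up in that search, here is a minimal sketch of the kind of view from my original question (weighted terms with what precedes and follows them), built as a directed word-adjacency graph. The input lines, sizes and file names are made-up illustrations, and plotting assumes the cairo backend (python3-cairo) is installed:

# Sketch: build a directed, weighted word-adjacency graph from text lines
# with python-igraph and plot it. Illustration only; the input is made up.
from collections import Counter
import igraph as ig

lines = [
    "archive org links to texts and metadata",
    "links to texts from archive org",
]

# Count how often word A is immediately followed by word B.
edges = Counter()
for line in lines:
    tokens = line.lower().split()
    edges.update(zip(tokens, tokens[1:]))

vertices = sorted({w for pair in edges for w in pair})
index = {w: i for i, w in enumerate(vertices)}

g = ig.Graph(directed=True)
g.add_vertices(len(vertices))
g.vs["name"] = vertices
g.vs["label"] = vertices
g.add_edges([(index[a], index[b]) for a, b in edges])
g.es["weight"] = [edges[(vertices[e.source], vertices[e.target])] for e in g.es]

# Node size reflects how connected a term is; edge width its adjacency count.
ig.plot(g, "word_graph.png", layout=g.layout("fr"),
        vertex_size=[20 + 5 * d for d in g.degree()],
        edge_width=g.es["weight"])

# The same graph can be exported for interactive exploration in Gephi:
g.write_graphml("word_graph.graphml")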
On 6/9/23, David Chartash <dchartas(a)ieee.org> wrote:
> Hi Albretch,
> I would start off with Gephi <https://gephi.org/> or try the R/C/Python...
> package igraph <https://igraph.org/>.
> Cheers,
>
> David
> ---
> Please forgive any spelling errors, sent from a poorly implemented software
> keyer
>
> On Fri, Jun 9, 2023, 02:40 Albretch Mueller via Corpora <
> corpora(a)list.elra.info> wrote:
>
>> I could imagine, as John Lennon used to sing, that "I am not the only
>> one" in need of such an application.
>>
>> At times you get ten of thousand lines which you would like to
>> quickly “visually parse” to gain a general sense of what you've got.
>> Ideally, you should be able to play with it to select the records you
>> need.
>>
>> Think for example, of the many links to texts you would get from
>> archive.org (which also includes some metadata) or *.pub (each site
>> using their own quirkiness)
>>
>> Based on some sort of GUI, you would see weighted terms (coloured or
>> not based on a user's preference) with all other terms preceding (as
>> some sort of tree-like structure confluent on that term) and following
>> it ( ... branching off of it).
>>
>> Which kind of applications people use to do such thing?
>>
>> lbrtchx
>> _______________________________________________
>> Corpora mailing list -- corpora(a)list.elra.info
>> https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
>> To unsubscribe send an email to corpora-leave(a)list.elra.info
>>
>
I could imagine, as John Lennon used to sing, that "I am not the only
one" in need of such an application.
At times you get tens of thousands of lines which you would like to
quickly “visually parse” to gain a general sense of what you've got.
Ideally, you should be able to play with it to select the records you
need.
Think for example, of the many links to texts you would get from
archive.org (which also includes some metadata) or *.pub (each site
using their own quirkiness)
Based on some sort of GUI, you would see weighted terms (coloured or
not based on a user's preference) with all other terms preceding (as
some sort of tree-like structure confluent on that term) and following
it ( ... branching off of it).
Which kinds of applications do people use to do such a thing?
lbrtchx