Hi,
One great resource is Chris Callison-Burch's class on Crowdsourcing and Human Computation:
http://crowdsourcing-class.org/
Sam Bowman's group has also published several informative papers in this space:
https://aclanthology.org/2021.findings-emnlp.421.pdf
https://aclanthology.org/2021.naacl-main.385.pdf
Good luck with the project!
Jonathan Kummerfeld
-- Senior Lecturer (i.e., research tenure-track Asst. Prof.), University of Sydney
e: j.k.kummerfeld@gmail.com w: www.jkk.name
On Fri, 14 Oct 2022 at 05:23, Hugh Paterson III via Corpora <corpora@list.elra.info> wrote:
I find sets of 10 translations large enough to start becoming boring. I have an interest in deontic expressions. What languages are you targeting?
- Hugh Paterson III
On Thu, Oct 13, 2022 at 6:17 AM Diana G Maynard via Corpora <corpora@list.elra.info> wrote:
Hi Robert,
One of my students recently published this paper, which looked into some of these issues: in particular, how to ensure annotator quality, how to evaluate it, how different kinds of annotation error (random vs. consistent) might impact the result, and how to figure out what level of IAA is good enough for a task. We had a good experience with Amazon Mechanical Turk, which is also discussed in the paper. For example, you can set a preliminary test that workers have to pass first, and there are ways to incentivise them to actually do a good job.
https://aclanthology.org/2022.lrec-1.128/
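As a concrete illustration (not taken from the paper, and with the data layout, helper names, and the 0.8 gold-accuracy threshold as illustrative assumptions), one might compute agreement on a doubly annotated subset and screen workers against a few embedded gold check items roughly like this:

from collections import defaultdict
from sklearn.metrics import cohen_kappa_score  # requires scikit-learn

def pairwise_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators on the items both have labelled."""
    return cohen_kappa_score(labels_a, labels_b)

def flag_annotators(annotations, gold, min_accuracy=0.8):
    """Flag annotators whose accuracy on the embedded gold items falls below a threshold.
    annotations: iterable of (item_id, annotator_id, label) triples.
    gold: dict mapping item_id -> trusted label for the hidden check items."""
    correct, seen = defaultdict(int), defaultdict(int)
    for item_id, annotator_id, label in annotations:
        if item_id in gold:
            seen[annotator_id] += 1
            correct[annotator_id] += int(label == gold[item_id])
    return [a for a in seen if correct[a] / seen[a] < min_accuracy]

# Example: two annotators rating the same ten sentences for sentiment.
ann1 = ["pos", "neg", "pos", "neu", "neg", "pos", "pos", "neg", "neu", "pos"]
ann2 = ["pos", "neg", "neu", "neu", "neg", "pos", "neg", "neg", "neu", "pos"]
print(pairwise_kappa(ann1, ann2))  # about 0.70 here; what counts as "good enough" is task-dependent

For more than two annotators labelling partially overlapping subsets, Krippendorff's alpha is often used instead of Cohen's kappa.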
Diana
On 12 Oct 2022, at 23:44, Robert Fuchs via Corpora <corpora@list.elra.info> wrote:
Dear all
I'm looking for a guide or advice for crowd-sourcing linguistic annotations via platforms such as Mechanical Turk. I'm thinking of rating tasks such as evaluating positive and negative sentiment in sentences, or annotating concordances from a corpus for a certain property (e.g. deontic vs. epistemic meaning in modal verbs).
Specifically, I'm wondering:
- How can I ensure that the annotations are of sufficient quality? I don't have a gold standard for all the data; after all, this is why I need the annotations. If I get all the data annotated by two or three independent annotators, I can ensure adequate quality, but then I might still get annotators who submit more or less random annotations (or start doing so after a while), or at least it would take me very long to find out who is doing so.
- How do I find out what remuneration is adequate?
- What is a good way to split up the data for annotation? Single annotation units or, say, 50 or 100 at a time? How do I deliver them effectively to the annotators?
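To make the batching and payment questions concrete, here is a rough sketch, assuming plain CSV batches and roughly 30 seconds per judgement (both are illustrative assumptions; the actual time per item should be measured in a pilot):

import csv

def write_batches(items, batch_size=50, prefix="batch"):
    """Split (item_id, sentence) pairs into CSV files of batch_size rows each."""
    for i in range(0, len(items), batch_size):
        path = f"{prefix}_{i // batch_size:03d}.csv"
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["item_id", "sentence"])
            writer.writerows(items[i:i + batch_size])

def pay_per_item(target_hourly_rate, seconds_per_item):
    """Per-item payment needed to reach a target hourly wage."""
    return target_hourly_rate * seconds_per_item / 3600

# Example: a 12.00/hour target at ~30 seconds per judgement works out to 0.10 per item,
# so a batch of 50 items would pay 5.00.
print(round(pay_per_item(12.00, 30), 2))

Whether each CSV row becomes one task or one row carries a whole batch of items depends on how the platform's template is set up, so it is worth timing a small pilot batch before committing to a batch size or pay rate.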
Many thanks and best wishes, Robert
-- Prof. Dr. Robert Fuchs (JP) | Department of English Language and
Literature/Institut für Anglistik und Amerikanistik | University of Hamburg | Überseering 35, 22297 Hamburg, Germany | Room 07076 | https://uni-hamburg.academia.edu/RobertFuchs | https://sites.google.com/view/rflinguistics/
Mailing list on varieties of English/World Englishes/ENL-ESL-EFL.
Subscribe here: https://groups.google.com/forum/#!forum/var-eng/join
Are you a non-native speaker of English? Please help us by taking this
short survey on when and how you use the English language: https://lamapoll.de/englishusageofnonnativespeakers-1/
-- All the best, -Hugh
Sent from my iPhone

_______________________________________________
Corpora mailing list -- corpora@list.elra.info
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to corpora-leave@list.elra.info