Can you force a single-layer transformer with attention to emit a token given an input sequence?
If you can, YOU are the right person to work with us! Drop me an e-mail. 
We are hiring up to two PostDocs with a salary that is competitive by Italian PostDoc standards (assegno di ricerca 4 fascia).
Here's the challenge:
UnboxingChallenge.pdf

Here's the supporting Excel file simulating a single-layer transformer with attention:
Challenge_1.xlsx
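If you want to play with the idea outside the spreadsheet, here is a minimal sketch of the kind of model the challenge refers to: a single-layer, single-head transformer with attention, written in NumPy with made-up dimensions and random weights. Every name below (E, W_q, W_k, W_v, W_o, the sizes, the input sequence) is an illustrative placeholder, not the notation or the parameters used in UnboxingChallenge.pdf or Challenge_1.xlsx.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Toy dimensions and random weights (placeholders, not the challenge's actual setup).
vocab_size, d_model, seq_len = 10, 8, 5
E   = rng.normal(size=(vocab_size, d_model))   # token embeddings
W_q = rng.normal(size=(d_model, d_model))      # query projection
W_k = rng.normal(size=(d_model, d_model))      # key projection
W_v = rng.normal(size=(d_model, d_model))      # value projection
W_o = rng.normal(size=(d_model, vocab_size))   # output / unembedding

# An arbitrary input sequence of token ids, embedded into vectors.
tokens = rng.integers(0, vocab_size, size=seq_len)
X = E[tokens]                                  # (seq_len, d_model)

# Single-head self-attention.
Q, K, V = X @ W_q, X @ W_k, X @ W_v
A = softmax(Q @ K.T / np.sqrt(d_model))        # attention weights
H = A @ V                                      # attended representations

# Logits for the emitted token, read off the last position.
logits = H[-1] @ W_o
print("emitted token:", int(np.argmax(logits)))

The challenge question, in these terms, is whether you can choose the input sequence so that the argmax at the last position is a token of your choosing.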

Research Group: Human-centric ART
Institution: University of Rome Tor Vergata
Location: Rome
Required: a PhD in CS or a competitive publication track record
Desired: Willingness to work in a team


To stay up-to-date:
X:  https://x.com/HumanCentricArt
LinkedIn: www.linkedin.com/in/fabio-massimo-zanzotto-b027831
Check what we do: Fabio Massimo Zanzotto - Google Scholar


Prof. Fabio Massimo Zanzotto
Dipartimento di Ingegneria dell'Impresa "Mario Lucertini"
University of Rome Tor Vergata