When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle
Sieker J, Zarrieß S (2023)
In: Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Belinkov Y, Hao S, Jumelet J, Kim N, McCarthy A, Mohebbi H, Association for Computational Linguistics (Eds); 180–198.
Conference Paper
| Published | English
Download
2023.blackboxnlp-1.14.pdf
3.68 MB
Author
Sieker, Judith;
Zarrieß, Sina
Editor
Belinkov, Yonatan;
Hao, Sophie;
Jumelet, Jaap;
Kim, Najoung;
McCarthy, Arya;
Mohebbi, Hosein
Corporate Editor
Association for Computational Linguistics
Abstract / Notes
The increasing interest in probing the linguistic capabilities of large language models (LLMs) has long since extended to semantics and pragmatics, including the phenomenon of presupposition. In this study, we investigate a phenomenon that has not yet been examined in this line of work: anti-presupposition and the principle that accounts for it, the Maximize Presupposition! principle (MP!). Through an experimental investigation using psycholinguistic data and four open-source BERT model variants, we explore how language models handle different anti-presuppositions and whether they apply the MP! principle in their predictions. Further, we examine whether fine-tuning with Natural Language Inference data affects adherence to the MP! principle. Our findings reveal that LLMs tend to replicate context-based n-grams rather than follow the MP! principle, with fine-tuning not enhancing their adherence. Notably, our results further indicate that LLMs have striking difficulty correctly predicting determiners, even in relatively simple linguistic contexts.
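The probing setup described in the abstract can be illustrated with a minimal sketch. Note that `mask_probs` below is a hypothetical stand-in for an actual masked-LM query (e.g., a BERT fill-mask call), and the probability values are purely illustrative, not results from the paper.

```python
# Sketch of an MP!-style probe: in a context that establishes uniqueness
# (a second mention of the same cat), a model obeying Maximize
# Presupposition! should prefer the presuppositionally stronger
# determiner "the" over "a" at the masked position.

def mask_probs(context: str) -> dict[str, float]:
    """Placeholder for a masked-LM query returning P(token | context).

    In the actual experiments this would be a fill-mask call to a BERT
    variant; the numbers here are illustrative only.
    """
    return {"the": 0.62, "a": 0.31, "this": 0.07}

def follows_mp(context: str, strong: str = "the", weak: str = "a") -> bool:
    """True if the stronger determiner outscores the weaker one."""
    probs = mask_probs(context)
    return probs.get(strong, 0.0) > probs.get(weak, 0.0)

context = "Mary adopted a cat last week. [MASK] cat sleeps on her desk."
print(follows_mp(context))  # True with the illustrative probabilities above
```

With real model scores, running this comparison over many minimal pairs yields the adherence rates the study reports.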
Year of Publication
2023
Title of the Conference Proceedings
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Page(s)
180–198
Copyright / Licenses
Conference
BlackboxNLP
Conference Location
Singapore
Conference Date
2023-12-07 – 2023-12-07
Page URI
https://pub.uni-bielefeld.de/record/2985222
Cite
Sieker J, Zarrieß S. When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle. In: Belinkov Y, Hao S, Jumelet J, et al., eds. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. 2023: 180–198.
Sieker, J., & Zarrieß, S. (2023). When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle. In Y. Belinkov, S. Hao, J. Jumelet, N. Kim, A. McCarthy, H. Mohebbi, & Association for Computational Linguistics (Eds.), Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP (p. 180–198).
Sieker, Judith, and Zarrieß, Sina. 2023. “When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle”. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, ed. Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi, and Association for Computational Linguistics, 180–198.
Sieker, J., and Zarrieß, S. (2023). “When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle” in Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Belinkov, Y., Hao, S., Jumelet, J., Kim, N., McCarthy, A., Mohebbi, H., and Association for Computational Linguistics eds. 180–198.
Sieker, J., & Zarrieß, S., 2023. When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle. In Y. Belinkov, et al., eds. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. pp. 180–198.
J. Sieker and S. Zarrieß, “When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle”, Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Y. Belinkov, et al., eds., 2023, pp.180–198.
Sieker, J., Zarrieß, S.: When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle. In: Belinkov, Y., Hao, S., Jumelet, J., Kim, N., McCarthy, A., Mohebbi, H., and Association for Computational Linguistics (eds.) Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. p. 180–198. (2023).
Sieker, Judith, and Zarrieß, Sina. “When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle”. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Ed. Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi, and Association for Computational Linguistics. 2023. 180–198.
All files available under the following license(s):
Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Full Text(s)
Access Level
Open Access
Last Uploaded
2023-12-13T13:42:02Z
MD5 Checksum
359b37e6c219b2503129d8640e5b170c