12 Publications
2025 | Conference Paper | PUB-ID: 3000275
Bunzeck, B., et al., 2025. Small Language Models Also Work With Small Vocabularies: Probing the Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas. In O. Rambow, et al., eds. Proceedings of the 31st International Conference on Computational Linguistics. Abu Dhabi, UAE: Association for Computational Linguistics, pp. 6039-6048.
2024 | Conference Paper | PUB-ID: 3001254
Bunzeck, B., et al., 2024. Graphemes vs. phonemes: battling it out in character-based language models. In M. Y. Hu, et al., eds. The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning. Miami, FL, USA: Association for Computational Linguistics, pp. 54-64.
2024 | Conference Paper | PUB-ID: 2993430
Bunzeck, B., & Zarrieß, S., 2024. Fifty shapes of BLiMP: syntactic learning curves in language models are not uniform, but sometimes unruly. In A. Qiu, et al., eds. Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning. Kerrville, TX: Association for Computational Linguistics, pp. 39-55.
2024 | Conference Paper | PUB-ID: 2994136
Bunzeck, B., & Zarrieß, S., 2024. The SlayQA benchmark of social reasoning: testing gender-inclusive generalization with neopronouns. In D. Hupkes, et al., eds. Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP. Miami, Florida, USA: Association for Computational Linguistics, pp. 42-53.
2023 | Data Publication | PUB-ID: 2993810
Wojcik, P., Bunzeck, B., & Zarrieß, S., 2023. Replication Data for: "The Wikipedia Republic of Literary Characters", Harvard Dataverse.
2023 | Conference Paper | Published | PUB-ID: 2985109
Bunzeck, B., & Zarrieß, S., 2023. GPT-wee: How Small Can a Small Language Model Really Get? In A. Warstadt, et al., eds. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Stroudsburg, PA: Association for Computational Linguistics, pp. 35-46.
2023 | Conference Paper | Published | PUB-ID: 2982902
Bunzeck, B., & Zarrieß, S., 2023. Entrenchment Matters: Investigating Positional and Constructional Sensitivity in Small and Large Language Models. In E. Breitholtz, et al., eds. Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD). Stroudsburg, PA: Association for Computational Linguistics, pp. 25-37.