4 Publications
- 2023 | Conference paper | Published | PUB-ID: 2985109
  Bunzeck, B., & Zarrieß, S. (2023). GPT-wee: How Small Can a Small Language Model Really Get? In A. Warstadt, A. Mueller, L. Choshen, E. Wilcox, C. Zhuang, J. Ciro, R. Mosquera, et al. (Eds.), Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (pp. 35-46). Stroudsburg, PA: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.conll-babylm.2
- 2023 | Journal article | Published | PUB-ID: 2980943
  Druskat, S., Krause, T., Lachenmaier, C., & Bunzeck, B. (2023). Hexatomic: An extensible, OS-independent platform for deep multi-layer linguistic annotation of corpora. Journal of Open Source Software, 8(86), 4825. https://doi.org/10.21105/joss.04825
- 2023 | Journal article | Published | PUB-ID: 2980942
  Wojcik, P., Bunzeck, B., & Zarrieß, S. (2023). The Wikipedia Republic of Literary Characters. Journal of Cultural Analytics, 8(2). https://doi.org/10.22148/001c.70251
- 2023 | Conference paper | Published | PUB-ID: 2982902
  Bunzeck, B., & Zarrieß, S. (2023). Entrenchment Matters: Investigating Positional and Constructional Sensitivity in Small and Large Language Models. In E. Breitholtz, S. Lappin, S. Loaiciga, N. Ilinykh, & S. Dobnik (Eds.), Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD) (pp. 25-37). Stroudsburg, PA: Association for Computational Linguistics.