GPT-wee: How Small Can a Small Language Model Really Get?
Bunzeck B, Zarrieß S (2023)
In: Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Warstadt A, Mueller A, Choshen L, Wilcox E, Zhuang C, Ciro J, Mosquera R, Paranjabe B, Williams A, Linzen T, Cotterell R (Eds); Stroudsburg, PA: Association for Computational Linguistics: 35-46.
Conference Paper
| Published | English
Author
Editor
Warstadt, Alex;
Mueller, Aaron;
Choshen, Leshem;
Wilcox, Ethan;
Zhuang, Chengxu;
Ciro, Juan;
Mosquera, Rafael;
Paranjabe, Bhargavi;
Williams, Adina;
Linzen, Tal;
Cotterell, Ryan
Year of Publication
2023
Conference Proceedings Title
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Page(s)
35-46
Copyright / Licenses
Conference
BabyLM Challenge at the Conference on Computational Natural Language Learning
Conference Location
Singapore
Conference Date
2023-12-06 – 2023-12-07
eISBN
978-1-952148-02-6
Page URI
https://pub.uni-bielefeld.de/record/2985109
Cite
Bunzeck B, Zarrieß S. GPT-wee: How Small Can a Small Language Model Really Get? In: Warstadt A, Mueller A, Choshen L, et al., eds. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Stroudsburg, PA: Association for Computational Linguistics; 2023: 35-46.
Bunzeck, B., & Zarrieß, S. (2023). GPT-wee: How Small Can a Small Language Model Really Get? In A. Warstadt, A. Mueller, L. Choshen, E. Wilcox, C. Zhuang, J. Ciro, R. Mosquera, et al. (Eds.), Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (pp. 35-46). Stroudsburg, PA: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.conll-babylm.2
Bunzeck, Bastian, and Zarrieß, Sina. 2023. “GPT-wee: How Small Can a Small Language Model Really Get?”. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, ed. Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, et al., 35-46. Stroudsburg, PA: Association for Computational Linguistics.
Bunzeck, B., and Zarrieß, S. (2023). “GPT-wee: How Small Can a Small Language Model Really Get?” in Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, Warstadt, A., Mueller, A., Choshen, L., Wilcox, E., Zhuang, C., Ciro, J., Mosquera, R., Paranjabe, B., Williams, A., Linzen, T., et al. eds. (Stroudsburg, PA: Association for Computational Linguistics), 35-46.
Bunzeck, B., & Zarrieß, S., 2023. GPT-wee: How Small Can a Small Language Model Really Get? In A. Warstadt, et al., eds. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Stroudsburg, PA: Association for Computational Linguistics, pp. 35-46.
B. Bunzeck and S. Zarrieß, “GPT-wee: How Small Can a Small Language Model Really Get?”, Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, A. Warstadt, et al., eds., Stroudsburg, PA: Association for Computational Linguistics, 2023, pp. 35-46.
Bunzeck, B., Zarrieß, S.: GPT-wee: How Small Can a Small Language Model Really Get? In: Warstadt, A., Mueller, A., Choshen, L., Wilcox, E., Zhuang, C., Ciro, J., Mosquera, R., Paranjabe, B., Williams, A., Linzen, T., and Cotterell, R. (eds.) Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. pp. 35-46. Association for Computational Linguistics, Stroudsburg, PA (2023).
Bunzeck, Bastian, and Zarrieß, Sina. “GPT-wee: How Small Can a Small Language Model Really Get?”. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Ed. Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, and Ryan Cotterell. Stroudsburg, PA: Association for Computational Linguistics, 2023. 35-46.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
Full Text(s)
Name
2023.conll-babylm.2.pdf
173.45 KB
Access Level
Open Access
Last Uploaded
2024-03-10T12:24:47Z
MD5 Checksum
1d14821eb4f5401bc4249d03e93289e8