ChatGPT: If Scale is the Answer, What is Left to be Asked?

If you’ve been impressed by the capabilities of Siri or Alexa, you’ll be even more excited about the possibilities of Large Language Models (LLMs) like ChatGPT or GPT-4: these Artificial Intelligence (AI) models are already capable of passing complex tests such as the GRE, the SAT, or even bar exams! As part of our seminar series »Machine and Deep Learning«, Prof. Dr. Goran Glavaš from the Julius-Maximilians-Universität Würzburg will present a talk on the topic »ChatGPT: If Scale is the Answer, What is Left to be Asked?« The hybrid lecture will take place on 04.04.2023, 5:00 – 6:00 PM. You can take part in the event via MS Teams or visit us in the lecture hall of the Fraunhofer ITWM in Kaiserslautern.

This event is the starting point for the new edition of our seminar series »Machine and Deep Learning«. The series is intended to give interested participants an insight into this broad field of research and a deeper understanding of it. Everyone who wants to learn more about Deep Learning, Machine Learning, or AI in general is invited – whether student, PhD student, professor, or software developer. No registration is necessary if you would like to participate digitally.

Abstract of the Talk

Large Language Models (LLMs) such as ChatGPT, GPT-4, Bard, and PaLM have recently demonstrated an almost shocking level of language understanding and generation ability, passing a wide variety of complex tests from the GRE and SAT to bar examinations.

Even more impressively, the latest of these models have demonstrated an understanding of (and ability to manipulate) complex artifacts of other modalities, such as images and code. Although, as proprietary models, the details of their neural architectures and training objectives are not disclosed, all evidence suggests that the sheer scale of these models (GPT-4, for example, is speculated to have tens of trillions of parameters) and of the data on which they were trained is in fact the key factor behind their unprecedented abilities. Indeed, even in controlled experiments with smaller language models, certain abilities have been shown to emerge (hence dubbed »emerging abilities«) only at a certain scale.

In this talk, I will first cover the (known) technical details of LLMs and their training procedures. In the second part, I will focus on emerging abilities (at different scales) as well as cases in which LLMs still fail. Finally, I will conclude with a discussion of the implications that the observation that »scale is all that matters« has for future AI research, and NLP research in particular.

Bio of the Speaker Prof. Dr. Goran Glavaš

Goran Glavaš is a Full Professor for Natural Language Processing at the University of Würzburg, Faculty of Mathematics and Computer Science, and the Center for AI and Data Science (CAIDAS). He obtained his PhD at the Text Analysis and Knowledge Engineering Lab (TakeLab), University of Zagreb. His research is in the areas of natural language processing (NLP) and information retrieval (IR), with a focus on lexical and computational semantics, multilingual and cross-lingual NLP and IR, information extraction, and NLP applications (for the social sciences and humanities).

Detailed information about the event can be found on our website.
