MEDAL Summer School in Computational Modelling
The 2025 summer school will take place in Birmingham between the 23rd and 27th of June.
As part of the MEDAL Summer School in Computational Modelling, we’re excited to offer open-access plenary lectures streamed live on Zoom. Whether you're a student, researcher, or just curious about the latest in computational approaches to multilingual and multimodal language data, you're welcome to tune in and learn from leading experts in the field.
Event structure
🔹Keynote Lectures
Start your mornings with inspiring keynotes from leading researchers in computational linguistics and cognitive modelling.
🔹 Expert-Led Workshops
Choose from parallel workshop tracks, including:
- LLMs for Linguists
- LDL Models
- Construction Grammar & LLMs
- Computational Models for L1/L2
- Multimodal Interaction
- Language Evolution
🔹 Hands-On Labs
Apply what you’ve learned in practical lab sessions where you’ll work directly with computational tools and linguistic data.
🔹 Individual Consultations
Book a 1-on-1 session with instructors for tailored feedback and support on your own research questions or technical challenges.
🔹 Beginner-Friendly Setup Day
New to programming or modelling? We’ve got you covered! The first day is dedicated to helping beginners install software, set up tools, and get ready to participate fully.
🔹 Daily Format
Each day includes:
- A keynote lecture
- A choice between parallel workshop sessions
- Consultations and poster presentations
- Informal discussion and networking time
Birmingham info
The summer school will take place in the following teaching buildings:
• Alan Walters Building (R29)
• TLB (Teaching and Learning Building, R32)
• Law Building (R1)
• Aston Webb (R5)
A map of the campus:
For the lunch breaks, the campus has several lunch options. These can be found on the map below:
Workshops and social events
Detailed information about the workshops and social events will be published here.
Plenaries
Jack Grieve
Who is Satoshi Nakamoto? Using Computational Authorship Analysis to Help Resolve the Bitcoin Authorship Problem
Marco Marelli & Marco Ciapparelli
Understanding the unknown: how to make sense of unfamiliar words, in a computational psycholinguistic perspective
Gary Jones & Francesco Cabiddu
A CLASSIC explanation of early language acquisition
Raquel Fernandez & Esam Ghaleb
Co-Speech Gestures in Face-to-Face Dialogue: A Representation Learning Perspective
Simon Kirby
Cultural Evolution builds the Statistical Structure of Language: evidence from human and whale song datasets
Harald Baayen & Melanie Bell
How does language work? Challenges and opportunities in the age of deep learning
Florent Perek
Constructions and the company they keep: Studies of constructional change with distributional semantics
Harish Tayyar Madabushi
Every Time We Hire an LLM, the Reasoning Performance of the Linguists Goes Up
Workshops
You will explore the barriers that have limited the uptake of computational methods in cognitive linguistics, such as steep learning curves, reliance on Big Data, and the demand for exact instruction. Through a rich mix of the “history of ideas” and hands-on examples, the workshop will demonstrate how computational models can be tailored to address linguistic complexity, delivering empirically testable predictions and actionable insights. During the hands-on sessions you will learn to use computational models grounded in psychological research on learning. After an introduction to the principles of learning, we will demonstrate how error-correction models can be used for the analysis of linguistic data, drawing on some of our own work. Using findings from work on L1 and L2, we will teach you how annotated corpus data can be used in computational models, and how computational modelling can be used to generate hypotheses that can be tested in experimental or classroom settings. The workshop is tailored for participants with no programming experience and makes use of our cloud computing infrastructure. It includes hands-on work with existing data while also setting aside time for participants to apply what they have learned to their own data.
Level of participation: Beginners to Advanced
Software requirements: None; we will be using our cloud computing interface, but the training can also be run in Python
Given a text of disputed or questioned authorship -- as is common in historical, literary, political, and forensic contexts -- a range of methods have been developed for inferring information about the author of that text through computational linguistic analysis. In this workshop, we introduce computational methods for linguistic authorship analysis. On day one, we introduce the field of linguistic authorship analysis, defining different types of authorship problems (attribution, verification, and profiling) and discussing the differences between manual and computational approaches to resolving these tasks. We then introduce methods for geolinguistic profiling, which involves predicting the geographic background of an author, with a focus on the German language. On day two, we discuss the task of authorship attribution, which involves selecting the most likely author of an anonymous text from a set of candidate authors, using large language models. We discuss how to fine-tune authorial large language models for authorship analysis and demonstrate this methodology using a range of standard English-language benchmarking corpora.
Level: Postgraduate students and higher
Software: We will be presenting techniques implemented in R/R Studio (geolinguistic profiling) and Python (authorship attribution) and will provide notebooks that students are welcome to follow along with during our sessions; however, students will not be required to run code themselves.
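As a taste of the attribution task before the workshop, the core idea behind many classic (non-LLM) approaches can be sketched in a few lines of Python: represent each text by the relative frequencies of common function words (the tiny feature set below is purely illustrative; real studies use hundreds of features) and attribute a disputed text to the candidate author whose profile is most similar. This is a hedged sketch of the general technique, not the specific methodology taught in the workshop.

```python
from collections import Counter
import math

# Hypothetical, tiny function-word feature set for illustration only.
FUNCTION_WORDS = ["the", "of", "and", "a", "in", "to", "is", "that"]

def profile(text):
    """Relative frequencies of function words in a text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(x * y for x, y in zip(p, q))
    norm = math.sqrt(sum(x * x for x in p)) * math.sqrt(sum(y * y for y in q))
    return dot / norm if norm else 0.0

def attribute(disputed, candidates):
    """Return the candidate whose known writing is most similar in style."""
    d = profile(disputed)
    return max(candidates, key=lambda a: cosine(d, profile(candidates[a])))

candidates = {"A": "the cat sat on the mat and the dog sat in the sun",
              "B": "colorless green ideas sleep furiously without any stops"}
print(attribute("the bird sat on the fence and the fox ran in the field",
                candidates))
```

With realistic data, the same scheme is applied to long texts and larger feature sets; the LLM-based methods covered on day two replace the hand-picked features with learned representations.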
The workshop will present an overview of the (psycho)linguistic application of data-driven models trained on language usage data. The issue will be addressed from both a historical and a methodological perspective, with a discussion ranging from traditional distributional approaches to modern large language models. The talk will highlight the continuity between these different modelling traditions, as well as their viability as instruments for scientific investigation and their degree of cognitive plausibility. The talk will mostly focus on systems trained on text corpora, but the possibility of using other types of data sources (such as databases of annotated images) will also be explored. Examples of empirical work will be provided, relating the analysis of such models to psychological and linguistic questions.
Level: Suitable for everyone
Software: N/A
Large language models (LLMs) based on the transformer architecture are deep neural networks trained on textual corpora for general-purpose language understanding and generation. This workshop will introduce attendees to popular Python libraries devoted to interrogating LLMs on measures of language processing and representation. Specifically, attendees will learn to obtain representations of linguistic units at various levels of granularity (sub-word tokens, words, sentences) and to probe the impact of context on these representations (e.g., how the meaning of ambiguous words is modulated by the sentences in which they occur). Then, attendees will learn to extract LLMs’ predictability measures, which will be applied to obtain estimates of sentence grammaticality and semantic plausibility. Finally, the workshop will introduce the basic tools to probe the connection between language and vision with vision-language models. The workshop will cover foundational LLMs of different families (i.e., BERT encoder language models, GPT decoder language models, CLIP multimodal models). After the workshop, attendees will know how to use basic Python resources to start working independently with LLMs.
Level: Theoretically, the workshop assumes a very basic knowledge of large language models. Thus, while some theoretical concepts will be reviewed throughout the workshop, we highly recommend that attendees follow the theoretical lectures on LLMs. Practically, the workshop will not assume experience with Python; however, inexperienced attendees are encouraged to participate in the “Python for linguists” sessions in order to familiarize themselves with the coding environment.
Software: The workshop will be carried out on popular cloud-based coding platforms and thus will not require installing software on attendees' local machines. Since Google Colab will most likely be the cloud-based platform of choice, a Google account will be required.
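The arithmetic behind the predictability measures mentioned above can be previewed without any model: given token probabilities (the numbers below are entirely hypothetical; a real model such as GPT-2 would produce them), per-token surprisal is the negative log probability, and a sentence's log probability is the sum of its token log probabilities.

```python
import math

# Hypothetical next-token probabilities an LLM might assign at each position.
token_probs = {"the": 0.20, "cat": 0.05, "sat": 0.10}

def surprisal(p, base=2):
    """Surprisal in bits: low-probability tokens are more surprising."""
    return -math.log(p, base)

def sentence_log_prob(probs):
    """Log probability of a token sequence = sum of token log probabilities."""
    return sum(math.log(p) for p in probs)

for tok, p in token_probs.items():
    print(f"{tok}: {surprisal(p):.2f} bits")
print(f"log P(sentence) = {sentence_log_prob(token_probs.values()):.2f}")
```

In the workshop these probabilities come from actual model outputs; the sketch only shows how the raw numbers turn into grammaticality and plausibility estimates.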
The workshop introduces key computational approaches for testing different theories of child word learning using Python. By the end of the workshop, participants will have a foundation for integrating computational modelling into their research workflow, along with Jupyter notebooks covering all workshop activities (input preprocessing, implementation, output visualisation, and evaluation using real and simulated data). The workshop focuses on two influential theories in language acquisition: transitional probability, which explains how infants discover words in fluent speech using statistical cues, and chunking, which captures how children build a vocabulary through exposure to parental input. Participants will implement these theories using two major approaches: first, by testing a transitional probability model and aligning its output with infant behavioural data, and second, by running simulations on conversational corpus data to examine how manipulating a chunking-based learning system and its input affects vocabulary acquisition.
Level: This hands-on workshop is designed for students and researchers in psychology, linguistics, cognitive science, and related fields who are interested in applying computational methods to language research.
Software: Participants should bring their own laptop and can use Google Colab without installing any software. All that is needed is a modern web browser (e.g., Chrome, Safari), a Google account, and a stable internet connection. However, in case of internet dropouts, we recommend that participants install Python (version 3.8+) and Jupyter Notebook. A list of required packages, setup instructions, and further details about the workshop are available here.
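The transitional-probability idea can be sketched in a few lines of plain Python (a toy illustration with an invented syllable stream, not the workshop's actual notebooks): compute TP(a→b) = count(ab) / count(a) over the stream, then posit word boundaries wherever the TP dips.

```python
from collections import Counter

def transitional_probs(syllables):
    """TP(a -> b) = count(ab) / count(a), over a flat syllable stream."""
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def segment(syllables, tps, threshold=0.75):
    """Insert a word boundary wherever the TP drops below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream built from the "words" tupiro, golabu, and bidaku:
# within-word TPs are high, between-word TPs are low.
stream = ["tu", "pi", "ro", "go", "la", "bu", "bi", "da", "ku",
          "tu", "pi", "ro", "bi", "da", "ku", "go", "la", "bu"]
tps = transitional_probs(stream)
print(segment(stream, tps))
```

The workshop goes well beyond this: aligning model output with infant behavioural data and running chunking simulations on real conversational corpora.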
This workshop will introduce fundamental methods for analysing multimodal signals in conversation. On the first day, we will discuss how to process kinematic information (i.e., how to extract key body points) and automatically transcribe and align speech from dialogue video recordings. On the second day, we will build on this knowledge to develop methods that allow us to automatically detect gestures using speech and kinematic features. Each workshop day will consist of a short presentation of at most 45 minutes, followed by hands-on practical exercises and discussion.
Level: The workshop is suitable for anyone with an interest in multimodality. We expect students to have basic programming skills, preferably in Python.
Software: We plan to use the following software: VS Code, Python (e.g., Miniconda), MediaPipe, and WhisperX. You do not need to install this software beforehand: we will help you out during the workshop.
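As a rough preview of the kinematic side (a toy sketch with made-up numbers, not the MediaPipe/WhisperX pipeline used in the workshop), candidate gesture segments can be found by thresholding frame-to-frame keypoint displacement:

```python
# Toy wrist-keypoint coordinates per video frame; a real pipeline would take
# (x, y) keypoints extracted by MediaPipe rather than this invented track.
wrist_y = [0.50, 0.50, 0.51, 0.60, 0.72, 0.80, 0.81, 0.80, 0.70, 0.58, 0.55, 0.55]

def moving_frames(track, threshold=0.05):
    """Indices of frames whose frame-to-frame displacement exceeds threshold."""
    return [i for i in range(1, len(track))
            if abs(track[i] - track[i - 1]) > threshold]

def spans(frames):
    """Group consecutive frame indices into (start, end) candidate gestures."""
    out = []
    for f in frames:
        if out and f == out[-1][1] + 1:
            out[-1] = (out[-1][0], f)
        else:
            out.append((f, f))
    return out

print(spans(moving_frames(wrist_y)))
```

The workshop's detection methods combine such kinematic features with speech features, rather than relying on a single velocity threshold.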
In this workshop we will play with simulation models of the processes implicated in the emergence of language structure: individual learning, cultural transmission, and genetic evolution. I will provide simple recreations in Python of two key models in the literature: a model showing that compositional structure in language arises as a trade-off between simplicity and expressivity; and a model that shows that strong linguistic nativism cannot evolve. Both are based on very simple Bayesian models of individuals that are placed in simulated populations that interact and learn from one another. We will explore the parameter space of the models and talk about how they might be extended.
Level: Beginners
Software: Jupyter notebooks with matplotlib, scipy, and numpy installed.
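The simplicity/expressivity trade-off at the heart of these models can be previewed with a toy Bayesian calculation (the hypothesis names and numbers below are invented for illustration; they are not the workshop's actual models): the posterior over candidate languages multiplies a simplicity-favouring prior by a data-fit likelihood.

```python
# Toy Bayesian learner: the prior rewards simple grammars, the likelihood
# rewards grammars that express the observed utterances well, and the
# posterior trades the two off.
hypotheses = {
    # name: (prior ~ simplicity, likelihood of the observed data)
    "holistic":      (0.6, 0.001),
    "compositional": (0.3, 0.010),
    "degenerate":    (0.1, 0.0001),
}

def posterior(hyps):
    """P(h | data) proportional to P(h) * P(data | h), normalized."""
    unnorm = {h: p * lik for h, (p, lik) in hyps.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

post = posterior(hypotheses)
print(max(post, key=post.get))
```

In the full models, such learners are chained into simulated populations so that each generation learns from the output of the previous one; that iteration, not the single posterior update, is what produces compositional structure.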
This workshop will introduce the theory of discriminative learning, as applied to the mental lexicon, and its computational implementation, the Discriminative Lexicon Model (DLM). In discriminative learning, a learning system — which could be animal, human or computer — establishes associations between different input stimuli and corresponding outputs or behaviours. In language comprehension, the inputs are linguistic forms, and the outputs are semantic representations. In language production, the inputs are semantic representations, and the outputs are linguistic forms. We will cover how to get the DLM running using the Julia programming language, different options for representing form and meaning, and how to model comprehension and production using simple mappings between these representations. Unless you are already familiar with the DLM, this introductory workshop will provide essential preparation for the workshop offered by Harald Baayen later in the week.
Level: PhD students, postdocs; prior knowledge of R would be an advantage
I will present an error-driven computational model for the mental lexicon that provides a set of algorithms for probing visual and auditory comprehension, as well as speech production. The first half of the workshop will introduce basic concepts and key elements of the DLM theory. The second half of the workshop will provide participants with hands-on experience with the open-source implementation of the model, the JudiLing package for the Julia programming language. Participants will be guided through a Jupyter notebook that illustrates how the DLM can be used both as a linguistic model and as a cognitive model generating detailed predictions for lexical processing.
Level: PhD students, postdocs
Software: R, Julia
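The core of the DLM, mapping between form and meaning matrices with simple linear transformations, can be previewed in a few lines of numpy (a toy sketch with tiny hand-made matrices; the workshop itself uses the JudiLing package in Julia and real corpus-derived representations):

```python
import numpy as np

# Toy form (cue) matrix C: rows = words, columns = form cues (e.g. bigrams).
# Toy semantic matrix S: rows = words, columns = semantic dimensions.
C = np.array([[1, 1, 0, 0],   # word 0
              [1, 0, 1, 0],   # word 1
              [0, 0, 1, 1]])  # word 2
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])

# Comprehension mapping F solves C @ F ~ S; least squares via pseudoinverse.
F = np.linalg.pinv(C) @ S
S_hat = C @ F  # predicted semantics for each word form

# Comprehension accuracy: is each word's predicted vector closest to its
# own semantic vector?
for i in range(len(S)):
    dists = np.linalg.norm(S - S_hat[i], axis=1)
    print(f"word {i} -> nearest semantic vector: {dists.argmin()}")
```

Production runs the same machinery in the opposite direction, mapping semantic representations back onto form cues.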
Distributional semantics is an approach to meaning that seeks to derive semantic information from the contexts in which words are used, drawing on the intuition that words with similar meaning are used in similar ways. In a distributional semantic model (DSM), also called a word embedding model, the meaning of a word is represented by an array of numerical values derived from its co-occurrences in large corpora, turning the informal notion of meaning into a more precise quantification which is built from usage data and lends itself well to quantitative studies. This course will provide an introduction to distributional semantics and how it can be applied to linguistic research. I will first introduce the distributional semantic approach and describe various ways in which it has been computationally implemented. I will discuss some off-the-shelf DSMs as well as tools to create tailor-made DSMs from your own corpus data. In a hands-on session, I will then demonstrate various ways in which DSMs can be reliably used as a source of lexical semantic information, notably through measures of semantic distance and semantic spread, and clustering into semantic classes. Examples to be discussed may include research in syntactic productivity, language change, language development, and descriptive grammar. Prior knowledge of R is advised for the hands-on session.
Level: Any level, but at least some basic knowledge of R is advised.
Software: R
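The basic recipe behind count-based DSMs can be sketched in plain Python (a toy example with a tiny invented corpus; real models are trained on large corpora and typically apply association weighting and dimensionality reduction):

```python
from collections import Counter, defaultdict
import math

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse ate the cheese . the dog ate the bone").split()

def cooccurrence_vectors(tokens, window=2):
    """Count-based DSM: represent each word by the counts of the words
    occurring within `window` positions of it."""
    vecs = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                vecs[w][tokens[j]] += 1
    return vecs

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    shared = set(v1) & set(v2)
    dot = sum(v1[w] * v2[w] for w in shared)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so their vectors should be
# more similar than those of "cat" and "cheese".
print(cosine(vecs["cat"], vecs["dog"]), cosine(vecs["cat"], vecs["cheese"]))
```

The course's semantic distance and spread measures are built on exactly this kind of pairwise similarity, computed over far richer vectors.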
The data-driven foundations of Large Language Models (LLMs), which align with usage-based linguistic theories, make them a fascinating new frontier for research. Given their demonstrated access to both syntax and semantics, it is interesting to examine the constructional information they encode.
This two-day workshop provides a hands-on exploration of the powerful intersection between Construction Grammar and LLMs. We will investigate the linguistic structures these models learn and test their capacity for functional reasoning.
Day 1 (Thursday Afternoon): From Construction Grammar to Model Probing
The first day will provide an overview of Construction Grammar's core principles. We will then move into a hands-on session where you will learn to probe a large language model to identify sentences that are instances of a specific construction.
Although this is a Construction Grammar-specific task, we will build the methodology in a way that provides a generalizable framework you can adapt to probe virtually any linguistic phenomenon in a model.
Day 2 (Friday Afternoon): Build Your Own Experiment!
On the second day, we move from evaluating on a metalinguistic task (i.e., are these sentences instances of the same construction?) to evaluating the ability of LLMs to apply constructional information to a downstream task. We will explore this using Natural Language Inference (NLI). The session will then focus on helping you build your own custom task to evaluate a model's capabilities. This process will help highlight the kinds of reasoning LLMs can and cannot perform, offering insights into their successes and failures from a usage-based perspective.
Level: Beginners. No previous experience in either construction grammar or programming is required.
Software: No installation is required. The workshop will be carried out on Google Colab, so all you will need is a web browser and a Google account
Extra workshop
Natural Language Processing (NLP) plays a central role in corpus linguistics, text mining, machine learning, and related scientific applications. This workshop will introduce attendees to the basics of NLP using the Python programming language and the Natural Language Toolkit (NLTK) library. Specifically, attendees will learn how to explore and analyze textual data by computing word frequencies, extracting collocations, and generating concordances, providing a foundation for more advanced topics such as distributional semantics. No prior coding knowledge is required. The Python programming language and the cloud computing environment (e.g., Google Colaboratory) will be introduced along with examples motivated by NLP. After the workshop, attendees will know how to use basic Python resources to interrogate textual data.
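The kinds of analyses listed above can be previewed in plain Python with only the standard library (NLTK wraps the same ideas in conveniences such as `FreqDist` and its collocation finders):

```python
from collections import Counter

text = ("she sells sea shells on the sea shore the shells she sells "
        "are sea shells for sure")
tokens = text.split()

# Word frequencies (NLTK's FreqDist offers the same functionality).
freqs = Counter(tokens)
print(freqs.most_common(3))

# Bigram counts: a first step towards collocation extraction.
bigrams = Counter(zip(tokens, tokens[1:]))
print(bigrams.most_common(2))

# A minimal concordance: every occurrence of a word with its context.
def concordance(tokens, word, width=2):
    return [" ".join(tokens[max(0, i - width):i + width + 1])
            for i, t in enumerate(tokens) if t == word]

print(concordance(tokens, "shells"))
```

NLTK additionally scores collocations with association measures (e.g. pointwise mutual information) rather than raw counts, which the workshop covers in detail.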


Social Events
We will walk through the city and follow the footsteps of J.R.R. Tolkien to see the buildings he was inspired by.
Meeting point: Alan Walters main entrance
Time: 17:30
We’re going on a ~40-minute walk to visit a famous Tolkien landmark, so please wear comfortable shoes and bring your MEDAL water flask!
We'll be visiting landmarks 6 and 7 — you can look up the map and download it from the Birmingham City Council website.
We will end with a drink at Plough & Harrow Hotel!
Reception with finger buffet, group photo and MEDAL summer school recognition for returning participants. Starts at 17:00 at the Alan Walters Atrium.
A visit to the Cadbury Research Library, home of the University of Birmingham's extensive Special Collections of rare books, manuscripts, archives, photographs and associated artefacts.
Meeting point: Alan Walters main entrance
Time: 17:05
The library is located close to Alan Walters, but please make sure to arrive at the meeting point on time!
To find out more about the collections, please visit the library's website.
You can still join the canal walk later the same day. Our guide will take you to the meeting point in the city centre — we need to be there by 18:30.
We're going for a relaxing walk along the city's canals.
Meeting point: Main Library in the town centre
Time: 18:30
Feel free to make your own way there, but for those who would like to go with a guide from campus, the meeting point is at 17:50 at the main entrance of the Alan Walters Building.



Programme
This year's summer school starts with a set-up day on Monday. From Tuesday to Friday, each day begins with two keynotes. Afterwards, there are daily Python for linguists workshop classes, consultations, and poster presentations, followed by a lunch break. On the first day, an opening roundtable is hosted by Petar Milin and a MEDAL projects update is given. On the remaining days, parallel workshop sessions are organized.
Last but not least, every day closes with social activities!
The workshop titles in this programme are short forms of the official titles; for more detailed information about the content of the workshops, please check the Workshops and social events page. Please also note that exact locations will be announced later.
Please note the following information regarding course duration and structure:
- The following courses will run for four days:
• LLMs for Linguists (Marco Marelli and Marco Ciapparelli)
• LDL (Melanie Bell and Harald Baayen)
- The following course will run for three days:
• Construction Grammar (Florent Perek and Harish Tayyar Madabushi)
- The first two sessions of LLMs and LDL, as well as the first session of Construction Grammar, are introductory. Attending these is likely necessary before joining the more advanced final sessions.
- All remaining courses are two days long.
- The length and sequence of each workshop are indicated in brackets (e.g. 2/4 = second session in a four-day course).
Click on the sentence below to see the program as a downloadable pdf file, or scroll down for the browser version:
Click here to see the program as a downloadable pdf file!
Monday June 23rd
12:00
Registration (desk outside of the LT1)
Location:
Alan Walters G03 (LT1)
13:30-15:00
Opening Roundtable:
Computational Modelling in the Language Sciences – Quo Vadis?
Moderator: Petar Milin (University of Birmingham)
Panelists:
Dr Hyojin Park (Psychology)
Dr Hazel Wilkinson (English Literature)
Prof Mark Lee (Artificial Intelligence / Natural Language Processing)
Guest Panelists:
Prof Melanie Bell (Language and Linguistics; Anglia Ruskin University, Cambridge)
Dr Harish Tayyar Madabushi (Artificial Intelligence / Large Language Models; University of Bath)
Dr Marco Ciapparelli (Psychology; University of Milano-Bicocca)
Location:
Alan Walters G03 (LT1)
15:00-15:30
Coffee break
Location:
Alan Walters G03 (LT1)
15:30-16:30
Poster flash talks
Location:
Alan Walters G03 (LT1)
Tuesday June 24th
08:30-09:00
Registration (desk outside of the LT1)
Location:
Alan Walters G03 (LT1)
09:00-10:00
Plenary 1:
Every time we hire an LLM, the reasoning performance of the linguists goes up
Harish Tayyar Madabushi
Location:
Alan Walters G03 (LT1)
10:15 – 11:15
Plenary 2:
Understanding the unknown: How to make sense of unfamiliar words,
from a computational psycholinguistic perspective
Marco Marelli & Marco Ciapparelli
Location:
Alan Walters G03 (LT1)
11:15-11:30
Coffee break
Location:
Alan Walters Atrium
11:30-12:30
Python for linguists (optional workshop)
Marco Ciapparelli
OR
Time slot for mentoring consultations (optional)
Location:
Alan Walters 103
Alan Walters
11:30-14:00 (parallel with Python workshop, consultations, and lunch break)
Poster presentations
Location:
Alan Walters
12:30-14:00
Lunch break
14:00-15:30 (parallel sessions)
Session A: From distributional semantics to LLMs (1/4)
Marco Marelli
Session B: Discriminative Lexicon Model (1/4)
Melanie Bell
Session C: Building computational models of child word learning (1/2)
Gary Jones & Francesco Cabiddu
Session D: Methods for the automatic processing of multimodal interaction (1/2)
Raquel Fernandez & Esam Ghaleb
Session E: Computational simulations of error-driven learning in L1 and L2 (1/2)
Dagmar Divjak & Petar Milin
A: Alan Walters 103
B: Alan Walters 111
C: Alan Walters 112
D: Alan Walters G11
E: TLB 208
15:30-15:45
Coffee break
15:45-17:00 (parallel sessions continued)
Session A: From distributional semantics to LLMs (1/4)
Marco Marelli
Session B: Discriminative Lexicon Model (1/4)
Melanie Bell
Session C: Building computational models of child word learning (1/2)
Gary Jones & Francesco Cabiddu
Session D: Methods for the automatic processing of multimodal interaction (1/2)
Raquel Fernandez & Esam Ghaleb
Session E: Computational simulations of error-driven learning in L1 and L2 (1/2)
Dagmar Divjak & Petar Milin
A: Alan Walters 103
B: Alan Walters 111
C: Alan Walters 112
D: Alan Walters G11
E: TLB 208
17:30-20:00
Social: City Walk and Tolkien Trail
Alan Walters main entrance
Wednesday June 25th
08:30-09:00
Registration (desk outside of the LT1)
Location:
Alan Walters G03 (LT1)
09:00-10:00
Plenary 1:
A CLASSIC explanation of early language acquisition
Gary Jones & Francesco Cabiddu
Location:
Alan Walters G03 (LT1)
10:15 – 11:15
Plenary 2:
Co-speech gestures in face-to-face dialogue: A representation learning perspective
Raquel Fernandez & Esam Ghaleb
Location:
Alan Walters G03 (LT1)
11:15 – 11:30
Coffee break
Location:
Alan Walters Atrium
11:30-12:30
Python for linguists (optional workshop)
Marco Ciapparelli
OR
Time slot for mentoring consultations (optional)
Location:
Alan Walters 103
Alan Walters seating area
12:30-14:00
Lunch break
14:00-15:30 (parallel sessions)
Session A: From distributional semantics to LLMs (2/4)
Marco Marelli
Session B: Discriminative Lexicon Model (2/4)
Melanie Bell
Session C: Building computational models of child word learning (2/2)
Gary Jones & Francesco Cabiddu
Session D: Methods for the automatic processing of multimodal interaction (2/2)
Raquel Fernandez & Esam Ghaleb
Session E: Computational simulations of error-driven learning in L1 and L2 (2/2)
Dagmar Divjak & Petar Milin
Session F: Using distributional semantics in linguistic research (1/3)
Florent Perek
A: Alan Walters 103
B: Alan Walters 111
C: Alan Walters 112
D: TLB 208
E: TLB 209
F: TLB 218
15:30-15:45
Coffee break
15:45-17:00 (parallel sessions continued)
Session A: From distributional semantics to LLMs (2/4)
Marco Marelli
Session B: Discriminative Lexicon Model (2/4)
Melanie Bell
Session C: Building computational models of child word learning (2/2)
Gary Jones & Francesco Cabiddu
Session D: Methods for the automatic processing of multimodal interaction (2/2)
Raquel Fernandez & Esam Ghaleb
Session E: Computational simulations of error-driven learning in L1 and L2 (2/2)
Dagmar Divjak & Petar Milin
Session F: Using distributional semantics in linguistic research (1/3)
Florent Perek
A: Alan Walters 103
B: Alan Walters 111
C: Alan Walters 112
D: TLB 208
E: TLB 209
F: TLB 218
17:00-20:00
Reception
Alan Walters
Atrium
Thursday June 26th
08:30-09:00
Registration (desk in the Law building)
Location:
Law, 1st floor break area
09:00-10:00
Plenary 1:
How does language work? Challenges and opportunities in the age of deep learning
Harald Baayen & Melanie Bell
Location:
Law LT1 (303)
10:15 – 11:15
Plenary 2:
Constructions and the company they keep: Studies of constructional change with distributional semantics
Florent Perek
Location:
Law LT1 (303)
11:15 – 11:30
Coffee break
Location:
Law, 1st floor break area
11:30-12:30
Python for linguists (optional workshop)
Marco Ciapparelli
OR
Time slot for mentoring consultations (optional)
Location:
Law 203
Alan Walters seating area
12:30-14:00
Lunch break
14:00-15:30 (parallel sessions)
Session A: From distributional semantics to LLMs (3/4)
Marco Ciapparelli
Session B: Discriminative Lexicon Model (3/4)
Harald Baayen
Session C: Simulating the evolution of language (1/2)
Simon Kirby
Session D: Probing Large Language Models for Linguistic and Constructional Information: A Hands-On Approach (continuation of Using distributional semantics in linguistic research) (2/3)
Harish Tayyar Madabushi & Melissa Torgbi
Session E: Computational Authorship Analysis (1/2)
Jack Grieve, Dana Roemling & Weihang Huang
A: Law 111
B: Law 112
C: Law 203
D: Law 219
E: Law G16
15:30-15:45
Coffee break
Location:
Law, 1st floor break area
15:45-17:00 (parallel sessions continued)
Session A: From distributional semantics to LLMs (3/4)
Marco Ciapparelli
Session B: Discriminative Lexicon Model (3/4)
Harald Baayen
Session C: Simulating the evolution of language (1/2)
Simon Kirby
Session D: Probing Large Language Models for Linguistic and Constructional Information: A Hands-On Approach (continuation of Using distributional semantics in linguistic research) (2/3)
Harish Tayyar Madabushi & Melissa Torgbi
Session E: Computational Authorship Analysis (1/2)
Jack Grieve, Dana Roemling & Weihang Huang
A: Law 111
B: Law 112
C: Law 203
D: Law 219
E: Law G16
17:15-17:45
Social: visit to the Cadbury Research Library
Alan Walters main entrance
18:30-20:00
Social: canal walk
Start at the Main Library in the city centre
Friday June 27th
08:30-09:00
Registration (desk in the Law building)
Location:
Law, 1st floor break area
09:00-10:00
Plenary 1:
Cultural evolution builds the statistical structure of language: Evidence from human and whale song datasets
Simon Kirby
Location:
Aston Webb Dome LT
10:15 – 11:15
Plenary 2:
Who is Satoshi Nakamoto? Using computational authorship analysis to help resolve the bitcoin authorship problem
Jack Grieve
Location:
Aston Webb Dome LT
11:15 – 11:30
Closing words and coffee break
Location:
Law, 1st floor break area
11:30-12:30
Python for linguists (optional workshop)
Marco Ciapparelli
OR
Time slot for mentoring consultations (optional)
Location:
Law 203
Alan Walters seating area
12:30-14:00
Lunch break
14:00-15:30 (parallel sessions)
Session A: From distributional semantics to LLMs (4/4)
Marco Ciapparelli
Session B: Discriminative Lexicon Model (4/4)
Harald Baayen
Session C: Simulating the evolution of language (2/2)
Simon Kirby
Session D: Probing Large Language Models for Linguistic and Constructional Information: A Hands-On Approach (continuation of Using distributional semantics in linguistic research) (3/3)
Harish Tayyar Madabushi & Melissa Torgbi
Session E: Computational Authorship Analysis (2/2)
Jack Grieve, Dana Roemling & Weihang Huang
A: Law 111
B: Law 112
C: Law 203
D: Law 219
E: Law G16
15:30-15:45
Coffee break
Location:
Law, 1st floor break area
15:45-17:00 (parallel sessions continued)
Session A: From distributional semantics to LLMs (4/4)
Marco Ciapparelli
Session B: Discriminative Lexicon Model (4/4)
Harald Baayen
Session C: Simulating the evolution of language (2/2)
Simon Kirby
Session D: Probing Large Language Models for Linguistic and Constructional Information: A Hands-On Approach (continuation of Using distributional semantics in linguistic research) (3/3)
Harish Tayyar Madabushi & Melissa Torgbi
Session E: Computational Authorship Analysis (2/2)
Jack Grieve, Dana Roemling & Weihang Huang
A: Law 111
B: Law 112
C: Law 203
D: Law 219
E: Law G16