{"id":2,"date":"2020-01-22T16:33:36","date_gmt":"2020-01-22T16:33:36","guid":{"rendered":"http:\/\/lig-alps.imag.fr\/?page_id=2"},"modified":"2026-03-24T09:59:13","modified_gmt":"2026-03-24T09:59:13","slug":"speakers","status":"publish","type":"page","link":"https:\/\/lig-alps.imag.fr\/index.php\/speakers\/","title":{"rendered":"Speakers"},"content":{"rendered":"\n<div class=\"wp-block-media-text alignwide\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"958\" height=\"958\" src=\"http:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2020\/02\/Isabelle.jpg\" alt=\"\" class=\"wp-image-121 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2020\/02\/Isabelle.jpg 958w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2020\/02\/Isabelle-150x150.jpg 150w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2020\/02\/Isabelle-300x300.jpg 300w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2020\/02\/Isabelle-768x768.jpg 768w\" sizes=\"auto, (max-width: 958px) 100vw, 958px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/isabelleaugenstein.github.io\">Isabelle Augenstein<\/a> (University of Copenhagen). <em>Understanding the Interplay between LLMs&#8217; Utilisation of Parametric and Contextual Knowledge<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary> \nLanguage Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scalability of LMs, however, poses significant challenges for understanding a model&#8217;s inner workings and further for updating or correcting this embedded knowledge without the significant cost of retraining. 
Moreover, when using these language models for knowledge-intensive language understanding tasks, LMs have to integrate relevant context, mitigating their inherent weaknesses, such as incomplete or outdated knowledge. Nevertheless, studies indicate that LMs often ignore the provided context, as it can conflict with the LM&#8217;s pre-existing memory learned during pre-training. Conflicting knowledge can also already be present in the LM&#8217;s parameters, termed intra-memory conflict. This underscores the importance of understanding the interplay between how a language model uses its parametric knowledge and the retrieved contextual knowledge.\n\nIn this talk, I will aim to shed light on this important issue by presenting our research on evaluating the knowledge present in LMs, diagnostic tests that can reveal knowledge conflicts, as well as on understanding the characteristics of successfully used contextual knowledge.\n<\/details>\n<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"684\" height=\"1024\" src=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536-684x1024.jpg\" alt=\"\" class=\"wp-image-941 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536-684x1024.jpg 684w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536-200x300.jpg 200w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536-768x1149.jpg 768w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536-1027x1536.jpg 1027w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/02\/DSC00536.jpg 1080w\" sizes=\"auto, (max-width: 684px) 100vw, 684px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/juliakreutzer.github.io\">Julia Kreutzer<\/a> (Cohere Labs). <em>Multilingual-First 
Language Modeling<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary> \nLarge Language Models (LLMs) have traditionally been dominated by English-centric approaches, often overlooking the rich diversity of languages around the globe. In this talk, we explore the transformative potential of multilinguality across every stage of LLM development, from tokenization to inference and evaluation. We demonstrate how multilingual models not only enhance performance across diverse languages but also drive innovation. We will discuss how to prioritize multilinguality from the ground up, and to design evaluation to measure equitable performance across languages. By shifting the focus beyond English, we highlight how research in non-English languages can lead to breakthroughs that benefit the entire LLM community. This talk underscores the importance of embracing linguistic and cultural diversity as a catalyst for advancing LLMs and fostering global inclusivity in AI.\n<\/details>\n<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"986\" height=\"1024\" src=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--986x1024.jpg\" alt=\"\" class=\"wp-image-928 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--986x1024.jpg 986w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--289x300.jpg 289w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--768x797.jpg 768w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--1479x1536.jpg 1479w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention--1568x1628.jpg 1568w, 
https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/Photo-Timothee-Lacroix-Intervention-.jpg 1674w\" sizes=\"auto, (max-width: 986px) 100vw, 986px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/www.linkedin.com\/in\/timothee-lacroix-59517977\/?originalSubdomain=fr\">Timoth\u00e9e Lacroix<\/a> (Mistral AI). <em>Large language models: experiences from the field<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary>\nTBA\n<\/details>\n<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"600\" src=\"http:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2025\/06\/Roger-k-moore.jpg\" alt=\"\" class=\"wp-image-872 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2025\/06\/Roger-k-moore.jpg 600w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2025\/06\/Roger-k-moore-300x300.jpg 300w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2025\/06\/Roger-k-moore-150x150.jpg 150w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/www.sheffield.ac.uk\/cs\/people\/academic\/roger-k-moore\">Roger K. Moore<\/a> (University of Sheffield). <em>Machines aren&#8217;t people &#8211; so are there limits on how we converse with them?<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary>\nHuman beings have been talking with machines since the appearance of the first speech-based command-and-control systems in the 1980s. Over the intervening 40+ years, the capabilities of the core speech technology components have steadily improved such that high-accuracy automatic speech recognition, high-quality text-to-speech synthesis and LLM-powered spoken language dialogue are now readily available off-the-shelf.  
However, machines aren&#8217;t people; they have no lived experience of their own, their understanding is superficial rather than communicative, they struggle with continuous multimodal interactive engagement, and they present confusing affordances to potential interlocutors.  This talk will discuss some of these challenges and ask whether they represent a limit on how we might converse with machines.\n<\/details>\n<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"495\" height=\"870\" src=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/carlos_manon.jpg\" alt=\"\" class=\"wp-image-936 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/carlos_manon.jpg 495w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2026\/01\/carlos_manon-171x300.jpg 171w\" sizes=\"auto, (max-width: 495px) 100vw, 495px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/pageperso.lis-lab.fr\/carlos.ramisch\/\">Carlos Ramisch<\/a> (Aix Marseille University) and <a href=\"https:\/\/fr.linkedin.com\/in\/manon-scholivet-b15aa2377\">Manon Scholivet<\/a> (LISN). <em>Zen Research &#8211; Methodology in Natural Language Processing<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary>\nExperiments and empirical evaluation are at the core of NLP research. However, experimental design is often neglected: we do things without really understanding, leading to shaky conclusions, stressful paper writing, and unfavourable reviews. Our course focuses on scientific methodology: key notions, recommended practices and avoidable traps. As methodology is often considered boring, we focus on concrete examples and propose practical exercises to make the course more fun. 
Thus, we collaboratively build an ideal of scientific methodology, encouraging us to move our habits towards this ideal. First, we reflect on the scientific approach and experiments in NLP, discuss research questions and hypotheses, provide criteria to define them, and highlight their importance. Second, once we find our research question, we have to justify its potential impact and plausibility with respect to the literature. Third, the data we use, collect and annotate plays a crucial role; to avoid regretting (hasty) experimental choices, we discuss the creation and use of datasets. Fourth, scientific communication requires experience, but some tools can help: we study the structure of documents, presentations, tables and figures. We also cover reproducibility and open science.\n<\/details>\n<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide is-stacked-on-mobile\" style=\"grid-template-columns:15% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"453\" height=\"575\" src=\"http:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2022\/05\/FY-2020.jpg\" alt=\"\" class=\"wp-image-455 size-full\" srcset=\"https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2022\/05\/FY-2020.jpg 453w, https:\/\/lig-alps.imag.fr\/wp-content\/uploads\/2022\/05\/FY-2020-236x300.jpg 236w\" sizes=\"auto, (max-width: 453px) 100vw, 453px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\"><a href=\"https:\/\/fyvo.github.io\/\">Fran\u00e7ois Yvon<\/a> (CNRS, ISIR). <em>Text Generation: Decoding with Constraints<\/em><\/p>\n<p>\n<details><summary>[Abstract]<\/summary>\nText generation, contextual or non-contextual, is ubiquitous in the\ncurrent LLM era, as it serves as the most basic block in multiple\napplication contexts, from question answering and dialog systems to text\nsummarization and machine translation, and many more. 
Generation is thus\nequally useful to compute deterministic and highly non-deterministic\nmappings with various levels of output constraints. Furthermore, text\ngeneration is also used as a sub-routine of more complex generation\nstrategies, aiming to produce syntactically well-formed (e.g. for code\ngeneration) or semantically consistent outputs, possibly through\nmultiple steps of generation (e.g., in chain-of-thought generation) or\nto collect diverse samples from the generating distribution. In this\nclass, I will discuss generation algorithms with a focus on the\nintegration of various types of constraints in the decoder outputs.\n\nTo cover this considerable diversity of uses, multiple text generation\nstrategies have been proposed, some less well-known than others. In this\ntalk I will review various families of generation algorithms, from the\nmost basic ones to the more sophisticated approaches, so as to document,\nas much as possible, the options that are available to text\ngeneration users. The main focus for this year will be on decoding with\nconstraints.\n<\/details>\n<\/p>\n\n\n\n<p><\/p>\n<\/div><\/div>\n\n\n\n<p><br>Speakers from previous editions are <a href=\"http:\/\/alps-2025.imag.fr\/index.php\/speakers\/index.html\" data-type=\"link\" data-id=\"https:\/\/lig-alps.imag.fr\/index.php\/speakers\/\">here (2025)<\/a>,&nbsp;<a href=\"http:\/\/alps-2024.imag.fr\/index.php\/speakers\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">here (2024)<\/a>,&nbsp;<a href=\"http:\/\/alps-2023.imag.fr\/index.php\/speakers\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">here (2023)<\/a>,&nbsp;<a href=\"http:\/\/alps-2022.imag.fr\/index.php\/speakers\/index.html\">here (2022)<\/a>&nbsp;and&nbsp;<a href=\"http:\/\/alps-2021.imag.fr\/index.php\/sample-page\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">here (2021)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Isabelle Augenstein (University of Copenhagen). 
Understanding the Interplay between LLMs&#8217; Utilisation of Parametric and Contextual Knowledge [Abstract] Language Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scalability of LMs, however, poses significant challenges for understanding a model&#8217;s inner workings and further for updating or correcting this embedded &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/lig-alps.imag.fr\/index.php\/speakers\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Speakers&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-2","page","type-page","status-publish","hentry","entry"],"_links":{"self":[{"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/pages\/2","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/comments?post=2"}],"version-history":[{"count":151,"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/pages\/2\/revisions"}],"predecessor-version":[{"id":961,"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/pages\/2\/revisions\/961"}],"wp:attachment":[{"href":"https:\/\/lig-alps.imag.fr\/index.php\/wp-json\/wp\/v2\/media?parent=2"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}