Publications

Below is a selection of the works I like most. For the full list of publications, please see the Publications menu on the TTLab page.

Andy Lücking and Jonathan Ginzburg. Leading voices: Dialogue semantics, cognitive science, and the polyphonic structure of multimodal interaction. Language and Cognition, Volume 15, Issue 1, January 2023, pp. 148–172. DOI: 10.1017/langcog.2022.30. (Preprint)

Approaches multimodal communication from a dialogue-semantic point of view, formulates the multimodal serialization hypothesis, and questions turn organization.

Andy Lücking and Jonathan Ginzburg. Referential transparency as the proper treatment of quantification. In: Semantics and Pragmatics 15, 4 (2022). DOI: 10.3765/sp.15.4. (Early access)

A dialogue- and gesture-friendly theory of quantification, with new perspectives on noun phrase negation and complement set anaphora.

Andy Lücking. Gesture. In: Head-Driven Phrase Structure Grammar: The handbook. Ed. by Stefan Müller, Anne Abeillé, Robert D. Borsley and Jean-Pierre Koenig. Empirically Oriented Theoretical Morphology and Syntax 9. Berlin: Language Science Press, 2021. Chap. 27, pp. 1201–1250. DOI: 10.5281/zenodo.5543318. URL: https://langsci-press.org/catalog/book/259.

Jonathan Ginzburg and Andy Lücking. I thought pointing is rude: A dialogue-semantic analysis of pointing at the addressee. In: Proceedings of Sinn und Bedeutung 25. Ed. by Patrick Grosz, Luisa Martí, Hazel Pearson, Yasutada Sudo and Sarah Zobel. SuB 25. Special Session: Gestures and Natural Language Semantics. University College London (Online), 2021, pp. 276–291. URL: https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/937.

Analysing discourse pointing.

Jonathan Ginzburg and Andy Lücking. On Laughter and Forgetting and Reconversing: A neurologically-inspired model of conversational context. In: Proceedings of the 24th Workshop on the Semantics and Pragmatics of Dialogue. SemDial/WatchDial. Brandeis University, Waltham, Massachusetts (Online), 2020. URL: http://semdial.org/anthology/Z20-Ginzburg_semdial_0008.pdf.

Models context in dialogue semantics as memory structures.

Andy Lücking. Witness-loaded and Witness-free Demonstratives. In: Atypical Demonstratives. Syntax, Semantics and Pragmatics. Ed. by Marco Coniglio, Andrew Murphy, Eva Schlachter and Tonjes Veenstra. Linguistische Arbeiten 568. Berlin and Boston: De Gruyter, 2018, pp. 255–284. (Preprint)

Analyses demonstration acts as search instructions.

Andy Lücking, Thies Pfeiffer and Hannes Rieser. Pointing and Reference Reconsidered. In: Journal of Pragmatics 77 (2015), pp. 56–79. DOI: 10.1016/j.pragma.2014.12.013.

There is no such thing as direct reference...

Andy Lücking. Ikonische Gesten. Grundzüge einer linguistischen Theorie. Berlin and Boston: De Gruyter, 2013.

Grounding iconic gesture meaning in semantic models by means of exemplification; connects gesture meaning to cognitive representations by means of psychophysics; sets up a phonetic-kinematic gesture interface in grammar.

Andy Lücking, Sebastian Ptock and Kirsten Bergmann. Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann. In: Gesture and Sign Language in Human-Computer Interaction and Embodied Communication. 9th International Gesture Workshop, GW 2011, Athens, Greece, May 2011, Revised Selected Papers. Ed. by Eleni Efthimiou, Georgios Kouroupetroglou and Stavroula-Evita Fotinea. Lecture Notes in Artificial Intelligence 7206. Berlin and Heidelberg: Springer, 2012, pp. 129–138. DOI: 10.1007/978-3-642-34182-3_12.

A procedure and a tool for assessing agreement on segmentations; Staccato is integrated into the video annotation tool ELAN.

Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp and Hannes Rieser. The Bielefeld Speech and Gesture Alignment Corpus (SaGA). In: Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality. LREC 2010. 7th International Conference on Language Resources and Evaluation. Malta, 2010, pp. 92–98. DOI: 10.13140/2.1.4216.1922.

Andy Lücking, Alexander Mehler and Peter Menke. Taking Fingerprints of Speech-and-Gesture Ensembles: Approaching Empirical Evidence of Intrapersonal Alignment in Multimodal Communication. In: Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue. LonDial'08. King's College London, 2008, pp. 157–164. URL: http://semdial.org/anthology/Z08-Lucking_semdial_0026.pdf.

Recurrent speech–gesture pairs undergo changes in form.

Workshops

Dublin 2004

A first session with the LKB system at Trinity College, Dublin (with Hannes Rieser).

Presentations

A few presentations that cover some of my work on gestures, quantification and alignment. They also cover material on the spatial semantics of pointing gestures and its repercussions for deferred reference, which has still not been properly published...

Pointing: From reference to attention and back. Invited talk given at the Bochum Language Colloquium, May 3, 2022 [PDF]

I thought pointing is rude: A dialogue-semantic analysis of pointing at the addressee. Talk given at Sinn und Bedeutung 25, Sept. 01–02, 2020, Workshop on Gestures and Natural Language Semantics [PDF]

Turning context into meaning. Invited talk given at the Centre for Linguistic Theory and Studies in Probability (CLASP), Gothenburg University, December 11th, 2018 [PDF]

From Neural Activation to Symbolic Alignment. Talk given at the International Joint Conference on Neural Networks, San Jose, California, July 31–August 5, 2011 [PDF]

On Grammar

Grammars, the interfaces between phonology, morphology, syntax and semantics, are the backbone of linguistic research. Accordingly, grammars play an important role in my work, too. This includes multimodal grammar extensions, as developed in my book Ikonische Gesten, as well as dialogue interfaces, as outlined in the handbook chapter Grammar in dialogue (available at the publisher's website). More recently, this grammar work has been extended with the plural and quantifier theory RTT. Since the grammar fragment that implements RTT was too extensive to be included in the main article, I have put it here.

Projects


GeMDiS

Virtual Reality Sustained Multimodal Distributional Semantics for Gestures in Dialogue (GeMDiS).

Due to a lack of suitable corpora, "multimodal linguistics" and dialogue theory cannot yet draw on the established distributional methods of corpus linguistics and computational semantics. The main reason for this is the difficulty of collecting multimodal data in an appropriate way and at an appropriate scale. Using the latest VR-based recording methods, the GeMDiS project aims to close this data gap and to investigate visual communication with machine-based methods, making innovative use of neural and active learning for small data along the systematic reference dimensions of associativity and contiguity of the features of visual and non-visual communicative signs.

GeMDiS is part of the DFG priority programme Visual Communication.