About the Course

Spatial Gesture Semantics

ESSLLI 2025 Advanced Course, Week 1, July 28–August 1, 5:00pm–6:30pm, GA 03/142

Visual means of communication such as manual gestures interact with speech meaning. At the same time, it is well known that gestures are sublinguistic in the sense that their linguistic contribution (possibly in contrast to their visual one) is located neither at the at-issue nor at the non-at-issue level. The course aims to unravel this puzzle. Firstly, the sublinguistic status of gestures is captured by a spatial semantics. Secondly, the course aims to explain why it is nonetheless possible to talk about gestures, for instance, as part of clarification interaction. Thirdly, the preceding explanation requires a semantic framework that departs from the standard Frege/Montague models and is based on perceptual classification (with some affinity to the philosophy of Nelson Goodman).

Lectures

The slides from the individual lectures can be obtained here:

  • Lecture 1

    Introduction: Visuo-spatial Level of Meaning

    A distinction is made between different types of gestures and modes of representation. A series of tests demonstrates the sublinguistic, visuo-spatial character of gestures. Euclidean, visuo-spatial extensions of model theory, following work by Joost Zwarts, are introduced.
    Slides of Lect. 1

  • Lecture 2

    Spatial Gesture Semantics

    Gesture representations are translated into vector sequences that are interpreted in visuo-spatial models. Modifying geometric operations and hand-shape specifications are introduced. The result is a compositional, truth-functional semantics of iconic gestures; a schematic vector denotation in the style of Zwarts is sketched after the lecture list.
    Slides of Lect. 2

  • Lecture 3

    Exemplification and Informational Evaluation

    Classifying gestures by means of linguistic labels is a heuristic process, which we treat as an interpretative heuristic within semantic theory. This heuristic draws on Goodman's philosophy of language and art, as well as on classifier-based approaches in computational semantics and cognitive science.
    Slides of Lect. 3

  • Lecture 4

    AI and Gesture Detection

    The computational stance of Lect. 3 is carried over to the state of the art: a basic introduction to the concepts of machine learning and multimodal AI is given, and current AI methods for gesture detection and gesture classification are described. A toy classification example follows the lecture list.
    Slides of Lect. 4

  • Lecture 5

    Frame-based Speech–Gesture Integration

    Once a gesture has been given a linguistic interpretation (e.g., by working semanticists), the question arises as to how this interpretation interacts with verbal meaning. It will be shown how standard methods from dynamic semantics can also be applied to the integration of spoken language and gesture.
    Slides of Lect. 5
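
To give a flavor of the vector-based interpretation from Lectures 1 and 2, a Zwarts-style denotation can be sketched as follows. This is a simplified illustration, not the course's official formalism: a locative expression denotes a set of vectors rooted in a reference object, and a tracing gesture is rendered as a sequence of such vectors.

    % Simplified, Zwarts-style vector denotation (illustration only)
    \[
      [\![\text{above}]\!](x) \;=\; \{\, v \in V_x \mid \mathrm{up}(v) \,\}
    \]
    % $V_x$: the set of vectors originating at the landmark $x$;
    % $\mathrm{up}(v)$: $v$ points in the upward vertical direction.
    % A tracing gesture is then interpreted as a vector sequence
    % $\langle v_1, \dotsc, v_n \rangle$ evaluated in the same model.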

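As a taste of the classification methods discussed in Lecture 4, the sketch below frames gesture classification from hand keypoints as supervised learning. It is a minimal toy example with synthetic data and placeholder labels, not the pipeline presented in the course.

    # Toy sketch: gesture classification from hand keypoints.
    # Data, labels, and model choice are illustrative placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # 200 synthetic "gestures": 21 hand keypoints with (x, y) coordinates,
    # flattened to a 42-dimensional feature vector each.
    X = rng.normal(size=(200, 21 * 2))
    # Hypothetical labels: 0 = pointing, 1 = tracing.
    y = rng.integers(0, 2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # A support-vector classifier stands in for the deep architectures
    # covered in the lecture.
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("toy accuracy:", clf.score(X_test, y_test))

With random data the accuracy will hover around chance; the point is the shape of the pipeline: keypoint features in, gesture labels out.
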
What to read

Essential references

  • Zwarts (1997). Vectors as Relative Positions: A Compositional Semantics of Modified PPs. Journal of Semantics 14(1). DOI: 10.1093/jos/14.1.57
  • Zwarts (2003). Vectors Across Spatial Domains: From Place to Size, Orientation, Shape, and Parts. In: Representing Direction in Language and Space. Preprint
  • Müller (2014). Gestural modes of representation as techniques of depiction. In: Body—Language—Communication: An International Handbook on Multimodality in Human Interaction.
  • Lücking, Henlein, Mehler (2024). Iconic Gesture Semantics. lingbuzz

Contact

Andy Lücking is a Privatdozent at Goethe University Frankfurt.

Alexander Henlein is a postdoctoral researcher at Goethe University Frankfurt.

If you have questions, get in touch: luecking@em.uni-frankfurt.de