Cartoons can be understood without language. That is, a
suitably arranged scene of simple objects, with no accompanying text, is
often enough to make us laugh – evidence that thinking (mental
activity) precedes language. This raises the question of how spatial humour
can be represented diagrammatically, without language, and what neural
computation could process such representations. In particular, we raise the
following questions: (1) How can
we diagrammatically formalise spatial humour? (2) How can these diagrammatic
formalisms be processed by neural networks? (3) How can this neural
computation deliver high-level schemata similar to those posited by the
script-opposition semantic theory of humour? The spatial configuration of
the scene can activate the necessary spatial and non-spatial background
knowledge. By
what neural associative mechanism or process of reasoning do we put this all
together to “get” the joke? During the seminar, we aimed to make
some headway towards establishing (1) exactly what sort of
scene-specific and common-sense knowledge is required to understand any given
cartoon, (2) what part of this knowledge could in principle be acquired
by existing machine learning (ML) techniques, and which could be acquired or
encoded through symbolic structures, (3) what activation process
acquires the rest of the knowledge required to interpret the humour, and
(4) whether there is a unified representation that could hold this
knowledge in a computer's working memory.
@article{miller2022can,
  author  = {Tristan Miller and Anthony Cohn and Tiansi Dong and Christian Hempelmann and Siba Mohsen and Julia Rayz},
  title   = {Can We Diagram the Understanding of Humour?},
  journal = {Dagstuhl Reports},
  volume  = {11},
  number  = {8},
  pages   = {33},
  year    = {2022},
  issn    = {2192-5283},
}