Thursday 16 October 2008

Modelling Cognitive Behaviour

Last Friday (10 October 2008) I attended a small but very stimulating meeting organised by Bristol University on ‘Modelling Cognitive Behaviour’ – see http://patterns.enm.bris.ac.uk/modelling-cognitive-behaviour for the details. The line-up of speakers included Sue Blackmore (author of ‘The Meme Machine’) and Richard Gregory (author of ‘Eye and Brain’), and the meeting covered an impressive range of scientific topics and historical time – the latter arising from Owen Holland’s review of the post-war ‘Ratio Club’ and Richard Gregory’s reminiscences of the meeting on Artificial Intelligence that took place at the National Physical Laboratory exactly 50 years ago.

Although it was interesting to hear about the early work on cybernetics, and that ideas such as conditional probabilities, control systems and information theory had all been identified as key elements of intelligent machines, by far the most stunning talk was Tony Trewavas’s account of the intelligence of plants! If anyone had any doubt that all living organisms share many of the behaviours one might think unique to animals or human beings, then Tony soon put us right. Amongst much other behaviour, plants were shown to exhibit general information processing, adaptation and storage, active planning, communication, competitive behaviour and an ability to distinguish between self and others. In the final discussion, Tony delivered the memorable soundbite – “plants don't have brains, in some sense they are brains”!

Whilst it was generally agreed that plants may be said to have ‘intelligence’, it was felt that they could not be said to exhibit ‘cognition’. Plants certainly have complex self-regulated networks that communicate with the environment but, unlike many animals, they don’t seem to possess an internal representation of their external world. Also, as Richard Gregory observed: “plants have needs, but humans have needs and wants”. Richard’s remark followed an extensive discussion on the nature of ‘consciousness’ and the various explanations that had been put forward during the day, ranging from internal simulations of the self and the external world (Owen Holland), through the cuing of episodic memory when the next actions aren’t obvious during search (Joanna Bryson), to the view that it is all an illusion anyway (Sue Blackmore).

So what, if any, were the implications for spoken language processing? I think it’s interesting to appreciate that some of the earliest ideas in machine intelligence (such as conditional probabilities) have subsequently become central to the design and implementation of contemporary state-of-the-art speech technology systems. It confirmed my opinion that not all old ideas are bad ideas. Indeed, the vital dependencies that exist between a speaker/listener and their interlocutors/environment are still very poorly understood; no-one models spoken language as part of a complex control-feedback process. Even plant behaviour appears to be based on fundamental cybernetic principles that seem to underpin the behaviour of all living systems. Maybe the only way to make progress in our understanding of spoken language processing is to revisit some of those early ideas in cybernetics?
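To make the conditional-probability point concrete, here is a minimal illustrative sketch (my own, not anything presented at the meeting) of a bigram language model – the estimate of P(word | previous word) that sits at the heart of statistical speech recognisers. The toy corpus and the function name are purely hypothetical:

```python
from collections import defaultdict

# Toy corpus - purely illustrative, not data from the meeting.
corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams: how often each word follows each preceding word.
counts = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1

def p(word, prev):
    """Maximum-likelihood estimate of P(word | prev)."""
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

print(p("cat", "the"))  # 2 of the 3 words following "the" are "cat" -> 0.667
```

A real recogniser combines such language-model probabilities with acoustic likelihoods, but the underlying idea of scoring hypotheses by conditional probability is the same one the cyberneticians identified half a century ago.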