2010 lecture

The 2010 lecture was delivered by Professor Yorick Wilks of the Oxford Internet Institute at Balliol College.

Synopsis

'What will a companionable computational agent be like?'

The lecture begins by looking at the state of the art in modeling realistic conversation with computers over the last 40 years, and argues that there has been real progress, even though some systems of the late 1960s were remarkably good and are now largely forgotten. I then move on to ask what we would want in a conversational agent designed for a long-term relationship with a user, rather than for carrying out a single brief task, like buying a railway ticket. Such an agent I shall call “companionable”:

I shall distinguish several functions for such agents, but the feature they share will be that, in some definable sense, a computer Companion knows a great deal about its owner and can use that information. For this lecture, it will not be important what form, robotic or otherwise, a Companion has, and I shall not focus on developments in speech understanding and generation but simply assume the state of the art. The focus will be on what such a Companion should know, and how it can gain and use such knowledge through the understanding of conversations.

COMPANIONS is an EU project (2006-2010) that aimed to change the way we think about the relationships of people to computers and the Internet by developing a virtual conversational Companion that stays with the user for long periods of time, developing a relationship and 'knowing' its owner's preferences and wishes. The lecture describes the functionality and system modules of a Senior Companion (SC), one of two initial prototypes built in the first two years of the project.

The Senior Companion provides a multimodal interface for eliciting and retrieving personal information from an elderly user through conversation about their photographs; in the course of that conversation it elicits life memories, often prompted by discussion of the photographs themselves.

It is a further assumption that most life information will be stored on the Internet (as in the EPSRC Memories for Life project), and the SC is linked directly to photo inventories on Facebook, to gain initial information about people and relationships, as well as to Wikipedia, to enable it to respond about places mentioned in conversations about images.
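The project's own code is not public, but as a minimal sketch of the kind of Wikipedia linkage described above, the following Python snippet fetches a one-paragraph summary of a place from Wikipedia's public REST API. The function name and its use here are illustrative assumptions, not the SC's actual interface.

```python
# Minimal sketch: look up a place mentioned in conversation via the
# Wikipedia REST summary endpoint (a real, public API). The function
# name is invented for illustration.
import requests
from urllib.parse import quote

def place_summary(place_name: str) -> str | None:
    """Return a one-paragraph Wikipedia summary for a place, or None."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{quote(place_name)}"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None
    return resp.json().get("extract")

if __name__ == "__main__":
    # e.g. the user mentions a holiday photo taken in Prague
    print(place_summary("Prague"))
```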

The demonstration is primitive but plausible, and one of its key features is this ability to break out of the standard AI constraint of very limited, pre-programmed knowledge worlds into the wider, unbounded world of knowledge on the Internet, by capturing web knowledge in real time with Information Extraction methods.
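As a toy illustration of that style of extraction, the following sketch pulls relation triples from an utterance with a single hand-written pattern. Real Information Extraction systems use far richer grammars and learned models; the template below is purely for exposition.

```python
# Illustrative only: a toy pattern-based relation extractor.
import re

# Matches e.g. "Alice is the mother of Bob"
RELATION_PATTERN = re.compile(
    r"(?P<subj>[A-Z][a-z]+) is the (?P<rel>\w+) of (?P<obj>[A-Z][a-z]+)"
)

def extract_relations(utterance: str) -> list[tuple[str, str, str]]:
    """Return (subject, relation, object) triples found in the utterance."""
    return [
        (m.group("subj"), m.group("rel"), m.group("obj"))
        for m in RELATION_PATTERN.finditer(utterance)
    ]

print(extract_relations("Alice is the mother of Bob."))
# [('Alice', 'mother', 'Bob')]
```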

The overall aim of the SC, not yet achieved, is to produce a coherent life narrative for its user from these materials, although its short-term goals are to assist, amuse, entertain, and gain the trust of the user. The SC uses well-established Information Extraction technology, rather than conventional parsing, to get content from the speech input, and retains utterance content, extracted Internet information, and ontologies in an RDF formalism, over which it does primitive reasoning about people.
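To show what keeping such facts as RDF and reasoning primitively over them might look like, here is a small sketch using the rdflib Python library. The family vocabulary (ex:motherOf and so on) is invented for the example and is not the project's actual ontology.

```python
# Sketch: extracted facts as RDF triples, queried with a SPARQL
# property path as a stand-in for "primitive reasoning about people".
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/family#")
g = Graph()
g.bind("ex", EX)

# Triples as they might be asserted from extracted utterance content.
g.add((EX.Alice, EX.motherOf, EX.Bob))
g.add((EX.Bob, EX.fatherOf, EX.Carol))

# Chain two parent links to answer a grandparent question.
results = g.query("""
    PREFIX ex: <http://example.org/family#>
    SELECT ?gp ?gc WHERE {
        ?gp (ex:motherOf|ex:fatherOf)/(ex:motherOf|ex:fatherOf) ?gc .
    }
""")
for gp, gc in results:
    print(f"{gp} is a grandparent of {gc}")
```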

It has a dialogue manager virtual machine intended to capture mixed-initiative dialogue (where either partner can drive the conversation) between Companion and user, and which can serve as a basis for later replacement by learned components. The lecture discusses the prospects for machine learning in the conversational modeling field, and progress to date on incorporating notions of emotion into AI systems.
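To make the mixed-initiative idea concrete, here is a toy dialogue-manager loop in Python: the system works through an agenda of topics it wants to raise, but a user utterance that introduces a new topic pre-empts that agenda. Everything here (the class, the keyword-based topic detector) is a hypothetical simplification, far cruder than the virtual machine the lecture describes.

```python
# Toy mixed-initiative dialogue manager: system-driven agenda,
# pre-empted when the user introduces a topic of their own.
from collections import deque

class DialogueManager:
    def __init__(self, agenda):
        self.agenda = deque(agenda)  # system-initiative topics

    def next_prompt(self, user_utterance: str) -> str:
        topic = self.detect_topic(user_utterance)
        if topic:                    # user takes the initiative
            self.agenda.appendleft(topic)
        if not self.agenda:
            return "Tell me more."
        return f"Let's talk about {self.agenda.popleft()}."

    def detect_topic(self, utterance: str) -> str | None:
        # Stand-in for real topic detection over extracted content.
        for keyword in ("wedding", "holiday", "school"):
            if keyword in utterance.lower():
                return keyword
        return None

dm = DialogueManager(["this photograph", "your family"])
print(dm.next_prompt("We took that on our holiday in Spain."))
# -> "Let's talk about holiday."
```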
