Presenter: Nicholas Walker, Norsk Regnesentral
Abstract: Recent advances in dialogue systems have attracted increasing attention both in academic research and among the general public. Language models such as ChatGPT have demonstrated impressive capabilities, but many questions remain about their ability to reason over information and their tendency to “hallucinate” facts. In this talk, I will discuss outstanding issues with logical reasoning and hallucination in large language models and examine a graph-based approach to representing information for use with large language models. In this approach, entities in dialogue, such as people, places, or sentences in the conversation, are represented as nodes in a graph, with edges corresponding to the semantic relations between them. Representing information as a graph in this fashion enables the application of logical rules, which can be used to generate text derived from the graph that is then made available to a language model. This approach enables explicit representation of logical reasoning about the world, and experiments have shown promise for improving model output and reducing model hallucinations.
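The abstract's pipeline can be illustrated with a minimal sketch (not the speaker's actual implementation): entities stored as subject-relation-object triples, one hypothetical logical rule that derives new facts from the graph, and a linearization step that turns the triples into plain text for a language model's prompt. All names and the example rule are invented for illustration.

```python
from itertools import product

# Knowledge graph as (subject, relation, object) triples -- hypothetical data.
triples = {
    ("Alice", "lives_in", "Oslo"),
    ("Oslo", "located_in", "Norway"),
}

def apply_rules(triples):
    """One illustrative logical rule, applied to a fixed point:
    lives_in(x, y) AND located_in(y, z)  ->  lives_in(x, z)."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s1, "lives_in", o2)
            for (s1, r1, o1), (s2, r2, o2) in product(derived, derived)
            if r1 == "lives_in" and r2 == "located_in" and o1 == s2
            and (s1, "lives_in", o2) not in derived
        }
        if new:
            derived |= new
            changed = True
    return derived

def linearize(triples):
    """Turn triples into plain text that can be prepended to an LLM prompt."""
    return "\n".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in sorted(triples))

# The derived fact "Alice lives in Norway" now appears in the model's context.
context = linearize(apply_rules(triples))
```

The key idea is that the rule engine makes implicit world knowledge explicit before the language model ever sees it, so the model can condition on derived facts rather than having to infer (and possibly hallucinate) them.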
In compliance with GDPR consent requirements, presentations given in a Visual Intelligence context may be recorded with the consent of the speaker. All recordings are edited to remove the faces, names, and voices of other participants; questions and comments from the audience will therefore not appear in the recording. With the freely given consent of the speaker, the recorded presentation may be posted on the Visual Intelligence YouTube channel.
This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.