Julie Wall

A Conversational AI Approach to Detecting Deception and Tackling Insurance Fraud

Biography

Dr Julie Wall is a Reader in Computer Science and Director of Impact and Innovation for the School of Architecture, Computing and Engineering, and leads the Intelligent Systems Research Group at the University of East London. Her current research focuses on developing machine learning and deep learning approaches for speech enhancement, natural language processing and natural language understanding, and she maintains collaborative R&D links with industry. This work has led to the award of two Innovate UK grants with a combined value of £2,273,177. Since starting her PhD in 2006, Julie has pursued the overarching research theme of designing intelligent systems for processing and modelling temporal data, primarily by investigating the architectures and learning algorithms of neural networks across a variety of data sources.

https://www.uel.ac.uk/research/intelligent-systems

Abstract

Speech and natural language technology have advanced at a rapid pace in recent years. This advance, a facet of the Industry 4.0 era, has been driven in part by GPU hardware and the deep learning frameworks that exploit it, and by the adoption of open-source software by the academic and commercial AI communities alike. The spirit of cooperation among researchers in academia and industry has resulted in claims of human parity in speech recognition models and in the emergence of numerous architectures based on decision trees, DNNs, CNNs, RNNs and Transformers, to mention but a few. These developments have markedly changed the way humans communicate with computers and currently drive numerous commercial products that rely on speech, natural language processing and natural language understanding, loosely termed Conversational AI. This talk will present a real-world case study in the insurance domain that exploits speech and language to produce an explainable pipeline, one that identifies and justifies the behavioural elements of a fraudulent claim during a telephone report of an insured loss.

To detect the behavioural features of speech for deception detection, we have curated a robust set of acoustic and linguistic markers that potentially indicate deception in a conversation. Statistical measures and machine learning are used to identify these linguistic markers in the right context. Because the pipeline is explainable, the output of its decision-making element comes with a transparent justification, overcoming the "black-box" challenge of traditional AI systems. This patent-pending technology, made possible through funding from UK Research and Innovation (UKRI), is now part of a real-world commercial system called LexiQal. The talk will outline the LexiQal approach to the need for an efficient, transparent, data-driven deep learning approach (explainable AI) to call analytics: an automated form of forensic statement analysis in which the context of spoken utterances must be interpreted accurately.
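To give a flavour of the idea, the sketch below scores a transcribed utterance against hand-picked lists of linguistic deception markers and ties every point of the score back to the specific words that produced it. The marker categories, word lists and weights are hypothetical illustrations of the general technique; they are not the curated set, the statistical measures or the deep learning models used in LexiQal.

```python
# Toy illustration of marker-based scoring with an explainable output.
# All marker lists and weights below are assumed for demonstration only.

from collections import Counter
import re

# Hypothetical marker categories loosely inspired by the deception literature
MARKERS = {
    "hedging": {"maybe", "possibly", "perhaps", "apparently"},
    "distancing": {"that", "those", "someone", "somebody"},
    "negation": {"never", "didn't", "wasn't", "nothing"},
}
WEIGHTS = {"hedging": 1.0, "distancing": 0.5, "negation": 0.8}  # assumed weights


def score_utterance(text: str) -> tuple[float, list[str]]:
    """Return a deception score and the evidence that produced it."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    score, evidence = 0.0, []
    for category, words in MARKERS.items():
        hits = [w for w in words if counts[w]]
        if hits:
            contribution = WEIGHTS[category] * sum(counts[w] for w in hits)
            score += contribution
            # The evidence list is what makes the decision explainable:
            # each contribution is traced back to the words that caused it.
            evidence.append(f"{category}: {', '.join(hits)} (+{contribution:.1f})")
    return score, evidence


if __name__ == "__main__":
    utterance = "Maybe someone took it, I never saw that bag again."
    score, evidence = score_utterance(utterance)
    print(f"score={score:.1f}")
    for line in evidence:
        print(" -", line)
```

In a real system the weighting would be learned from data and the markers interpreted in context, but the principle is the same: the decision is reported together with the observable evidence behind it, rather than as an opaque score.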