Julie Wall

A Conversational AI Approach to Detecting Deception and Tackling Insurance Fraud

Abstract

Speech and natural language technology have advanced at a rapid pace in recent years. This advance, a facet of the Industry 4.0 era, has been driven in part by GPU hardware and the deep learning frameworks that exploit it, and by the adoption of open-source software across the academic and commercial AI communities alike. The spirit of cooperation between researchers in academia and industry has resulted in claims of human parity for speech recognition models and in the emergence of numerous architectures based on decision trees, DNNs, CNNs, RNNs and Transformers, to mention but a few. These developments have markedly changed the way humans communicate with computers and are currently driving numerous commercial products that rely on speech, natural language processing and natural language understanding, loosely termed Conversational AI. This talk will present a real-world case study in the insurance domain that exploits speech and language to produce an explainable pipeline that identifies, and justifies, the behavioural elements of a fraudulent claim during a telephone report of an insured loss.

To detect the behavioural features of speech that signal deception, we have curated a robust set of acoustic and linguistic markers that potentially indicate deception in a conversation, and used statistical measures and machine learning to identify these markers in the right context. The pipeline is explainable: the decision-making element of the system provides a transparent justification for each of its outputs, overcoming the “black-box” challenge of traditional AI systems. This patent-pending technology, made possible through the support of funding from UK Research and Innovation (UKRI), is now part of a real-world commercial system called LexiQal. This talk will outline the LexiQal approach to explainable, data-driven deep learning for call analytics: an automated form of forensic statement analysis in which the context of spoken utterances must be interpreted accurately.
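To make the idea of transparent decision explainability concrete, the sketch below shows how a simple linear scorer over such markers can attach a per-marker contribution to every decision. The marker names, weights and feature values are hypothetical illustrations only and do not reflect LexiQal's actual feature set, model or thresholds.

```python
# Hypothetical sketch: an inspectable, marker-based risk scorer whose output
# comes with a per-marker breakdown (the "explanation"). Names and weights
# are illustrative, not LexiQal's real features or parameters.
from dataclasses import dataclass


@dataclass
class MarkerFeatures:
    hedging_terms: float       # e.g. rate of "maybe", "I think", "sort of"
    negation_rate: float       # negations per utterance
    pronoun_distancing: float  # drop in first-person pronoun use
    pause_ratio: float         # proportion of long pauses (acoustic cue)


# Weights a linear, and therefore directly inspectable, model might learn.
WEIGHTS = {
    "hedging_terms": 1.2,
    "negation_rate": 0.8,
    "pronoun_distancing": 1.5,
    "pause_ratio": 0.6,
}
BIAS = -2.0


def score_with_explanation(f: MarkerFeatures):
    """Return a risk score plus each marker's contribution to that score."""
    contributions = {name: WEIGHTS[name] * getattr(f, name) for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    explanation = [
        f"{name}: contributed {value:+.2f} to the risk score"
        for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return score, explanation


if __name__ == "__main__":
    features = MarkerFeatures(
        hedging_terms=0.9, negation_rate=0.4,
        pronoun_distancing=0.7, pause_ratio=0.5,
    )
    score, reasons = score_with_explanation(features)
    print(f"risk score: {score:.2f}")
    for line in reasons:
        print(" -", line)
```

Because every marker's contribution is exposed alongside the score, a reviewer can see which behavioural cues drove a flag rather than being handed an opaque verdict; this is the property the abstract refers to as overcoming the “black-box” challenge.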