
Truth or Falsehood: Fact-Checking Techniques and Methods for Generative AI Outputs
Li-Chen Fu
Distinguished Professor, Department of Electrical Engineering and Department of Computer Science and Information Engineering, National Taiwan University
Abstract
Amid the rapid advancement of generative artificial intelligence, "hallucination" has emerged as a major challenge in everyday applications. *Hallucination* refers to content generated by AI models that is inconsistent with the facts or lacks a reliable basis. It frequently occurs in tasks such as question answering, summarization, and dialogue, and poses a significant threat to user trust.
This talk will begin by introducing the types and causes of hallucination problems, followed by a review of current mainstream countermeasures, including claim extraction, information retrieval, and fact verification. We will explain the mechanisms, suitable contexts, and limitations of each approach.
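The three-stage countermeasure pipeline mentioned above can be made concrete with a minimal sketch. This is a toy illustration only: real systems would use an LLM or trained models for each stage, whereas here naive string heuristics (sentence splitting, word-overlap retrieval, and an overlap-threshold verdict) stand in for them, and all function names and the tiny corpus are invented for the example.

```python
# Toy sketch of a claim extraction -> retrieval -> verification pipeline.
# Each stage is a naive heuristic standing in for a learned model.

def extract_claims(text: str) -> list[str]:
    """Split generated text into sentence-level claims (naive split on '.')."""
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim: str, corpus: list[str]) -> str:
    """Return the corpus passage with the largest word overlap with the claim."""
    claim_words = set(claim.lower().split())
    return max(corpus, key=lambda p: len(claim_words & set(p.lower().split())))

def verify(claim: str, evidence: str, threshold: float = 0.5) -> str:
    """Label a claim SUPPORTED if enough of its words appear in the evidence."""
    claim_words = set(claim.lower().split())
    overlap = len(claim_words & set(evidence.lower().split())) / len(claim_words)
    return "SUPPORTED" if overlap >= threshold else "NOT_SUPPORTED"

if __name__ == "__main__":
    corpus = [
        "National Taiwan University was founded in 1928",
        "The TAIHU project studies Taiwanese humanities archives",
    ]
    output = ("National Taiwan University was founded in 1928. "
              "The moon is made of cheese.")
    for claim in extract_claims(output):
        evidence = retrieve_evidence(claim, corpus)
        print(claim, "->", verify(claim, evidence))
```

In a realistic deployment, each stage is typically swapped for a stronger component (an LLM prompt for claim extraction, dense retrieval over an indexed corpus, and a natural-language-inference model for the verdict), while the overall control flow stays the same.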
Next, we will share the challenges encountered when applying fact-checking in the TAIHU project. In particular, we will discuss how to verify facts when interpretations of humanistic developments and historical events differ across archival sources.
Finally, we will present the implementation and design considerations of the fact-checking tools developed in this project, along with their future development directions. The goal is to establish a verification system within the overall framework that is both flexible and accurate.