Seng & Mason on Artificial Intelligence & Evidence

Daniel Kiat Boon Seng (Director, Centre for Technology, Robotics, AI and the Law, Faculty of Law, National University of Singapore) & Stephen Mason (Digital Evidence Journal) have posted Artificial Intelligence and Evidence (Singapore Academy of Law Special Issue on Law and Technology, (2021) 33 SAcLJ 241–279) on SSRN. Here is the abstract:

The proliferation and use of artificial intelligence (“AI”) systems that are powered by machine learning (“ML”) to gather and process information means that admitting such evidence will raise issues not only about the admissibility of electronic evidence but also about the limitations inherent in ML. The treatment of the presumption of reliability of computer systems, including AI systems, as a conclusive legal presumption fails to recognise that software systems can produce subtle mistakes that are not obvious. This is compounded by the fact that the non-procedural nature of ML and AI systems amplifies the difficulty of proving or disproving the reliability of AI systems. The fact that ML and AI systems produce results from datasets that contain embedded human assertions also means that the application of the hearsay rule to AI output may be more apposite than previously thought. The authentication of electronic evidence should be subject to a clear procedure to be developed by the courts, especially in an era of “deepfakes” and other digitally manipulated data. The article concludes with a look at the issues in discovery and disclosure raised by voluminous electronic evidence, the use of predictive coding, and the need to conduct an “examination” of software code.