Review || Global Chair Lecture 76: Artificial Intelligence and Justice by Benjamin Liebman
Date: 2024-11-14
On March 26, 2024, Professor Benjamin Liebman, Global Chair Scholar at Peking University Law School, Robert L. Lieff Professor of Law at Columbia University and Director of its Center for Chinese Legal Studies, gave a lecture on “Artificial Intelligence and Justice”. The lecture was moderated by Dai Xin, associate professor and associate dean of Peking University Law School. Yang Xiaolei, deputy secretary of the Law School’s Party committee and deputy director of the Institute of Artificial Intelligence, associate professor Yang Ming, associate professor Hu Ling, and assistant professor Wu Yuhao served as commentators. More than one hundred teachers and students from inside and outside the university attended, and the response was enthusiastic.
This article presents the core points of the lecture in the form of a text transcript.
Benjamin Liebman:
I. Introduction
My interest in artificial intelligence stems from a short story. I had written an article on the use of AI in Chinese courts, and as a result I was invited last year by a New York State appellate court to share what I had learned about Chinese courts. I have been a professor in the U.S. for more than twenty years, and this was the first time I had heard of a U.S. court wanting to learn from the experience of Chinese courts. So in July of last year we met in Brooklyn, New York, where I learned that this is the busiest court in the United States, with each judge handling more than 800 cases per year, and I became more interested in the potential for AI technology to reduce the burden on judges.
A lot of people in the U.S. legal community have been talking about AI lately, and the general consensus is that AI will bring significant changes to legal scholarship and practice, but so far not much has actually changed. In preparing my slides, I asked GPT a couple of questions. I began by asking, “Will you replace lawyers?” GPT answered that it “will not replace lawyers completely, but it can assist lawyers in their work,” a response very much like that of a trained lawyer. My second question was, “Do we need law schools when we have AI?” GPT’s answer was interesting: it said it “can provide general guidance and explanations, but because it lacks practical experience it is not a complete substitute, so it is advisable to consult a licensed attorney.” This, too, sounded just like an attorney; GPT knows how attorneys avoid liability. Later, I asked GPT for a speech outline on “How Artificial Intelligence is Changing the U.S. Legal System,” and it gave me an informative but rough outline, including a tip to thank the audience at the end, since the speech was to be given in China.
I am not as smart as GPT and cannot cover that many issues in one talk. I will therefore focus on three issues: emerging legal issues brought by AI, AI and legal decision-making, and AI and legal practice.
II. Emerging Legal Issues Brought by Artificial Intelligence
Intellectual property is the main field of emerging legal issues brought by artificial intelligence. The New Yorker once ran an article under the headline “Is A.I. the Death of I.P.?” AI technology has brought many new cases to the intellectual property arena, such as New York Times v. OpenAI. The New York Times argued that OpenAI used its articles for training, while OpenAI argued this was fair use; the New York Times also asserted that many of GPT’s answers resembled its own reporting and that GPT could compete with it in the future. There have been other similar cases, such as writers arguing that their novels and other literary works were used to train AI. In one pending case an artist is suing an AI company; because the works used to train the AI in that case are explicitly copyrighted, the facts differ from many other cases.
Then there is tort law. Many people today believe that AI technology has had a significant impact on tort law; for example, there is ongoing discussion about whether an algorithm counts as a “product,” which bears on the applicability of product liability law. AI has also affected the framework of tort law itself: some argue that the existing framework can be used to regulate AI, while others argue that new laws should be designed for it. In practice there have been many cases involving autonomous driving, such as the Tesla Autopilot cases, where courts still tend to treat liability as a question of fact and therefore leave it to the jury.
In addition there are criminal law and constitutional law, where the main concern is due process. For example, when facial recognition algorithms were debated in the United States, it was found that the algorithms misidentified Black and Asian faces 10 to 100 times more often than white faces, and many argued as a result that arrests based on facial recognition do not satisfy due process. Further, more than 60 jurisdictions in the U.S. currently use algorithms for risk assessment, a practice that raises potential due process and racial discrimination problems. One response to such criticisms is that judges themselves are just as biased as AI, so the question is not whether AI is biased, but whether its bias is greater or smaller than a judge’s.
There are also regulatory measures. Compared with the EU, technological development in the US has outpaced the development of regulation. As things stand, President Biden has issued an executive order requiring AI companies to share information with the government and seeking to enhance safety through standards and tools, and a blueprint for AI legislation has been put forward in Congress. But the status quo remains a lot of discussion and little action, and there is no actual federal legislation addressing AI at this time. There has been some action in the states: 18 states have now passed legislation, most of it requiring the creation of commissions to advise on AI development. In January alone, 211 AI-related bills were introduced in the states, half of them focused on deepfakes. California and New York have seen the most regulatory activity, and AI legislation in large states is likely to be followed by essentially all large companies, much as with environmental legislation. Florida passed interesting legislation yesterday that prohibits minors under the age of 14 from using social media even with parental consent; for minors aged 14 and 15, social media companies must delete the accounts of those whose parents do not consent to their use.
III. Artificial Intelligence and Legal Decision Making
This question focuses on whether courts will use AI to make adjudication decisions, which I personally think is unlikely. Compared with countries like China, the pace of AI adoption in American courts is slower. U.S. courts are very concerned with procedural issues such as due process, and it is therefore difficult to use technologies like AI to change their procedures.
When I talked to the judges in Brooklyn, I found they were fairly unanimous: AI might have some room to develop in producing transcripts or handling evidence, but they were less optimistic about using AI directly for adjudication. I think U.S. courts should learn from China here. In New York, for example, a single page of trial transcript costs $7; the price is so high because professional court reporters are required, and labor unions bargain on their behalf. If U.S. courts followed Chinese courts in using AI to produce transcripts, the cost would be much lower.
Overall, U.S. courts make little use of AI; the executive branch may be more willing to use it. The SEC, for example, has begun using AI to identify potential violation risks, and state agencies are trying to use AI to identify enforcement priorities in areas such as health law.
IV. Artificial Intelligence and the Practice of Law
This issue has attracted interest mainly because of a number of striking individual cases. For example, a young lawyer used GPT to write an appellate brief for his partner, and the partner filed the brief with the court; it turned out that many of the precedents cited in the GPT-written brief did not exist, and the partner was fined $5,000. Of course, courts have taken approaches other than fines: in one case in the Southern District of New York, the judge found no bad faith on the part of the attorney and therefore imposed no penalty. Some jurisdictions already have clear rules, either requiring attorneys to certify that AI was not used in submitted materials, or requiring attorneys who used AI to have verified the results afterwards, the goal being to ensure that every cited precedent is genuine. There was also a case in which the court awarded the defendant the plaintiff’s attorney’s fees, and the attorney justified the claimed amount by citing GPT, which the judge angrily found completely unpersuasive. All of these cases show that AI plays a relatively limited role in actual legal research and, as I said earlier, may be used in the future primarily to reduce the burden on the judicial process.
In discussing the impact of AI on the law, many Americans believe AI will reduce the time needed for legal research and may make the legal system fairer and more just. In the U.S., poor people may not be able to afford good lawyers, and lawyers do not want to spend much time on poor clients’ cases; wealthy corporations, by contrast, can hire better lawyers and receive better legal services. If AI can simplify legal research, the poor are more likely to have access to the law. Of course, some believe AI could instead exacerbate legal inequality, because the rich can use good AI while the poor can only use poor AI. Another area that could be affected by AI is lawyers’ due diligence work. This work is often performed by junior lawyers, and leaving it to AI would free lawyers to handle more difficult legal research, but it could also mean fewer jobs for junior lawyers.
Commentary Session
Yang Xiaolei:
I understand the issue of AI and law to have two dimensions. The first is the governance of AI: how, from a legal perspective, to set rules for AI so that it remains consistent with basic human value judgments and needs. The other dimension is empowering the law with AI technology, making it more efficient and effective in serving human values and needs. Many people today are exploring, and worrying about, whether AI can replace certain legal professions. My own view is that some replacement will certainly happen, but complete replacement seems unlikely at present. Fundamentally, the core of AI technology is the imitation of human intelligence. To imitate human intelligence we must first understand intelligence itself, but the nature of human intelligence has not yet been scientifically demystified. AI has played a great role at the level of information and knowledge, and in law and other fields it has even surpassed human beings; however, human intelligence is not only a matter of information and knowledge, but also of deeper issues such as free will and value judgment, which are beyond the reach of current AI technology.
To date, as I understand it, the work on artificial intelligence and law has gone through three stages: legal informatization, legal datafication, and legal intelligence. Legal informatization means that individuals and groups no longer need to go to the library but can access legal information in databases, which greatly improves the efficiency of legal research; the groups here include not only legal professionals but also ordinary people. Legal informatization replaced the tangible carriers of laws, regulations, and codes. The era of legal datafication arrived in China around 2004 to 2006, and the mode of production of legal knowledge changed: in a trial, for example, the side whose lawyers found more legal data was more likely to construct more effective and persuasive legal arguments. By the era of legal big data, people could not only find the major and minor premises of legal reasoning more easily; technologies such as ChatGPT could also directly combine the major and minor premises into a piece of intelligent reasoning. We can say that AI and law have entered an era of intelligent reasoning, and ChatGPT and our self-developed legal systems can thus handle a great deal of structurally similar reasoning, so that for many common legal problems ordinary people no longer need to consult professional lawyers or jurists but can let an AI system answer directly. In terms of the development of law, this path is consistent with the purpose of law, because it allows more groups in society to be empowered by rules. But current AI can only go this far: under the current path, AI technology cannot contribute much to issues such as emotion and value judgment.
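To make the “combining major and minor premises” idea concrete, here is a minimal, deliberately toy sketch of mechanical syllogistic reasoning: a rule (major premise) is applied to facts (minor premise) to yield a conclusion. The rule and facts are invented for illustration; real systems reason over statutes and case data rather than hand-written dictionaries.

```python
def apply_rule(rule, facts):
    """Return the rule's conclusion if every condition holds on the facts."""
    if all(facts.get(cond, False) for cond in rule["conditions"]):
        return rule["conclusion"]
    return "rule does not apply"

# Major premise: a simplified, hypothetical contract-formation rule.
rule = {
    "conditions": ["offer", "acceptance", "consideration"],
    "conclusion": "a contract was formed",
}

# Minor premise: the facts of a hypothetical dispute.
facts = {"offer": True, "acceptance": True, "consideration": True}

print(apply_rule(rule, facts))  # -> a contract was formed
```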
The traditional cognitive model generally divides human activity into an instrumental level and a value level, and this generation of AI technology will not replace humans at the value level. At the instrumental level, as with the social division of labor, we cede a portion of our capabilities to become a kind of public good. Humans invented tools to substitute for parts of themselves: at first tools that strengthened human limbs, and now AI technology that strengthens part of the human brain. AI and humans will co-evolve, just as we once had no smartphones, but now that each of us holds one, it has become an integral part of our lives and work. Whether it is the Brooklyn judges mentioned by Prof. Liebman who hear 800 cases a year, or some of the Chinese judges I found in my research who handle 1,000 cases a year, much of this is the same kind of repetitive work. If judges are assisted by an AI system, their workload will certainly be greatly reduced. But it is only an auxiliary system and cannot replace humans in making the final judgment. Humans then acquire a new kind of work, or a new way of working: using such auxiliary systems. This is the general trend and cannot be changed. In this transitional phase we should consider how to adjust our work; at the very least, machines currently cannot make value judgments, choices, or strategies.
Yang Ming:
I will mainly offer some thoughts from the perspective of intellectual property protection. In the last year or two, especially after the two cases adjudicated one after another by the Beijing Internet Court (“Beihu”) and the Guangzhou Internet Court (“Guanghu”), it is fair to say that the liveliest keyword in China’s intellectual property academic and practical circles has been AI.
The Beihu and Guanghu cases involved two different issues. The Beihu case mainly addressed whether AI-generated content qualifies as a work protected by copyright law. The presiding judge studied AI content-generation technology in great detail and rendered a rather cautious judgment, but it still triggered much controversy in academic circles. The Guanghu case mainly involved the duty of care of the platform providing the AI. After the AI algorithms were developed and put to commercial use, users went to the platform to generate content, and the platform was then sued by the copyright owner of the original material used to train the algorithms; the copyright owner did not sue the users. In its first-instance judgment, the court found that the platform had breached its duty of care but awarded only a very low amount of compensation; it made clear that the platform must fulfill a duty of care, and equally clear that it was awarding lower compensation in order to encourage the development of the technology. This case also triggered much discussion, and all sides are awaiting the second-instance decision.
For AI model training, legal risk also lies on the input side, since input material is required to train the algorithm. The plaintiff argued that the input process involves an act of storage, and that only afterwards is new content generated. On this point the plaintiff’s lawyers were told that they had the technology wrong: model training, it was said, involves no storage process. The plaintiff also argued that the generated images reproduced the substantial features of the plaintiff’s copyrighted work; the generated avatars, for example, had the same features as the original artwork. As I understand it, the core dispute still lies in competition over market interests. It is the existence of a competitive market that gives rise to disputes like the Guanghu case. Such disputes involve two sides and multiple subjects: on one side the developers, providers, and users of AI, and on the other the copyright owners of the original material; apart from the users, the two sides are competing for the same market. Why have the copyright holders not sued the users? I understand it as going to the root of the problem, namely challenging the legitimacy of the AI algorithms themselves.
The training and use of AI involve “breaking up” and “reorganizing” the input material. First, the input material is encoded, with different codes carrying feature vectors, or what might be called elements or features. This “breaking up” process does not store or copy the input material, so whether it infringes should be discussed from the perspective of how the resulting benefits are distributed. The use of AI after training involves “reorganization”: the user generates new works by entering prompts, and this is currently the most controversial aspect. For example, when a user issues prompts continuously, can this be recognized as working toward a specific goal, such that the new work may attract copyright? Along the industrial chain, from the technology developer to the technology provider to the end user, these activities involve the distribution of many benefits and risks, and the question is who should bear the risk. Views differ; I personally prefer that the end user bear it. At the source, machine learning mainly reads feature vectors and does not involve any specific end commercial market, so no heavy duty of care should be imposed; of course, if the developer anchors the algorithm to a specific market at the development stage, a duty of care cannot be excluded. When generating images, the user must keep entering prompts. If the prompts are general and abstract, the generated image may resemble many images and should not be found infringing; if multiple prompts converge on a directional image, however, infringement may be found. Thus, whether for the technology developer, the service provider, or the end user, different duties of care should be recognized according to their degree of involvement in the production process and their competition for interests in the end commercial market. We should not draw the blanket conclusion that technology developers bear no liability, whether in the name of so-called technological neutrality or in order to promote technological progress.
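As a purely illustrative sketch of this “break up, then reorganize” picture, the following toy code reduces works to feature vectors and shows why general prompts yield output similar to many works while repeated directional prompts converge on a single source. This is not a real generative model; the vectors, dimensions, and noise levels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Breaking up": pretend each training work is reduced to a feature vector.
works = {f"work_{i}": rng.normal(size=8) for i in range(100)}
original = works["work_42"]  # stand-in for the copyrighted work at issue

def generate(prompt_vectors):
    """'Reorganization': combine the features the prompts point to."""
    return np.mean(prompt_vectors, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A general, abstract prompt draws on features from many works...
general_output = generate([works[k] for k in list(works)[:50]])

# ...while repeated, directional prompts keep pointing at one source work.
directional_output = generate([original + rng.normal(scale=0.05, size=8)
                               for _ in range(5)])

print("general prompt, similarity to original:   ",
      round(cosine(general_output, original), 3))
print("directional prompt, similarity to original:",
      round(cosine(directional_output, original), 3))
```

On Professor Yang’s account, it is the directional case, where prompts steer the output toward one identifiable source, that points toward infringement and toward the end user’s responsibility.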
Finally, regarding artificial intelligence and legal decision-making, I personally believe AI can be used to assist judges and patent examiners in their work. When I conducted research in the courts many years ago, I found that the per-capita caseload of intellectual property judges in some courts exceeded 800 cases; patent examiners were in a similar situation, with many applications to process each year. Here we can consider introducing AI to assist their work. Of course, we must also attend to the potential risks of AI-assisted work; it is a matter of balance.
Hu Ling:
I will share a few of my own thoughts on the application of AI technology in the courts. Whether it is AI or other data technologies, I think they are grafted onto a particular organization of production. Internet platforms, for example, have formed a distinctive mode of production and business model, and the same is true of traditional organizations such as courts, governments, and universities. Thinking in terms of management processes, traditional organizations need departmentalized management techniques, such as information systems. While AI-assisted case-handling techniques are important, they are relatively peripheral to the operation of the courts themselves, and we need to think about what role AI plays in organized activity.
What organizational managers care about most is how to use low-cost technology to coordinate resources better, transferring them more efficiently to those who need them most and making the organization more efficient. This is also the courts’ primary concern, and it helps us understand the problem. The courts have long been working on internal informatization, using intranets and other means to connect their various departments, which allows the organization to be integrated more efficiently. The trend toward vertical management of the courts has grown stronger and stronger, with ever more emphasis on quantitative assessment metrics, which means that upper and lower courts must process data ever faster. This explains why many large companies provide powerful digital systems to the courts. Quantitative management within the courts is very fine-grained; an individual judge may not grasp what a specific case means for the overall statistics, but from a management point of view, data such as the annual case-closure rate and the ratio of first- to second-instance cases are very important.
A related reform is the judge quota system. The quota system has reduced the number of judges available to try cases, and courts, facing many cases and few staff, must consider how better to incentivize quota judges to conduct trial work. This is a matter of matching for efficiency, and it does require some technical assistance. China’s courts are a huge system, and the need to deploy limited resources within limited space is what drives courts to use AI technology to improve efficiency.
Wu Yuhao:
I'm more of an AI user than a researcher, so I'll talk about some experiences from that perspective.
Prof. Liebman’s discussion of AI-assisted decision-making for judges is very interesting. As I understand it, judges’ decision-making has two aspects. One is decision-making about facts that have already occurred; here judges in all countries are rather cautious about AI assistance, and all countries require judges to give reasons in their decisions. The other is prediction of the future, concentrated in areas of criminal law such as pre-trial detention. It is difficult for a judge to reason out the probability that a particular person will reoffend, and here AI is to some extent a very good predictive tool: the judge can hand over to AI the kind of determination that legal reasoning alone cannot clearly settle. There has been much research on this in the United States, for example “Human Decisions and Machine Predictions” by Jon Kleinberg and others, who argue that using AI in pre-trial detention decisions would greatly improve the quality of decision-making. There are also empirical studies showing that using AI to predict offenders’ recidivism risk in the United States may involve racial discrimination. Many courts in China also advocate AI-based assessments of social dangerousness, which involves the balance between instrumentality and value that Mr. Yang Xiaolei spoke of earlier. If AI is left to do this, it will maximize its instrumentality, which can greatly improve efficiency but may neglect value judgment.
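To make the object of this debate concrete: the tools at issue are essentially classifiers that map case features to a predicted probability of reoffending. Below is a minimal sketch of that idea, with wholly synthetic features, data, and coefficients; it is not the model studied by Kleinberg et al. or any deployed risk tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Synthetic case features: age, number of prior offenses, charge severity.
X = np.column_stack([
    rng.integers(18, 70, n),   # age
    rng.poisson(1.5, n),       # prior offenses
    rng.integers(1, 5, n),     # charge severity (1-4)
])

# Synthetic outcomes drawn from an assumed relationship, for illustration only.
logits = -2.0 - 0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability for a hypothetical defendant; a human decision-maker
# still has to choose what to do with this number.
defendant = np.array([[25, 3, 2]])
print("predicted reoffense probability:", model.predict_proba(defendant)[0, 1])
```

Even in this toy form, the due process concerns raised earlier are visible: if the training data encode biased past decisions, the learned coefficients reproduce that bias, and a defendant who cannot inspect the model cannot contest it.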
The scenario in which I currently use AI most is teaching. For example, before teaching students to process data, it may be difficult to find a dataset suited to a specific teaching task, and in those cases I ask AI to generate data for me. From this perspective, AI works well as a teaching tool. In the long run, however, AI prediction may bring a certain crisis for prediction and explanation in the social sciences as a whole. Traditionally, the social sciences predict phenomena by proposing theories and then verifying them with empirical studies; once a theory is confirmed, it can be used to explain or predict new phenomena. Now the development of AI means that predicting social phenomena no longer depends on social science theory: feed a variety of social factors into the AI and it can predict well. Criminologists used to draw on a variety of criminological theories to explain why a certain place has a high crime rate, but it now appears that criminologists’ predictions are less accurate than those of engineers specializing in AI algorithms, and we must then reflect on the value of social science theory in explaining and predicting phenomena.
Liebman:
Let me respond briefly to the issue of social risk assessment. There are two main issues in the U.S. The first is the use of algorithms to determine dangerousness itself, and the second is whether defendants have the right to examine the algorithms. Because the algorithms are protected by intellectual property rights, they may not be disclosed to the defendant. Many believe that whether or not an algorithm is used to determine whether a person should be detained, the person should at least have the opportunity to examine the algorithm. There are historical reasons for these views: early algorithms in the U.S. a decade ago produced results that differed greatly between Black and white defendants.
Q&A Session
Zhao Hong:
I think the central question is who bears responsibility when AI causes risk and damage. I personally tend to think developers bear a great deal of responsibility, because giving AI the capacity to act is the starting point of everything. The material and data used to train an algorithm are a core step, yet they remain a black box protected by intellectual property rights. The United Nations adopted a resolution on AI yesterday, hoping that AI will improve human development and well-being, but AI running in black boxes is difficult to regulate. AI has already been shown to exhibit anti-human and racially discriminatory tendencies, with differences in the proportions of the data it is trained on leading to differences in its attitudes toward different groups of people. We need to think about how good human values can be reflected in AI. How should we regulate the values of AI?
In addition, there may be risks in using AI to assist adjudication. In practice, many minor criminal cases in the UK and the US are dealt with swiftly and the parties are unable to contest them substantively; judges, after all, focus on big and important cases. How can the parties in such cases protect their interests? From my personal experience on the Appellate Body, I very much appreciate what AI could contribute to consistency and modularity in adjudication, but the law has never been free of differing views, and there are conflicting opinions both within the U.S. Supreme Court and among the various international tribunals. Which decision or which opinion to follow in a given situation still comes down to how the algorithms are designed, and there is no way to ensure that the algorithms are fair and serve the interests of the majority. Beyond that, we must discuss how to regulate AI, that is, what categories should be used to absorb the legal issues AI raises, and so on.
Liebman:
I couldn’t agree more with Zhao Hong. In the U.S., the question of fairness and justice once companies hold the algorithms is very complex, and the courts may provide a good platform for arguing it out.
Shen:
Are there arguments against regulating AI in the US?
Liebman:
The U.S. is regulating AI, but in the U.S. there is opposition to regulation in every area.
Speaker Bio:
Professor Benjamin L. Liebman is the Robert L. Lieff Professor of Law and Director of the Center for Chinese Legal Studies at Columbia Law School. His research covers Chinese securities law, environmental law, tort law, and criminal procedure, and he is a well-known and authoritative expert on Chinese legal issues in the U.S. legal community.
Translated by: Wei Mingxuan
Edited by: Ren Zhiyi