
Law, Policy, & AI Update: Does Section 230 Cover Generative AI?

Meanwhile, the FTC tells AI companies to be careful with their hype, and a class action lawsuit alleges unauthorized practice of law by AI.


This month's legal and policy developments have brought both challenges and opportunities for the AI industry. GPT-4 has demonstrated new capabilities, including allegedly passing the bar exam, and AI is rapidly being deployed throughout legal contexts. The application of AI in legal decision-making is gaining attention, as evidenced by judges in Colombia and the United States referencing ChatGPT in their opinions (though in very different ways). And companies like DoNotPay are forging ahead with law-related deployments in the real world. Governments are also investing heavily in a highly competitive AI landscape, with the U.K. putting £900 million into a supercomputer and proposing regulatory sandboxes.

With these deployments, however, come risks and a rapidly changing regulatory landscape. In the United States, the debate over whether generative AI, such as GPT-4, falls under Section 230's liability shield could have significant implications for companies relying on these models, as they may be held liable for generated content. The FTC's warning to keep AI claims in check highlights regulators' growing scrutiny of AI applications. And class action litigation against DoNotPay demonstrates the regulatory complexity of AI-assisted services. We can expect that as capabilities grow and the number of uses expands, there will be more litigation and regulatory action aimed at constraining AI's potential harms.

Law

  • Legislators who helped write Section 230 in the United States stated that they do not believe generative AI like ChatGPT or GPT-4 is covered by the law’s liability shield. Supreme Court Justice Neil Gorsuch suggested the same earlier in oral arguments for Gonzalez v. Google. If that interpretation holds, companies would be liable for content generated by these models, potentially leading to increased litigation against generative AI companies.
  • A judge in the First Circuit Court in the city of Cartagena, Colombia, stated that he used ChatGPT to help make a decision in a recent case. And in the United States, a judge issued an opinion with perhaps the first mention of ChatGPT, though it is less flattering. In the opinion, the judge questions the validity of the allegations, stating, “The problem with these allegations is not that there are too few of them, or even that they lack detail. The problem is that they read like what an artificial intelligence tool [footnote: See, e.g., OpenAI, ChatGPT, https://chat.openai.com] might come up with if prompted to allege training violations in a jail according to Twombly-Iqbal pleading standards; in other words, a result that appears facially sufficient provided one does not read very carefully.”
  • The FTC issued a warning to companies to “Keep [their] AI claims in check,” noting that “false or unsubstantiated claims are [the FTC’s] bread and butter.” In subsequent interviews, Samuel Levine, director of the Bureau of Consumer Protection at the FTC, stated, “What we’re trying to remind the marketplace is that [claims about AI] need to be truthful, they need to be substantiated or we’re prepared to hold those companies accountable.” Later, the FTC followed up with a similar warning aimed at AI chatbots, deepfakes, and voice clones.
  • Class action litigation has been filed against DoNotPay, a company claiming to use a “Robot Lawyer” to help users “Fight Corporations, Beat Bureaucracy, Find Hidden Money, Sue Anyone, Automatically Cancel Your Free Trials.” The complaint cites unauthorized practice of law (via Cal. Bus. & Prof. Code § 17200) as the main cause of action; only licensed attorneys may practice law in the United States.
  • The U.S. Copyright Office stated that some AI-generated works may be copyrightable depending on whether the generated work was “the result of mechanical reproduction” or instead reflected the author’s “own mental conception.”
  • Italy’s Data Protection Authority ordered AI startup Replika, which builds chatbots aimed at providing a “virtual companion,” to stop processing data from users in Italy. The regulator cited ineffective measures to provide “enhanced safeguards [that] children and vulnerable individuals are entitled to” under Italian law.

Policy

  • The government of Singapore has released its “AI Verify” toolkit, which seeks to provide companies with a technical tool to verify whether their systems comply with “internationally accepted AI ethics principles.”
  • The United States Patent and Trademark Office has asked for comments on issues of AI inventorship to help the office navigate thorny patent law questions.
  • The United Kingdom will invest £900 million in a supercomputer to build foundation models locally.
  • Canada released a companion document to its Artificial Intelligence and Data Act (AIDA).
  • The U.S. Chamber of Commerce called for a risk-based AI regulatory framework, similar to frameworks proposed in the E.U.
  • The House of Representatives and Senate both held committee hearings on issues of AI safety.
  • The U.S. State Department released the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. This document outlines best practices for the ethical and accountable development, deployment, and use of military AI capabilities, including autonomous systems, while emphasizing adherence to international law and maintaining human control. The goal is to build a consensus among states, constraining military uses of AI.
  • The U.K. issued a report titled Pro-innovation Regulation of Technologies Review: Digital Technologies calling, among other things, for regulatory sandboxes for AI and for a clear government position on the relationship between AI and intellectual property rights.
  • The U.K. Medicines and Healthcare products Regulatory Agency (MHRA) issued a statement that Large Language Models “that are developed for, or adapted, modified or directed toward specifically medical purposes are likely to qualify as medical devices.”

Academic Roundup

  • GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models by Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock. Outlines the potential labor market impacts of GPT-like models, finding that higher-wage jobs are the most likely to be affected.
  • Algorithmic Black Swans by Noam Kolt. This article highlights the growing risks of AI systems to society and advocates for "algorithmic preparedness," a roadmap of five principles to guide regulations addressing long-term, large-scale risks and potential "algorithmic black swans."
  • GPT-4 Passes the Bar Exam by Daniel Martin Katz, Michael James Bommarito, Shang Gao, Pablo Arredondo. This paper evaluates GPT-4's zero-shot performance on the bar exam, where it outperforms human test-takers on multiple components.
  • Regulating Machine Learning: The Challenge of Heterogeneity by Cary Coglianese. Arguing that machine learning's vast heterogeneity necessitates agile, specialized regulatory agencies employing data science expertise and flexible strategies, including the use of machine-learning tools for effective governance and public protection.
  • The French Supreme Administrative Court Finds the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations 'Strictly Necessary' and Proportional by Theodore Christakis, Alexandre Lodie. Describes a case in France assessing whether facial recognition systems could be used by the police under Article 10 of the Law Enforcement Directive. The court ruled in favor of using facial recognition in criminal investigations, deeming it 'strictly necessary' and proportional to the goals of such investigations.
  • The Law of AI for Good by Orly Lobel. This article provides a counterbalance to current regulatory approaches for AI systems, proposing a shift from an absolutist approach to one that compares costs and benefits against the status quo. For example, the article argues that in some situations a person may want a right to keep a human out of the loop, rather than a requirement that a human be in the loop.

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.

