AI and Section 230

Authors
  • Strategic Machines

Protecting the Customer from Entanglement

This article is worth your attention. It is centered on Section 230, and it reinforces our argument about testing AI Apps. In our previous post, we highlighted the obvious by stating that AI models are not like application software. Language models are probabilistic, built off vast datasets, and excel at fiction writing (when least expected). Application software is written to a set of fixed requirements and is designed for predictable outcomes. That’s not to say language models are not useful. Quite the opposite. With the right use cases and supervision, they deliver enormous value.
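To make that contrast concrete, here is a minimal sketch in Python. The toy sampler is an invented stand-in for a real language model, with a made-up vocabulary and weights; it exists only to show why the same input can produce different outputs from a probabilistic model while application code stays predictable.

```python
import random

# Deterministic application code: fixed requirements, predictable outcome.
def apply_tax(amount: float, rate: float = 0.07) -> float:
    return round(amount * rate, 2)

assert apply_tax(100.0) == 7.0  # same input, same output, every time

# Toy "language model": samples the next word from a probability
# distribution. The vocabulary and weights are invented for illustration;
# a real LLM samples over tens of thousands of tokens.
VOCAB = {"refund": 0.5, "credit": 0.3, "apology": 0.2}

def toy_llm(prompt: str) -> str:
    words = list(VOCAB)
    weights = list(VOCAB.values())
    return random.choices(words, weights=weights, k=1)[0]

# Same prompt, potentially different answers on every call.
print({toy_llm("What does the customer get?") for _ in range(10)})
```

Run the sampler a few times and the set of answers changes; run the deterministic function a million times and the answer never does.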

Which brings us back to the article. The author, Christopher Mims of the WSJ, captures the legal complexity facing model makers and users. Mims interviewed legal experts on the potential liability facing companies that use AI to generate speech or make products. We wrote about the Air Canada case here, underscoring the liability created when a chatbot makes a policy error. Mims explores the issue of legal liability further by noting:

Section 230 of the Communications Decency Act of 1996 has long protected internet platforms from being held liable for the things we say on them. (In short, if you say something defamatory about your neighbor on Facebook, they can sue you, but not Meta.) This law was foundational to the development of the early internet and is, arguably, one reason that many of today’s biggest tech companies grew in the U.S., and not elsewhere.

But Section 230 doesn’t cover speech that a company’s AI generates, says Graham Ryan, a litigator at Jones Walker who will soon be publishing a paper in the Harvard Journal of Law and Technology on the topic.

The legal tangle intensifies because the same companies that may argue for Section 230 protection may also claim that the content they scrape for training purposes is not a copyright violation because they “substantially transform” the content. Mims points out that if that is true, then they must be “substantial co-creators” of the content generated, which creates a problem for Section 230 protection.

Which brings us back to AI Testing.

Every CIO knows that before pushing code to production, you’d better test, test, test. That ancient wisdom has never been more important than with AI Apps. And it is not just because of the risk of legal entanglement; it is also about brand protection, product quality, and customer service. However, testing an AI App is nothing like testing traditional software. As we explained here, it is hard to test something that you cannot describe. Stay tuned as we go deeper in future posts on this most critical topic of testing; the sketch below offers a first taste of the difference.
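As a rough illustration, not a prescription, here is a minimal sketch in Python contrasting a traditional exact-match test with a property-based check on probabilistic output. Everything in it is hypothetical: chatbot() is a stand-in for a real model call, and BANNED_PHRASES is an invented policy list.

```python
# Traditional test: the expected output is fully specified in advance.
def apply_tax(amount: float, rate: float = 0.07) -> float:
    return round(amount * rate, 2)

assert apply_tax(100.0) == 7.0  # one input, one right answer

# AI-app test: exact-match assertions fail on probabilistic output,
# so we assert properties every response must satisfy instead.
BANNED_PHRASES = ["bereavement discount", "guaranteed refund"]

def check_policy(response: str) -> list[str]:
    """Return a list of policy failures; empty means the response passed."""
    failures = []
    if not response.strip():
        failures.append("empty response")
    for phrase in BANNED_PHRASES:
        if phrase in response.lower():
            failures.append(f"policy violation: {phrase!r}")
    return failures

def chatbot(prompt: str) -> str:
    # Placeholder; in practice this would call the real model.
    return "Our standard fare rules apply; please see the policy page."

# Sample repeatedly, because each call can produce a different answer.
for _ in range(25):
    failures = check_policy(chatbot("Do you offer bereavement fares?"))
    assert not failures, failures
```

The design choice the sketch encodes: when you cannot specify the exact output, you specify the properties every output must satisfy, and you sample enough times to gain confidence that the app respects them.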