Facebook Headquarters by Minette Lontsie, CC BY-SA 4.0, via Wikimedia Commons

Why the Verdict on Social Media Defective Design Harming Children Gets the Instinct Right But the Law Wrong

A California jury’s decision last week to hold Meta and YouTube liable for harms to a young woman’s mental health has been greeted as a watershed moment. Child safety advocates have called it Big Tech’s “Big Tobacco moment.” Parents who attribute their children’s deaths to social media addiction embraced outside the courthouse. Commentators who have long argued that social media companies bear responsibility for the damage their services inflict on young users see the verdict as vindication.

My Globe and Mail op-ed notes that the instinct behind the decision is understandable. The evidence at trial was damning, as internal Meta documents showed the company knew Instagram was harming adolescents but continued targeting them anyway. But the legal theory the jury endorsed – that social media platforms are defectively designed products – is the wrong tool for a real problem, and building on it risks undermining the very accountability the strategy seeks to deliver.

The case turned on a creative bit of legal engineering. The plaintiff’s lawyers argued that features like infinite scroll, autoplay, push notifications, beauty filters, and algorithmic recommendations are design defects that make Instagram and YouTube unreasonably dangerous, much like a car with a faulty gas tank. Under this theory, the platforms aren’t being held responsible for the content users see but rather for the architecture that delivers it.

If that sounds like a bit of a stretch, it’s because the lawyers needed one. U.S. law has for decades shielded platforms from liability for content posted by users and for how that content is organized. That has effectively blocked conventional negligence claims. The plaintiff’s lawyers repackaged a negligence case as a product liability one, arguing the harm arose from the platform’s own engineering rather than from content.

The problem is that Instagram’s infinite scroll scrolls infinitely because that is what it was designed to do. The algorithm recommends engaging content because that is its function. These features are neither defects nor inherently bad. Indeed, a social media platform without algorithmic recommendations, engagement-driven feeds, and notifications is not a better-designed version of the same product; it is a different product. Treating the core architecture of a content platform as a product defect exposes the verdict to reversal on appeal and sets a precedent that could reach any technology that competes for attention: streaming services that use autoplay, news apps that push alerts, and video games designed to retain players.

None of this means the companies should escape accountability. The evidence at trial points to a genuine wrong, but it is not a design defect. It is more straightforward: these companies knew their services posed foreseeable risks to young users and failed to take reasonable steps to mitigate those risks. That is a duty-of-care problem, and it calls for a duty-of-care solution.

Canada’s proposed Online Harms Act offers a framework for exactly that. It does not treat platforms as defective products, nor does it resort to the blunt instrument of outright social media bans for those under 16, measures that are easily circumvented, strip agency from both young users and their parents, and ignore the genuine benefits many young people derive from online connection. Instead, the act would establish a regulatory structure in which platforms must assess risks and demonstrate they are taking reasonable measures to protect users from foreseeable harms.

There have been misguided proposals to stretch the act beyond this foundation, including calls to require proactive content monitoring and police reporting for AI chatbots. Those proposals run counter to the act’s design. But the act could require social media companies to disclose what they know about the harms their services cause, establish meaningful safety plans for young users, strengthen those plans when they prove insufficient, and open themselves to independent research and reporting. Further, it would create liability for failure to meet these requirements, framed as a duty to act responsibly.

The California verdict will generate headlines and may accelerate settlements in thousands of pending U.S. cases, including those filed by Canadian school boards. But a jury finding built on a doctrinal workaround is not a durable foundation for platform accountability. A legislative duty to act responsibly is.
