New Report from Marc Pfeiffer – First, Do No Harm: Algorithms, AI, and Digital Product Liability

September 21, 2023

The relatively recent introduction of publicly accessible artificial intelligence-driven chatbots (e.g., Bard, Bing, ChatGPT, Claude) has focused public attention on the broader individual and societal harms that can result from algorithms embedded in digital technology goods and services. The potential for algorithmic harm is commonly reported in (but is not limited to) technologies such as generative artificial intelligence chatbots, social media, virtual reality, the Internet of Things, surveillance technology, and robotics.

This new report provides a pathway to reduce algorithmic harms by incentivizing developers to "first, do no harm" rather than "move fast and break things."

It is hoped that the concepts it advocates will contribute to and advance the discussions currently underway in governments and civil society around the world as they seek to understand these technologies and develop structures for managing their impacts and risks. The proposed framework requires developers to identify and mitigate potential algorithmic harms before new products are released, and to remediate existing products when harms are discovered. It reflects current trends in cybersecurity, where developers are expected to build security into products before release and to quickly remediate existing products when new threats are found.

  • It requires the thoughtful development of definitions of digital products and algorithmic harms. There is a rich trove of academic, non-profit, and corporate research discussing the range of harms. Defined harms must be serious enough to affect the public interest.
  • Developers accused of creating unanticipated harm will have a “safe harbor” from severe financial penalties if they prove they used contemporary best practices to mitigate any foreseeable potential harms. They would be given time to remediate the harm and face reduced penalties.
  • This process will likely slow development of some digital products. It requires developers to ensure that products are thoroughly tested and that potential adverse outcomes are mitigated before deployment. That may delay or limit returns on investment or extend development cycles. In some cases, application creators may decide to abandon products mid-development if harms cannot be sufficiently managed.

Key elements include:

  1. Expanding traditional legal liability principles by enhancing legal standards for negligence and product liability to include algorithmic harms:
    • Negligence liability: expand the requirement of “duty of care” for developers of digital products to include preventing algorithmic harm in the product.
    • Product liability: include algorithmic harm as a type of product defect, injury, or harm.
  2. Authorize federal and state regulatory and justice agencies to accept and bring liability complaints alleging algorithmic harm caused by developers who were negligent in fulfilling their duty of care by offering defective products.
    • Permit class actions to be brought by third parties on behalf of groups or society at large.
    • Provide the judiciary any necessary authority to manage and consolidate like cases. Federal definitions might preempt individual state policies.
    • Provisions may be necessary to allow new laws or regulations to be negotiated and enacted to address potential harms for new digital products as they develop.
  3. Establish matrices of harms and penalties that address the full range of harms, from incidental to substantial and from individual to societal. At the extreme end of substantial societal harm, penalties must be significant enough to discourage undue liability risk-taking.
  4. Developers would be incentivized by their liability insurers to engage in harm prevention during development and deployment. Liability insurers would require that sound harm mitigation standards be met to secure and maintain coverage.

This framework requires technology policy and legal subject matter experts to elaborate on and refine the details. Input, balance, and compromise from societal, financial, and technological interests are at the core of its potential.

Additionally, these ideas do not have to stand alone. They can be integrated into other solutions being discussed. This is particularly important as these issues are currently top of mind for many federal and state lawmakers.

While focused on the United States and its liability practices, the model will likely have value in other countries if adapted to local circumstances.

Society needs sound, algorithm-focused public policies that incentivize harm prevention.

We should stop breaking things.

Read the report here

Marc Pfeiffer is currently a Sr. Policy Fellow and Assistant Director at Bloustein Local, a unit of the Center for Urban Policy Research, part of the Edward J. Bloustein School of Planning and Public Policy at Rutgers University. marc.pfeiffer@rutgers.edu.
