Product Liability & Mass Torts Digest

Blog

EDNY Allows Expert’s Use of ChatGPT to Confirm Alternative Design in Product Liability Lawsuit



4 Min Read

Authors

Bryce Cooper, Sarah E. Harmon, Heather Donato

Related Capabilities

Product Liability & Mass Torts
Artificial Intelligence (AI)
Technology, Media & Telecommunications

May 8, 2025

The U.S. District Court for the Eastern District of New York recently allowed an expert’s use of ChatGPT to confirm a proposed alternative design in a product liability lawsuit, denying the defendant’s motion to exclude the expert testimony for failing the reliability requirement of Federal Rule of Evidence (FRE) 702.[1]

The plaintiff brought a lawsuit against Harbor Freight Tools USA, Inc., alleging that an axe he purchased from the defendant was defectively designed because the head of the axe separated from the handle, struck the plaintiff, and injured his nose and eye. To support his defective design claim, the plaintiff sought to offer expert testimony that the handle and head of the axe were weakly bound with adhesive, leading to the accident. The plaintiff's expert opined that a sound design requires securely attaching the head to the handle by drilling a small hole through the side of the head and through the handle, then inserting a pin to reduce the possibility of separation.

The defendant sought to exclude the expert’s testimony on various grounds,[2] including that the expert’s opinion did not reflect a reliable application of the principles and methods to the facts of the case under FRE 702. Specifically, the defendant argued that the expert’s testimony was unreliable because, after completing his report, he entered a query into ChatGPT about the best way to secure a “hammer head to the handle,” which produced a response consistent with his expert opinion.

After a Daubert hearing, the Court denied the defendant’s motion to exclude, finding there was “little risk” that the use of ChatGPT impaired the expert’s judgment regarding proper methods for securing the axe’s head to its handle. The Court reasoned that the expert only used ChatGPT to “confirm what he had already opined” in his finished report, which he had written based on his decades of professional manufacturing and engineering experience.[3] Thus, the expert “did not rely on ChatGPT” in forming his opinion.[4] The Court contrasted this expert’s use of artificial intelligence with other uses that have been found to be unreliable, for example, citing non-existent ChatGPT-generated cases or academic articles, where the “attorneys and experts abdicate[d] their independent judgment and critical thinking skills in favor of ready-made AI-generated answers.”[5]

This ruling highlights many of the same considerations currently being deliberated by the Federal Judicial Conference’s Advisory Committee on Evidence Rules regarding the use of AI in the courtroom. The Committee has been working to address evidentiary challenges raised by AI, broadly categorizing the issues as “(1) whether changes to the authenticity rules are necessary to deal with ‘deepfakes’; and (2) whether a change is needed to Article 7 to give courts authority to regulate evidence that is the product of machine learning when no expert witness on the machine learning is proffered to testify.”[6]

At the Committee’s meeting in November 2024, it agreed to draft a formal proposed new rule, which if adopted, would become FRE 707.[7] The proposed rule states:

Where the output of a process or system would be subject to Rule 702 if testified to by a human witness, the court must find that the output satisfies the requirements of Rule 702 (a)-(d). This rule does not apply to the output of basic scientific instruments.[8]

The Committee met again on May 2, 2025, where it voted to advance the proposed rule to the Judicial Conference’s Committee on Rules of Practice and Procedure, which will vote in June on whether to publish the proposed rule for public comment.[9] The proposed rule would require federal courts to apply Rule 702’s requirements to screen AI-generated evidence, requiring the proponent to demonstrate that sufficient facts or data were used as inputs for the AI program and that the AI program used reliable principles and methods.

In the meantime, while there continues to be an absence of a uniform evidentiary rule, attorneys, experts, and litigants should continue to exercise careful judgment when using AI-generated information and ensure they thoroughly investigate AI tools before attempting to admit AI-generated evidence.


[1] Ferlito v. Harbor Freight Tools USA, Inc., No. CV 20-5615 (GRB) (SIL), 2025 WL 1181699 (E.D.N.Y. Apr. 23, 2025).

[2] The defendant also moved to preclude entirely the expert’s testimony on the grounds that he was unqualified because he lacked engineering degrees and his experience was limited to designing power tools and not manual tools. Id. at *1-2. The Court disagreed, finding that the expert’s professional experience and familiarity with joining dissimilar materials met the “modest standards to qualify as an expert.” Id. at *2.

[3] Id. at *4.

[4] Id.

[5] Id. (quoting Kohls v. Ellison, No. 24-CV-3754, 2025 WL 66514, at *4 (D. Minn. Jan. 10, 2025)).

[6] Federal Judicial Conference, Advisory Committee on Evidence Rules, Agenda for Committee Meeting (May 2, 2025) at 1, available at https://www.uscourts.gov/forms-rules/records-rules-committees/agenda-books/advisory-committee-evidence-rules-may-2025.

[7] Id. at Tab 1A, Section III.B, at 14-15.

[8] Id. at Tab 3, at 17.

[9] Nate Raymond, US judicial panel advances proposal to regulate AI-generated evidence, Reuters, May 2, 2025, https://www.reuters.com/legal/government/us-judicial-panel-advances-proposal-regulate-ai-generated-evidence-2025-05-02/.

Related Professionals

Bryce Cooper

Sarah E. Harmon

Heather Donato


This entry has been created for information and planning purposes. It is not intended to be, nor should it be substituted for, legal advice, which turns on specific facts.


Copyright © 2025. Winston & Strawn LLP
