Tech & Innovation - December 04, 2024

Misinformation Expert Accused of Using AI for Legal Document

Jeff Hancock, founder of the Stanford Social Media Lab, has been accused of using AI to help produce a legal declaration filed in support of Minnesota's Use of Deep Fake Technology to Influence an Election law. The filing contains citation errors of the kind popularly referred to as AI 'hallucinations', prompting criticism and a request that the document be excluded from consideration.


The Accusation

Hancock submitted the affidavit in support of Minnesota's Use of Deep Fake Technology to Influence an Election law, which is being challenged by conservative YouTuber Christopher Kohls and Minnesota state Rep. Mary Franson. After discovering the citation errors, attorneys for Kohls and Franson argued the document was unreliable and asked the court to exclude it from consideration.

Hancock's Response

In a subsequent declaration, Hancock admitted to using ChatGPT to help draft the citation list but denied using it to write the document itself. He expressed regret for any confusion caused by the citation errors and stood firmly behind the substantive points made in the declaration.

The Role of AI Technology

Hancock used Google Scholar and GPT-4o to identify articles relevant to the declaration. The AI tool was used to generate the citation list, which introduced two erroneous citations and added incorrect authors to another. Despite these errors, Hancock maintains that the declaration's substantive points about AI technology's impact on misinformation and its societal effects still stand.

"I did not intend to mislead the Court or counsel," Hancock wrote in his most recent filing. "I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration."