Tech & Innovation - January 08, 2025

Artificial Intelligence Exploited in Extremist Activities...

Matthew Livelsberger, a highly decorated US Army Green Beret, used a series of prompts to ask an artificial intelligence tool about the best ways to turn a rented Cybertruck into a vehicle-borne explosive device. The incident highlights the potential for extremists to misuse AI tools, a risk that US intelligence analysts have been warning about over the past year. The threat of AI being used to commit serious crimes, including terrorism, is a growing concern among US law enforcement.


The Role of AI in Livelsberger's Plan

Livelsberger used OpenAI's ChatGPT to research how to amass as much explosive material as he legally could while en route to Las Vegas, and how best to detonate it with a shot from a Desert Eagle pistol. He specifically sought information on Tannerite, a reactive compound typically used for target practice. His exchanges with ChatGPT illustrate how AI tools can be misused in planning an attack.

Concerns Among US Law Enforcement

The Department of Homeland Security has issued warnings about domestic extremists who are relying on AI technology to generate bomb-making instructions and develop tactics for conducting attacks against the United States. Violent extremists are increasingly turning to tools like ChatGPT to help stage attacks aimed at collapsing American society through acts of domestic terror.

The Vulnerability of US Critical Infrastructure

The department's memos highlight the vulnerability of the US power grid, a popular target among extremists on Terrorgram, a network of encrypted chatrooms hosting violent, racially motivated individuals bent on the destruction of American democratic institutions.

OpenAI's Response to the Incident

OpenAI expressed sadness over the incident in Las Vegas and reiterated its commitment to seeing AI tools used responsibly. The company's models are designed to refuse harmful instructions and minimize harmful content. In the case of Livelsberger, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities.

"We knew that AI was going to change the game at some point or another in, really, all of our lives," Sheriff Kevin McMahill of the Las Vegas Metropolitan Police Department told reporters on Tuesday. "Absolutely, it's a concerning moment for us."