An unexpected chapter has just been added to the collection of controversial queries put to AI chatbots: Matthew Livelsberger, the soldier who died inside a Tesla Cybertruck in front of the Trump International hotel in Las Vegas, consulted ChatGPT about explosives, how to detonate them with gunfire and where to buy them. He also asked numerous questions about the firearms he used in his suicide inside that vehicle, according to Las Vegas police, who have published screenshots of Livelsberger's queries to the OpenAI program. The company defends itself by saying that all the information it provided him was already publicly available.
"Could that detonate Tannerite [a brand of explosive]?", "How much Tannerite can you buy in Arizona?" and "How is that different from dynamite?" are some of the questions Livelsberger asked the popular AI chatbot, according to documents released by law enforcement authorities, along with videos and photographs of the moments before the detonation and of the car that burned on January 1. The questions to ChatGPT were asked on December 27, a few days before the blast outside the Trump hotel.
OpenAI responded to the police revelation through a statement from its spokesperson, Liz Bourgeois: "We are saddened by this incident and committed to ensuring that artificial intelligence tools are used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content." The statement added, according to The Verge: "In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We are working with law enforcement to support their investigation."
Police also confirmed that the individual, a 37-year-old active-duty US Army soldier, had a "possible manifesto" saved on his phone, as well as an email to a podcaster and other letters from which the motives for his suicide could be inferred, including allusions to the war in Afghanistan.
Other conversational artificial intelligence programs have recently generated controversy over the lack of control over the consequences of their interactions with users. For example, a teenager took his own life after the words addressed to him by a Character.AI character with whom he had supposedly fallen in love. And a young British man, encouraged by his chatbot companion, showed up at Windsor Castle intent on assassinating the Queen of England.