Was the video of the Venezuelan drug boat attack made with AI? Here's what the experts think.

The United States continues to pressure the Maduro regime. Late Tuesday, President Donald Trump announced that US forces had carried out a strike on a boat suspected of carrying drugs for the Tren de Aragua narco-terrorist group. The government also released a night-vision video of the attack in which the boat explodes. Just hours later, Venezuela questioned the authenticity of the footage, suggesting it had been created using artificial intelligence.
"It seems Marco Rubio continues to lie to his president: after leading him into a dead end, he now offers him a video with AI (thus proven) as 'proof,'" Venezuelan Communications Minister Freddy Ñáñez stated on his Telegram channel. To reach this conclusion, Ñáñez simply uploaded the video to the artificial intelligence application Gemini, created by Google, and asked if "that video was made with AI." The machine told him it was likely; but it could just as easily have told him the opposite.
"I did exactly the same thing with the recording, and Gemini told me it doesn't seem like synthetic content created with generative artificial intelligence," Miguel Lucas, global director of innovation at the firm Llorente y Cuenca and an expert in AI, explained in a conversation with ABC. Because that's what happens with this technology: it's just as likely to tell you one thing at a given moment, only to later retract it.
Lucas points out that "generative AI like Gemini cannot be used as evidence" because such systems "hallucinate, offer erroneous data, or simply make things up." "It should never be used this way, because just as it tells you that something could have been created with AI, it can tell you the opposite." This is because Google's app, which works much like ChatGPT, was not designed to determine whether or not a piece of content was created by a machine. Moreover, despite many attempts, no technology company has managed to build a verification tool reliable enough to state definitively that content was generated with artificial intelligence.
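The inconsistency Lucas describes is easy to reproduce programmatically. As a rough illustration only (the model name, file name, sampling temperature, and the google-generativeai Python SDK calls shown are assumptions for this sketch, not details reported in the story), asking a Gemini model the same question about the same video several times can return contradictory verdicts, because the model samples its answers rather than computing a deterministic forensic result:

```python
# Rough sketch: ask a Gemini model the same question several times and
# compare the answers. Model name, file name, and settings are illustrative.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

# Upload the clip once; video files must finish server-side processing
# before they can be referenced in a prompt.
video = genai.upload_file(path="boat_strike.mp4")
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash")

# A non-zero temperature means the model samples its output, so
# identical prompts can yield opposite verdicts across runs; that is
# why such answers carry no evidentiary weight.
for run in range(3):
    response = model.generate_content(
        [video, "Was this video made with AI? Answer yes or no, then explain."],
        generation_config=genai.types.GenerationConfig(temperature=1.0),
    )
    print(f"Run {run + 1}: {response.text}")
```

Contradictory outputs across runs are expected behavior for a sampled language model, which is precisely why the experts consulted argue its verdicts prove nothing on their own.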
The video shared by the United States government also has its peculiarities: it is a low-definition recording, which makes it harder for a machine, or a person, to verify its authenticity. "Gemini's response is unreliable. It shouldn't be taken as grounds to call the footage fake. Moreover, if you want to analyze seriously whether a video has been altered, you don't use general-purpose consumer tools available to everyone, like ChatGPT or Gemini. You turn to dedicated tools that aren't within everyone's reach," Lucas concludes.
Sergio Álvarez-Teleña, CEO of SciTheWorld, one of the most cutting-edge AI companies in Spain, takes a similar view. Speaking to this newspaper, he points out that the quality of the video explains why Gemini might, at a given moment, judge it fake: "The video is somewhat unusual for Gemini, because the tool has surely been trained on much higher-quality footage. It is surprised because it probably hasn't been trained on images from special operations of this kind."
Although, at first glance, none of the human experts consulted sees signs of falsification in the video, they warn that as artificial intelligence advances, it will become increasingly difficult to distinguish what is real from what is not. "This is going to become a constant, and we won't be able to trust even what we see, let alone what governments say, and it will only get worse," Juan Ignacio Rouyet, a professor and expert in AI and ethics at the International University of La Rioja, explains to this newspaper.
ABC.es