Around 17:00 Moscow time, a number of Telegram channels and media outlets reported a powerful explosion in Washington near the Pentagon. The reports were accompanied by a photograph showing a plume of smoke. On closer examination, the image turned out to be a fake.
The image of the smoke first began to spread on conservative American Facebook pages (Facebook's owner, Meta, has been designated an extremist organization in Russia and banned). The report was then picked up by Russian military correspondents and by other Telegram news channels, including "Before everyone else, well, almost," which has nearly 1.5 million subscribers. About 20 minutes later, some journalists and channels wrote that the image was fake and most likely dated from 2001.
To confirm this, Washington Post journalist Yunus Paksoy posted a photo taken on Pennsylvania Avenue NW next to the Ronald Reagan Building and International Trade Center. The picture shows the center's characteristic dome, the Washington Monument, and the panorama of the right bank of the Potomac River, where the Pentagon is located. There are no signs of an explosion in the photo.
A RIA Novosti correspondent also posted a video with a full panorama of the Pentagon's surroundings, showing no evidence of an explosion or fire.
A few minutes later, the security service of the US Department of Defense officially announced that it had recorded no incidents near the department's building.
Journalists who studied the original image closely suggested that it had been generated by a neural network from a real photograph of the attack on the Pentagon on September 11, 2001, when one of the planes hijacked by terrorists crashed into the building.
Several elements point to the image being artificial: a street lamp displaced relative to its base, and a section of fence that continues onto the sidewalk. Such artifacts often appear in images taken with 360-degree cameras, as seen in street-view imagery on online maps, or in images generated by a neural network.
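Beyond eyeballing lamp posts and fences, simple automated forensic checks exist. The sketch below is a minimal example of error-level analysis (ELA), one common first-pass technique for flagging manipulated JPEGs; it assumes the Pillow library is installed, and the file name suspect.jpg is a placeholder. Note that ELA is a heuristic, not a definitive test for AI-generated content.

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    """Re-save the image as JPEG and amplify the per-pixel differences.

    Regions that were pasted in or synthesized often recompress
    differently from the rest of the frame and stand out as brighter
    patches in the resulting map. This is a rough indicator only.
    """
    original = Image.open(path).convert("RGB")
    # Round-trip the image through JPEG compression at a known quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    # Per-pixel absolute difference, brightened so artifacts are visible.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical input file for illustration.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```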
Fake news, including reports in major publications (or published in their name), is increasingly becoming a source of public concern and destabilization.
One recent example of such a planted story was a post on a Twitter account of the British newspaper The Guardian announcing the death of Queen Elizabeth II on September 8, 2022. It turned out that the account was fake: the real pages of this and other British newspapers carried no posts about the monarch's death. The Queen did in fact die that day, but at the time of the fake post Buckingham Palace had not yet made an official statement; it was only known that Elizabeth was unwell.
A few days later it emerged that the Queen had died at 15:10 local time, while the public was officially informed at 18:30. The post on the fake Guardian account appeared within that window, meaning the message was, strictly speaking, not false, only premature.
In recent months, politicians, media, and the public around the world have repeatedly raised the problem of fakes created by artificial intelligence or built on AI-generated content. For example, in March 2023, the Midjourney neural network was used to generate reportage-style "photographs" of a "devastating 2001 flood that hit the US West Coast," which were published on the Reddit forum.
The fake pictures, which showed not only destruction but also real people (including then-President George W. Bush), quickly spread across the American segment of the internet. They drew a flood of comments on social networks: young people born after 2001 were horrified by the scale of the destruction, while the older generation was puzzled because they did not remember any such flood.
Also in March, fake photos of the arrest of former US President Donald Trump circulated online. In them, the politician flees from a crowd of police officers, who eventually wrestle him to the ground. Because of the spread of these neural network-generated images, the NYPD had to issue an official statement that it had not arrested Trump.
In April, German photographer Boris Eldagsen, winner of the 2023 Sony World Photography Awards in the Creative category, admitted that his entry, titled "Pseudomnesia: The Electrician," had been entirely generated by AI. The organizers later explained that they had known a neural network was used to create the image but considered it a joint work by a living author and artificial intelligence. Eldagsen himself promptly declined the award, arguing that the output of a neural network is not photography, and said he had entered the competition precisely to draw attention to the problem of AI use in creative fields.