Italian opposition parties file complaint over far-right deputy PM's party's use of 'racist' AI images

Opposition parties in Italy have complained to the communications watchdog about a series of AI-generated images published on social media by deputy prime minister Matteo Salvini's far-right party, calling them "racist, Islamophobic and xenophobic", the Guardian has learned.

The centre-left Democratic party (PD), with the Greens and Left Alliance, filed a complaint on Thursday with Agcom, the Italian communications regulatory authority, alleging the fake images used by the League contained "almost all categories of hate speech".

Over the past month, dozens of apparently AI-generated photos have appeared on the League's social channels, including on Facebook, Instagram and X. The images frequently depict men of colour, often armed with knives, attacking women or police officers.

Antonio Nicita, a PD senator, said: "In the images published by Salvini's party and generated by AI there are almost all categories of hate speech, from racism and xenophobia to Islamophobia. They are using AI to target specific categories of people – immigrants, Arabs – who are portrayed as potential criminals, thieves and rapists.

"These images are not only violent but also deceptive: by blurring the faces of the victims it is as if they want to protect the identity of the person attacked, misleading users into believing the photo is real. These are images that incite hatred."

"This is serious," said Francesco Emilio Borrelli, an MP for the Greens and Left Alliance. "AI generates content based on our instructions, and in this case it was clearly instructed to generate images of black people robbing an elderly woman or a frightened woman. It is part of their strategy to create fear among citizens."

A spokesperson for Salvini's party confirmed that "some of the pictures" featured on their social media channels had been "generated digitally". In a statement it said: "The point is not the image. The point is the fact. Each post is based on true reports from Italian newspapers, with names, dates and places. If reality seems too harsh, do not blame those who report it, but those who make it so. If it is about a crime, it is hard to accompany the news with cheerful or reassuring images."

Salvatore Romano, the head of research at the nonprofit AI Forensics, said the League pictures bore "all the hallmarks of artificial intelligence".

"They are out-of-context photos in which the subject is in the foreground and the rest is entirely blurred. What worries me is that these AI-generated images are becoming ever more realistic."

The complaint to Agcom cites several examples of images thought to have been digitally generated, saying they have appeared alongside the branding of reputable mainstream media outlets which have reported on the crimes mentioned but not used images of the alleged perpetrators.

In one case, the League's post says: "A foreigner attacks the train conductor" and pairs the text with an image of a man of colour with his fist raised. The original headline in Il Resto del Carlino reads: "He attacks the [female] train conductor and sparks panic on board." The article makes no mention of the suspect's nationality beyond calling him a "foreigner". There was no photograph of the alleged attack.

Another image featured in the complaint shows a mother and father in Islamic dress appearing to shout angrily at a girl, "thus feeding racial and Islamophobic prejudice".
Il Giorno, the newspaper cited in that post, makes no reference in its report to the religion of either the family or the girl allegedly abused by her parents, beyond saying the child had attended Arabic language school. There was no photograph of the family.

The use of AI-generated images for propaganda by far-right parties is a growing phenomenon that entered the mainstream around last year's European elections, when images designed to stoke fears over immigration or demonise leaders such as Emmanuel Macron began circulating on social media.

"Then came the American elections with Donald Trump and Elon Musk, who effectively normalised this trend," said Romano. "Today we see that far-right parties have not only continued to generate fake images for propaganda but have also increased their use at a time when AI tools have improved content quality, making the phenomenon all the more worrying."

Although social platforms are obliged to take steps to anticipate these risks – for example by adding a label specifying that an image has been generated by AI – Romano said that, in practice, this mechanism is almost always ineffective.

Asked if the League was aware that the images could generate hate speech, a spokesperson for Salvini's party said: "We are sorry, but our solidarity goes to the victims, not the perpetrators. If denouncing crimes committed by foreigners means 'xenophobia', perhaps the problem is not the word but those who use it to censor debate. We will continue to denounce, with strong words and images, what others prefer to ignore."

If Agcom deems the flagged content offensive it can, under the EU's Digital Services Act, order posts to be taken down, accounts to be removed and social media platforms to be fined for failing to police user behaviour. In 2023, Agcom fined Meta €5.85m and ordered the removal of dozens of accounts for breaching the ban on gambling advertising.

Meta, the owner of Facebook and Instagram, was approached for comment.

A spokesperson for X said: "We are not under any obligation legally to label every AI-generated image. It's pretty clear these posts are straightforward politicking.

"Rest assured, we believe that it is critical to maintain the authenticity of the conversation on X, and we make sure that we are well equipped to fight against any manipulated media – including the rising trend of 'deepfakes' – and put visible labels on any such content that has been debunked by a credible source."

Author: Lorenzo Tondo in Palermo