
Artificial Intelligence and Disinformation

January 9, 2025

Useful Resources
Note: AI is a rapidly evolving field, and these tools are considered accurate as of December 2024.

OUR GUIDE

We’ve all heard about it, but how does artificial intelligence (AI) actually work? AI is a technology that teaches computers to simulate understanding, problem-solving, decision-making, and creativity. In recent years, research has focused on what is known as “generative artificial intelligence”: a computational framework, commonly referred to as a model, that can interpret a user’s input (the prompt) and generate a response (the output). The model acquires this ability through a learning process.
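To make the prompt-and-output loop concrete, here is a minimal sketch in Python. It uses the OpenAI client library purely as an illustration; the model name and the question are placeholders, and any chat-style model would follow the same pattern.

```python
# A minimal sketch of the prompt -> model -> output loop.
# Assumes the `openai` package is installed and an API key is set
# in the OPENAI_API_KEY environment variable; the model name below
# is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "What is the capital of Italy?"}],
)

# The generated text is the model's "output".
print(response.choices[0].message.content)
```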

The learning process relies on an enormous amount of data (for example, GPT-3.5, the model originally behind ChatGPT, was trained on 570 GB of data), which the model uses to “train”. Imagine exposing the model to a vast library of pre-written texts, after which it becomes “intelligent” through this “library study.” Training enables the model to learn the rules of human language (such as grammar) as well as factual elements (e.g., that Rome is the capital of Italy). On this basis, the model can then generate responses.
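As a toy illustration of this “library study”, the sketch below trains a tiny word-pair (bigram) model on a few invented sentences and then generates text from a prompt word. Real models learn vastly richer patterns over billions of words, but the principle, counting regularities in training text and reusing them to produce output, is the same.

```python
import random
from collections import defaultdict

# A tiny "library" of training text (invented for illustration).
corpus = (
    "rome is the capital of italy . "
    "paris is the capital of france . "
    "italy is a country in europe ."
).split()

# "Training": record which words tend to follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": from a prompt word, repeatedly sample a plausible
# next word from the learned statistics.
def generate(prompt_word, length=8):
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("rome"))
```

Note that this toy model can just as easily produce “rome is the capital of france”: statistical patterns are not understanding, which is one reason generative models make factual mistakes.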

One consequence of how AI is developed is that it perpetuates the stereotypes, whether gender-based or racial, present in the data it learns from. For example, if a model is exposed to a set of images in which 80% depict white individuals and 20% depict Black individuals, the “reality” the model will have “in mind” will consist of 80% white people and 20% Black people. This, of course, is a skewed picture that does not reflect the world outside the model.
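A minimal sketch of how such a skew propagates, reusing the 80/20 split from the example above (the numbers are the article’s illustration, not real dataset statistics):

```python
import random

# Training data with the skew described above: 80% of the examples
# depict white individuals, 20% depict Black individuals.
training_data = ["white"] * 80 + ["black"] * 20

# A model that has only ever seen this data reproduces its
# proportions: sampling from what it "knows" mirrors the skew.
generated = [random.choice(training_data) for _ in range(10_000)]

share = generated.count("white") / len(generated)
print(f"Share of 'white' in generated output: {share:.0%}")  # roughly 80%
```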

The ChatGPT phenomenon

When it went viral at the end of 2022, the “ChatGPT phenomenon” introduced many users – most of them non-experts – to interacting directly with AI in a way that resembled a human chat conversation.

In the realm of information, the immense interest in this new tool led newsrooms worldwide to experiment with using AI for text generation. However, disinformation followed close behind.

By March 2023, the now-iconic image of Pope Francis in a striking white puffer jacket and another of Donald Trump being arrested and imprisoned had gone viral. Both were fake, created with an image-generating AI program called Midjourney. Although neither creator had malicious intent, the posts were widely shared and believed to be real.

Disinformation and AI stereotypes

Although certain types of AI-generated images and videos were used for disinformation purposes even before the “ChatGPT phenomenon,” new tools have intensified this trend. With advances in AI, these creations have become increasingly realistic.

Consider deepfakes, a technique for synthesizing human images using AI, which dates back to 2017. In recent years, this technique has spawned a prolific strand of disinformation, often exploiting the image of public figures to spread false and dangerous messages.

For example, in October 2024, a digitally created video falsely depicted Bill Gates claiming he wanted to reduce the global population to combat climate change. During the 2024 U.S. presidential campaign, AI-generated images circulated suggesting that Republican candidate Donald Trump was highly popular within the African American community.

In response, several programs have been developed to determine whether content was AI-generated. However, these tools are not yet entirely reliable. Consequently, cases of disinformation stemming from their erroneous assessments, when accepted uncritically, are not uncommon, resulting in what could be termed “second-level disinformation”. A conspiracy theory claiming that Filippo Turetta, the confessed murderer of Giulia Cecchettin, does not exist is a stark example of this dynamic.

Tools for identifying AI-generated content

So how can we recognize AI-created content? The first step is to reconstruct the image’s context, tracing its origin story. For instance, the artificiality of a fake photo of Rihanna at the Met Gala was revealed by the fact that the singer did not attend the New York event.

Additionally, the first person to share such content often knows it was created with AI. It is not uncommon for these images to be shared on social media channels dedicated to AI-generated content. In other cases, the creator explicitly discloses the artificial nature of the image when sharing it. This was true for a photo of Pope Francis draped in an LGBTQ+ flag; the accompanying text in its first post included the hashtag #MidJourney, indicating the AI tool used to create it.

Another technique is to look for logos or watermarks that sometimes appear on images, hinting at their origin. Finally, attention to detail is a key warning sign. Generative AI often struggles with fine details, especially human features such as fingers, hair, and skin texture. Backgrounds also frequently appear inconsistent or implausible. At Facta, we have encountered numerous cases of people with overly shiny skin and hands with only four fingers.
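Beyond visible logos, some generators leave traces in an image file’s metadata. The sketch below, a rough check assuming the Pillow library, reads the EXIF “Software” tag as one possible clue; note that social networks usually strip metadata on upload, so its absence proves nothing.

```python
# A rough metadata check, assuming Pillow is installed
# (pip install Pillow). The file name is a placeholder.
from PIL import Image

img = Image.open("suspect_image.jpg")
exif = img.getexif()

# EXIF tag 305 is "Software", the program that produced the file;
# some AI tools write their name here.
software = exif.get(305)
if software:
    print(f"Software tag: {software}")
else:
    # Missing metadata is the norm: platforms strip it on upload,
    # so this check can only provide a positive hint, never proof.
    print("No Software tag found; this proves nothing either way.")
```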

That said, these techniques are valid today but may not hold up within a year, or even less, as the technology evolves and its output becomes increasingly realistic and harder to detect.

Opportunities with AI models

AI can also be used to combat disinformation. Research projects such as Vera.ai, AI4Media, and AI4TRUST are underway to detect deepfake content. The latter, in particular, aims to create a platform that monitors various social media and information sources in near real-time, flagging high-risk disinformation content for professional fact-checkers to review.

Disinformation can also be countered on another level. According to a study published in Science in September 2024, an AI chatbot successfully changed conspiracy beliefs among thousands of participants. The chatbot used the same strategies as conspiracy theorists, leveraging emotions and tailoring its dialogue to participants’ arguments, then refuting them with precise, relevant scientific data. A similar initiative by the TITAN research group aims to combat disinformation through an intelligent app that helps users investigate the truth behind a claim, employing a Socratic dialogue methodology.

However, AI is not yet “the solution” to online disinformation. As argued in the report “Generative AI and Disinformation” by the researchers behind the projects mentioned above, further research progress and collaboration from social media platforms, especially in providing the data needed to study disinformation dynamics, are essential.
