The deeper the fake, the more dangerous: The disinformation potential of text-to-image generation

The internet you know is already filled with disinformation that spreads like a digital wildfire. Artificial intelligence (AI) and synthetic content risk making things worse. If we do not understand the problem and act on it, the internet of the future will become a darker place.

Consider this: a hostile actor creates a false headline, builds a story around it, and uses AI to design an image that perfectly supports the false narrative. Unsuspecting readers, convinced by the seamless combination of text and imagery, share the manipulated information far and wide. This is no distant dystopian scenario – it may soon become plausible with advances in text-to-image generation. Research in this field is rapidly moving past current technological limitations, enabling the production of high-quality, photorealistic images that can serve as fake evidence.

Democracy Reporting International's (DRI) new report dives deeper into the application of text-to-image generation. We go beyond existing forms of media manipulation to focus on fully synthetic, AI-generated content – evaluating global threat scenarios, emerging models, the credibility of news, and possible solutions.