
Can You Tell Who's Real? Nearly 40% Fooled by AI-generated Faces

• https://www.activistpost.com

It might be much harder than you think. Researchers from the University of Waterloo in Canada have exposed just how difficult it is for people to distinguish between real and AI-generated human images. Overall, nearly 40 percent of participants could not tell which faces were fake.

This revelation comes at a time when AI-generated imagery is becoming more sophisticated, raising concerns over the potential for misuse in disinformation campaigns.

The research involved 260 participants, who were presented with 20 images devoid of any labels to indicate their origin. Among these, half were photographs of real people obtained via Google searches, while the other half were crafted by Stable Diffusion and DALL-E — two of the most advanced AI image-generation programs available today. The task was simple: identify which images were real and which were products of AI.

Surprisingly, participants identified the images correctly only 61 percent of the time, a figure significantly lower than the researchers' anticipated accuracy rate of 85 percent.

"People are not as adept at making the distinction as they think they are," says study lead author Andreea Pocol, a PhD candidate in computer science at the University of Waterloo, in a university release.

This finding underscores a growing concern over our collective ability to discern truth in the digital realm.

Participants based their judgments on specific details such as the appearance of fingers, teeth, and eyes — features they believed would betray the artificial nature of the images. However, these indicators were not as reliable as hoped. The study's design allowed for meticulous examination of each photo, a luxury not afforded to casual internet browsers or those quickly scrolling through content, a practice colloquially known as "doomscrolling."

"The extremely rapid rate at which AI technology is developing makes it particularly difficult to understand the potential for malicious or nefarious action posed by AI-generated images," Pocol adds.
