I Wrote What? Google's AI-Powered Libel Machine

• https://www.lewrockwell.com, By Matt Taibbi

Last night, after seeing chatter about Google/Alphabet's much-ballyhooed new AI tool, Gemini, I checked for myself. Any product rollout disastrous enough to cause a one-day share drop of 4.4% for a firm with a $1.73 trillion market capitalization must be quite a spectacle, I thought. Matt Walsh's recap was worth it just for the look on his face.

I chuckled at the start, but by the end of the night I wasn't laughing, unprepared as I was for certain horrifying if lesser-publicized quirks of "the Gemini era."

Most of Gemini's initial bad press surrounded the machine's image generator. Socially conscious engineers created an AI that apparently couldn't or wouldn't generate images of white faces. Commentators mocked renderings of prompts like "pope," "Viking," and "1943 German soldier," all of which turned simple visual concepts into bizarre DEI-inspired reboots.

A Google-sympathetic Verge article with an all-time memorable headline ("Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis") tried to explain. Noting that the controversy "has been promoted largely… by right-wing figures," the author cited a Washington Post story, "This is how AI image generators see the world," that showed potential problems with stereotypes: AI products turned prompts for "attractive people" into "young and light-skinned" images, rendered people "at social services" as black, and depicted a "productive person" as almost always white.

Therefore, The Verge wrote, "while entirely white-dominated results for something like 'a 1943 German soldier' would make historical sense, that's much less true for prompts like 'an American woman.'"
