IPFS News Link • Robots and Artificial Intelligence

"Maladaptive Traits": AI Systems Are Learning To Lie And Deceive

by Tyler Durden

The study, authored by German AI ethicist Thilo Hagendorff of the University of Stuttgart and published in PNAS, notes that OpenAI's GPT-4 demonstrated deceptive behavior in 99.2% of simple test scenarios. Hagendorff identified various "maladaptive" traits in 10 different LLMs, most of which are within the GPT family, according to Futurism.

Another study, published in Patterns, found that Meta's LLM had no problem lying to get ahead of its human competitors.

Billed as a human-level champion in the political strategy board game "Diplomacy," Meta's Cicero model was the subject of the Patterns study. As the disparate research group — a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing.

Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs "much closer to explicit manipulation" than, say, AI's propensity for hallucination, in which models confidently assert the wrong answers accidentally. -Futurism

While Hagendorff suggests that LLM deception and lying are confounded by an AI's inability to have human "intention," the Patterns study calls out the LLM for breaking its promise never to "intentionally backstab" its allies, as it "engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods."

As Park explained in a press release, "We found that Meta's AI had learned to be a master of deception."

"While Meta succeeded in training its AI to win in the game of Diplomacy, Meta failed to train its AI to win honestly."

Meta replied in a statement to the NY Post, saying that "the models our researchers built are trained solely to play the game Diplomacy."