AI model aims to predict human behavior

Artificial intelligence (AI) in the form of language models appears to be increasingly able to replicate human behavior. But are such AI models also capable of predicting human decisions? An international research team at the Institute for Human-Centered AI at the Helmholtz Zentrum München sought to determine this – and developed the new language model Centaur.
The team used an open-source language model from Meta AI as a basis. The researchers then trained Centaur on data from 160 psychological experiments, in which around 60,000 test subjects completed tasks such as classifying objects or making decisions in gambling games.
Ten million decisions as AI training
In total, the dataset for Centaur contains more than ten million decisions. The AI was trained on 90 percent of this data; the remaining ten percent was withheld. The researchers then used the withheld data to test their new language model: Would Centaur be able to predict the test subjects' behavior?
The result: The AI model was able to predict the actual decisions with an accuracy of up to 64 percent. Centaur continued to deliver good results even when the experimental setup was slightly modified—that is, when it was asked to make predictions about situations for which it hadn't been specifically trained.
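The evaluation described above can be illustrated with a minimal sketch: hold out ten percent of the recorded decisions, then measure how often a model's predictions match the held-out choices. All names and data below are invented for illustration and do not come from the Centaur study.

```python
import random

def train_test_split(decisions, test_fraction=0.1, seed=0):
    """Shuffle a list of decisions and split it into train and test sets."""
    rng = random.Random(seed)
    shuffled = decisions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def accuracy(predictions, actual):
    """Fraction of predicted choices that match the actual choices."""
    matches = sum(p == a for p, a in zip(predictions, actual))
    return matches / len(actual)

# Toy data: each decision is a choice between two options, "A" or "B".
decisions = ["A", "B", "A", "A", "B", "A", "B", "B", "A", "A"] * 100
train, test = train_test_split(decisions)
print(len(train), len(test))  # 900 100
```

A model "predicting human behavior" in this framing simply means its output on the held-out trials agrees with what the subjects actually chose; an accuracy of 64 percent would mean roughly two out of three held-out decisions were matched.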

What's new about Centaur is that AI can be applied to "behavioral data," says Clemens Stachl, director of the Institute of Behavioral Science and Technology at the University of St. Gallen. "This was achieved by translating the results of classic decision-making experiments into language."
AI models like Centaur could also be applied beyond the social and behavioral sciences, says Stachl. "For example, wherever human behavior needs to be analyzed and predicted, such as in shopping, education, or the military."
The behavioral scientist considers practical application to be obvious, given that these types of AI models were developed by industry. Centaur, for example, uses Google's basic architecture and Meta's pre-trained base model.
"We can assume that large technology companies are already using similar models to predict our decision-making behavior and preferences – for example, when shopping online or on social media."
Stachl cites the language model ChatGPT and the social media platform TikTok as examples. "These models have become very good. Consider, for example, how well TikTok suggests videos to keep users in the app for as long as possible."

Other experts believe it will take some time before AI applications like Centaur are used outside of laboratories.
The psychological tests used to train the AI covered only a tiny portion of human behavior, says Markus Langer, who heads the Department of Industrial and Organizational Psychology at the University of Freiburg. "That doesn't say much about predicting 'natural' or 'everyday' human behavior."
He sees the main risk of this kind of research as being that the results could be overinterpreted—along the lines of: 'Wow, now we can finally predict human behavior precisely.' That is simply not the case yet, says Langer. One must also ask whether the accuracy of Centaur's predictions, at around 64 percent, can really be considered "good."
Should AI even be able to interpret human behavior?
The Centaur model and the results of the study should be understood primarily as a contribution to basic research, says behavioral scientist Stachl. Models of this kind could, in principle, help solve complex societal challenges, for example in the health sector.
"At the same time, however, there is a risk that they will make us increasingly predictable and lead us into a form of digital dependency or even 'digital slavery.' " Stachl continued, adding that our everyday media consumption and use of digital technologies produce new behavioral data every day, which contributes to the further improvement of such models.
For the behavioral scientist, how to deal with this technology is a question that "our society as a whole must answer. In this regard, science, but especially lawyers and political decision-makers, will be more challenged in the future."
dw