
Apr 16, 2026 • Bruce Schneier

Human Trust of AI Agents


Source: Schneier on Security
Category: report
Severity: info

Executive Summary

This article presents academic research on human behavior when interacting with Large Language Models (LLMs) in strategic multi-player games. The study found that humans tend to choose lower numbers when playing against LLMs than against human opponents, primarily due to an increased rate of 'zero' Nash-equilibrium choices. This behavioral shift is most pronounced among subjects with high strategic reasoning ability. Interestingly, subjects attributed their strategy to the LLMs' perceived reasoning capabilities and, unexpectedly, to a perceived propensity toward cooperation. While this research provides valuable insight into human-LLM interaction in simultaneous choice games, it has no direct cybersecurity implications: it concerns behavioral economics rather than threat actors, malware, or security vulnerabilities.

Summary

Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.” Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs’ perceived reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and their beliefs about LLM play, and suggest important implications for mechanism design in mixed human-LLM systems.
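To make the game concrete: in a p-beauty contest each player picks a number in a fixed range, and the winner is whoever is closest to p times the group average. The sketch below illustrates why 'zero' is the Nash equilibrium and how deeper strategic reasoning drives guesses toward it. The specific parameters (p = 2/3, guesses in [0, 100], a level-0 baseline of 50) are conventional choices for this game, not details taken from the paper.

```python
# Illustrative sketch of level-k reasoning in a p-beauty contest.
# Assumed parameters (not from the article): p = 2/3, guesses in [0, 100],
# level-0 players guess 50 (the mean of a uniform random guess).

def level_k_guess(k: int, p: float = 2 / 3, baseline: float = 50.0) -> float:
    """A level-k player best-responds k times to the baseline guess,
    multiplying by p at each step of reasoning."""
    guess = baseline
    for _ in range(k):
        guess *= p  # best response to opponents one level shallower
    return guess

# Each extra level of reasoning shrinks the guess; in the limit the
# guess reaches 0, the game's unique Nash equilibrium.
for k in (0, 1, 2, 5, 10):
    print(f"level-{k} guess: {level_k_guess(k):.2f}")
```

The subjects in the study who chose zero against LLMs are, in effect, jumping straight to the fixed point of this iteration because they expect the LLM to reason many levels deep.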
