Apr 16, 2026 • Bruce Schneier
Human Trust of AI Agents
Executive Summary
This article presents academic research on human behavioral responses to Large Language Models (LLMs) in strategic game settings, specifically a multi-player p-beauty contest. The study indicates that humans adjust their strategies when competing against AI, often expecting rationality and cooperation. No cyber threats, malware families, or threat actors are associated with this content, and it poses no security impact or risk to organizational infrastructure; no mitigation strategies are required. This is benign social science research on human-LLM interaction dynamics and does not pertain to cybersecurity incidents, vulnerabilities, or malicious campaigns. It serves purely as informational material on AI integration in social and economic interactions.
Summary
Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”

Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment examining differences in human behaviour in a multi-player p-beauty contest played against other humans and against LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to LLMs’ perceived reasoning ability and, unexpectedly, their propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in subjects’ behaviour and in their beliefs about LLMs’ play, and suggest important implications for mechanism design in mixed human-LLM systems.
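For readers unfamiliar with the game: in a p-beauty contest, each player simultaneously picks a number, and the winner is whoever comes closest to p times the group average. The sketch below shows the level-k reasoning behind the ‘zero’ Nash-equilibrium choices the abstract mentions; it assumes the conventional p = 2/3 and a [0, 100] choice range, since the excerpt does not state the paper’s exact parameters.

```python
# Level-k reasoning in a p-beauty contest (illustrative sketch; the
# multiplier p = 2/3 and the [0, 100] range are conventional
# assumptions, not parameters confirmed by the paper's abstract).
#
# A level-0 player guesses the midpoint (50). A level-k player
# best-responds to a population of level-(k-1) players, so each step
# of reasoning multiplies the guess by p. Iterating drives the guess
# toward 0, the game's unique Nash equilibrium.

P = 2 / 3          # assumed contest multiplier
LEVEL0_GUESS = 50  # naive anchor: midpoint of [0, 100]

def level_k_guess(k: int, p: float = P, anchor: float = LEVEL0_GUESS) -> float:
    """Guess of a level-k reasoner: anchor * p**k."""
    return anchor * p ** k

if __name__ == "__main__":
    for k in range(6):
        print(f"level-{k} guess: {level_k_guess(k):6.2f}")
    # Prints 50.00, 33.33, 22.22, 14.81, 9.88, 6.58 -> 0 in the limit.
```

Each extra step of strategic reasoning shrinks the guess by a factor of p, so a subject who credits an LLM opponent with unbounded reasoning depth jumps straight to the limit point, 0, which is the behavioural shift the study reports.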