this post was submitted on 31 Oct 2023

Machine Learning


Paper: https://arxiv.org/abs/2310.15421

Code: https://github.com/skywalker023/fantom

Blog: https://hyunw.kim/fantom/

Abstract:

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity. We introduce FANToM 👻, a new benchmark designed to stress-test ToM within information-asymmetric conversational contexts via question answering. Our benchmark draws upon important theoretical requisites from psychology and necessary empirical considerations when evaluating large language models (LLMs). In particular, we formulate multiple types of questions that demand the same underlying reasoning, in order to identify an illusory or false sense of ToM capabilities in LLMs. We show that FANToM is challenging for state-of-the-art LLMs, which perform significantly worse than humans even with chain-of-thought reasoning or fine-tuning.
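The abstract's key evaluation idea is that several question types probe the same underlying reasoning, so a model should only get credit when it is consistent across all of them. Here is a minimal sketch of that scoring scheme; the question-type names (`belief`, `answerability`, `info_access`) and the function are illustrative, not the official FANToM implementation from the linked repository.

```python
def all_question_score(results):
    """Fraction of conversation sets answered consistently correctly.

    results: list of dicts, one per conversation set, mapping a
    question type (e.g. "belief", "answerability") to a bool that
    says whether the model answered that question correctly.

    A set only counts if EVERY linked question is correct, which
    penalizes models with an illusory, inconsistent sense of ToM.
    """
    if not results:
        return 0.0
    full = sum(1 for qset in results if all(qset.values()))
    return full / len(results)


# Hypothetical example: the model answers the belief question in both
# sets but is inconsistent on answerability in the second set.
example = [
    {"belief": True, "answerability": True, "info_access": True},
    {"belief": True, "answerability": False, "info_access": True},
]
print(all_question_score(example))  # 0.5
```

Scoring per question type in isolation would report high accuracy here; the all-or-nothing aggregation is what exposes the inconsistency.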




Found 1 relevant code implementation for "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions".

If you have code to share with the community, please add it here 😊🙏

--

To opt out from receiving code links, DM me.