A classic problem for artificial intelligence is to build a machine that imitates human behavior well enough to convince those interacting with it that it is another human being [1]. One approach to this problem focuses on building machines that imitate internal psychological facets of human interaction, such as artificially intelligent agents that play grandmaster chess [2]. Another approach focuses on building machines that imitate external psychological facets, by building androids [3]. The disparity between these approaches reflects a problem with both: artificial intelligence abstracts mentality from embodiment, while android science abstracts embodiment from mentality. This problem must be solved if a sentient artificial entity indistinguishable from a human being is to be constructed. One solution is to examine a fundamental human ability and a context in which both the construction of internal cognitive models and an appropriate external social response are essential. This paper considers how reasoning with intent, in the context of human versus android strategic interaction, may offer a psychological benchmark for evaluating the human-likeness of android strategic responses. Understanding how people reason with intent may offer a theoretical context in which to bridge the gap between the construction of internally and externally sentient artificial agents.