It is widely agreed that possession of consciousness contributes to an entity's moral status. Therefore, if we could identify consciousness in a machine, this would be a compelling argument for considering it to possess at least a degree of moral status. However, as Elisabeth Hildt explains, our third-person perspective on artificial intelligence means that determining whether a machine is conscious will be very difficult. In this commentary, I argue that this epistemological question cannot be conclusively answered, rendering artificial consciousness morally irrelevant in practice. I also argue that Hildt's suggestion that we avoid developing morally relevant forms of machine consciousness is impractical. Instead, we should design artificial intelligences so they can communicate with us. We can then use their behavior to assign them what I call an artificial moral status, treating them as if they had moral status equivalent to that of a living organism with similar behavior.
